com.alpine.plugin.core.spark.utils

BadDataReportingUtils

object BadDataReportingUtils

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def filterBadDataAndReport(inputDataFrame: DataFrame, operatorParameters: OperatorParameters, sparkRuntimeUtils: SparkRuntimeUtils): (DataFrame, String)

    Given a DataFrame, the operator parameters, and an instance of SparkRuntimeUtils, filters out all the rows containing null values. Writes those rows to a file according to the values of the 'dataToWriteParam' and the 'badDataPathParam' (provided in the HdfsParameterUtils class). Returns the data frame with the null-containing rows removed, along with a string containing an HTML-formatted table describing what data was removed and if/where it was stored. The message is generated using the 'AddendumWriter' object in the Plugin Core module.
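
    Example (a minimal sketch; inputDataFrame, operatorParameters, and sparkRuntimeUtils are assumed to already be in scope, e.g. inside a plugin's Spark runtime class):

      import com.alpine.plugin.core.spark.utils.BadDataReportingUtils
      // Drop the null-containing rows, write them out according to the
      // operator's bad-data parameters, and keep the HTML summary.
      val (cleanData, badDataReport) =
        BadDataReportingUtils.filterBadDataAndReport(
          inputDataFrame, operatorParameters, sparkRuntimeUtils)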

  11. def filterBadDataAndReportGeneral(isBad: (Row) ⇒ Boolean, inputDataFrame: DataFrame, operatorParameters: OperatorParameters, sparkRuntimeUtils: SparkRuntimeUtils): (DataFrame, String)

    Same as 'filterBadDataAndReport', but rather than using the .anyNull method of the Row class, allows the user to define a function that returns true for each row containing bad data.
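
    Example (a sketch; the predicate shown is illustrative, and the same in-scope values are assumed as in the filterBadDataAndReport example):

      import org.apache.spark.sql.Row
      // Here a row is "bad" if its first column is null or negative;
      // any Row => Boolean predicate may be supplied instead.
      val isBad: Row => Boolean =
        row => row.isNullAt(0) || row.getDouble(0) < 0
      val (cleanData, badDataReport) =
        BadDataReportingUtils.filterBadDataAndReportGeneral(
          isBad, inputDataFrame, operatorParameters, sparkRuntimeUtils)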

  12. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. def getBadDataToWriteAndMessage(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[DataFrame]): (Option[DataFrame], String)

    Helper function which uses the AddendumWriter object to generate a message about the bad data and get the data, if any, to write to the bad data file.
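
    Example (a sketch with illustrative sizes and path; badDataFrame is an assumed DataFrame of the removed rows):

      val (dataToWrite, htmlMessage) =
        BadDataReportingUtils.getBadDataToWriteAndMessage(
          amountOfDataToWriteParam = Some(1000L), // illustrative cap
          badDataPath = "/tmp/badData",           // illustrative path
          inputDataSize = 5000L,
          outputSize = 4800L,
          badData = Some(badDataFrame))
      // htmlMessage summarizes the removed rows; dataToWrite is the
      // data, if any, that should be persisted to the bad-data file.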

  14. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  15. def handleBadDataAsDataFrame(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[DataFrame], sparkRuntimeUtils: SparkRuntimeUtils, hdfsStorageFormat: HdfsStorageFormat = HdfsStorageFormat.TSV, overwrite: Boolean = true, operatorInfo: Option[OperatorInfo] = None): String

    Rather than filtering a DataFrame, use this method if you already have the bad data as a DataFrame.
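
    Example (a sketch relying on the defaults shown in the signature: TSV storage, overwrite = true, no OperatorInfo; badDataFrame and the other values are assumptions):

      val htmlReport = BadDataReportingUtils.handleBadDataAsDataFrame(
        amountOfDataToWriteParam = Some(Long.MaxValue),
        badDataPath = "/tmp/badData",          // illustrative path
        inputDataSize = inputDataFrame.count(),
        outputSize = cleanData.count(),
        badData = Some(badDataFrame),          // already-computed bad rows
        sparkRuntimeUtils = sparkRuntimeUtils)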

  16. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  17. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  18. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. final def notify(): Unit

    Definition Classes
    AnyRef
  20. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  21. def removeDataFromDataFrame(rowIsBad: (Row) ⇒ Boolean, inputDataFrame: DataFrame, dataToWriteParam: Option[Long] = Some(Long.MaxValue)): (DataFrame, Option[DataFrame])

    Split a DataFrame according to the rowIsBad predicate: rows for which it returns true are separated from the rest.
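
    Example (a sketch; splitting on Row.anyNull mirrors what filterBadDataAndReport does, but any predicate works):

      import org.apache.spark.sql.Row
      val rowIsBad: Row => Boolean = _.anyNull
      // goodData keeps the passing rows; badData holds the removed rows
      // (presumably capped by dataToWriteParam, Long.MaxValue by default).
      val (goodData, badData) =
        BadDataReportingUtils.removeDataFromDataFrame(rowIsBad, inputDataFrame)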

  22. def reportBadDataAsStringRDD(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[RDD[String]]): String

    Rather than filtering the data, provide an RDD of Strings containing the bad data; the data is written and reported according to the values of the other parameters.
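
    Example (a sketch; badLines is an assumed RDD[String] of, say, raw input lines that failed to parse):

      import org.apache.spark.rdd.RDD
      val badLines: RDD[String] = ???  // assumed to be computed upstream
      val htmlReport = BadDataReportingUtils.reportBadDataAsStringRDD(
        amountOfDataToWriteParam = Some(1000L), // illustrative cap
        badDataPath = "/tmp/badData",           // illustrative path
        inputDataSize = 5000L,
        outputSize = 4800L,
        badData = Some(badLines))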

  23. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  24. def toString(): String

    Definition Classes
    AnyRef → Any
  25. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
