com.alpine.plugin.core.spark.utils

BadDataReportingUtils

object BadDataReportingUtils

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val defaultDataRemovedMessage: String

  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def filterNullDataAndReport(inputDataFrame: DataFrame, operatorParameters: OperatorParameters, sparkRuntimeUtils: SparkRuntimeUtils): (DataFrame, String)

    Given a dataFrame, the operator parameters and an instance of sparkRuntimeUtils, filters out all the rows containing null values. Writes those rows to a file according to the values of the 'dataToWriteParam' and the 'badDataPathParam' (provided in the HdfsParameterUtils class). The method returns the data frame which does not contain nulls, as well as a string containing an HTML-formatted table with information about what data was removed and if/where it was stored. The message is generated using the 'AddendumWriter' object in the Plugin Core module.

    Dirty Data: Spark SQL cannot process CSV files with dirty data (i.e. String values in numeric columns). We use the Drop Malformed option, so in the case of dirty data the operator will not fail, but will silently remove those rows.
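
    A minimal usage sketch, assuming an input DataFrame, the operator's OperatorParameters and a SparkRuntimeUtils instance are already in scope (for example inside a Spark job's execution method):

      val (cleanData, removalReport) = BadDataReportingUtils.filterNullDataAndReport(
        inputDataFrame, operatorParameters, sparkRuntimeUtils)
      // cleanData contains only the rows without null values; removalReport is the
      // HTML-formatted summary that can be added to the operator's output addendum.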

  12. def filterNullDataAndReportGeneral(removeRow: (Row) ⇒ Boolean, inputDataFrame: DataFrame, operatorParameters: OperatorParameters, sparkRuntimeUtils: SparkRuntimeUtils, dataRemovedDueTo: String): (DataFrame, String)

    Same as 'filterNullDataAndReport', but rather than using the .anyNull method of the Row class, allows the user to define a function which returns a boolean for each row indicating whether it contains data which should be removed.

    Dirty Data: Spark SQL cannot process CSV files with dirty data (i.e. String values in numeric columns). We use the Drop Malformed option, so in the case of dirty data the operator will not fail, but will silently remove those rows.
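
    For example, a sketch that removes rows with a negative value in the first (numeric) column rather than rows with nulls; the predicate and the reason string are illustrative only:

      import org.apache.spark.sql.Row

      // Flag rows whose first column is negative as bad data.
      val removeRow: Row => Boolean =
        row => !row.isNullAt(0) && row.getDouble(0) < 0
      val (cleanData, removalReport) = BadDataReportingUtils.filterNullDataAndReportGeneral(
        removeRow, inputDataFrame, operatorParameters, sparkRuntimeUtils,
        dataRemovedDueTo = "due to negative values")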

  13. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  15. def getNullDataToWriteMessage(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[DataFrame], dataRemovedDueTo: String): (Option[DataFrame], String)

    Helper function which uses the AddendumWriter object to generate a message about the bad data and to get the data, if any, to write to the bad data file. The 'dataRemovedDueTo' parameter describes why the bad data was removed. The message will be of the form "data removed " + dataRemovedDueTo, i.e. if you put "due to zero values" then the message would read "Data removed due to zero values".

    Note: This method should NOT be called in the event that the user selected the "Do Not Count # of Rows Removed (Faster)" option.
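
    A sketch of how the helper might be called; the path and counts are illustrative and badRows is assumed to be a DataFrame of the removed rows:

      val (dataToWrite, message) = BadDataReportingUtils.getNullDataToWriteMessage(
        amountOfDataToWriteParam = Some(1000L),      // write at most 1000 of the bad rows
        badDataPath = "/tmp/badData",                // hypothetical HDFS location
        inputDataSize = inputRowCount,
        outputSize = cleanRowCount,
        badData = Some(badRows),
        dataRemovedDueTo = "due to zero values")     // yields "Data removed due to zero values"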

  16. def handleNullDataAsDataFrame[T <: HdfsStorageFormatType](amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[DataFrame], sparkRuntimeUtils: SparkRuntimeUtils, hdfsStorageFormatType: T, overwrite: Boolean, operatorInfo: Option[OperatorInfo], dataRemovedDueTo: String): String

    If specified by the parameters, writes the data containing null values to a file. Regardless, returns a message about how much data was removed.
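
    A hedged sketch; the storage-format constant, path and counts are assumptions, and badRows is assumed to be a DataFrame of the removed rows:

      val message = BadDataReportingUtils.handleNullDataAsDataFrame(
        amountOfDataToWriteParam = Some(1000L),
        badDataPath = "/tmp/badData",                        // hypothetical HDFS location
        inputDataSize = inputRowCount,
        outputSize = cleanRowCount,
        badData = Some(badRows),
        sparkRuntimeUtils = sparkRuntimeUtils,
        hdfsStorageFormatType = HdfsStorageFormatType.CSV,   // assumed format constant
        overwrite = true,
        operatorInfo = None,
        dataRemovedDueTo = "due to null values")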

  17. def handleNullDataAsDataFrameDefault[T <: HdfsStorageFormatType](amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, nullData: Option[DataFrame], sparkRuntimeUtils: SparkRuntimeUtils, dataRemovedDueTo: String = defaultDataRemovedMessage): String

    If applicable, writes the bad data as a CSV with default attributes.

    returns

    An HTML-formatted message describing how much data was removed and, if it was written, where it was stored.
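
    A sketch using the defaults; the path and counts are illustrative and nullRows is assumed to be a DataFrame of the removed rows:

      val message = BadDataReportingUtils.handleNullDataAsDataFrameDefault(
        amountOfDataToWriteParam = Some(1000L),
        badDataPath = "/tmp/badData",        // hypothetical HDFS location
        inputDataSize = inputRowCount,
        outputSize = cleanRowCount,
        nullData = Some(nullRows),
        sparkRuntimeUtils = sparkRuntimeUtils)
      // dataRemovedDueTo falls back to defaultDataRemovedMessage.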

  18. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  19. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  20. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. final def notify(): Unit

    Definition Classes
    AnyRef
  22. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  23. def removeDataFromDataFrame(removeRow: (Row) ⇒ Boolean, inputDataFrame: DataFrame, dataToWriteParam: Option[Long] = Some(Long.MaxValue)): (DataFrame, Option[DataFrame])

    Split a DataFrame according to the value of the removeRow parameter.

    Dirty Data: Spark SQL cannot process CSV files with dirty data (i.e. String values in numeric columns). We use the Drop Malformed option, so in the case of dirty data, the operator will not fail, but will silently remove those rows.

    removeRow

    A function from spark.sql.Row to Boolean. Should return true if the row is bad data and should be removed.

    inputDataFrame

    The input data, read in before any null or bad data has been removed.

    dataToWriteParam

    None to write no data; Some(n) to write at most n rows of the removed data.

    returns

    The DataFrame with the bad rows removed, paired with (optionally) the removed rows to write to the bad data file.
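
    A sketch of the split; the predicate is illustrative:

      import org.apache.spark.sql.Row

      // Treat rows containing any null value as bad data.
      val removeRow: Row => Boolean = _.anyNull
      val (keptData, removedData) = BadDataReportingUtils.removeDataFromDataFrame(
        removeRow, inputDataFrame, dataToWriteParam = Some(1000L))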

  24. def reportNullDataAsStringRDD(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[RDD[String]], dataRemovedDueTo: String): String

    Rather than filtering the data, just provide an RDD of Strings that contain the null data; write the data and report according to the values of the other parameters.

    Dirty Data: Spark SQL cannot process CSV files with dirty data (i.e. String values in numeric columns). We use the Drop Malformed option, so in the case of dirty data the operator will not fail, but will silently remove those rows.
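
    A sketch, assuming badLines is an RDD[String] holding the raw bad rows; the path and counts are illustrative:

      val message = BadDataReportingUtils.reportNullDataAsStringRDD(
        amountOfDataToWriteParam = Some(1000L),
        badDataPath = "/tmp/badData",        // hypothetical HDFS location
        inputDataSize = inputRowCount,
        outputSize = cleanRowCount,
        badData = Some(badLines),
        dataRemovedDueTo = "due to null values")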

  25. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  26. def toString(): String

    Definition Classes
    AnyRef → Any
  27. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  29. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def filterBadDataAndReport(inputDataFrame: DataFrame, operatorParameters: OperatorParameters, sparkRuntimeUtils: SparkRuntimeUtils): (DataFrame, String)

    Annotations
    @deprecated
    Deprecated

    use filterNullDataAndReport

  2. def filterBadDataAndReportGeneral(isBad: (Row) ⇒ Boolean, inputDataFrame: DataFrame, operatorParameters: OperatorParameters, sparkRuntimeUtils: SparkRuntimeUtils): (DataFrame, String)

    Annotations
    @deprecated
    Deprecated

    Use filterNullDataAndReportGeneral

  3. def getBadDataToWriteAndMessage(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[DataFrame]): (Option[DataFrame], String)

    Annotations
    @deprecated
    Deprecated

    Use getNullDataToWriteMessage

  4. def handleBadDataAsDataFrame(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[DataFrame], sparkRuntimeUtils: SparkRuntimeUtils, hdfsStorageFormat: HdfsStorageFormat = HdfsStorageFormat.TSV, overwrite: Boolean = true, operatorInfo: Option[OperatorInfo] = None): String

    Annotations
    @deprecated
    Deprecated

    Use signature with HdfsStorageFormatType or handleNullDataAsDataFrame

  5. def reportBadDataAsStringRDD(amountOfDataToWriteParam: Option[Long], badDataPath: String, inputDataSize: Long, outputSize: Long, badData: Option[RDD[String]]): String

    Annotations
    @deprecated
    Deprecated

    Use reportNullDataAsStringRDD
