Class InferredSparkDataFrameJob

Package: com.alpine.plugin.core.spark.templates

abstract class InferredSparkDataFrameJob extends SparkDataFrameJob

A class for plugins that use schema inference.

Linear Supertypes

SparkDataFrameJob, TemplatedSparkDataFrameJob[DataFrame, HdfsTabularDataset], SparkIOTypedPluginJob, AnyRef, Any

Instance Constructors

  1. new InferredSparkDataFrameJob()


Abstract Value Members

  1. abstract def transform(operatorParameters: OperatorParameters, dataFrame: DataFrame, listener: OperatorListener): DataFrame

    Define the transformation from the input dataset, expressed as a DataFrame whose schema corresponds to the Alpine column header, to the output dataset, also expressed as a DataFrame.

    dataFrame

    - the input data

    listener

    - the operator listener object, which can be used to print messages to the GUI.

    Annotations
    @throws( ... )
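
    For illustration, a minimal sketch of a plugin job built on this template. The SDK import paths for OperatorParameters and OperatorListener, and the accessor names getStringValue and notifyMessage, are assumptions; only the spark.templates package is confirmed by this page.

    import com.alpine.plugin.core.spark.templates.InferredSparkDataFrameJob
    import com.alpine.plugin.core.{OperatorListener, OperatorParameters} // assumed package location
    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.functions.col

    class MyColumnFilterJob extends InferredSparkDataFrameJob {

      // Implement the single abstract member: map the input DataFrame, whose
      // schema mirrors the Alpine column header, to the output DataFrame.
      override def transform(operatorParameters: OperatorParameters,
                             dataFrame: DataFrame,
                             listener: OperatorListener): DataFrame = {
        // "columnToKeep" is a hypothetical parameter key defined in the GUI node;
        // getStringValue and notifyMessage are assumed accessor names.
        val columnToKeep = operatorParameters.getStringValue("columnToKeep")
        listener.notifyMessage(s"Keeping column: $columnToKeep")
        dataFrame.select(col(columnToKeep))
      }
    }

    The template handles reading the HdfsTabularDataset into the DataFrame and writing the result back to HDFS, so the job only expresses the row-level transformation.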

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  10. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  11. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. final def notify(): Unit

    Definition Classes
    AnyRef
  14. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  15. def onExecution(alpineSparkEnvironment: AlpineSparkEnvironment, input: HdfsTabularDataset, operatorParameters: OperatorParameters, listener: OperatorListener): HdfsTabularDataset

    The driver function for the Spark job. Unlike the corresponding function in the parent class, this function allows you to work with IOBase types directly. You must override one of the two 'onExecution' methods.

    alpineSparkEnvironment

    - information about the Spark job, including the Spark session (unified Spark context) created when the job was submitted

    input

    - the IOBase object that you have defined as the input to your plugin. For example, if the GUI node of the plugin takes an HDFSTabularDataset, this input parameter will be that dataset.

    operatorParameters

    - the parameter values set in the GUI node. Their values can be accessed via the "key" defined for each parameter added to the OperatorDialog in the GUI node.

    listener

    - a listener object that allows you to send messages to the Alpine GUI during the Spark job

    returns

    the output of your plugin

    Definition Classes
    TemplatedSparkDataFrameJob → SparkIOTypedPluginJob
    Annotations
    @throws( ... )
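
    As a hedged sketch of overriding the driver directly, useful when you need the IOBase input (for example its HDFS path) before delegating to the DataFrame-based flow. All import paths other than spark.templates, and the 'path' and notifyMessage accessors, are assumptions about the SDK's layout.

    import com.alpine.plugin.core.spark.templates.InferredSparkDataFrameJob
    import com.alpine.plugin.core.spark.AlpineSparkEnvironment            // assumed package location
    import com.alpine.plugin.core.io.HdfsTabularDataset                   // assumed package location
    import com.alpine.plugin.core.{OperatorListener, OperatorParameters}  // assumed package location
    import org.apache.spark.sql.DataFrame

    class MyDriverAwareJob extends InferredSparkDataFrameJob {

      override def transform(operatorParameters: OperatorParameters,
                             dataFrame: DataFrame,
                             listener: OperatorListener): DataFrame = dataFrame

      // Inspect the HdfsTabularDataset before handing control back to the template.
      override def onExecution(alpineSparkEnvironment: AlpineSparkEnvironment,
                               input: HdfsTabularDataset,
                               operatorParameters: OperatorParameters,
                               listener: OperatorListener): HdfsTabularDataset = {
        listener.notifyMessage("Reading input from: " + input.path) // 'path' is an assumed accessor
        super.onExecution(alpineSparkEnvironment, input, operatorParameters, listener)
      }
    }
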
  16. def saveResults(transformedDataFrame: DataFrame, sparkUtils: SparkRuntimeUtils, storageFormat: HdfsStorageFormatType, compressionType: HdfsCompressionType, outputPath: String, overwrite: Boolean, addendum: Map[String, AnyRef] = Map[String, AnyRef](), tSVAttributes: TSVAttributes = TSVAttributes.defaultCSV): HdfsTabularDataset

    Writes the dataFrame to HDFS as a Parquet, Avro, or delimited tabular dataset.

    transformedDataFrame

    - the DataFrame to be stored to HDFS

    sparkUtils

    - contains utility methods to write data and to convert between Alpine header types and Spark SQL schemas

    storageFormat

    - the storage format: Parquet, Avro, or CSV

    compressionType

    - the HdfsCompressionType to apply to the output

    outputPath

    - the location in HDFS where the DataFrame will be stored

    overwrite

    - if false, a "File Already Exists" exception is thrown when the output path already exists; if true, the existing results are deleted before the new ones are written

    Definition Classes
    SparkDataFrameJob → TemplatedSparkDataFrameJob
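
    For reference, a sketch of what a direct call to saveResults might look like from inside a subclass (normally the template calls it for you). The import paths and the member names HdfsStorageFormatType.Parquet and HdfsCompressionType.NoCompression are assumptions, not confirmed by this page.

    import com.alpine.plugin.core.spark.templates.InferredSparkDataFrameJob
    import com.alpine.plugin.core.io.HdfsTabularDataset                               // assumed package location
    import com.alpine.plugin.core.utils.{HdfsCompressionType, HdfsStorageFormatType}  // assumed package location
    import com.alpine.plugin.core.spark.utils.SparkRuntimeUtils                       // assumed package location
    import com.alpine.plugin.core.{OperatorListener, OperatorParameters}              // assumed package location
    import org.apache.spark.sql.DataFrame

    class MySavingJob extends InferredSparkDataFrameJob {

      override def transform(operatorParameters: OperatorParameters,
                             dataFrame: DataFrame,
                             listener: OperatorListener): DataFrame = dataFrame

      // Helper showing the call shape; the trailing addendum and tSVAttributes
      // arguments are left at their defaults.
      def persist(resultDf: DataFrame, sparkUtils: SparkRuntimeUtils): HdfsTabularDataset =
        saveResults(
          resultDf,
          sparkUtils,
          HdfsStorageFormatType.Parquet,     // or Avro / CSV; member names assumed
          HdfsCompressionType.NoCompression, // assumed member name
          "/tmp/my-operator-output",         // hypothetical output path
          overwrite = true)                  // replace any existing results
    }
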
  17. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  18. def toString(): String

    Definition Classes
    AnyRef → Any
  19. def transform(operatorParameters: OperatorParameters, dataFrame: DataFrame, sparkUtils: SparkRuntimeUtils, listener: OperatorListener): DataFrame

    Define the transformation from the input dataset, expressed as a DataFrame whose schema corresponds to the Alpine column header, to the output dataset, also expressed as a DataFrame. If you use this version, schema inference will not work.

    dataFrame

    - the input data

    sparkUtils

    - a SparkRuntimeUtils object, which includes the Spark context

    listener

    - the operator listener object, which can be used to print messages to the GUI.

    Definition Classes
    InferredSparkDataFrameJob → SparkDataFrameJob
    Annotations
    @throws( ... )
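
    A minimal sketch of the alternative: extending the parent SparkDataFrameJob and overriding this four-argument signature when the SparkRuntimeUtils handle is needed and schema inference is not. It assumes SparkDataFrameJob can be extended directly with only this override; import paths other than spark.templates are assumptions.

    import com.alpine.plugin.core.spark.templates.SparkDataFrameJob
    import com.alpine.plugin.core.{OperatorListener, OperatorParameters}  // assumed package location
    import com.alpine.plugin.core.spark.utils.SparkRuntimeUtils           // assumed package location
    import org.apache.spark.sql.DataFrame

    class MyUtilsAwareJob extends SparkDataFrameJob {

      // This version receives SparkRuntimeUtils (and, through it, the Spark
      // context), but schema inference will not apply.
      override def transform(operatorParameters: OperatorParameters,
                             dataFrame: DataFrame,
                             sparkUtils: SparkRuntimeUtils,
                             listener: OperatorListener): DataFrame =
        dataFrame.dropDuplicates()
    }
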
  20. def transformWithAddendum(operatorParameters: OperatorParameters, dataFrame: DataFrame, sparkUtils: SparkRuntimeUtils, listener: OperatorListener): (DataFrame, Map[String, AnyRef])

    Define the transformation from the input dataset, expressed as a DataFrame whose schema corresponds to the Alpine column header, to the output dataset, also expressed as a DataFrame. In addition, return a map of type String -> AnyRef (Object in Java) that will be added to the output.

    dataFrame

    - the input data

    sparkUtils

    - a SparkRuntimeUtils object, which includes the Spark context

    listener

    - the operator listener object, which can be used to print messages to the GUI.

    returns

    the output DataFrame and a map containing the keys and values to add to the output. (The default implementation returns the input DataFrame with no addendum information.)

    Definition Classes
    SparkDataFrameJob → TemplatedSparkDataFrameJob
    Annotations
    @throws( ... )
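
    A minimal sketch of overriding transformWithAddendum to attach extra key/value information to the output while reusing this template's transform chain. Import paths other than spark.templates, and the "rowCount" key, are assumptions for illustration.

    import com.alpine.plugin.core.spark.templates.InferredSparkDataFrameJob
    import com.alpine.plugin.core.{OperatorListener, OperatorParameters}  // assumed package location
    import com.alpine.plugin.core.spark.utils.SparkRuntimeUtils           // assumed package location
    import org.apache.spark.sql.DataFrame

    class MyAddendumJob extends InferredSparkDataFrameJob {

      override def transform(operatorParameters: OperatorParameters,
                             dataFrame: DataFrame,
                             listener: OperatorListener): DataFrame =
        dataFrame.dropDuplicates()

      // Run the inherited transform chain, then add a String -> AnyRef map that
      // is attached to the operator's output ("rowCount" is a hypothetical key).
      override def transformWithAddendum(operatorParameters: OperatorParameters,
                                         dataFrame: DataFrame,
                                         sparkUtils: SparkRuntimeUtils,
                                         listener: OperatorListener): (DataFrame, Map[String, AnyRef]) = {
        val (result, addendum) = super.transformWithAddendum(operatorParameters, dataFrame, sparkUtils, listener)
        (result, addendum + ("rowCount" -> java.lang.Long.valueOf(result.count())))
      }
    }
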
  21. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  23. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from SparkDataFrameJob

Inherited from TemplatedSparkDataFrameJob[DataFrame, HdfsTabularDataset]

Inherited from AnyRef

Inherited from Any
