com.alpine.plugin.core.spark.templates

TemplatedSparkDataFrameJob

abstract class TemplatedSparkDataFrameJob[ReturnType, OutputType <: IOBase] extends SparkIOTypedPluginJob[HdfsTabularDataset, OutputType]

Templated base class for Spark plugin jobs that operate on DataFrames. Most jobs will want to use SparkDataFrameJob, which both takes and returns Spark DataFrames. This version does not support schema inference.

ReturnType

The return type of the transformation method (most commonly a DataFrame)

OutputType

The return type of the actual operator, extending IOBase. This will most commonly be an HDFS dataset of some flavor (see SparkDataFrame).
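
For illustration, a job whose transformation returns a plain DataFrame and whose operator output is an HDFS dataset would bind the two type parameters as in this skeleton (the class name and import paths are assumptions; the two abstract members are implemented under "Abstract Value Members" below):

    import org.apache.spark.sql.DataFrame
    import com.alpine.plugin.core.io.HdfsTabularDataset

    // ReturnType = DataFrame, OutputType = HdfsTabularDataset. Declared
    // abstract here only because transformWithAddendum and saveResults
    // are shown separately below.
    abstract class MyDataFrameJobSkeleton
      extends TemplatedSparkDataFrameJob[DataFrame, HdfsTabularDataset]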

Linear Supertypes
SparkIOTypedPluginJob[HdfsTabularDataset, OutputType], AnyRef, Any
Known Subclasses
SparkDataFrameJob

Instance Constructors

  1. new TemplatedSparkDataFrameJob()

Abstract Value Members

  1. abstract def saveResults(results: ReturnType, sparkUtils: SparkRuntimeUtils, storageFormat: String, path: String, overwrite: Boolean, sourceOperatorInfo: Option[OperatorInfo], addendum: Map[String, AnyRef] = Map[String, AnyRef]()): OutputType

    Write the results to the target path (a combined sketch implementing this method together with transformWithAddendum follows this member list).

    results

    - the data to write

    storageFormat

    - the desired storage format: Parquet, Avro, or TSV

    path

    - the full HDFS output path

    overwrite

    - whether to overwrite any existing results at that location

    returns

    the saved output, as the operator's OutputType

  2. abstract def transformWithAddendum(operatorParameters: OperatorParameters, dataFrame: DataFrame, sparkUtils: SparkRuntimeUtils, listener: OperatorListener): (ReturnType, Map[String, AnyRef])

    Define the transformation from the input dataset (expressed as a DataFrame whose schema corresponds to the Alpine column header) to the output dataset of type 'ReturnType'. In addition, return a map of type String -> AnyRef (Object in Java) which will be added to the output and used in the GUI node to return additional output or define visualization. The default implementation returns the input DataFrame with no addendum information. If you use this version, schema inference will not work.

    dataFrame

    - the input data

    sparkUtils

    - a sparkUtils object including the Spark context

    listener

    - the operator listener object which can be used to print messages to the GUI.

    returns

    the transformed data, of type ReturnType, and a map containing the keys and values to add to the output
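
As a combined sketch of these two abstract members, the following hypothetical job keeps only the rows where a user-chosen column is non-null and writes the result out. The import paths, the parameter key "columnToCheck", the getStringValue and notifyMessage accessors, and the sparkUtils.saveDataFrame helper are all assumptions to be checked against your SDK version:

    import org.apache.spark.sql.DataFrame
    import com.alpine.plugin.core.{OperatorListener, OperatorParameters}
    import com.alpine.plugin.core.io.{HdfsTabularDataset, OperatorInfo}
    import com.alpine.plugin.core.spark.templates.TemplatedSparkDataFrameJob
    import com.alpine.plugin.core.spark.utils.SparkRuntimeUtils

    class NonNullFilterJob
      extends TemplatedSparkDataFrameJob[DataFrame, HdfsTabularDataset] {

      // Drop rows where the chosen column is null, and report the row counts
      // back to the GUI node through the addendum map.
      override def transformWithAddendum(
          operatorParameters: OperatorParameters,
          dataFrame: DataFrame,
          sparkUtils: SparkRuntimeUtils,
          listener: OperatorListener): (DataFrame, Map[String, AnyRef]) = {
        val column = operatorParameters.getStringValue("columnToCheck") // assumed key and accessor
        listener.notifyMessage("Filtering null values in column: " + column)
        val filtered = dataFrame.filter(dataFrame(column).isNotNull)
        val addendum: Map[String, AnyRef] = Map(
          "inputRows" -> Long.box(dataFrame.count()),
          "outputRows" -> Long.box(filtered.count()))
        (filtered, addendum)
      }

      // Persist the transformed DataFrame and wrap it as the output dataset.
      // saveDataFrame stands in for whatever save helper your version of
      // SparkRuntimeUtils exposes; verify its name and signature.
      override def saveResults(
          results: DataFrame,
          sparkUtils: SparkRuntimeUtils,
          storageFormat: String,
          path: String,
          overwrite: Boolean,
          sourceOperatorInfo: Option[OperatorInfo],
          addendum: Map[String, AnyRef]): HdfsTabularDataset = {
        sparkUtils.saveDataFrame(path, results, storageFormat, overwrite,
          sourceOperatorInfo, addendum) // hypothetical helper call
      }
    }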

Concrete Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  12. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  13. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  14. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. final def notify(): Unit

    Definition Classes
    AnyRef
  16. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  17. def onExecution(sparkContext: SparkContext, appConf: Map[String, String], input: HdfsTabularDataset, operatorParameters: OperatorParameters, listener: OperatorListener): OutputType

    The driver function for the Spark job. Unlike the corresponding function in the parent class, this function allows you to work with IOBase types directly. An override sketch follows this member list.

    sparkContext

    Spark context created when the Spark job was submitted

    appConf

    a map containing system-related parameters (rather than operator parameters), including all Spark parameters and workflow-level variables

    input

    the IOBase object which you have defined as the input to your plugin. For example, if the GUI node of the plugin takes an HdfsTabularDataset, this input parameter will be that dataset.

    listener

    a listener object which allows you to send messages to the Alpine GUI during the Spark job

    returns

    the output of your plugin

    Definition Classes
    TemplatedSparkDataFrameJob → SparkIOTypedPluginJob
  18. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  19. def toString(): String

    Definition Classes
    AnyRef → Any
  20. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  21. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
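
Most jobs can leave onExecution alone: in this template it drives the read-transform-save flow, so overriding is only needed to wrap that flow. A minimal sketch of such an override, continuing the hypothetical NonNullFilterJob above (the path accessor on the input dataset is an assumption):

    import org.apache.spark.SparkContext

    // Inside NonNullFilterJob from the earlier sketch:
    override def onExecution(
        sparkContext: SparkContext,
        appConf: Map[String, String],
        input: HdfsTabularDataset,
        operatorParameters: OperatorParameters,
        listener: OperatorListener): HdfsTabularDataset = {
      // Report the input location, then delegate to the templated flow.
      listener.notifyMessage("Reading input from: " + input.path) // `path` assumed
      super.onExecution(sparkContext, appConf, input, operatorParameters, listener)
    }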

Inherited from SparkIOTypedPluginJob[HdfsTabularDataset, OutputType]

Inherited from AnyRef

Inherited from Any
