com.alpine.plugin.core.spark.templates

SparkDataFrameRuntime

abstract class SparkDataFrameRuntime[JobType <: SparkDataFrameJob] extends SparkRuntimeWithIOTypedJob[JobType, HdfsTabularDataset, HdfsTabularDataset]

A class controlling the runtime behavior of your plugin. To use the default implementation, which launches a Spark job according to the default Spark settings, you do not need to add any code beyond the class definition with the appropriate type parameters.

JobType

your implementation of SparkDataFrameJob
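
For illustration, with the default behavior the entire runtime class is one line. A minimal sketch (MyColumnFilterJob is a hypothetical name for your concrete SparkDataFrameJob, defined elsewhere):

    // Minimal sketch: the default runtime needs no body at all.
    // MyColumnFilterJob is a hypothetical SparkDataFrameJob implementation.
    class MyColumnFilterRuntime extends SparkDataFrameRuntime[MyColumnFilterJob]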

Linear Supertypes
SparkRuntimeWithIOTypedJob[JobType, HdfsTabularDataset, HdfsTabularDataset], OperatorRuntime, AnyRef, Any

Instance Constructors

  1. new SparkDataFrameRuntime()

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def createVisualResults(context: SparkExecutionContext, input: HdfsTabularDataset, output: HdfsTabularDataset, params: OperatorParameters, listener: OperatorListener): VisualModel

    This is called to generate the visual output for the results console. If the developer does not override it, we fall back to OperatorGUINode#onOutputVisualization, which predates this method and is kept for compatibility.

    context

    Execution context of the operator.

    input

    The input to the operator.

    output

    The output from the execution.

    params

    The parameter values to the operator.

    listener

    The listener object to communicate information back to the console.

    returns

    The visual model to display in the results console.

    Definition Classes
    OperatorRuntime
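
    As a hedged illustration, an override might build a simple text summary of the run. TextVisualModel and output.path are assumptions about the SDK's visualization and dataset APIs; substitute whatever your SDK version provides:

      // Sketch only: TextVisualModel and output.path are assumed names.
      override def createVisualResults(
          context: SparkExecutionContext,
          input: HdfsTabularDataset,
          output: HdfsTabularDataset,
          params: OperatorParameters,
          listener: OperatorListener): VisualModel = {
        TextVisualModel("Results written to " + output.path)
      }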
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. def getAutoTuningOptions(parameters: OperatorParameters, input: HdfsTabularDataset): AutoTunerOptions

    Set the options passed to our Spark Auto Tuner, which will choose optimal Spark configuration settings for values not provided by the user, based on the size of the cluster, the input data, and the type of computation. See the documentation for the AutoTunerOptions object for more details on what the settings in this object mean. Override this method to set only the auto-tuning options. To change the parameters passed to the Spark configuration more comprehensively, override 'getSparkJobConfiguration', in which case this method is ignored.

    Definition Classes
    SparkRuntimeWithIOTypedJob
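
    A minimal sketch of overriding this method while keeping the defaults: start from super's options and adjust. The exact fields on AutoTunerOptions are assumptions, so the tweak is shown as a comment:

      // Sketch: reuse the default options; any concrete field names on
      // AutoTunerOptions (e.g. a memory multiplier) are assumptions.
      override def getAutoTuningOptions(
          parameters: OperatorParameters,
          input: HdfsTabularDataset): AutoTunerOptions = {
        val defaults = super.getAutoTuningOptions(parameters, input)
        // e.g. defaults.copy(driverMemoryFraction = 0.5)  // hypothetical field
        defaults
      }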
  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def getSparkJobConfiguration(parameters: OperatorParameters, input: HdfsTabularDataset): SparkJobConfiguration

    The default implementation looks for the parameter values that would be included by com.alpine.plugin.core.utils.SparkParameterUtils.addStandardSparkOptions. If these are not provided, we call out to Alpine's Spark Auto Tuning algorithm, which will determine them. The result of this method is an object that we use to determine the Spark settings. The SparkJobConfiguration object contains three fields:

      1. A map with the advanced parameters. These should be the parameters in the "Advanced Spark Parameters" box; however, if you would like to modify these values or add your own Spark options, you may do so by adding them to this object.
      2. A boolean, autoTuneMissingValues. If set to false, this disables auto tuning; in that case you must fill in the values of "spark.executor.memory", "spark.driver.memory", and "spark.executor.instances" in the userDefinedParameters object.
      3. Options that will be used for the auto tuning. See com.alpine.plugin.core.spark.SparkJobConfiguration for details.

    parameters

    Parameters of the operator.

    input

    The input to the operator.

    returns

    The Spark job configuration that will be used to submit the Spark job.

    Definition Classes
    SparkRuntimeWithIOTypedJob
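
    A hedged sketch of extending rather than replacing the default configuration, assuming (as the description above suggests) that the advanced-parameters map is mutable:

      // Sketch: keep the default configuration, then pin one extra Spark
      // option. Assumes userDefinedParameters is a mutable Map[String, String].
      override def getSparkJobConfiguration(
          parameters: OperatorParameters,
          input: HdfsTabularDataset): SparkJobConfiguration = {
        val config = super.getSparkJobConfiguration(parameters, input)
        config.userDefinedParameters.put("spark.sql.shuffle.partitions", "64")
        config
      }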
  15. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. final def notify(): Unit

    Definition Classes
    AnyRef
  19. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  20. def onExecution(context: SparkExecutionContext, input: HdfsTabularDataset, params: OperatorParameters, listener: OperatorListener): HdfsTabularDataset

    The runtime behavior of the plugin. This method is called when the user clicks 'run' or 'step run' in the GUI. The default implementation:

      1. configures the Spark job as defined by getSparkJobConfiguration,
      2. submits a Spark job with the input data, the parameters, the application context, and the listener,
      3. de-serializes the output returned by the Spark job,
      4. returns the de-serialized output of the Spark job as an IOBase output object.

    context

    A Spark specific execution context, includes Spark parameters.

    input

    The input to the operator.

    params

    The parameter values to the operator.

    listener

    The listener object to communicate information back to the console or the Alpine UI.

    returns

    The output from the execution.

    Definition Classes
    SparkRuntimeWithIOTypedJob → OperatorRuntime
    Annotations
    @throws( ... )
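
    If you only need to hook into this flow, a minimal sketch is to delegate to super and report progress through the listener (notifyMessage is an assumption about the OperatorListener API; check your SDK version):

      // Sketch: wrap the default submission with progress messages.
      override def onExecution(
          context: SparkExecutionContext,
          input: HdfsTabularDataset,
          params: OperatorParameters,
          listener: OperatorListener): HdfsTabularDataset = {
        listener.notifyMessage("Submitting Spark job ...") // assumed API
        val output = super.onExecution(context, input, params, listener)
        listener.notifyMessage("Spark job finished.") // assumed API
        output
      }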
  21. def onStop(context: SparkExecutionContext, listener: OperatorListener): Unit

    This is called when the user clicks 'stop'. If the operator is currently running, this function is called while 'onExecution' is still running, so it is the developer's responsibility to properly stop whatever is going on within 'onExecution'.

    context

    Execution context of the operator.

    listener

    The listener object to communicate information back to the console.

    Definition Classes
    SparkRuntimeWithIOTypedJob → OperatorRuntime
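
    One common pattern, sketched here, is to record the stop request in a flag that any custom work in onExecution can poll, and then defer to the default cleanup:

      // Sketch: remember that a stop was requested, then let the default
      // implementation handle the Spark-side cleanup.
      @volatile private var stopRequested = false

      override def onStop(context: SparkExecutionContext, listener: OperatorListener): Unit = {
        stopRequested = true
        super.onStop(context, listener)
      }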
  22. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  23. def toString(): String

    Definition Classes
    AnyRef → Any
  24. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
