com.alpine.plugin.core.spark

SparkRuntimeWithIOTypedJob

abstract class SparkRuntimeWithIOTypedJob[J <: SparkIOTypedPluginJob[I, O], I <: IOBase, O <: IOBase] extends OperatorRuntime[SparkExecutionContext, I, O]

A descendant of SparkRuntime which handles the most straightforward runtime behavior: it submits a Spark job with your plugin's input and returns your plugin's output type. It takes an implementation of SparkIOTypedPluginJob as a generic parameter, and it is in that class that you define the logic of the Spark job. Use this class if all you want to do is submit a Spark job, since it takes care of submitting the job and serializing/deserializing the outputs.

J

your implementation of the SparkIOTypedPluginJob class, whose type parameters must align with I and O here

I

the IOBase input type of your plugin (must be consistent with the input type of the GUINode class implementation and the plugin signature implementation).

O

the IOBase output type of your plugin.

Note: To do more at runtime than simply submit a Spark job while still using our serialization logic, use this class with SparkIOTypedPluginJob and override the 'onExecution' method (see the SparkIOTypedPluginJob documentation for more).
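For example, a minimal sketch of the wiring (the class names and the choice of HdfsTabularDataset for I and O are illustrative, and the job's onExecution signature is an assumption; check the SparkIOTypedPluginJob documentation for your SDK version):

    import scala.collection.mutable

    import org.apache.spark.SparkContext

    import com.alpine.plugin.core.{OperatorListener, OperatorParameters}
    import com.alpine.plugin.core.io.HdfsTabularDataset
    import com.alpine.plugin.core.spark.{SparkIOTypedPluginJob, SparkRuntimeWithIOTypedJob}

    // Hypothetical runtime class. Nothing to implement: the superclass
    // submits MyOperatorJob with the operator's input and parameters,
    // then deserializes the job's output.
    class MyOperatorRuntime
      extends SparkRuntimeWithIOTypedJob[MyOperatorJob, HdfsTabularDataset, HdfsTabularDataset]

    // Hypothetical job class: the Spark-side logic lives here. The exact
    // onExecution signature below is assumed, not confirmed by this page.
    class MyOperatorJob extends SparkIOTypedPluginJob[HdfsTabularDataset, HdfsTabularDataset] {
      override def onExecution(
          sparkContext: SparkContext,
          appConf: mutable.Map[String, String],
          input: HdfsTabularDataset,
          operatorParameters: OperatorParameters,
          listener: OperatorListener): HdfsTabularDataset = {
        // ... transform the input dataset with Spark and return an output ...
        input
      }
    }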


Instance Constructors

  1. new SparkRuntimeWithIOTypedJob()

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  12. def getSparkJobConfiguration(parameters: OperatorParameters, input: I): SparkJobConfiguration

    The default implementation looks for the parameter values that would be included by com.alpine.plugin.core.utils.SparkParameterUtils.addStandardSparkOptions(). This covers:
    - the number of Spark executors,
    - memory per executor in MB,
    - driver memory in MB,
    - cores per executor.

    If those parameters are not present, it uses the default values (3, 2048, 2048, and 1, respectively).

    Override this method to change the default Spark job configuration, whether to add additional parameters or to change how the standard ones are set (see the sketch in the Examples section after the member list).

    parameters

    Parameters of the operator.

    input

    The input to the operator.

    returns

    The Spark job configuration that will be used to submit the Spark job.

  13. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  14. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  15. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. final def notify(): Unit

    Definition Classes
    AnyRef
  17. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  18. def onExecution(context: SparkExecutionContext, input: I, params: OperatorParameters, listener: OperatorListener): O

    The runtime behavior of the plugin. This method is called when the user clicks 'run' or 'step run' in the GUI. The default implementation:
    - configures the Spark job as defined by getSparkJobConfiguration,
    - submits the Spark job with the input data, the parameters, the application context, and the listener,
    - de-serializes the output returned by the Spark job,
    - notifies the UI when the Spark job has finished, and whether it was successful,
    - returns the de-serialized output of the Spark job as an IOBase output object.

    A sketch of wrapping this default appears in the Examples section after the member list.

    context

    A Spark-specific execution context, which includes the Spark parameters.

    input

    The input to the operator.

    params

    The parameter values to the operator.

    listener

    The listener object to communicate information back to the console or the Alpine UI.

    returns

    The output from the execution.

    Definition Classes
    SparkRuntimeWithIOTypedJob → OperatorRuntime
  19. def onStop(context: SparkExecutionContext, listener: OperatorListener): Unit

    This is called when the user clicks on 'stop'. If the operator is currently running, this function is called while 'onExecution' is still running, so it is the developer's responsibility to properly stop whatever is going on within 'onExecution' (see the sketch in the Examples section after the member list).

    context

    Execution context of the operator.

    listener

    The listener object to communicate information back to the console.

    Definition Classes
    SparkRuntimeWithIOTypedJob → OperatorRuntime
  20. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  21. def toString(): String

    Definition Classes
    AnyRef → Any
  22. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  23. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
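
Examples

As referenced above, a minimal sketch of overriding getSparkJobConfiguration on the hypothetical MyOperatorRuntime: the default configuration is computed first and can be adjusted before it is returned (the fields of SparkJobConfiguration vary across SDK versions, so none are accessed literally here).

    override def getSparkJobConfiguration(
        parameters: OperatorParameters,
        input: HdfsTabularDataset): SparkJobConfiguration = {
      // The default resolves the standard Spark options (number of executors,
      // executor memory, driver memory, cores per executor) from the operator
      // parameters, falling back to 3, 2048, 2048 and 1 when they are absent.
      val config = super.getSparkJobConfiguration(parameters, input)
      // Adjust or rebuild the configuration here if the defaults do not fit,
      // e.g. scale executor memory with the size of 'input'.
      config
    }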
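Overriding onExecution lets you add work around the default configure/submit/deserialize sequence while keeping the serialization logic. A minimal sketch, again on the hypothetical runtime above, and assuming OperatorListener exposes a notifyMessage-style method:

    override def onExecution(
        context: SparkExecutionContext,
        input: HdfsTabularDataset,
        params: OperatorParameters,
        listener: OperatorListener): HdfsTabularDataset = {
      // Pre-processing (validation, logging, etc.) could go here.
      listener.notifyMessage("Submitting the Spark job ...")
      // The superclass configures the job, submits it, waits for completion,
      // and deserializes the job's output.
      val output = super.onExecution(context, input, params, listener)
      // Post-processing of the deserialized output could go here.
      output
    }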
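Because onStop can run while onExecution is still executing, a common pattern is cooperative cancellation: onStop records the request, and any loop or polling inside onExecution checks the flag. A minimal sketch, assuming the same notifyMessage-style listener method (how to cancel an already-submitted Spark job is SDK-specific and not shown):

    // Volatile so the write in onStop is visible to the thread
    // running onExecution.
    @volatile private var stopRequested = false

    override def onStop(
        context: SparkExecutionContext,
        listener: OperatorListener): Unit = {
      stopRequested = true
      listener.notifyMessage("Stop requested; attempting to halt the job.")
    }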

Inherited from OperatorRuntime[SparkExecutionContext, I, O]

Inherited from AnyRef

Inherited from Any
