The default implementation looks for the parameter values that would be included by com.alpine.plugin.core.utils.SparkParameterUtils.addStandardSparkOptions.
The default implementation looks for the parameter values that would be included by com.alpine.plugin.core.utils.SparkParameterUtils.addStandardSparkOptions. This covers:
 - Number of Spark executors
 - Memory per executor in MB
 - Driver memory in MB
 - Cores per executor
If those parameters are not present, the default values (3, 2048, 2048, 1) are used, respectively.
Override this method to change the default Spark job configuration (to add additional parameters or change how the standard ones are set).
Parameters of the operator.
The input to the operator.
The Spark job configuration that will be used to submit the Spark job.
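As noted above, this hook can be overridden to change the configuration. Below is a minimal sketch placed inside your runtime class; the parameter names and the HdfsTabularDataset input type are assumptions for illustration, not the SDK's exact signature.

    // A sketch only: start from the default configuration and adjust it before the
    // Spark job is submitted. Parameter names and the HdfsTabularDataset input type
    // are assumptions for illustration.
    override def getSparkJobConfiguration(parameters: OperatorParameters,
                                          input: HdfsTabularDataset): SparkJobConfiguration = {
      // Defaults come from the standard Spark options, or (3, 2048, 2048, 1) if absent.
      val configuration = super.getSparkJobConfiguration(parameters, input)
      // Adjust or extend 'configuration' here (e.g. based on properties of 'input').
      configuration
    }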
The runtime behavior of the plugin.
The runtime behavior of the plugin. This method is called when the user clicks 'run' or 'step run' in the GUI. The default implementation:
 - configures the Spark job as defined by getSparkJobConfiguration
 - submits a Spark job with the input dataType, the parameters, the application context, and the listener
 - de-serializes the output returned by the Spark job
 - notifies the UI when the Spark job has finished and whether it was successful
 - returns the de-serialized output of the Spark job as an IOBase output object.
A Spark-specific execution context, which includes Spark parameters.
The input to the operator.
The parameter values to the operator.
The listener object to communicate information back to the console or the Alpine UI.
The output from the execution.
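For reference, a minimal sketch of overriding this method while keeping the default flow described above is shown below; the types, parameter names, and the listener's notifyMessage method are assumptions for illustration.

    // A sketch only: wrap the default flow (configure, submit, de-serialize, notify)
    // with extra progress messages. Types, parameter names, and 'notifyMessage' are
    // assumptions for illustration.
    override def onExecution(context: SparkExecutionContext,
                             input: HdfsTabularDataset,
                             params: OperatorParameters,
                             listener: OperatorListener): HdfsTabularDataset = {
      listener.notifyMessage("Submitting the Spark job ...")
      val output = super.onExecution(context, input, params, listener)
      listener.notifyMessage("Spark job finished.")
      output
    }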
This is called when the user clicks on 'stop'.
This is called when the user clicks on 'stop'. If the operator is currently running, this function gets called while 'onExecution' is still running. So it is the developer's responsibility to properly stop whatever is going on within 'onExecution'.
Execution context of the operator.
The listener object to communicate information back to the console.
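A minimal sketch of a cooperative stop is shown below; the flag-based approach and the parameter names are assumptions, and 'onExecution' would have to check the flag and halt its own work.

    // A sketch only: record that a stop was requested so that the code running in
    // 'onExecution' can notice the flag and shut down cleanly.
    @volatile private var stopRequested = false

    override def onStop(context: SparkExecutionContext,
                        listener: OperatorListener): Unit = {
      stopRequested = true
      listener.notifyMessage("Stop requested; waiting for the running job to terminate.")
    }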
A class controlling the runtime behavior of your plugin. To use the default implementation, which launches a Spark job according to the default Spark settings, you do not need to add any code beyond the class definition with the appropriate type parameters.
Your implementation of SparkDataFrameJob.
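A minimal sketch of such a class definition is shown below; the operator and job class names are hypothetical, and the base class shown (a DataFrame runtime parameterized by your SparkDataFrameJob implementation) is an assumption based on this description.

    // A sketch only: no body is needed when the default Spark submission behavior is used.
    // 'MyDataFrameJob' is a hypothetical implementation of SparkDataFrameJob.
    class MyOperatorRuntime extends SparkDataFrameRuntime[MyDataFrameJob] {
      // Inherits the default behavior: configure, submit, and de-serialize the Spark job.
    }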