Write the results to the target path
- the data to write
- the storage format: one of Parquet, Avro, or TSV
- the full HDFS output path
- Boolean indicating whether to overwrite existing results at that location
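The write described by the parameters above can be sketched with plain Spark APIs. The method name `saveResults` and the exact format strings are assumptions for illustration, not the SDK's actual signature:

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// Hypothetical sketch of writing results to the target path in one of the
// supported formats. Avro output additionally requires the spark-avro package.
def saveResults(data: DataFrame,
                storageFormat: String, // "Parquet", "Avro", or "TSV"
                path: String,
                overwrite: Boolean): Unit = {
  val mode = if (overwrite) SaveMode.Overwrite else SaveMode.ErrorIfExists
  storageFormat match {
    case "Parquet" => data.write.mode(mode).parquet(path)
    case "Avro"    => data.write.mode(mode).format("avro").save(path)
    case "TSV"     => data.write.mode(mode).option("delimiter", "\t").csv(path)
  }
}
```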
Define the transformation from the input dataset (expressed as a DataFrame whose schema corresponds to the Alpine column header) to the output dataset, of type 'ReturnType'.
Define the transformation from the input dataset (expressed as a DataFrame whose schema corresponds to the Alpine column header) to the output dataset, of type 'ReturnType'. In addition, return a map of type String -> AnyRef (Object in Java), which will be added to the output and used in the GUI node to provide additional output or define visualization. The default implementation returns the input DataFrame with no addendum information. If you use this version, schema inference will not work.
- the input data
- a sparkUtils object, which includes the Spark context
- the operator listener object which can be used to print messages to the GUI.
the output DataFrame and a map containing the keys and values to add to the output
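A minimal override of the addendum-returning transformation might look like the following. The parameter list is assumed from the parameter descriptions above, and the "amount" column, the message text, and `notifyMessage` are illustrative, not verified SDK details:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Sketch only: signature assumed from the docs above.
override def transformWithAddendum(
    params: OperatorParameters,
    input: DataFrame,
    sparkUtils: SparkRuntimeUtils,
    listener: OperatorListener): (DataFrame, Map[String, AnyRef]) = {
  listener.notifyMessage("Filtering non-positive rows")
  // Hypothetical "amount" column; substitute a column from your schema.
  val filtered = input.filter(col("amount") > 0)
  // The addendum map is surfaced by the GUI node as additional output.
  (filtered, Map("rowsKept" -> Long.box(filtered.count())))
}
```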
The driver function for the Spark job.
The driver function for the Spark job. Unlike the corresponding function in the parent class, this function allows you to work with IOBase types directly.
- the Spark context created when the Spark job was submitted
- a map containing system-related parameters (rather than operator parameters), including all Spark parameters and workflow-level variables
- the IOBase object which you have defined as the input to your plugin. For example, if the GUI node of the plugin takes an HDFSTabularDataset, this input parameter will be that dataset.
- the parameter values set in the GUI node. Their values can be accessed via the "key" defined for each parameter added to the OperatorDialog in the GUI node.
- a listener object which allows you to send messages to the Alpine GUI during the Spark job
the output of your plugin
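Put together, an `onExecution` override that works with IOBase types directly might look like this sketch. The `SparkIOTypedPluginJob` base, the `HdfsTabularDataset` type, and the method names are assumptions drawn from the descriptions above:

```scala
import scala.collection.mutable
import org.apache.spark.SparkContext

// Sketch: signatures assumed from the parameter descriptions above.
class MyPluginJob
    extends SparkIOTypedPluginJob[HdfsTabularDataset, HdfsTabularDataset] {
  override def onExecution(
      sparkContext: SparkContext,
      appConf: mutable.Map[String, String],
      input: HdfsTabularDataset,
      params: OperatorParameters,
      listener: OperatorListener): HdfsTabularDataset = {
    listener.notifyMessage(s"Reading input from ${input.path}")
    // ... read the dataset, run the Spark transformation, write the output ...
    input // placeholder: return the output IOBase object
  }
}
```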
Templated base class for Spark plugin jobs that operate on DataFrames. Most jobs will want to use SparkDataFrameJob, which takes and returns Spark DataFrames. This version does not support schema inference.
The return type of the transformation method (most commonly a DataFrame)
The return type of the actual operator, extending IOBase. This will most commonly be an HDFS dataset of some flavor (see SparkDataFrame)
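As an example of using the templated base, a schema-preserving job might extend SparkDataFrameJob as below. Treat the exact signature as an assumption based on the parameter descriptions above:

```scala
import org.apache.spark.sql.DataFrame

// Sketch: a job that drops rows containing nulls, leaving the schema
// unchanged so the Alpine column header still matches the output.
class DropNullsJob extends SparkDataFrameJob {
  override def transform(
      params: OperatorParameters,
      input: DataFrame,
      sparkUtils: SparkRuntimeUtils,
      listener: OperatorListener): DataFrame = {
    listener.notifyMessage("Dropping rows with null values")
    input.na.drop()
  }
}
```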