The driver function for the Spark job. Unlike the corresponding function in the parent class, this function allows you to work with IOBase types directly.
Spark context created when the Spark job was submitted
a map containing system-related parameters (rather than operator parameters), including all Spark parameters and workflow-level variables
the IOBase object which you have defined as the input to your plugin. For example, if the GUI node of the plugin takes an HDFSTabularDataset, this input parameter will be that dataset.
a listener object which allows you to send messages to the Alpine GUI during the Spark job
the output of your plugin
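To make the shape of this driver function concrete, here is a minimal sketch of an override. The trait name SparkIOTypedPluginJob, the method name onExecution, and the OperatorListener type and its notifyMessage method are illustrative assumptions based on the parameter descriptions above, not confirmed SDK signatures; HDFSTabularDataset is the input type mentioned in the description.

```scala
import org.apache.spark.SparkContext
import scala.collection.mutable

// Sketch only: SparkIOTypedPluginJob, onExecution, OperatorListener, and
// notifyMessage are illustrative assumptions, not confirmed SDK signatures.
class MyPluginJob extends SparkIOTypedPluginJob[HDFSTabularDataset, HDFSTabularDataset] {
  override def onExecution(
      sparkContext: SparkContext,           // created when the Spark job was submitted
      appConf: mutable.Map[String, String], // system parameters: Spark settings, workflow variables
      input: HDFSTabularDataset,            // the IOBase input defined by the GUI node
      listener: OperatorListener            // sends messages to the Alpine GUI
  ): HDFSTabularDataset = {
    listener.notifyMessage("Starting custom Spark job") // hypothetical listener method
    // ... work with the IOBase input directly here ...
    input // this sketch passes the dataset through unchanged
  }
}
```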
Writes the dataFrame to HDFS as either a Parquet dataset, Avro dataset, or tabular delimited dataset.
The data frame that is to be stored to HDFS.
- The storage format: one of Parquet, Avro, or TSV (tab-delimited text).
The location in HDFS to store the data frame.
- If false, throws a "File Already Exists" exception if the output path already exists. If true, deletes the existing results before writing the new ones.
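For reference, the overwrite semantics described above map onto the standard Spark DataFrame writer as in the sketch below (plain Spark API, shown for the Parquet case; this illustrates the behavior and is not the SDK method itself):

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// Plain-Spark illustration of the overwrite flag described above:
// true  -> SaveMode.Overwrite     (replace any existing results)
// false -> SaveMode.ErrorIfExists (throw if the output path already exists)
def saveAsParquet(dataFrame: DataFrame, path: String, overwrite: Boolean): Unit = {
  val mode = if (overwrite) SaveMode.Overwrite else SaveMode.ErrorIfExists
  dataFrame.write.mode(mode).parquet(path)
}
```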
Define the transformation from the input dataset, expressed as a DataFrame whose schema corresponds to the Alpine column header, to the output dataset, also expressed as a DataFrame.
Override this method to define a DataFrame transformation if you do not want to save any additional output (the default is to output the data frame and show a preview of it as a visualization). To define an addendum that creates additional output, use the 'TransformWithAddendum' method.
- the input data
- a sparkUtils object including the Spark context
- the operator listener object which can be used to print messages to the GUI.
your transformed DataFrame (Default implementation returns the input DataFrame)
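A minimal sketch of a transform override that mirrors the parameter list above. The class name SparkDataFrameJob matches the job base described at the end of this section; the parameter types SparkRuntimeUtils and OperatorListener and the method notifyMessage are illustrative assumptions rather than confirmed signatures.

```scala
import org.apache.spark.sql.DataFrame

// Sketch only: SparkRuntimeUtils, OperatorListener, and notifyMessage
// are illustrative assumptions, not confirmed SDK signatures.
class DropNullRowsJob extends SparkDataFrameJob {
  override def transform(
      dataFrame: DataFrame,          // the input data
      sparkUtils: SparkRuntimeUtils, // includes the Spark context
      listener: OperatorListener     // prints messages to the GUI
  ): DataFrame = {
    listener.notifyMessage("Dropping rows that contain null values")
    dataFrame.na.drop() // standard DataFrame API
  }
}
```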
Define the transformation from the input dataset, expressed as a DataFrame whose schema corresponds to the Alpine column header, to the output dataset, also expressed as a DataFrame. In addition, return a map of type String -> AnyRef (Object in Java) whose entries will be added to the output.
- the input data
- a sparkUtils object including the Spark context
- the operator listener object which can be used to print messages to the GUI.
the output DataFrame and a map containing the keys and values to add to the output (the default implementation returns the input DataFrame with no addendum information)
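A corresponding sketch for the addendum variant, returning both the transformed DataFrame and a String -> AnyRef map; the same naming assumptions as the previous example apply.

```scala
import org.apache.spark.sql.DataFrame

// Sketch only: the method and parameter names follow the same
// illustrative assumptions as the transform example above.
class DropNullRowsWithCountJob extends SparkDataFrameJob {
  override def transformWithAddendum(
      dataFrame: DataFrame,
      sparkUtils: SparkRuntimeUtils,
      listener: OperatorListener
  ): (DataFrame, Map[String, AnyRef]) = {
    val cleaned = dataFrame.na.drop()
    // Box the count so it satisfies AnyRef (scala.Long is an AnyVal).
    val rowCount: java.lang.Long = cleaned.count()
    (cleaned, Map("rowCount" -> rowCount))
  }
}
```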
Job base for Spark plugin jobs that take and return DataFrames. Note: this WILL NOT work with Hive.