Append contents to the given HDFS path.
Param: the HDFS path that we want to append to.
Returns: an OutputStream corresponding to the path.
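A minimal sketch of how an append helper like this is typically used, assuming it delegates to Hadoop's `org.apache.hadoop.fs.FileSystem` API (the wrapper's actual method name and signature are not shown in this section, and the path is illustrative):

```scala
import java.io.OutputStream
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical: obtain a FileSystem handle and append to an existing file.
// Note that append requires the file to already exist on HDFS.
val fs: FileSystem = FileSystem.get(new Configuration())
val out: OutputStream = fs.append(new Path("/tmp/alpine/log.txt"))
try out.write("more data\n".getBytes("UTF-8"))
finally out.close()
```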
Create an HDFS path for writing.
Param: the HDFS path that we want to create and write to.
Param: whether to overwrite the given path if it already exists.
Returns: an OutputStream corresponding to the path.
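Assuming the create-for-writing operation maps onto Hadoop's `FileSystem.create(Path, overwrite)`, usage might look like the following sketch (file name is illustrative):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())

// Hypothetical: create the file, replacing any existing file at the path.
val overwrite = true
val out = fs.create(new Path("/tmp/alpine/output.txt"), overwrite)
try out.write("hello\n".getBytes("UTF-8"))
finally out.close()
```

With `overwrite = false`, `create` fails if the path already exists, which is the safer default for jobs that must not clobber prior output.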
Delete the given HDFS path.
Param: the HDFS path that we want to delete.
Param: if the path is a directory, whether to delete it recursively.
Returns: true if the deletion succeeds, false otherwise.
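A sketch of the delete operation, assuming it wraps Hadoop's `FileSystem.delete(Path, recursive)` (path is illustrative):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())

// Hypothetical: remove a scratch directory tree. The recursive flag must
// be true when the path is a non-empty directory, or the call fails.
val deleted: Boolean = fs.delete(new Path("/tmp/alpine/scratch"), true)
```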
Determine whether the given path exists in HDFS.
Param: the path that we want to check.
Returns: true if it exists, false otherwise.
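Assuming the existence check wraps Hadoop's `FileSystem.exists(Path)`, a common pattern is guarding a write with it (path is illustrative):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())

// Hypothetical: only proceed when the output path is not already taken.
val target = new Path("/tmp/alpine/output.txt")
if (!fs.exists(target)) {
  // safe to create and write `target` here
}
```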
Create the given directory path.
Param: the directory path that we want to create.
Returns: true if it succeeds, false otherwise.
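A sketch assuming this maps onto Hadoop's `FileSystem.mkdirs(Path)`, which, like `mkdir -p`, creates all missing parent directories (path is illustrative):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())

// Hypothetical: create the full directory chain in one call.
val created: Boolean = fs.mkdirs(new Path("/tmp/alpine/jobs/run-1"))
```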
Open an HDFS path for reading.
Param: the HDFS path that we want to read from.
Returns: an InputStream corresponding to the path.
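Assuming the read operation wraps Hadoop's `FileSystem.open(Path)`, the returned stream can be consumed like any `java.io.InputStream` (path is illustrative):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())

// Hypothetical: read the whole file as UTF-8 text, closing the stream after.
val in = fs.open(new Path("/tmp/alpine/output.txt"))
val contents =
  try scala.io.Source.fromInputStream(in, "UTF-8").mkString
  finally in.close()
```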
Submit an IO-typed job to Spark. IO-typed Spark jobs automatically serialize and deserialize their inputs and outputs. TODO: not yet supported.
Type param: the input type.
Type param: the output type.
Type param: the job type.
Param: the IO-typed job class.
Param: input to the job; this is serialized automatically.
Param: parameters into the job.
Param: the Spark job configuration.
Param: a listener to pass to the job, so that the Spark job can communicate directly with Alpine as it runs.
Returns: a submitted job object.
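Since this operation is marked as not yet supported, only its parameter list is documented. A plausible signature reconstructed from that list might look like the sketch below; every type and parameter name here is an assumption, not the SDK's actual API:

```scala
// Hypothetical reconstruction: IOTypedSparkJob, SparkJobConfiguration,
// SparkJobListener, and SubmittedSparkJob are assumed names standing in
// for whatever types the SDK eventually exposes.
def submitIOTypedJob[I, O, J <: IOTypedSparkJob[I, O]](
    jobClass: Class[J],          // IO-typed job class
    input: I,                    // input to the job, serialized automatically
    params: Map[String, String], // parameters into the job
    conf: SparkJobConfiguration, // Spark job configuration
    listener: SparkJobListener   // lets the running job talk back to Alpine
): SubmittedSparkJob[O] = ???    // returns a handle to the submitted job
```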
:: AlpineSdkApi ::