Class com.alpine.plugin.test.mock.SparkExecutionContextMock

class SparkExecutionContextMock extends SparkExecutionContext

This is a mock version of SparkExecutionContext, for use in tests. It defines visualModelHelper as an HDFSVisualModelHelperMock, and chorusAPICaller as the ChorusAPICaller supplied to the constructor.

It can be extended for different behaviour, e.g. mocking the file system, as in the sketch below.
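A minimal sketch of such an extension, backing the file-system methods with in-memory byte arrays. The in-memory behaviour is an assumption for illustration, not the mock's default implementation, and the ChorusAPICaller import path may differ in your SDK version:

    import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream, OutputStream}
    import scala.collection.mutable
    import com.alpine.plugin.core.ChorusAPICaller // import path assumed; adjust to your SDK version
    import com.alpine.plugin.test.mock.SparkExecutionContextMock

    class InMemoryFsExecutionContextMock(chorus: ChorusAPICaller)
        extends SparkExecutionContextMock(chorus) {

      // Fake HDFS: path -> file contents.
      private val files = mutable.Map.empty[String, Array[Byte]]

      override def exists(path: String): Boolean = files.contains(path)

      override def createPath(path: String, overwrite: Boolean): OutputStream = {
        require(overwrite || !files.contains(path), s"$path already exists")
        new ByteArrayOutputStream() {
          // Persist the bytes into the fake file system on close.
          override def close(): Unit = { super.close(); files(path) = toByteArray }
        }
      }

      override def appendPath(path: String): OutputStream = {
        val existing = files.getOrElse(path, Array.emptyByteArray)
        new ByteArrayOutputStream() {
          override def close(): Unit = { super.close(); files(path) = existing ++ toByteArray }
        }
      }

      override def openPath(path: String): InputStream =
        new ByteArrayInputStream(files(path))

      override def deletePath(path: String, recursive: Boolean): Boolean =
        files.remove(path).isDefined

      override def mkdir(path: String): Boolean = true // directories are implicit in this fake
    }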

Linear Supertypes

SparkExecutionContext, ExecutionContext, AnyRef, Any

Instance Constructors

  1. new SparkExecutionContextMock(chorusAPICallerMock: ChorusAPICaller)

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def appendPath(path: String): OutputStream

    Append contents to the given HDFS path.

    path

    The HDFS path that we want to append to.

    returns

    OutputStream corresponding to the path.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def chorusAPICaller: ChorusAPICaller

  7. def chorusUserInfo: ChorusUserInfo

  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. def config: CustomOperatorConfig
  10. def createPath(path: String, overwrite: Boolean): OutputStream

    Create an HDFS path for writing; see the example below.

    path

    The HDFS path that we want to create and write to.

    overwrite

    Whether to overwrite the given path if it exists.

    returns

    OutputStream corresponding to the path.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
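    Example (a minimal sketch, assuming the in-memory extension shown after the class description; ChorusAPICallerMock and its no-arg constructor are assumptions that may differ in your SDK version):

        val ctx = new InMemoryFsExecutionContextMock(new ChorusAPICallerMock())
        val out = ctx.createPath("/tmp/test/output.csv", overwrite = true)
        out.write("a,b,c\n".getBytes("UTF-8"))
        out.close()
        assert(ctx.exists("/tmp/test/output.csv"))
        assert(scala.io.Source.fromInputStream(ctx.openPath("/tmp/test/output.csv")).mkString == "a,b,c\n")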
  11. def deletePath(path: String, recursive: Boolean): Boolean

    Delete the given HDFS path.

    path

    The HDFS path that we want to delete.

    recursive

    If the path is a directory, whether to delete it recursively.

    returns

    true if successful, false otherwise.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  12. def doHdfsAction[T](fs: (FileSystem) ⇒ T): T
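    Judging from the signature, this runs a caller-supplied function against a Hadoop FileSystem handle. A hedged sketch of the call shape, assuming a working FileSystem is available (e.g. on a real SparkExecutionContext, or via an extension of the mock that overrides this method):

        import org.apache.hadoop.fs.{FileSystem, Path}

        val fileCount: Int = ctx.doHdfsAction { fs: FileSystem =>
          // Count the entries under a (hypothetical) test directory.
          fs.listStatus(new Path("/tmp/test")).length
        }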
  13. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  15. def exists(path: String): Boolean

    Determine whether the given path exists in HDFS.

    path

    The path that we want to check.

    returns

    true if it exists, false otherwise.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  18. def getSparkAutoTunedParameters[I <: IOBase, JOB <: SparkIOTypedPluginJob[I, _]](jobClass: Class[JOB], input: I, params: OperatorParameters, sparkConf: SparkJobConfiguration, listener: OperatorListener): Nothing

    Returns the map of Spark parameters after the auto-tuning algorithm is applied. This is the final set of Spark properties, ready to be passed with the Spark job submission (except spark.job.name). It leverages:
    - user-defined Spark properties at the operator level (Spark Advanced Settings box), workflow level, and data source level (in this order of precedence)
    - AutoTunerOptions set in the SparkJobConfiguration (sparkConf)
    - auto-tuned parameters that were not user-specified

    Note that the mock declares the return type as Nothing, so calling it will throw rather than return a parameter map.

    I

    Input type.

    jobClass

    IO typed job class.

    input

    Input to the job. This automatically gets serialized.

    params

    Parameters into the job.

    sparkConf

    Spark job configuration.

    listener

    Listener to pass to the job. The Spark job should be able to communicate directly with Alpine as it's running.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  19. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  20. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  21. def mkdir(path: String): Boolean

    Create the directory path.

    path

    The directory path that we want to create.

    returns

    true if it succeeds, false otherwise.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  22. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  23. final def notify(): Unit

    Definition Classes
    AnyRef
  24. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  25. def openPath(path: String): InputStream

    Open an HDFS path for reading.

    path

    The HDFS path that we want to read from.

    returns

    InputStream corresponding to the path.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  26. def recommendedTempDir: File
  27. def submitJob[I <: IOBase, O <: IOBase, JOB <: SparkIOTypedPluginJob[I, O]](jobClass: Class[JOB], input: I, params: OperatorParameters, sparkConf: SparkJobConfiguration, listener: OperatorListener): SubmittedSparkJob[O]

    Submits the IO typed job to Spark. IO typed Spark jobs automatically serialize/deserialize their inputs and outputs. TODO: Not supported as of yet.

    I

    Input type.

    O

    Output type.

    JOB

    The job type.

    jobClass

    IO typed job class.

    input

    Input to the job. This automatically gets serialized.

    params

    Parameters into the job.

    sparkConf

    Spark job configuration.

    listener

    Listener to pass to the job. The Spark job should be able to communicate directly with Alpine as it's running.

    returns

    A submitted job object.

    Definition Classes
    SparkExecutionContextMock → SparkExecutionContext
  28. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  29. def toString(): String

    Definition Classes
    AnyRef → Any
  30. def visualModelHelper: HDFSVisualModelHelper

  31. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  34. def workflowInfo: WorkflowInfo
