com.alpine.plugin.core.spark.templates

SparkDataFrameGUINode

abstract class SparkDataFrameGUINode[Job <: SparkDataFrameJob] extends TemplatedSparkDataFrameGUINode[HdfsTabularDataset]

Controls the GUI of your Spark job. Through this class you can specify a visualization for the output of your job and define the parameters the user will need to set.
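This class is typically subclassed alongside a companion SparkDataFrameJob. As a minimal sketch (the WordFilter names are hypothetical, not part of the SDK):

```scala
import com.alpine.plugin.core.spark.templates.{SparkDataFrameGUINode, SparkDataFrameJob}

// Hypothetical runtime job; the GUI node is parameterized by it so
// Alpine can pair the designer-side UI with the Spark-side logic.
class WordFilterJob extends SparkDataFrameJob

// A minimal GUI node: inherits the template's default parameters
// (output format and location) and passes the input schema through.
class WordFilterGUINode extends SparkDataFrameGUINode[WordFilterJob]
```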

Linear Supertypes
TemplatedSparkDataFrameGUINode[HdfsTabularDataset], OperatorGUINode, AnyRef, Any

Instance Constructors

  1. new SparkDataFrameGUINode()

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def defineEntireOutputSchema(inputSchema: TabularSchema, params: OperatorParameters): TabularSchema

Override this method to define the output schema in some way other than by returning a fixed sequence of column definitions.
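A hedged sketch of when this override is needed: here the number of output columns depends on a user parameter, so a fixed column list will not do. The "numBuckets" parameter id, getIntValue, and the TabularSchema/ColumnDef constructors are assumptions and may differ between SDK versions.

```scala
override def defineEntireOutputSchema(
    inputSchema: TabularSchema,
    params: OperatorParameters): TabularSchema = {
  // "numBuckets" is a hypothetical parameter id set in onPlacement.
  val n = params.getIntValue("numBuckets")
  // Build one Double column per bucket.
  val cols = (0 until n).map(i => ColumnDef("bucket_" + i, ColumnType.Double))
  TabularSchema(cols) // assumes TabularSchema.apply(Seq[ColumnDef])
}
```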

  9. def defineOutputSchemaColumns(inputSchema: TabularSchema, params: OperatorParameters): Seq[ColumnDef]

    Override this method to define the output schema by assigning fixed column definitions.

Override this method to define the output schema by assigning fixed column definitions. If you want a variable number of output columns, override the defineEntireOutputSchema method instead. The default implementation of this method returns the same columns as the input data.

    inputSchema

The Alpine 'TabularSchema' for the input DataFrame.

    params

    The parameters of the operator, including values set by the user.

    returns

    A list of Column definitions used to create the output schema
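As an illustration, appending one new column to the input schema might look like the following sketch. The getDefinedColumns accessor and ColumnType.Double follow my reading of the SDK's io package, and the "score" column is hypothetical.

```scala
override def defineOutputSchemaColumns(
    inputSchema: TabularSchema,
    params: OperatorParameters): Seq[ColumnDef] = {
  // Keep every input column and append a new fixed "score" column.
  inputSchema.getDefinedColumns ++ Seq(ColumnDef("score", ColumnType.Double))
}
```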

  10. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  11. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  12. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def getOperatorStatus(context: OperatorDesignContext): OperatorStatus

Since Alpine 6.3, SDK 1.9.

    Since Alpine 6.3, SDK 1.9.

    This is called to get the current status of the operator, i.e. whether it is valid, information about the expected runtime output, and error messages to display in the properties window.

    This is intended to replace onInputOrParameterChange, as we want to be able to pass more general metadata between operators instead of only TabularSchema.

    The default implementation calls onInputOrParameterChange, to maintain compatibility with old operators.

    context

    contains information about the input operators, the current parameters, and the available data-sources.

    returns

    the current status of the operator.

    Definition Classes
    OperatorGUINode
  15. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. final def notify(): Unit

    Definition Classes
    AnyRef
  19. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  20. def onInputOrParameterChange(inputSchemas: Map[String, TabularSchema], params: OperatorParameters, operatorSchemaManager: OperatorSchemaManager): OperatorStatus

Calls 'updateOutputSchema' when the parameters are changed.

Calls 'updateOutputSchema' when the parameters are changed.

    inputSchemas

If the connected inputs contain tabular schemas, they can be accessed here, each under a unique ID.

    params

    The current parameter values of the operator.

    operatorSchemaManager

    This should be used to change the input/output schema, etc.

    returns

    A status object about whether the inputs and/or parameters are valid. The default implementation assumes that the connected inputs and/or parameters are valid.

    Definition Classes
SparkDataFrameGUINode → OperatorGUINode
  21. def onOutputVisualization(params: OperatorParameters, output: HdfsTabularDataset, visualFactory: VisualModelFactory): VisualModel

    This is kept only for old operators.

This is kept only for old operators. New ones should implement OperatorRuntime#createVisualResults, which has more information available. If neither is implemented, Alpine will generate a default visualization.

This is invoked by the GUI to customize the operator output visualization after the operator finishes running. Each output has an associated default visualization, but the developer can customize it here.

    params

    The parameter values to the operator.

    output

    This is the output from running the operator.

    visualFactory

    For creating visual models.

    returns

    The visual model to be sent to the GUI for visualization.

    Definition Classes
    OperatorGUINode
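For example, a custom visualization could simply render the output dataset. In this sketch, createTabularDatasetVisualization is an assumption about the SDK's VisualModelFactory and may differ by version.

```scala
override def onOutputVisualization(
    params: OperatorParameters,
    output: HdfsTabularDataset,
    visualFactory: VisualModelFactory): VisualModel = {
  // Render the output dataset in the results console (assumed factory method).
  visualFactory.createTabularDatasetVisualization(output)
}
```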
  22. def onPlacement(operatorDialog: OperatorDialog, operatorDataSourceManager: OperatorDataSourceManager, operatorSchemaManager: OperatorSchemaManager): Unit

    Defines the params the user will be able to select.

Defines the params the user will be able to select. The default implementation asks for the desired output format and output location.

    operatorDialog

    The operator dialog where the operator could add input text boxes, etc. to define UI for parameter inputs.

    operatorDataSourceManager

This contains the available data-sources (filtered for Hadoop or database, depending on the operator runtime class) that could be used by the operator at runtime.

    operatorSchemaManager

    This can be used to provide information about the nature of the output/input schemas. E.g., provide the output schema.

    Definition Classes
SparkDataFrameGUINode → OperatorGUINode
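A typical override keeps the template's defaults and adds custom parameters. A sketch, where addIntegerBox and its (id, label, min, max, default) signature are assumptions about OperatorDialog and "numBuckets" is a hypothetical parameter id:

```scala
override def onPlacement(
    operatorDialog: OperatorDialog,
    operatorDataSourceManager: OperatorDataSourceManager,
    operatorSchemaManager: OperatorSchemaManager): Unit = {
  // Keep the default "output format" and "output location" parameters.
  super.onPlacement(operatorDialog, operatorDataSourceManager, operatorSchemaManager)
  // Add a custom integer parameter (signature assumed, may vary by SDK version).
  operatorDialog.addIntegerBox("numBuckets", "Number of buckets", 1, 100, 10)
}
```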
  23. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  24. def toString(): String

    Definition Classes
    AnyRef → Any
  25. def updateOutputSchema(inputSchemas: Map[String, TabularSchema], params: OperatorParameters, operatorSchemaManager: OperatorSchemaManager): Unit

    Attributes
    protected
  26. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
