It specifies the source type of the caslibs to list. Valid values are 'ALL', 'DNFS', 'ESP', 'LASR', 'PATH', and 'S3'. This parameter is ignored when a specific caslib is named. The default is 'ALL'.
To specify the object, you must use the 'ObjectSelector' parameter. Within this parameter, you define the 'objType' (such as "TABLE", "CASLIB", "COLUMN", "ACTION", or "ACTIONSET") and provide the corresponding identifying parameters like 'caslib', 'table', or 'actionSet'.
To target a column, set 'objType' to "COLUMN" within the ObjectSelector. You must provide the 'caslib' and 'table' names; the 'column' name itself is optional but recommended for specificity.
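As a sketch, a column-level ObjectSelector can be expressed as a Python dictionary in the style used by CAS clients; the caslib, table, and column names below are illustrative placeholders, not values from the source.

```python
# Illustrative ObjectSelector targeting a single column.
# 'caslib' and 'table' are required; 'column' is optional but
# recommended so the selection is unambiguous.
object_selector = {
    "objType": "COLUMN",   # kind of object being selected
    "caslib": "casuser",   # required: caslib containing the table
    "table": "cars",       # required: table containing the column
    "column": "msrp",      # optional: the specific column
}
```

The same dictionary shape applies to the other 'objType' values ("TABLE", "CASLIB", "ACTION", "ACTIONSET"), with the identifying keys adjusted to match the object kind.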
The 'table' parameter is required. Within it, you specify the name of the table and, optionally, the caslib where it is located.
To save the estimated model, use the 'store' parameter, providing a name and, optionally, a caslib for the output item store. The stored model can then be used by other actions, such as 'countreg.countregViewStore', or for scoring new data.
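A minimal sketch of the 'store' parameter as a dictionary; the item-store name and caslib below are assumptions chosen for illustration.

```python
# Illustrative 'store' settings for saving the estimated model
# as an item store that later actions can read.
store = {
    "name": "countregStore",  # name of the output item store
    "caslib": "casuser",      # optional: caslib where the store is written
}
```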
The output table, which contains the tagged data, is specified using the 'casOut' parameter. You must provide a name for the table and can optionally specify a caslib.
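A hedged sketch of the 'casOut' settings as a dictionary; the table name, caslib, and the 'replace' option are illustrative assumptions, not values confirmed by the source.

```python
# Illustrative 'casOut' settings for the output table of tagged data.
cas_out = {
    "name": "taggedOutput",  # required: name of the output table
    "caslib": "casuser",     # optional: caslib to write the table into
    "replace": True,         # assumption: overwrite an existing table
}
```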
The 'caslib' parameter is required. It specifies the caslib containing the data source options.
The alias for the 'caslib' parameter is 'datasourceFromCasLib'.
The syntax is: `sparkEmbeddedProcess.executeProgram / caslib="string", program="string", programFile="string";`
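A sketch of the same parameter set built in Python; the caslib name and program text are placeholders, and the commented submission line assumes a live SWAT connection object named `conn` (an assumption, not shown in the source).

```python
# Parameter set for sparkEmbeddedProcess.executeProgram,
# mirroring the CASL syntax above. Values are illustrative.
params = {
    "caslib": "sparkCaslib",           # caslib holding the data source options
    "program": "put 'hello';",         # inline program source
    # "programFile": "/path/to/prog",  # alternatively, point to a program file
}
# With a live SWAT connection `conn`, this could be submitted as:
# conn.sparkEmbeddedProcess.executeProgram(**params)
```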
Yes, you can use the 'casOut' parameter to specify the output table settings, such as the table name and caslib, to store the expected range values.