Job types

The following sections describe the available job types and the parameters that you use to define each type of job.

Job:Command

The following example shows how to use Job:Command to run operating system commands.

	"JobName": {
		"Type" : "Job:Command",
    	"Command" : "echo hello",
        "PreCommand": "echo before running main command",
        "PostCommand": "echo after running main command",
    	"Host" : "myhost.mycomp.com",
    	"RunAs" : "user1"  
	}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.


Job:Script

 The following example shows how to use Job:Script to run a script from a specified script file.

    "JobWithPreAndPost": {
        "Type" : "Job:Script",
        "FileName" : "task1123.sh",
        "FilePath" : "/home/user1/scripts",
        "PreCommand": "echo before running script",
        "PostCommand": "echo after running script",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1",
	 	"Arguments":[
			"arg1",
        	"arg2" 
		]  
    }
FileName together with FilePath

Indicates the location of the script. 

NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts" (see the sketch after this parameter list).

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.
Arguments

(Optional) An array of strings that are passed to the script.
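
For illustration, here is a minimal sketch of a Windows variant of the job above that shows the doubled backslashes in FilePath, as described in the note; the script name, path, and host are hypothetical:

    "WindowsScriptJob": {
        "Type" : "Job:Script",
        "FileName" : "task4567.bat",
        "FilePath" : "c:\\tmp\\scripts",
        "Host" : "mywinhost.mycomp.com",
        "RunAs" : "user1"
    }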


Job:EmbeddedScript

The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.

    "EmbeddedScriptJob":{
        "Type":"Job:EmbeddedScript",
        "Script":"#!/bin/bash\\necho \"Hello world\"",
        "Host":"myhost.mycomp.com",
        "RunAs":"user1",
        "FileName":"myscript.sh",
        "PreCommand": "echo before running script",
        "PostCommand": "echo after running script"
    }
Script

Full content of the script, up to 64 kilobytes.
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.
FileName

Name of a script file. This property is used for the following purposes:

  • The file extension provides an indication of how to interpret the script. If this is the only purpose of this property, the file does not have to exist.
  • If you specify an alternative script override using the OverridePath job property, the FileName property indicates the name of the alternative script file, as shown in the sketch after this list.
PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.
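
For illustration, here is a minimal sketch that combines FileName with the OverridePath job property mentioned above; the override directory shown is a hypothetical path:

    "EmbeddedScriptJobWithOverride":{
        "Type":"Job:EmbeddedScript",
        "Script":"#!/bin/bash\\necho \"Hello world\"",
        "Host":"myhost.mycomp.com",
        "RunAs":"user1",
        "FileName":"myscript.sh",
        "OverridePath":"/home/user1/override_scripts"
    }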


Job:FileTransfer

The following example shows a Job:FileTransfer for a file transfer from a local filesystem to an SFTP server:

{
  "FileTransferFolder" :
  {
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTP" :
	{
		"Type" : "Job:FileTransfer",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
        "NumberOfRetries": "3",
		"Host": "AgentHost",
		"FileTransfers" :
		[
			{
				"Src" : "/home/controlm/file1",
				"Dest" : "/home/controlm/file2",
				"TransferType": "Binary",
				"TransferOption": "SrcToDest"
			},
			{
				"Src" : "/home/controlm/otherFile1",
				"Dest" : "/home/controlm/otherFile2",
				"TransferOption": "DestToSrc"
			}
		]
	}
  }
}

Here is another example for a file transfer from an S3 storage service to a local filesystem:

{
 "MyS3AftFolder": {
   "Type": "Folder",
   "Application": "aft",
   "TransferFromS3toLocal": 
   {
      "Type": "Job:FileTransfer",
      "ConnectionProfileSrc": "amazonConn",
      "ConnectionProfileDest": "LocalConn",
      "NumberOfRetries": "4",
      "S3BucketName": "bucket1",
      "Host": "agentHost",
      "FileTransfers": [
         {
			"Src" : "folder/sub_folder/file1",
			"Dest" : "folder/sub_folder/file2"
         }
      ]
   }
 }
}

Here is another example for a file transfer from an S3 storage service to another S3 storage service:

{
 "MyS3AftFolder": {
   "Type": "Folder",
   "Application": "aft",
   "TransferFromS3toS3": 
   {
      "Type": "Job:FileTransfer",
      "ConnectionProfileSrc": "amazonConn",
      "ConnectionProfileDest": "amazon2Conn",
      "NumberOfRetries": "6",
      "S3BucketNameSrc": "bucket1",
      "S3BucketNameDest": "bucket2",
      "Host": "agentHost",
      "FileTransfers": [
         {
			"Src" : "folder/sub_folder/file1",
			"Dest" : "folder/sub_folder/file2"
         }
      ]
   }
 }
}

And here is another example for a file transfer from a local filesystem to an AS2 server.

Note: File transfers that use the AS2 protocol are supported only in one direction — from a local filesystem to an AS2 server.

{
  "MyAs2AftFolder": {
    "Type": "Folder",
    "Application": "AFT",
    "MyAftJob_AS2": 
    {
      "Type": "Job:FileTransfer",
      "ConnectionProfileSrc": "localAConn",
      "ConnectionProfileDest": "as2Conn",
      "NumberOfRetries": "Default",
      "Host": "agentHost",
      "FileTransfers": [
        {
          "Src": "/dev",
          "Dest": "/home/controlm/",
          "As2Subject": "Override subject",
          "As2Message": "Override conntent type"
        }
      ]
    }
  }
}

The following parameters were used in the examples above:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host.
In addition, Control-M File Transfer plug-in version 8.0.00 or later must be installed.

ConnectionProfileSrc

The connection profile to use as the source

ConnectionProfileDest

The connection profile to use as the destination
ConnectionProfileDualEndpoint

If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest (see the sketch after this parameter list).

A dual-endpoint connection profile can be used for FTP, SFTP, and Local filesystem transfers. For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint.

NumberOfRetries

Number of connection attempts after a connection failure

Range of values: 0–99 or "Default" (to inherit the default)

Default: 5 attempts

S3BucketName

For file transfers between a local filesystem and an Amazon S3 or S3-compatible storage service: The name of the S3 bucket

S3BucketNameSrc

For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the source

S3BucketNameDest

For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the destination

FileTransfers

A list of file transfers to perform during job execution, each with the following properties:

   Src

Full path to the source file

   Dest

Full path to the destination file
   TransferType

(Optional) FTP transfer mode, either Ascii (for a text file) or Binary (non-textual data file).

Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file.

Default: "Binary"

   TransferOption

(Optional) The following is a list of the transfer options:

  • SrcToDest - transfer the file from source to destination
  • DestToSrc - transfer the file from destination to source
  • SrcToDestFileWatcher - watch the file on the source and transfer it to the destination only when all criteria are met
  • DestToSrcFileWatcher - watch the file on the destination and transfer it to the source only when all criteria are met
  • FileWatcher - watch a file; if the watching criteria are met, the succeeding job will run
  • 9.0.20.000 DirectoryListing - list the source and destination files
  • 9.0.20.000 SyncSrcToDest - scan source and destination, transfer only new or modified files from source to destination, and delete destination files that do not exist on the source
  • 9.0.20.000 SyncDestToSrc - scan source and destination, transfer only new or modified files from destination to source, and delete source files that do not exist on the destination

Default: "SrcToDest"

   As2Subject

Optional for AS2 file transfer: A text to use to override the subject of the AS2 message.

   As2Message

Optional for AS2 file transfer: A text to use to override the content type in the AS2 message.
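
For illustration, here is a minimal sketch of a transfer that uses a dual-endpoint connection profile in place of the ConnectionProfileSrc/ConnectionProfileDest pair; the connection profile name and file paths are hypothetical:

    "TransferWithDualEndpointProfile" :
    {
        "Type" : "Job:FileTransfer",
        "ConnectionProfileDualEndpoint" : "LocalToSftpDualConn",
        "Host" : "AgentHost",
        "FileTransfers" :
        [
            {
                "Src" : "/home/controlm/file1",
                "Dest" : "/home/controlm/file2",
                "TransferOption" : "SrcToDest"
            }
        ]
    }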

The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:

{
  "FileTransferFolder" :
  {
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTPBasedOnEvent" :
	{
		"Type" : "Job:FileTransfer",
		"Host" : "AgentHost",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
        "NumberOfRetries": "3",
		"FileTransfers" :
		[
			{
				"Src" : "/home/sftp/file1",
				"Dest" : "/home/sftp/file2",
				"TransferType": "Binary",
				"TransferOption" : "SrcToDestFileWatcher",
				"PreCommandDest" :
				{
					"action" : "rm",
					"arg1" : "/home/sftp/file2"
				},
				"PostCommandDest" :
				{
					"action" : "chmod",
					"arg1" : "700",
					"arg2" : "/home/sftp/file2"
				},
				"FileWatcherOptions":
				{
					"MinDetectedSizeInBytes" : "200",
					"TimeLimitPolicy" : "WaitUntil",
					"TimeLimitValue" : "2000",
					"MinFileAge" : "3Min",
					"MaxFileAge" : "10Min",
					"AssignFileNameToVariable" : "FileNameEvent",
					"TransferAllMatchingFiles" : true
				}
			}
		]
	}
  }
}

This example contains the following additional optional parameters: 

PreCommandSrc

PreCommandDest

PostCommandSrc

PostCommandDest

Defines commands that occur before and after job execution.
Each command can run only one action at a time.

Available actions:

  • chmod - Change file access permission (arg1: mode, arg2: file name)
  • mkdir - Create a new directory (arg1: directory name)
  • rename - Rename a file/directory (arg1: current file name, arg2: new file name)
  • rm - Delete a file (arg1: file name)
  • rmdir - Delete a directory (arg1: directory name)

FileWatcherOptions

Additional options for watching the transferred file using the File Watcher utility:

    MinDetectedSizeInBytes

Defines the minimum number of bytes transferred before checking if the file size is static
    TimeLimitPolicy/
    TimeLimitValue

Defines the time limit to watch a file:
TimeLimitPolicy options: "WaitUntil", "MinutesToWait"

TimeLimitValue: If TimeLimitPolicy is WaitUntil, TimeLimitValue is the specific time to wait until (for example, 04:22 is 4:22 AM).
If TimeLimitPolicy is MinutesToWait, TimeLimitValue is the number of minutes to wait.

    MinFileAge

Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    MaxFileAge

Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    AssignFileNameToVariable

Defines the variable name that contains the detected file name
    TransferAllMatchingFiles

Whether to transfer all matching files (value of true) or only the first matching file (value of false) after waiting until the watching criteria are met.

Valid values: true | false
Default value: false


Job:FileWatcher

A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete.

    "FWJobCreate" : {
	    "Type" : "Job:FileWatcher:Create",
		"RunAs":"controlm",
 	    "Path" : "C:/path*.txt",
	    "SearchInterval" : "45",
	    "TimeLimit" : "22",
	    "StartTime" : "201705041535",
	    "StopTime" : "201805041535",
	    "MinimumSize" : "10B",
	    "WildCard" : true,
	    "MinimalAge" : "1Y",
	    "MaximalAge" : "1D2H4MIN"
    },
    "FWJobDelete" : {
        "Type" : "Job:FileWatcher:Delete",
        "RunAs":"controlm",
        "Path" : "C:/path.txt",
        "SearchInterval" : "45",
        "TimeLimit" : "22",
        "StartTime" : "201805041535",
        "StopTime" : "201905041535"
    }

This example contains the following parameters:

Path

Path of the file to be detected by the File Watcher

You can include wildcards in the path — * for any number of characters, and ? for any single character.

SearchInterval

Interval (in seconds) between successive attempts to detect the creation/deletion of a file
TimeLimit

Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code.

Default: 0 (no time limit)

StartTime

The time at which to start watching the file

The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM.
Alternatively, to specify a time on the current date, use the HHMM format.

StopTime

The time at which to stop watching the file.

Format: yyyymmddHHMM or HHMM (for the current date)

MinimumSize

Minimum file size to monitor for, when watching a created file

Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes.

If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true.

Wildcard

Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards

Values: true | false
Default: false

MinimalAge

(Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. 

For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched.

MaximalAge

(Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified.

For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched.


Job:Database

The following types of database jobs are available:

Job:Database:EmbeddedQuery

The following example shows how to create a database job that runs an embedded query.

{
    "PostgresDBFolder": {  
		"Type": "Folder",
        "EmbeddedQueryJobName": {
          "Type": "Job:Database:EmbeddedQuery",
          "ConnectionProfile": "POSTGRESQL_CONNECTION_PROFILE",
          "Query": "SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC",
          "Host": "${agentName}",
          "RunAs": "PostgressCP",
          "Variables": [
             {
			   "firstParamName": "firstParamValue"
             }
          ],
		  "Autocommit": "N",
		  "OutputExecutionLog": "Y",
		  "OutputSQLOutput": "Y",
		  "SQLOutputFormat": "XML"
        }
    }
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plug-in version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Query

The embedded SQL query that you want to run.

The SQL query can contain auto edit variables. During the job run, these variables are replaced by the values that you specify in the Variables parameter (next row).

For long queries, you can specify delimiters using \\n (new line) and \\t (tab).

Variables

Variables are pairs of name and value. Every name that appears in the embedded query will be replaced by its value pair.

The following optional parameters are also available for all types of database jobs:

Autocommit

(Optional) Commits statements to the database upon successful completion

Default: N

OutputExecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

Job:Database:SQLScript

The following example shows how to create a database job that runs a SQL script from a file system.

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "AgentHost",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
			"Parameters": [
				{"firstParamName": "firstParamValue"},
				{"secondParamName": "secondParamValue"}
			]
		}
	}
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plug-in version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine. 

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Parameters

Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair.

For additional optional parameters, see above.

Another example:

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
		}
	}
}

Job:Database:StoredProcedure

The following example shows how to create a database job that runs a program that is stored on the database.

{
	"storeFolder": {
		"Type": "Folder",
		"jobStoredProcedure": {
			"Type": "Job:Database:StoredProcedure",
			"Host": "myhost.mycomp.com",
			"StoredProcedure": "myProcedure",
			"Parameters": [ "value1","variable1",["value2","variable2"]],
			"ReturnValue":"RV",
			"Schema": "public",
			"ConnectionProfile": "DB-PG-CON"
		}
	}
}

9.0.21.030 The following example provides more detailed definitions of the parameters and return value. This format enables you to use the connection profile to deploy stored procedures without accessing the database during deployment.

Note

To support the previous JSON notation, API commands will continue to GET Stored Procedure definitions in the old JSON format. To enable getting the Stored Procedure object in this new format, see Controlling the Database Connection for Stored Procedure Data.

{
	"storeFolder": {
		"Type": "Folder",
		"jobStoredProcedure": {
			"Type": "Job:Database:StoredProcedure",
			"Host": "myhost.mycomp.com",
			"StoredProcedure": "myProcedure",
			"Parameters" : [ {
			  "Name" : "table_name",
			  "ParameterType" : "text",
			  "Direction" : "In",
			  "Value": "TestTable"
			}, {
			  "Name" : "chunksize",
			  "ParameterType" : "int4",
			  "Direction" : "Out",
			  "Value": "4"
			}, {
			  "Name" : "rows_deleted",
			  "ParameterType" : "int4",
			  "Direction" : "InOut",
			  "ValueIn" : "1",
			  "ValueOut" : "2"
			} ],
        	"ReturnValue": {
          		"Name" : "returnValue",
          		"ValueType" : "int4",
          		"Value" : "RV"
        	},
			"Schema": "public",
			"ConnectionProfile": "DB-PG-CON"
		}
	}
}

These examples contain the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plug-in version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

StoredProcedure

Name of the stored procedure that the job runs
Parameters

Definitions of all parameters in the procedure, in the order of their appearance in the procedure.

The format depends on the version of Automation API and on whether a connection to the database is required for the evaluation of Stored Procedures, as described in Controlling the Database Connection for Stored Procedure Data. 

  • For deployment with access to the database: A comma-separated list of values or variables for all parameters in the procedure.
    In the first example above, three parameters are listed, in the following order: [In,Out,Inout]. The value that you specify for any specific parameter in the procedure depends on the type of parameter. 
    • For an In parameter, specify an input value.
    • For an Out parameter, specify an output variable.
    • For an In/Out parameter, specify a pair of input value + output variable, enclosed in brackets: [value,variable]
  • 9.0.21.030 For deployment without access to the database: Definitions of each parameter are provided on the next level, as in the second example above.
    • Name
    • ParameterType
    • Direction - In, Out, or InOut
    • Value - the input value of an In parameter or the output variable of an Out parameter.
      For an InOut parameter, two properties are used instead of one, ValueIn and ValueOut.
ReturnValue

A variable for the Return parameter (if the procedure contains such a parameter)

The format depends on the version of Automation API and on whether a connection to the database is required for the evaluation of Stored Procedures, as described in Controlling the Database Connection for Stored Procedure Data. 

  • For deployment with access to the database: The value for the ReturnValue variable, as in the first example above.
  • 9.0.21.030 For deployment without access to the database: Definitions of the variable are provided on the next level, as in the second example above.
    • Name
    • ValueType
    • Value
Schema

The database schema where the stored procedure resides

Package

(Oracle only) Name of a package in the database where the stored procedure resides

The default is "*", that is, any package in the database.

ConnectionProfile

Name of a connection profile that contains the details of the connection to the database

For additional optional parameters, see above.

Job:Database:MSSQL:AgentJob

9.0.19.210 The following example shows how to create an MSSQL Agent job, for management of a job defined in the SQL server.

{
    "MSSQLFolder": {
        "Type": "Folder",
        "ControlmServer": "LocalControlM",
        "MSSQLAgentJob": {
            "Type": "Job:Database:MSSQL:AgentJob",
            "ConnectionProfile": "MSSQL-WE-EXAMPLE",
            "Host": "agentHost",
            "JobName": "get_version",
            "Category": "Data Collector"
        }
    }
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plug-in version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobName

The name of the job defined in the SQL server

Category

The category of the job, as defined in the SQL server

For additional optional parameters, see above.

Job:Database:MSSQL:SSIS

9.0.19.220 The following example shows how to create SSIS Package jobs for execution of SQL Server Integration Services (SSIS) packages:

{
   "MSSQLFolder": {
       "Type": "Folder",
       "ControlmServer": "LocalControlM",
       "SSISCatalog": {
            "Type": "Job:Database:MSSQL:SSIS",
            "ConnectionProfile": "MSSQL-CP-NAME",
            "Host": "agentHost",
            "PackageSource": "SSIS Catalog",
            "PackageName": "\\Data Collector\\SqlTraceCollect",
            "CatalogEnv": "ENV_NAME",
            "ConfigFiles": [
                "C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
                "C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
            ],
            "Properties": [
                {
                    "PropertyName": "PropertyValue"
                },
                {
                    "PropertyName2": "PropertyValue2"
                }
            ]
        },
        "SSISPackageStore": {
            "Type": "Job:Database:MSSQL:SSIS",
            "ConnectionProfile": "MSSQL-CP-NAME",
            "Host": "agentHost",
            "PackageSource": "SSIS Package Store",
            "PackageName": "\\Data Collector\\SqlTraceCollect",
            "ConfigFiles": [
                "C:\\Users\\dbauser\\Desktop\\test.dtsConfig",
                "C:\\Users\\dbauser\\Desktop\\test2.dtsConfig"
            ],
            "Properties": [
                {
                    "PropertyName": "PropertyValue"
                },
                {
                    "PropertyName2": "PropertyValue2"
                }
            ]
        }    
    }
} 

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plug-in version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine. 

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

PackageSource

The source of the SSIS package, one of the following (see also the File System sketch after this parameter list):

  • SQL Server — Package stored on an MSSQL database.
  • File System — Package stored on the Control-M/Agent's local file system.
  • SSIS Package Store — Package stored on a file system that is managed by an SSIS service.
  • SSIS Catalog — Package stored on a file system that is managed by an SSIS Catalog service.
PackageName

The name of the SSIS package.
CatalogEnv

If PackageSource is 'SSIS Catalog': The name of the environment on which to run the package.

Use this optional parameter if you want to run the package on a different environment from the one that you are currently using.

ConfigFiles

(Optional) Names of configuration files that contain specific data that you want to apply to the SSIS package
Properties

(Optional) Pairs of names and values for properties defined in the SSIS package.

Each property name is replaced by its defined value during SSIS package execution.

For additional optional parameters, see above.
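
For illustration, here is a minimal sketch of an SSIS job whose package is loaded from the Agent's local file system; the connection profile name and package path are hypothetical:

    "SSISFileSystem": {
        "Type": "Job:Database:MSSQL:SSIS",
        "ConnectionProfile": "MSSQL-CP-NAME",
        "Host": "agentHost",
        "PackageSource": "File System",
        "PackageName": "C:\\SSIS\\Packages\\MyPackage.dtsx"
    }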


Job:Hadoop

Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:

Job:Hadoop:Spark:Python

The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.

    "ProcessData": {
        "Type": "Job:Hadoop:Spark:Python",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",

        "SparkScript": "/home/user/processData.py"
    }
ConnectionProfile

See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
    "Type": "Job:Hadoop:Spark:Python",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "SparkScript": "/home/user/processData.py",            
    "Arguments": [
        "1000",
        "120"
    ],            
    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },
    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },
    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Spark:ScalaJava

The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.

 "ProcessData": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "ProgramJar": "/home/user/ScalaProgram.jar",
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName" 
}
ConnectionProfile

See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "ProgramJar": "/home/user/ScalaProgram.jar"
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",
    "Arguments": [
        "1000",
        "120"
    ],            
    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },
    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },
    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Pig

The following example shows how to use Job:Hadoop:Pig to run a Pig script.

"ProcessDataPig": {
    "Type" : "Job:Hadoop:Pig",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "PigScript" : "/home/user/script.pig" 
}
ConnectionProfile

See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "ProcessDataPig1": {
        "Type" : "Job:Hadoop:Pig",
        "ConnectionProfile": "DEV_CLUSTER",
        "PigScript" : "/home/user/script.pig",            
        "Host" : "edgenode",
        "Parameters" : [
            {"amount":"1000"},
            {"volume":"120"}
        ],            
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Sqoop

The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.

    "LoadDataSqoop":
    {
      "Type" : "Job:Hadoop:Sqoop",
	  "Host" : "edgenode",
      "ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
      "SqoopCommand" : "import --table foo --target-dir /dest_dir" 
    }

 

ConnectionProfile

See Sqoop ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "LoadDataSqoop1" :
    {
        "Type" : "Job:Hadoop:Sqoop",
        "Host" : "edgenode",
        "ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",
        "SqoopCommand" : "import --table foo",
		"SqoopOptions" : [
			{"--warehouse-dir":"/shared"},
			{"--default-character-set":"latin1"}
		],
        "SqoopArchives" : "",
        "SqoopFiles": "",
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

SqoopOptions

These are passed as the specific Sqoop tool arguments.

SqoopArchives

Indicates the location of the Hadoop archives.

SqoopFiles

Indicates the location of the Sqoop files.


Job:Hadoop:Hive

The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.

    "ProcessHive":
    {
      "Type" : "Job:Hadoop:Hive",
      "Host" : "edgenode",
      "ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
      "HiveScript" : "/home/user1/hive.script"
    }

 

ConnectionProfile

See Hive ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "ProcessHive1" :
    {
        "Type" : "Job:Hadoop:Hive",
        "Host" : "edgenode",
        "ConnectionProfile" : "HIVE_CONNECTION_PROFILE",
        "HiveScript" : "/home/user1/hive.script", 
        "Parameters" : [
            {"ammount": "1000"},
            {"topic": "food"}
        ],
        "HiveArchives" : "",
        "HiveFiles": "",
        "HiveOptions" : [
            {"hive.root.logger": "INFO,console"}
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

HiveSciptParameters

Passed to beeline as --hivevar "name"="value".

HiveProperties

Passed to beeline as --hiveconf "key"="value".

HiveArchives

Passed to beeline as --hiveconf mapred.cache.archives="value".

HiveFiles

Passed to beeline as --hiveconf mapred.cache.files="value".


Job:Hadoop:DistCp

The following example shows how to use Job:Hadoop:DistCp to run a DistCp job.  DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.

        "DistCpJob" :
        {
            "Type" : "Job:Hadoop:DistCp",
            "Host" : "edgenode",
            "ConnectionProfile": "DEV_CLUSTER",
            "TargetPath" : "hdfs://nns2:8020/foo/bar",
            "SourcePaths" :
            [
                "hdfs://nn1:8020/foo/a"
            ]
        }  
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "DistCpJob" :
    {
        "Type" : "Job:Hadoop:DistCp",
        "Host" : "edgenode",
        "ConnectionProfile" : "HADOOP_CONNECTION_PROFILE",
        "TargetPath" : "hdfs://nns2:8020/foo/bar",
        "SourcePaths" :
        [
            "hdfs://nn1:8020/foo/a",
            "hdfs://nn1:8020/foo/b"
        ],
        "DistcpOptions" : [
            {"-m":"3"},
            {"-filelimit ":"100"}
        ]
    }

TargetPath, SourcePaths and DistcpOptions

Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>.


Job:Hadoop:HDFSCommands

The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.

        "HdfsJob":
        {
            "Type" : "Job:Hadoop:HDFSCommands",
            "Host" : "edgenode",
            "ConnectionProfile": "DEV_CLUSTER",
            "Commands": [
                {"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.


Job:Hadoop:HDFSFileWatcher

The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.

    "HdfsFileWatcherJob" :
    {
        "Type" : "Job:Hadoop:HDFSFileWatcher",
        "Host" : "edgenode",
        "ConnectionProfile" : "DEV_CLUSTER",
        "HdfsFilePath" : "/inputs/filename",
        "MinDetecedSize" : "1",
        "MaxWaitTime" : "2"
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

HdfsFilePath

Specifies the full path of the file being watched.

MinDetecedSize

Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file.

MaxWaitTime

Defines the maximum number of minutes to wait for the file to meet the watching criteria. If the criteria are not met (the file did not arrive, or the minimum size was not reached), the job fails after this maximum number of minutes.


Job:Hadoop:Oozie

The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.

    "OozieJob": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"},
          {"outputDir":"/usr/tucu/outputdir"}
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobPropertiesFile

The path to the job properties file.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false , that is, the job will complete successfully even if any post-command fails.

OozieOptions

Set or override values for the given job properties.
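
For illustration, here is a minimal sketch of an Oozie job that adds the optional PreCommands and PostCommands described above; the paths and commands are hypothetical:

    "OozieJobWithCommands": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"},
          {"outputDir":"/usr/tucu/outputdir"}
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" : false,
            "Commands" : [
                {"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" : true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }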


Job:Hadoop:MapReduce

 The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.

    "MapReduceJob" :
    {
       "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"]
    }
ConnectionProfile

See ConnectionProfile:Hadoop  

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "MapReduceJob1" :
    {
        "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }    
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:MapredStreaming

The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.

     "MapredStreamingJob1": {
        "Type": "Job:Hadoop:MapredStreaming",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "InputPath": "/user/robot/input/*",
        "OutputPath": "/tmp/output",
        "MapperCommand": "mapper.py",
        "ReducerCommand": "reducer.py",
        "GeneralOptions": [
            {"-D": "fs.permissions.umask-mode=000"},
            {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}              
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

GeneralOptions

Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options.


Job:Hadoop:Tajo:InputFile

The following example shows how to execute a Hadoop Tajo job based on an input file.

    "HadoopTajo_InputFile_Job" :
    {
        "Type" : "Job:Hadoop:Tajo:InputFile",
        "ConnectionProfile" : "TAJO_CONNECTION_PROFILE",
        "Host" : "edgenode",
        "FullFilePath" : "/home/user/tajo_command.sh",
        "Parameters" : [
            {"amount":"1000"},
            {"volume":"120"}
        ]
    } 
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

FullFilePath

The full path to the input file used as the Tajo command source

Parameters

Optional parameters for the script, expressed as name:value pairs

Additional optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Tajo:Query

The following example shows how to execute a Hadoop Tajo job based on a query.

    "HadoopTajo_Query_Job" :
    {
        "Type" : "Job:Hadoop:Tajo:Query",
        "ConnectionProfile" : "TAJO_CONNECTION_PROFILE",
        "Host" : "edgenode",
        "OpenQuery" : "SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC",
    } 
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. 

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

OpenQuery

An ad-hoc query to the Apache Tajo warehouse system

Additional optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true , that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:SAP

SAP-type jobs enable you to manage SAP processes through the Control-M environment. To manage SAP-type jobs, you must have the Control-M for SAP plug-in installed in your Control-M environment.

The following JSON objects are available for creating SAP-type jobs:

Job:SAP:R3:CREATE

This job type enables you to create a new SAP R3 job.

The following example is a simple job that relies mostly on default settings and contains one step that executes an external command. 

    "SAPR3_external_command": {
      "Type": "Job:SAP:R3:CREATE",
      "ConnectionProfile": "SAPCP",
      "SapJobName": "SAP_job",
      "CreatedBy": "user1",
      "Steps": [
        {
          "StepType": "ExternalCommand",
          "UserName": "user01",
          "TargetHost": "host01",
          "ProgramName": "PING"
        }
      ],
      "SpoolListRecipient": {
        "ReciptNoForwarding": false
      }
    }

The following example is a more complex job that contains two steps that run ABAP programs. Each of the ABAP steps has an associated variant that contains variable definitions.

  "SapR3CreateComplete": {
      "Type": "Job:SAP:R3:CREATE",
      "ConnectionProfile": "SAPCP",
      "SapJobName": "SAP_job2",
      "StartCondition": "Immediate",
      "RerunFromStep": "3",
      "Target": "controlmserver",
      "CreatedBy": "user1",
      "Steps": [
        {
          "StepType": "ABAP",
          "TimeToPrint": "PrintLater",
          "CoverPrintPage": true,
          "OutputDevice": "prt",
          "UserName": "user",
          "SpoolAuthorization": "Auth",
          "CoverDepartment": "dpt",
          "SpoolListName": "spoolname",
          "OutputNumberRows": "62",
          "NumberOfCopies": "5",
          "NewSpoolRequest": false,
          "PrintArchiveMode": "PrintAndArchive",
          "CoverPage": "Print",
          "ArchiveObjectType": "objtype",
          "SpoolListTitles": "titles",
          "OutputLayout": "layout",
          "CoverSheet": "Print",
          "ProgramName": "ABAP_PROGRAM",
          "Language": "e",
          "ArchiveInformationField": "inf",
          "DeleteAfterPrint": true,
          "PrintExpiration": "3",
          "OutputNumberColumns": "88",
          "ArchiveDocumentType": "doctype",
          "CoverRecipient": "recipient",
          "VariantName": "NameOfVariant",
          "VariantParameters": [
            {
              "Type": "Range",
              "High": "2",
              "Sign": "I",
              "Option": "BT",
              "Low": "1",
              "Name": "var1",
              "Modify": false
            },
            {
              "Low": "5",
              "Type": "Range",
              "Option": "BT",
              "Sign": "I",
              "Modify": true,
              "High": "6",
              "Name": "var3"
            }
          ]
        },
        {
          "StepType": "ABAP",
          "PrintArchiveMode": "Print",
          "ProgramName": "ABAP_PROGRAM2",
          "VariantName": "Myvar_with_temp",
          "TemporaryVariantParameters": [
            {
              "Type": "Simple",
              "Name": "var",
              "Value": "P11"
            },
            {
              "Type": "Simple",
              "Name": "var2",
              "Value": "P11"
            }
          ]
        }
      ],
      "PostJobAction": {
        "JobLog": "CopyToFile",
        "JobCompletionStatusWillDependOnApplicationStatus": true,
        "SpoolSaveToPDF": true,
        "JobLogFile": "fileToCopy.txt"
      },
      "SpoolListRecipient": {
        "ReciptNoForwarding": false
      }
    }

The following table lists the parameters that can be used in SAP jobs of this type:

ConnectionProfile

Name of the SAP connection profile to use for the connection

SapJobName

Name of the SAP job to be monitored or submitted
Exec

Type of execution target where the SAP job will run, one of the following:

  • Server — an SAP application server (the default)
  • Group — an SAP group
Target

The name of the SAP application server or SAP group (depending on the value specified in the previous parameter)

JobClass

Job submission priority in SAP, one of the following options:

  • A — high priority 
  • B — medium priority
  • C — low priority
StartCondition

Specifies when the job should run, one of the following:

  • ASAP — Job runs as soon as a background work process is available for it in SAP (the default). If the job cannot start immediately, it is transformed in SAP into a time-based job.
  • Immediate — Job runs immediately. If there are no work processes available to run it, the job fails.
  • AfterEvent — Job waits for an event that you specify (in the next two parameters) to be triggered.

AfterEvent

The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent)
AfterEventParameters

Parameters in the SAP event to watch for.

Use space characters to separate multiple parameters.

RerunFromPointOfFailure

Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default)

Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun.

CopyFromStep

The number of a specific step in the SAP R/3 job from which to rerun

The default is step 1 (that is, the beginning of the job).

Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored.

Steps

An object that groups together the definitions of SAP R/3 job steps
StepType

The type of program to execute in this step, one of the following options:

  • ABAP (the default)
  • ExternalCommand
  • ExternalProgram
ProgramName

The name of the program or command

UserName

The authorized owner of the step

Description

A textual description or comment for the step

Further parameters for each individual step depend on the type of program that is executed in the step. These parameters are listed in separate tables (for ABAP steps, see the table at the end of this section).

PostJobAction

This object groups together several parameters that control post-job actions for the SAP R/3 job.
Spool

How to manage spool output, one of the following options:

  • DoNotCopy (the default)
  • CopyToOutput
  • CopyToFile
SpoolFile

The file to which to copy the job's spool output (if Spool is set to CopyToFile)

SpoolSaveToPDF

Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile)
JobLog

How to manage job log output, one of the following options:

  • DoNotCopy
  • CopyToOutput (the default)
  • CopyToFile
JobLogFile

The file to which to copy the job's log output (if JobLog is set to CopyToFile)
JobCompletionStatusWillDependOnApplicationStatus

Whether job completion status depends on SAP application status, either true or false (the default)

DetectSpawnedJob

This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job (see the sketch after this parameter list)
DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default)
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify
JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

StartSpawnedJob

Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)

JobEndInControlMOnlyAftreChildJobsCompleteOnSapWhether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)
JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.

SpoolListRecipientThis object groups together several parameters that define recipients of Print jobs
RecipientType

Type of recipient of the print job, one of the following:

  • ExternalAddress

  • SapUserName (the default)

  • SharedDistributionList

  • PrivateDistributionList

  • FaxNumber

  • TelexNumber

  • InternetAddress

  • X400Address

  • RemoteMailAddress

RecipientNameRecipient of the print job (of the type defined by the previous parameter)
RecipientCopyWhether this recipient is a copied (CC) recipient, either true or false (the default)
RecipientBlindCopyWhether this recipient is a blind copied (BCC) recipient, either true or false (the default)
RecipientExpressFor a CC or BCC recipient: Whether to send in express mode, either true or false (the default)
ReciptNoForwardingFor a CC or BCC recipient: Whether to set the recipient to "No Forwarding", either true or false (the default)

The following additional parameters are available for steps that involve the execution of an ABAP program. Most of these parameters are optional.

Language

SAP internal one-character language code for the ABAP step

For example, German is D and Serbian (using the Latin alphabet) is d.

For the full list of available language codes, see SAP Knowledge Base Article 2633548.

VariantNameThe name of a variant for the specified ABAP program or Archiving Object
VariantDescriptionA textual description or comment for the variant
VariantParameters

This object groups together the variables defined in the variant. For each variable, you can set the following parameters:

  • Name — name of the variable
  • Modify — whether to modify the variable value when the job executes, either true (the default) or false
  • Type — type of variable, one of the following: Simple (the default), Selection, Range
  • Value — value to set for a Simple variable or a Selection variable
  • Low — lowest value in a Range variable
  • High — highest value in a Range variable
  • Option — operator to use in a range or selection
    For a Range: either BT (between) or NB (not between)
    For a Selection: one of the following:
    • EQ: Single value
    • NE: Not Equal to
    • GT: Greater than
    • LT: Less than
    • GE: Greater than or Equal to
    • LE: Less than or Equal to
    • CP: Include pattern
    • NP: Exclude pattern
  • Sign — whether to include or exclude the Range or Selection — either I (the default) or E
TemporaryVariantParameters

This object groups together the variables defined in a temporary variant.

For each variable, you can set the same parameters listed above, except for Modify (which is not supported by a temporary variant).

OutputDeviceThe logical name of the designated printer
NumberOfCopies

Number of copies to be printed

The default is 1.

PrintArchiveMode

Whether the spool of the step is printed to an output device, to the archive, or both.

Choose from the following available values:

  • Print (the default)
  • Archive
  • PrintAndArchive
TimeToPrint

When to print the job output, one of the following options:

  • PrintLater
  • PrintImmediately
  • SendToSAPSpooler (the default)
PrintExpiration

Number of days until a print job expires

Valid values are single-digit numbers:

  • 1–8 days
  • 9 — no expiration

The default is 8 days.

NewSpoolRequestWhether to request a new spool, either true (the default) or false
DeleteAfterPrintWhether to delete the report after printing, either true or false (the default)
OutputLayoutPrint layout format
OutputNumberRows

(Mandatory) Maximum number of rows per page

Valid values:

  • Any integer between 1 and 90
  • -1 — use ABAP program default (the default)
OutputNumberColumns

(Mandatory) Maximum number of characters in an output line

Valid values:

  • Any integer between 1 and 255
  • -1 — use ABAP program default (the default)
CoverRecipient

Name of the recipient of the job output on the cover sheet

The name can be up to 12 characters.

CoverDepartment

Name of the spool department on the cover sheet

The department name can be up to 12 characters.

CoverPage

Type of cover page for output, one of the following options:

  • DefaultSetting — use the default setting from SAP (the default)
  • Print — print the cover page
  • DoNotPrint — do not print the cover page
CoverSheetType of cover sheet for output, one of the following options:
  • DefaultSetting — use the default setting from SAP (the default)
  • Print — print the cover sheet
  • DoNotPrint — do not print the cover sheet
CoverPrintPage

Whether to use a cover page, either true or false

The default is false.

SpoolListName

Name of the spool list

The name can be up to 12 characters.

SpoolListTitles

The spool list titles

SpoolAuthorization

Name of a user with print authorization

The name can be up to 12 characters.

ArchiveId

SAP ArchiveLink Storage system ID

Values are two characters long. The default is ZZ.

Note that Archive parameters are relevant only when you set PrintArchiveMode to Archive or PrintAndArchive.

ArchiveText

Free text description of the archive location, up to 40 characters

ArchiveObjectType

Archive object type

Valid values are up to 10 characters.

ArchiveDocumentType

Archive object document type

Valid values are up to 10 characters.

ArchiveInformationField

Archive information

Values can be 1–3 characters.
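
For illustration, the following sketch shows one possible shape of the Steps object with a single ABAP step and its variant parameters. The program, variant, and variable names are placeholders, and the exact structure (for example, whether Steps is expressed as an array) may vary with your Control-M version, so treat this as a sketch rather than a definitive definition:

    "Steps": [
        {
            "StepType": "ABAP",
            "ProgramName": "RSUSR003",
            "UserName": "user1",
            "Description": "Example ABAP step",
            "VariantName": "MY_VARIANT",
            "VariantParameters": [
                {
                    "Name": "BUKRS",
                    "Type": "Simple",
                    "Value": "1000"
                },
                {
                    "Name": "POSTING_DATE",
                    "Type": "Range",
                    "Low": "20230101",
                    "High": "20231231",
                    "Option": "BT",
                    "Sign": "I"
                }
            ]
        }
    ]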

The following additional parameters are available for steps that involve the execution of an external program or an external command:

TargetHostHost computer on which the program or command runs
OperatingSystem

Operating system on which the external command runs

The default is ANYOS.

WaitExternalTermination

Whether SAP waits for the external program or external command to end before starting the next step, or before exiting.

Values are either true (the default) or false.

LogExternalOutput

Whether SAP logs external output in the joblog

Values are either true (the default) or false.

LogExternalErrors

Whether SAP logs external errors in the joblog

Values are either true (the default) or false.

ActiveTrace

Whether SAP activates traces for the external program or external command

Values are either true or false (the default).
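
A similar sketch for a step that runs an external command, again with placeholder names and the same caveat that the exact structure may differ in your environment:

    "Steps": [
        {
            "StepType": "ExternalCommand",
            "ProgramName": "zcopy_files.sh",
            "UserName": "user1",
            "Description": "Example external command step",
            "TargetHost": "sap-app-host1",
            "OperatingSystem": "ANYOS",
            "WaitExternalTermination": true,
            "LogExternalOutput": true,
            "LogExternalErrors": true,
            "ActiveTrace": false
        }
    ]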

Job:SAP:R3:COPY

This job type enables you to create a new SAP R3 job by duplicating an existing job. The following example shows how to use Job:SAP:R3:COPY:

"JobSapR3Copy" : {
    "Type" : "Job:SAP:R3:COPY",
    "ConnectionProfile":"SAP-CON",
    "SapJobName" : "CHILD_1",
    "Exec": "Server",
    "Target" : "Server-name",
    "JobCount" : "SpecificJob",
    "JobCountSpecificName" : "sap-job-1234",
    "NewJobName" : "My-New-Sap-Job",
    "StartCondition" : "AfterEvent",
    "AfterEvent" : "HOLA",
    "AfterEventParameters" : "parm1 parm2",
    "RerunFromPointOfFailure": true,
    "CopyFromStep" : "4",
    "PostJobAction" : {
        "Spool" : "CopyToFile",
        "SpoolFile": "spoolfile.log",
        "SpoolSaveToPDF" : true,
        "JobLog" : "CopyToFile",
        "JobLogFile": "Log.txt",
        "JobCompletionStatusWillDependOnApplicationStatus" : true
    },
    "DetectSpawnedJob" : {
        "DetectAndCreate": "SpecificJobDefinition",
        "JobName" : "Specific-Job-123",
        "StartSpawnedJob" : true,
        "JobEndInControlMOnlyAftreChildJobsCompleteOnSap" : true,
        "JobCompletionStatusDependsOnChildJobsStatus" : true
    }
}

This SAP job object uses the following parameters:

ConnectionProfileName of the SAP connection profile to use for the connection
SapJobNameName of SAP job to copy
Exec

Type of execution target where the SAP job will run, one of the following:

  • Server — an SAP application server (the default)
  • Group — an SAP group
Target

The name of the SAP application server or SAP group (depending on the value specified in the previous parameter)

JobCount

How to define a unique ID number for the SAP job, one of the following options:

  • FirstScheduled (the default)
  • LastScheduled
  • First
  • Last
  • SpecificJob

If you specify SpecificJob, you must provide the next parameter.

JobCountSpecificNameA unique SAP job ID number for a specific job (that is, for when JobCount is set to SpecificJob)

NewJobName

Name of the newly created job
StartCondition

Specifies when the job should run, one of the following:

  • ASAP — Job runs as soon as a background work process is available for it in SAP (the default). If the job cannot start immediately, it is transformed in SAP into a time-based job.
  • Immediate — Job runs immediately. If there are no work processes available to run it, the job fails.
  • AfterEvent — Job waits for an event that you specify (in the next two parameters) to be triggered.

AfterEvent

The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent)
AfterEventParameters

Parameters in the SAP event to watch for.

Use space characters to separate multiple parameters.

RerunFromPointOfFailure

Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default)

Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun.

CopyFromStep

The number of a specific step in the SAP R/3 job from which to rerun or copy

The default is step 1 (that is, the beginning of the job).

Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored.

PostJobActionThis object groups together several parameters that control post-job actions for the SAP R/3 job.
Spool

How to manage spool output, one of the following options:

  • DoNotCopy (the default)
  • CopyToOutput
  • CopyToFile
SpoolFileThe file to which to copy the job's spool output (if Spool is set to CopyToFile)
SpoolSaveToPDFWhether to save the job's spool output in PDF format (if Spool is set to CopyToFile)
JobLog

How to manage job log output, one of the following options:

  • DoNotCopy
  • CopyToOutput (the default)
  • CopyToFile
JobLogFileThe file to which to copy the job's log output (if JobLog is set to CopyToFile)
JobCompletionStatusWillDependOnApplicationStatus

Whether job completion status depends on SAP application status, either true or false (the default)

DetectSpawnedJobThis object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job
DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default)
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify
JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

StartSpawnedJob

Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)

JobEndInControlMOnlyAftreChildJobsCompleteOnSapWhether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)
JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.

Job:SAP:BW:ProcessChain

This job type runs and monitors a Process Chain in SAP Business Warehouse (SAP BW).

NOTE: For the job that you define through Control-M Automation API to work properly, ensure that the Process Chain defined in the SAP BW system has Start Using Meta Chain or API as the start condition for the trigger process (Start Process) of the Process Chain. To configure this parameter, from the SAP transaction RSPC, right-click the trigger process and select Maintain Variant.

The following example shows how to use Job:SAP:BW:ProcessChain:

"JobSapBW": {
    "Type": "Job:SAP:BW:ProcessChain",
    "ConnectionProfile": "PI4-BW",
    "ProcessChainDescription": "SAP BW Process Chain",
    "Id": "123456",
    "RerunOption": "RestartFromFailiurePoint",
    "EnablePeridoicJob": true,
    "ConsiderOnlyOverallChainStatus": true,
    "RetrieveLog": false,
    "DetectSpawnedJob": {
        "DetectAndCreate": "SpecificJobDefinition",
        "JobName": "ChildJob",
        "StartSpawnedJob": false,
        "JobEndInControlMOnlyAftreChildJobsCompleteOnSap": false,
        "JobCompletionStatusDependsOnChildJobsStatus": false
        }
    }

This SAP job object uses the following parameters:

ConnectionProfileName of the SAP connection profile to use for the connection.
ProcessChainDescription

The description of the Process Chain that you want to run and monitor, as defined in SAP BW.

Maximum length of the textual description: 60 characters

IdID of the Process Chain that you want to run and monitor.
RerunOption

The rerun policy to apply to the job after job failure, one of the following values:

  • RestartFromFailiurePoint — Restart the job from the point of failure (the default)
  • RerunFromStart — Rerun the job from the beginning
EnablePeridoicJob

Whether the first run of the Process Chain prepares for the next run. This is useful for reruns when big Process Chains are scheduled.

Values are either true (the default) or false.

ConsiderOnlyOverallChainStatus

Whether to view only the status of the overall Process Chain.

Values are either true or false (the default).

RetrieveLog

Whether to add the Process Chain logs to the job output.

Values are either true (the default) or false.

DetectSpawnedJobThis object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job
    DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default).
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify.
    JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

    StartSpawnedJobWhether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)
    JobEndInControlMOnlyAftreChildJobsCompleteOnSapWhether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)
    JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.

  Job:SAP:BW:InfoPackage

This job type runs and monitors an InfoPackage that is pre-defined in SAP Business Warehouse (SAP BW).

The following example shows how to use Job:SAP:BW:InfoPackage

"JobSapBW": {
	"Type": "Job:SAP:BW:InfoPackage",
	"ConnectionProfile": "PI4-BW",
	"CreatedBy": "emuser1",
	"Description": "description of the job",
	"RunAs": "ProductionUser",
	"InfoPackage": {
		"BackgroundJobName": "Background job name",
		"Description": "description of the InfoPackage",
		"TechName": "LGXT565_TGHBNS453BGHJ784"
	}
}

This SAP job object uses the following parameters:

ConnectionProfile

Defines the name of the SAP connection profile to use for the job.

1-30 characters. Case sensitive. No blanks.

CreatedByDefines the name of the user that creates the job.
Description(Optional) Describes the job.
RunAs(Optional) Defines a Run As user, an account that is used to log in to the host.
InfoPackageAn object that groups together the parameters that describe the InfoPackage.
    BackgroundJobName

Defines the InfoPackage background job name. 1-25 characters.

    Description(Optional) Describes the InfoPackage.
    TechNameDefines a unique SAP BW generated InfoPackage ID.


Back to top

Job:PeopleSoft

PeopleSoft-type jobs enable you to manage PeopleSoft jobs and processes through the Control-M environment. To manage PeopleSoft-type jobs, you must have the Control-M for PeopleSoft plug-in installed in your Control-M environment.

The following example shows the JSON code used to define a PeopleSoft job.

"PeopleSoft_job": {
     "Type": "Job:PeopleSoft",
     "ConnectionProfile": "PS_CONNECT",
     "User": "PS_User3",
     "ControlId": "ControlId",
     "ServerName": "ServerName",
     "ProcessType": "ProcessType",
     "ProcessName": "ProcessName",
     "AppendToOutput": false,
     "BindVariables": ["value1","value2"],
     "RunAs": "controlm"
}

This PeopleSoft job object uses the following parameters:

ConnectionProfileName of the PeopleSoft connection profile to use for the connection
UserA PeopleSoft user ID that exists in the PeopleSoft Environment
ControlId

Run Control ID for access to run controls at runtime

ServerNameThe name of the server on which to run the PeopleSoft job or process
ProcessTypeA PeopleSoft process type that the user is authorized to perform
ProcessNameThe name of the PeopleSoft process to run

AppendToOutput

Whether to include PeopleSoft job output in the Control-M job output, either true or false

The default is false.

BindVariablesValues of up to 20 USERDEF variables for sharing data between Control-M and the PeopleSoft job or process

Back to top

Job:ApplicationIntegrator

Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see Application Integrator.

The following example shows the JSON code used to define a job type named AI Monitor Remote Job:

"JobFromAI" : {
    "Type": "Job:ApplicationIntegrator:AI Monitor Remote Job",
    "ConnectionProfile": "AI_CONNECTION_PROFILE",
    "AI-Host": "Host1",
    "AI-Port": "5180",
    "AI-User Name": "admin",
    "AI-Password": "*******",
    "AI-Remote Job to Monitor": "remoteJob5",
    "RunAs": "controlm"
}

In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties will be created in the Control-M Application Integrator, and they must be prefixed with "AI-" in the .json code.

The following images show the corresponding settings in the Control-M Application Integrator, for reference purposes.

  • The name of the job type appears in the Name field in the job type details.
  • Job properties appear in the Job Type Designer, in the Connection Profile View and the Job Properties View.
    When defining these properties through the .json code, you prefix them with "AI-", except for the property that specifies the name of the connection profile.


Back to top

Job:Informatica

Informatica-type jobs enable you to automate Informatica workflows through the Control-M environment. To manage Informatica-type jobs, you must have the Control-M for Informatica plug-in installed in your Control-M environment.

The following example shows the JSON code used to define an Informatica job.

"InformaticaApiJob": {
    "Type": "Job:Informatica",
    "ConnectionProfile": "INFORMATICA_CONNECTION",
    "RepositoryFolder": "POC",
    "Workflow": "WF_Test",
    "InstanceName": "MyInstamce",
    "OsProfile": "MyOSProfile",
    "WorkflowExecutionMode": "RunSingleTask",
    "RunSingleTask": "s_MapTest_Success",
    "WorkflowRestartMode": "ForceRestartFromSpecificTask",
    "RestartFromTask": "s_MapTest_Success",
    "WorkflowParametersFile": "/opt/wf1.prop"
}

This Informatica job object uses the following parameters:

ConnectionProfileName of the Informatica connection profile to use for the connection
RepositoryFolderThe Repository folder that contains the workflow that you want to run
WorkflowThe workflow that you want to run in Control-M for Informatica
InstanceName(Optional) The specific instance of the workflow that you want to run
OsProfile(Optional) The operating system profile in Informatica

WorkflowExecutionMode

The mode for executing the workflow, one of the following:

  • RunWholeWorkflow — run the whole workflow
  • StartFromTask — start running the workflow from a specific task, as specified by the StartFromTask parameter
  • RunSingleTask — run a single task in the workflow, as specified by the RunSingleTask parameter
  StartFromTask

The task from which to start running the workflow

This parameter is required only if you set WorkflowExecutionMode to StartFromTask.

  RunSingleTask

The workflow task that you want to run

This parameter is required only if you set WorkflowExecutionMode to RunSingleTask.

Depth

The number of levels within the workflow task hierarchy for the selection of workflow tasks

Default: 10 levels

EnableOutput

Whether to include the workflow events log in the job output (either true or false)

Default: true

EnableErrorDetails

Whether to include a detailed error log for a workflow that failed (either true or false)

Default: true

WorkflowRestartMode

The operation to execute when the workflow is in a suspended status, one of the following:

  • Recover — recover the suspended workflow
  • ForceRestart — force a restart of the suspended workflow
  • ForceRestartFromSpecificTask — force a restart of the suspended workflow from a specific task, as specified by the RestartFromTask parameter
  RestartFromTask

The task from which to restart a suspended workflow

This parameter is required only if you set WorkflowRestartMode to ForceRestartFromSpecificTask

WorkflowParametersFile

(Optional) The path and name of the workflow parameters file

This enables you to use the same workflow for different actions.
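
As a complementary sketch to the example above, the following hypothetical job starts the same workflow from a specific task and recovers it if it is suspended. The folder, workflow, and task names are placeholders:

"InformaticaStartFromTaskJob": {
    "Type": "Job:Informatica",
    "ConnectionProfile": "INFORMATICA_CONNECTION",
    "RepositoryFolder": "POC",
    "Workflow": "WF_Test",
    "WorkflowExecutionMode": "StartFromTask",
    "StartFromTask": "s_MapTest_Success",
    "WorkflowRestartMode": "Recover"
}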

Back to top

Job: Informatica CS

Informatica Cloud Services (CS) jobs enable you to automate your Informatica workflows for multi-cloud and on-premises data integration through the Control-M environment.

To deploy and run an Informatica CS job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Informatica CS integration using the deploy jobtype command.

The following example shows the JSON code used to define an Informatica CS job:

"InformaticaCloudCSJob": {
    "Type": "Job:Informatica CS",
    "ConnectionProfile": "INFORMATICA_CS_CONNECTION", 
    "Task Type": "Synchronization task", 
    "Task Name": "Synchronization Task1", 
    "Call Back URL": "", 
    "Verification Poll Interval (in seconds)": "10"
  }

The following example shows the JSON code used to define an Informatica CS job for a taskflow:

"InformaticaCloudCSJob": {
    "Type": "Job:Informatica CS",
    "ConnectionProfile": "INFORMATICA_CS_CONNECTION", 
    "Task Type": "Taskflow",  
    "TaskFlow URL": "https://xxx.dm-xx.informaticacloud.com/active-bpel/rt/xyz",
    "Input Fields": "input1=val1&input2=val2&input3=val3",
    "Call Back URL": "", 
    "Verification Poll Interval (in seconds)": "10"
  }

This Informatica CS job object uses the following parameters:

ConnectionProfileName of the Informatica CS connection profile to use for the connection to Informatica Cloud
Task Type

The type of task to run on Informatica Cloud. The available options are: 

  • Mapping task 
  • Masking task 
  • PowerCenter task 
  • Replication task 
  • Synchronization task 
  • Linear taskflow
  • Taskflow
Task Name

The name of the task to execute on Informatica Cloud

This parameter is not relevant for a taskflow.

TaskFlow URL

(For taskflow) The Service URL of the taskflow to execute on Informatica Cloud

In Informatica Data Integration, you can obtain this Service URL through the Properties Detail option of the taskflow.

Input Fields

(For taskflow) Input fields for a taskflow, expressed as input=value pairs separated by the & character

Call Back URL(Optional) A publicly available URL to which to post the job status 
Verification Poll Interval (in seconds) Number of seconds between polls for job status verification

Back to top

Job:AWS

AWS-type jobs enable you to automate a select list of AWS services through Control-M Automation API. To manage AWS-type jobs, you must have the Control-M for AWS plug-in installed in your Control-M environment.

The following JSON objects are available for creating AWS-type jobs.

For the following additional job types, you must have the Control-M Application Integrator plug-in installed, and you must deploy the relevant integrations using the deploy jobtype command.

Job:AWS:Lambda

The following example shows how to define a job that executes an AWS Lambda service on an AWS server.

"AwsLambdaJob": {
    "Type": "Job:AWS:Lambda",
    "ConnectionProfile": "AWS_CONNECTION",
    "FunctionName": "LambdaFunction",
    "Version": "1",
    "Payload" : "{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}" 
    "AppendLog": true
}

This AWS job object uses the following parameters :

FunctionName

The Lambda function to execute

Version

(Optional) The Lambda function version

The default is $LATEST (the latest version).

Payload

(Optional) The Lambda function payload, in JSON format

Escape all special characters.

AppendLogWhether to add the log to the job’s output, either true (the default) or false

Job:AWS:StepFunction

The following example shows how to define a job that executes an AWS Step Function service on an AWS server.

"AwsLambdaJob": {
    "Type": "Job:AWS:StepFunction",
    "ConnectionProfile": "AWS_CONNECTION",
    "StateMachine": "StateMachine1",
    "ExecutionName": "Execution1",
    "Input": ""{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}" ",
    "AppendLog": true
}

This AWS job object uses the following parameters :

StateMachine

The State Machine to use

ExecutionNameA name for the execution
Input

The Step Function input in JSON format

Escape all special characters.

AppendLogWhether to add the log to the job’s output, either true (the default) or false

Job:AWS:Batch

The following example shows how to define a job that executes an AWS Batch service on an AWS server.

"AwsLambdaJob": {
    "Type": "Job:AWS:Batch",
    "ConnectionProfile": "AWS_CONNECTION",
    "JobName": "batchjob1",
    "JobDefinition": "jobDef1",
    "JobDefinitionRevision": "3",
    "JobQueue": "queue1",
    "AWSJobType": "Array",
    "ArraySize": "100",
    "DependsOn": {
        "DependencyType": "Standard",
        "JobDependsOn": "job5"
        },
    "Command": [ "ffmpeg", "-i" ],
    "Memory": "10",
    "vCPUs": "2",
    "JobAttempts": "5",
    "ExecutionTimeout": "60",
    "AppendLog": false
}

This AWS job object uses the following parameters :

JobName

The name of the batch job

JobDefinitionThe job definition to use
JobDefinitionRevisionThe job definition revision
JobQueueThe queue to which the job is submitted
AWSJobTypeThe type of job, either Array or Single
ArraySize

(For a job of type Array) The size of the array (that is, the number of items in the array)

Valid values: 2–10000

DependsOnParameters that determine a job dependency
    DependencyType

(For a job of type Array) Type of dependency, one of the following values:

  • Standard
  • Sequential
  • N-to-N
   JobDependsOn

The JobID upon which the Batch job depends

This parameter is mandatory for a Standard or N-to-N dependency, and optional for a Sequential dependency.

CommandA command to send to the container that overrides the default command from the Docker image or the job definition
Memory

The number of megabytes of memory reserved for the job

Minimum value: 4 megabytes

vCPUsThe number of vCPUs to reserve for the container
JobAttempts

The number of retry attempts

Valid values: 1–10

ExecutionTimeoutThe timeout duration in seconds
AppendLogWhether to add the log to the job’s output, either true (the default) or false
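
For example, a hedged sketch of an array job with an N-to-N dependency on another Batch job; the job name, queue, and dependent job ID are placeholders:

"AwsBatchArrayJob": {
    "Type": "Job:AWS:Batch",
    "ConnectionProfile": "AWS_CONNECTION",
    "JobName": "batchjob2",
    "JobDefinition": "jobDef1",
    "JobDefinitionRevision": "3",
    "JobQueue": "queue1",
    "AWSJobType": "Array",
    "ArraySize": "50",
    "DependsOn": {
        "DependencyType": "N-to-N",
        "JobDependsOn": "job7"
        },
    "JobAttempts": "3",
    "ExecutionTimeout": "120",
    "AppendLog": true
}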

Job:AWS Glue

The following example shows how to define a job that executes Amazon Web Services (AWS) Glue, a serverless data integration service.

To deploy and run an AWS Glue job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the AWS Glue integration using the deploy jobtype command.

"AwsGlueJob": {
  "Type": "Job:AWS Glue",
  "ConnectionProfile": "GLUECONNECTION",
  "Glue Job Name": "AwsGlueJobName",
  "Glue Job Arguments": "checked",
  "Arguments": "{\"--myArg1\": \"myVal1\", \"--myArg2\": \"myVal2\"}",
  "Status Polling Frequency": "20"
}

The AWS Glue job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to the AWS Glue service

Glue Job NameThe name of the AWS Glue job that you want to execute. 
Glue Job Arguments

Whether to enable specification of arguments to be passed when running the AWS Glue job (see next property).

Values are checked or unchecked. The default is unchecked.

Arguments

(Optional) Specific arguments to pass when running the AWS Glue job

Format: {\"--myArg1\": \"myVal1\", \"--myArg2\": \"myVal2\"}

For more information about the available arguments, see Special Parameters Used by AWS Glue in the AWS documentation.

Status Polling Frequency

(Optional) Number of seconds to wait before checking the status of the job.

Default: 30

Job:AWS Glue DataBrew

The following example shows how to define an AWS Glue DataBrew job that you can use to visualize your data and publish it to the Amazon S3 Data Lake. 

To deploy and run an AWS Glue DataBrew job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the AWS Glue DataBrew integration using the deploy jobtype command.

"AWS Glue DataBrew_Job": {
	"Type": "Job:AWS Glue DataBrew",
	"ConnectionProfile": "AWSDATABREW",
	"Job Name": "databrew-job",
	"Output Job Logs": "checked",
	"Status Polling Frequency": "10",
	"Failure Tolerance": "2"
}

The AWS Glue DataBrew job object uses the following parameters:

ConnectionProfile

Defines the name of a connection profile to use to connect to AWS Glue DataBrew.

Job Name

Defines the AWS Glue DataBrew job name.

Output Job Logs

Determines whether the DataBrew job logs are included in the Control-M output.

Values: checked | unchecked

Default: unchecked

Status Polling Frequency

Determines the number of seconds to wait before checking the status of the DataBrew job.

Default: 10 seconds

Failure Tolerance

Determines the number of times the job tries to run before ending Not OK.

Default: 2

Job:AWS EMR

The following example shows how to define a job that executes Amazon Web Services (AWS) EMR to run big data frameworks.

To deploy and run an AWS EMR job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the AWS EMR integration using the deploy jobtype command.

"AWS EMR_Job_2": {
  "Type": "Job:AWS EMR",
  "ConnectionProfile": "AWS_EMR",
  "Cluster ID": "j-21PO60WBW77GX",
  "Notebook ID": "e-DJJ0HFJKU71I9DWX8GJAOH734",
  "Relative Path": "ShowWaitingAndRunningClusters.ipynb",
  "Notebook Execution Name": "TestExec",
  "Service Role": "EMR_Notebooks_DefaultRole",
  "Use Advanced JSON Format": "unchecked",
}

The AWS EMR job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to the AWS EMR service.

Cluster ID

Defines the name of the AWS EMR cluster to connect to the Notebook.

Also known as the Execution Engine ID (in the EMR API).

Notebook ID

Determines which Notebook ID executes the script. 

Also known as the Editor ID (in the EMR API).

Relative PathDefines the full path and name of the script file in the Notebook.
Notebook Execution NameDefines the job execution name.
Service RoleDefines the service role to connect to the Notebook.
Use Advanced JSON Format

Enables you to provide Notebook execution information through JSON code.

Values: checked or unchecked (the default)

When you set this parameter to checked, the JSON Body parameter (see below) replaces several other parameters discussed above (Cluster ID, Notebook ID, Relative Path, Notebook Execution Name, and Service Role).

JSON Body

Defines Notebook execution settings in JSON format. For a description of the syntax of this JSON, see the description of StartNotebookExecution in the Amazon EMR API Reference.

Example:

{
"EditorId": "e-DJJ0HFJKU71I9DWX8GJAOH734",
"RelativePath": "ShowWaitingAndRunningClustersTest2.ipynb",
"NotebookExecutionName":"Tests",
"ExecutionEngine": {
   "Id": "j-AR2G6DPQSGUB"
},
"ServiceRole": "EMR_Notebooks_DefaultRole"
}
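
For reference, here is a hedged sketch of an AWS EMR job that uses the advanced JSON format, in which the JSON Body parameter replaces the individual Notebook parameters. Whether the JSON Body value must be passed as an escaped string may depend on your deployment, and the IDs below are reused from the examples above as placeholders:

"AWS EMR_Job_Advanced": {
  "Type": "Job:AWS EMR",
  "ConnectionProfile": "AWS_EMR",
  "Use Advanced JSON Format": "checked",
  "JSON Body": "{\"EditorId\": \"e-DJJ0HFJKU71I9DWX8GJAOH734\", \"RelativePath\": \"ShowWaitingAndRunningClusters.ipynb\", \"NotebookExecutionName\": \"Tests\", \"ExecutionEngine\": {\"Id\": \"j-21PO60WBW77GX\"}, \"ServiceRole\": \"EMR_Notebooks_DefaultRole\"}"
}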

Job:AWSEC2

The following example shows how to define a job that performs operations on an AWS EC2 Virtual Machine (VM).

To deploy and run an AWS EC2 job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the AWS EC2 integration using the deploy jobtype command.

"AWSEC2_create": {
   "Type": "Job:AWSEC2",
   "ConnectionProfile": "AWSEC2",
   "Operations": "Create",
   "Placement Availability Zone": "us-west-2c",
   "Instance Type": "m1.small",
   "Subnet ID": "subnet-00aa899a7db25494d",
   "Key Name": "ksu-aws-ec2-key-pair",
   "Get Instances logs": "unchecked"
}

The AWS EC2 job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to the AWS EC2 Virtual Machine.

Operations

Determines one of the following operations to perform on the AWS EC2 Virtual Machine:

  • Create: Create a new virtual machine.
  • Create from Instance Template: Create a new virtual machine based on a template.    
  • Start: Start an existing virtual machine.    
  • Stop: Stop a running virtual machine.    
  • Reboot: Reset a virtual machine.    
  • Terminate: Delete an existing virtual machine.   
Launch Template IDDefines the template to use to create a VM from a template.
Instance ID

Defines the name of the VM instance where you want to run the operation.

This parameter is available for all operations except for the Create operations.

Instance NameDefines the name of a new VM instance for Create operations.
Placement Availability ZoneDetermines which AWS EC2 zone to use for a Create operation.
Instance TypeDetermines the hardware profile (such as CPU, memory, and storage capacity) of the host computer when you create a new AWS EC2 Virtual Machine.
Subnet ID

Defines the Subnet ID that is required to launch the instance in a Create operation.

Key Name

Defines the security credential key set for a Create operation.

Image ID 

Defines the ID of the Amazon Machine Image (AMI) that is required to launch the instance in a Create operation.

Number of copies

Number of copies of the VM to create in a Create operation.

Default: 1

Get Instance logs

Determines whether to display logs from the AWS EC2 instance at the end of the job output.

This parameter is available for all operations except for the Terminate operation.

Values:   checked|unchecked

Default: unchecked

Verification Poll Interval

Determines the number of seconds to wait before job status verification.

Default: 15 seconds

Tolerance 

Determines the number of retries to rerun a job.

Default: 2 times
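
For operations other than Create, the job identifies the target VM by its Instance ID. The following is a minimal sketch of a Stop operation, using placeholder values:

"AWSEC2_stop": {
   "Type": "Job:AWSEC2",
   "ConnectionProfile": "AWSEC2",
   "Operations": "Stop",
   "Instance ID": "i-0abcd1234efgh5678",
   "Get Instances logs": "unchecked",
   "Verification Poll Interval": "15",
   "Tolerance": "2"
}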

Back to top

Job:Azure

9.0.19.220 The Azure job type enables you to automate workflows that include a select list of Azure services. To manage Azure-type jobs, you must have the Control-M for Azure plug-in installed in your Control-M environment.

The following JSON objects are available for creating Azure-type jobs.

Additional job types are provided for the following Azure services. To support these job types, you must have each of the relevant Control-M Application Integrator plug-ins installed and you must deploy each integration using the deploy jobtype command.

Job:Azure:Function

The following example shows how to define a job that executes an Azure function service.

"AzureFunctionJob": {
  "Type": "Job:Azure:Function",
  "ConnectionProfile": "AZURE_CONNECTION",
  "AppendLog": false,
  "Function": "AzureFunction",
  "FunctionApp": "AzureFunctionApp",
  "Parameters": [
      {"firstParamName": "firstParamValue"},
	  {"secondParamName": "secondParamValue"}
  ]
}

This Azure job object uses the following parameters :

Function

The name of the Azure function to execute

FunctionAppThe name of the Azure function app
Parameters(Optional) Function parameters defined as pairs of name and value. 
AppendLog(Optional) Whether to add the log to the job’s output, either true (the default) or false

Job:Azure:LogicApps

The following example shows how to define a job that executes an Azure Logic App service.

"AzureLogicAppJob": {
  "Type": "Job:Azure:LogicApps",
  "ConnectionProfile": "AZURE_CONNECTION",
  "LogicAppName": "MyLogicApp",
  "RequestBody": "{\\n  \"name\": \"BMC\"\\n}",
  "AppendLog": false
}

This Azure job object uses the following parameters :

LogicAppName

The name of the Azure Logic App

RequestBody(Optional) The JSON for the expected payload
AppendLog(Optional) Whether to add the log to the job’s output, either true (the default) or false

Job:Azure:BatchAccount

The following example shows how to define a job that executes an Azure Batch Accounts service.

"AzureBatchJob": {
  "Type": "Job:Azure:BatchAccount",
  "ConnectionProfile": "AZURE_CONNECTION",
  "JobId": "AzureJob1",
  "CommandLine": "echo \"Hello\"",
  "AppendLog": false,
  "Wallclock": {
    "Time": "770",
    "Unit": "Minutes"
  },
  "MaxTries": {
    "Count": "6",
    "Option": "Custom"
  },
  "Retention": {
    "Time": "1",
    "Unit": "Hours"
  }
}

This Azure job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to Azure Data Factory

JobId

The ID of the batch job

CommandLineA command line that the batch job runs
AppendLog(Optional) Whether to add the log to the job’s output, either true (the default) or false
Wallclock

(Optional) Maximum limit for the job's run time

If you do not include this parameter, the default is unlimited run time.

Use this parameter to set a custom time limit. Include the following next-level parameters:

  • Time — number (of the specified time unit), 0 or higher
  • Unit — time unit, one of the following: Seconds, Minutes, Hours, Days
MaxTries

(Optional) The number of times to retry running a failed task

If you do not include this parameter, the default is none (no retries).

Use this parameter to choose between the following options:

  • Unlimited number of retries. For this option, include the following next-level parameter:
    "Option": "Unlimited"
  • Custom number of retries, 1 or higher. For this option, include the following next-level parameters:
    • "Count": number
    • "Option": "Custom"
Retention

(Optional) File retention period for the batch job

If you do not include this parameter, the default is an unlimited retention period.

Use this parameter to set a custom time limit for retention. Include the following next-level parameters:

  • Time — number (of the specified time unit), 0 or higher
  • Unit — time unit, one of the following: Seconds, Minutes, Hours, Days

Job:ADF

The following example shows how to define a job that executes an Azure Data Factory (ADF) service, a cloud-based ETL and data integration service that allows you to create data-driven workflows to automate the movement and transformation of data.

To deploy and run an ADF job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the ADF integration using the deploy jobtype command.

"AzureDataFactoryJob": {
      "Type": "Job:ADF",
      "ConnectionProfile": "DataFactoryConnection",
      "Resource Group Name": "AzureResourceGroupName",
      "Data Factory Name": "AzureDataFactoryName",
      "Pipeline Name": "AzureDataFactoryPipelineName",
      "Parameters": "{\"myVar\":\"value1\", \"myOtherVar\": \"value2\"}",
      "Status Polling Frequency": "20"
}

The ADF job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to Azure Data Factory

Resource Group Name


The Azure Resource Group that is associated with a specific data factory pipeline. A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.
Data Factory Name
The Azure Data Factory Resource to use to execute the pipeline
Pipeline Name
The data pipeline to run when the job is executed
Parameters

Specific parameters to pass when the Data Pipeline runs, defined as pairs of name and value

Format: {\"var1\":\"value1\", \"var2\":\"value2\"}

Status Polling Frequency

(Optional) Number of seconds to wait before checking the status of the job.

Default: 30

Job:Azure Databricks

The following example shows how to define a job that executes the Azure Databricks service, a cloud-based data analytics platform that enables you to process large workloads of data.

To deploy and run an Azure Databricks job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure Databricks integration using the deploy jobtype command.

"Azure Databricks notebook": {
      "Type": "Job:Azure Databricks",
      "ConnectionProfile": "AZURE_DATABRICKS",
      "Databricks Job ID: "65",
      "Parameters": "\"notebook_params\":{\"param1\":\"val1\", \"param2\":\"val2\"}",
      "Idempotency Token": "Control-M-Idem_%%ORDERID",
      "Status Polling Frequency": "30"
}

The Azure Databricks job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to the Azure Databricks workspace

Databricks Job ID

The job ID created in your Databricks workspace
Parameters

Task parameters to override when the job runs, according to the Databricks convention. The list of parameters must begin with the name of the parameter type. For example:

  • "notebook_params":{"param1":"val1", "param2":"val2"}
  •  "jar_params": ["param1", "param2"]

For more information about the parameter types, review the properties of RunParameters in the OpenAPI specification provided through the Azure Databricks documentation.

For no parameters, specify a value of "params": {}. For example:
"Parameters": "params": {}

Idempotency Token

(Optional) A token to use to rerun job runs that timed out in Databricks

Values:

  • Control-M-Idem_%%ORDERID — With this token, upon rerun, Control-M invokes the monitoring of the existing job run in Databricks. Default.
  • Any other value  — Replaces the Control-M idempotency token. When you rerun a job using a different token, Databricks creates a new job run with a new unique run ID.
Status Polling Frequency

(Optional) Number of seconds to wait before checking the status of the job.

Default: 30

Job: AzureFunctions

The following example shows how to define a job that executes a cloud-based Azure Function for serverless application development.

To deploy and run an Azure Functions job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure Functions integration using the deploy jobtype command.

"AzureFunction": {
      "Type": "Job:AzureFunction", 
      "ConnectionProfile": "AZUREFUNCTIONS", 
      "Function App": "new-function", 
      "Function Name": "Hello", 
      "Optional Input Parameters": "\"{\"param1\":\"val1\", \"param2\":\"val2\"}\"",  
}

The Azure Functions job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to the Azure Functions workspace

Function App

The name of function application that contains your function 
Function NameThe name of the function that you want to run
Optional Input Parameters

Specific parameters to pass when the function runs, defined as pairs of name and value

Format: {\"param1\":\"val1\", \"param2\":\"val2\"}

For no parameters, specify {}.

Job:Azure Batch Accounts

The following example shows how to define a job that executes cloud-based Azure Batch Accounts for large-scale compute-intensive tasks.

To deploy and run an Azure Batch Accounts job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure Batch Accounts integration using the deploy jobtype command.

"Azure Batch Accounts_Job_2": {
  "Type": "Job:Azure Batch Accounts",
  "ConnectionProfile": "AZURE_BATCH",
  "Batch Job ID": "abc-jobid",
  “Task ID Prefix”: "ctm",
  "Task Command Line": "cmd /c echo hello from Control-M",
  "Max Wall Clock Time": "Custom",
  "Max Wall Time Digits": "3",
  "Max Wall Time Unit": "Minutes",
  "Max Task Retry Count": "Custom",
  "Retry Number": "3",
  "Retention Time": "Custom",
  "Retention Time Digits": "4",
  "Retention Time Unit": "Days",
  "Append Log to Output": "checked",
  "Status Polling Interval": "20"
  }

The Azure Batch Accounts job object uses the following parameters :

ConnectionProfile

Determines which connection profile to use to connect to Azure Batch.

Batch Job ID

Defines the name of the Batch Account Job created in Azure Portal.

Task ID PrefixDefines a prefix string to append to the task ID.
Task Command Line

Defines the command line that runs your application or script on the compute node. The task is added to the job at runtime.

Max Wall Clock Time

Defines a maximum time limit for the job run, with the following possible values:

  • Unlimited
  • Custom

Default: Unlimited

Max Wall Time Digits

Defines the number (of the specified time unit) for a custom maximum time limit.

Default: 1

Max Wall Time Unit

Defines one of the following time units for a custom maximum time limit:

  • Seconds
  • Minutes
  • Hours
  • Days

Default: Minutes

Max Task Retry Count

Defines a maximum number of times to retry running a failed task, with the following possible values:

  • Unlimited
  • Custom
  • None

Default: None

Retry Number

Defines the number of retries for a custom task retry count.

Default: 1

Retention Time

Defines a minimum period of time for retention of the Task directory of the batch job, with the following possible values:

  • Unlimited — according to the Azure default (7 days, unless the compute node is removed or the job is deleted)
  • Custom

Default: Unlimited

Retention Time Digits

Defines the number (of the specified time unit) for a custom retention period.

Default: 1

Retention Time Unit

Defines one of the following time units for a custom retention period:

  • Seconds
  • Minutes
  • Hours
  • Days

Default: Hours

Append Log to Output

Whether to add task stdout.txt content to the plugin job output.

Values: checked|unchecked

Default: checked

Status Polling Interval

Number of seconds to wait before checking the status of the job.

Default: 20

Job:Azure Logic Apps

The following example shows how to define a job that executes an Azure Logic Apps service, which enables you to design and automate cloud-based workflows and integrations.

To deploy and run an Azure Logic Apps job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure Logic Apps integration using the deploy jobtype command.

"Azure Logic Apps Job": {
	"Type": "Job:Azure Logic Apps",
	"ConnectionProfile": "AZURE_LOGIC_APPS",
	"Workflow": "tb-logic",
	"Parameters": "{\"bodyinfo\":\"hello from CM\",\"param2\":\"value2\"}",
	"Get Logs": "unchecked",
	"Failure Tolerance": "2",
	"Status Polling Frequency": "20",
}

This Azure job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to Azure.

Workflow

Determines which of the Consumption logic app workflows to run from your predefined set of workflows.

Note: This job does not run Standard logic app workflows.

Parameters

Defines parameters that enable you to control the presentation of data.

Rules:

  • Characters: 2–4,000
  • Format: JSON
  • If you are not adding parameters, type {}
Get LogsDetermines whether to display the job output when the job ends.
Failure Tolerance

Determines the number of times the job tries to run before ending Not OK.

Default: 2

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 20

Job:Azure Synapse

The following example shows how to define a job that performs data integration and analytics using the Azure Synapse Analytics service.

To deploy and run an Azure Synapse job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure Synapse integration using the deploy jobtype command.

"Azure Synapse_Job": {
    "Type": "Job:Azure Synapse",
    "ConnectionProfile": "AZURE_SYNAPSE",
    "Pipeline Name": "ncu_synapse_pipeline",
    "Parameters": "{\"periodinseconds\":\"40\", \"param2\":\"val2\"}",
    "Status Polling Interval": "20"
}

The Azure Synapse job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to the Azure Synapse workspace.

Pipeline NameDefines the name of a pipeline that you defined in your Azure Synapse workspace.
Parameters

Defines pipeline parameters to override when the job runs, defined in JSON format as pairs of name and value.

Format: {\"param1\":\"val1\", \"param2\":\"val2\"}

For no parameters, specify {}.

Status Polling Interval

(Optional) Defines the number of seconds to wait before checking the status of the job.

Default: 20 seconds

Job:Azure HDInsight 

The following example shows how to define a job that collaborates with Azure HDInsight to run an Apache Spark batch job for big data analytics.

To deploy and run an Azure HDInsight job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure HDInsight integration using the deploy jobtype command.

"Azure HDInsight_Job": {
    "Type": "Job:Azure HDInsight",
    "ConnectionProfile": "AZUREHDINSIGHT",
    "Parameters": "{    
	   "file" : "wasb://<BlobStorageContainerName>@<StorageAccountName>.blob.core.windows.net/sample.jar",  
	   "args" : ["arg0", "arg1"],  
	   "className" : "com.sample.Job1",  
	   "driverMemory" : "1G",  
	   "driverCores" : 2,  
	   "executorMemory" : "1G",  
	   "executorCores" : 10,  
	   "numExecutors" : 10
     }",
    "Polling Intervals": "10",
    "Bring logs to output": "checked" 
}

The Azure HDInsight job object uses the following parameters:

ConnectionProfile

Defines the name of a connection profile to use to connect to the Azure HDInsight workspace.

Parameters

Defines parameters to be passed on the Apache Spark Application during job execution, in JSON format (name:value pairs).

This JSON must include the file and className elements.

For more information about common parameters, see Batch Job in the Azure HDInsight documentation.

Polling Intervals

Defines the number of seconds to wait before verification of the Apache Spark batch job.

Default: 10 seconds

Bring Logs to output

Determines whether logs from Apache Spark are shown in the job output.

Values: checked | unchecked

Default: unchecked

Job:Azure VM

The following example shows how to define a job that performs operations on an Azure Virtual Machine (VM).

To deploy and run an Azure VM job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Azure VM integration using the deploy jobtype command.

"Azure VM_update": {
    "Type": "Job:Azure VM",
    "ConnectionProfile": "AZUREVM",
    "VM Name": "tb-vm1",
    "Operation": "Create\\Update",
    "Input Parameters": "{\"key\": \"val\"}", 
    "Get Logs": "checked",  
    "Verification Poll Interval": "10",
    "Tolerance": "3"
}

The Azure VM job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to the Azure Virtual Machine.

VM Name

Defines the name of the Azure Virtual Machine to run the operation.

Operation

Determines one of the following operations to perform on the Azure Virtual Machine

  • Create\Update: Create a new virtual machine.
  • Delete: Delete an existing virtual machine.
  • Deallocate: Empty a virtual machine.
  • Reset: Reset a virtual machine.
  • Start: Start a virtual machine.
  • Stop: Stop a running virtual machine.
Input Parameters

Defines the input parameters in JSON format for a Create operation.

Format: {\"param1\":\"val1\", \"param2\":\"val2\"}

Get Logs

Determines whether to display logs from Azure VM at the end of the job output.

This parameter is available for all operations except for the Delete operation.

Values: checked|unchecked

Default: unchecked

Delete VM disk

Determines whether to delete the Azure Virtual Machine disk when you delete an Azure Virtual Machine.

Values: checked|unchecked

Default: unchecked

Verification Poll Interval

Determines the number of seconds to wait before job status verification.

Default: 15 seconds

Tolerance

Determines the number of retries to rerun a job.

Default: 2 times
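
As a complementary sketch, the following hypothetical job deletes the same VM together with its disk; all values are placeholders:

"Azure VM_delete": {
    "Type": "Job:Azure VM",
    "ConnectionProfile": "AZUREVM",
    "VM Name": "tb-vm1",
    "Operation": "Delete",
    "Delete VM disk": "checked",
    "Verification Poll Interval": "10",
    "Tolerance": "3"
}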

Back to top

Job:WebServices

9.0.20.220 The following examples show how to define a Web Services job for execution of standard web services, servlets, or RESTful web services. To manage Web Services jobs, you must have the Control-M for Web Services, Java, and Messaging (Control-M for WJM) plug-in installed in your Control-M environment.

The following example presents a Web Services job that receives input for a calculator service and outputs the result of a simple calculation:

"WebServices_Job": {
  "Type": "Job:WebServices",
  "Location": http://www.dneonline.com/calculator.asmx?WSDL,
  "SoapHeaderFile": "c:\\myheader.txt",
  "Service": "Calculator(Port:CalculatorSoap)",
  "Operation": "Add",
  "RequestType": "Parameter",
  "OverrideUrlEndpoint": http://myoverridehost.com,
  "OverrideContentType": "*/*",
  "HttpConnectionTimeout": "2345",
  "PreemptiveHttpAuthentication": abc@bmc.com,
  "IncludeTitleInOutput": true,
  "ExcludeJobOutput": false,
  "ConnectionProfile": "CALCULATOR",
  "Host": "host1",
  "OutputParameters": [
	{
	  "Element": "AddResponse.AddResult",
	  "HttpCode": "*",
	  "Destination": "testResultAdd",
	  "Type": "string"
	}
  ],
  "InputParameters": [
	{
	  "Name": "Add.intA",
	  "Value": "97",
	  "Type": "string"
	},
	{
	  "Name": "Add.intB",
	  "Value": "345",
	  "Type": "string"
	}
  ]
}

The following example presents a Web Services job that receives all input through a SOAP request to a calculator service and outputs the result of a simple calculation:

"WSSoapRequest": {
  "Type": "Job:WebServices",
  "SoapHeaderFile": "c:\\myheader.txt",
  "Location": http://www.dneonline.com/calculator.asmx?WSDL,
  "Service": "Calculator(Port:CalculatorSoap)",
  "Operation": "Add",
  "RequestType": "FreeText",
  "OverrideUrlEndpoint": http://myoverridehost.com,
  "OverrideContentType": "*/*",
  "HttpConnectionTimeout": "2345",
  "PreemptiveHttpAuthentication": abc@bmc.com,
  "IncludeTitleInOutput": true,
  "ExcludeJobOutput": false,
  "ConnectionProfile": "CALCULATOR",
  "Host": "host1",
  "OutputParameters": [
	{
	  "Element": "AddResponse.AddResult",
	  "HttpCode": "*",
	  "Destination": "testResultAdd",
	  "Type": "string"
	}
  ],
  "Request": [
	"<soapenv:Envelope xmlns:soapenv=http://schemas.xmlsoap.org/soap/envelope/ xmlns:tem=http://tempuri.org/>
	   <soapenv:Header/>
	   <soapenv:Body>
		  <tem:Add>
			 <tem:intA>98978</tem:intA>
			 <tem:intB>75675</tem:intB>
		  </tem:Add>
	   </soapenv:Body>
	</soapenv:Envelope>"
  ]
}

The following example presents a Web Services job that receives a SOAP request through an input file. It then uses the input file to submit a SOAP request to a calculator service and outputs the result of a simple calculation:

"WSSoapRequest_InputFile": {
  "Type": "Job:WebServices",
  "SoapHeaderFile": "c:\\myheader.txt",
  "Location": http://www.dneonline.com/calculator.asmx?WSDL,
  "Service": "Calculator(Port:CalculatorSoap)",
  "Operation": "Add",
  "RequestType": "InputFile",
  "OverrideUrlEndpoint": http://myoverridehost.com,
  "OverrideContentType": "*/*",
  "HttpConnectionTimeout": "2345",
  "PreemptiveHttpAuthentication": abc@bmc.com,
  "IncludeTitleInOutput": true,
  "ExcludeJobOutput": false,
  "ConnectionProfile": "CALCULATOR",
  "Host": "host1",
  "OutputParameters": [
	{
	  "Element": "AddResponse.AddResult",
	  "HttpCode": "*",
	  "Destination": "testResultAdd",
	  "Type": "string"
	}
  ],
  "InputFile": "/home/usr/soap.xml"
}

The following example presents a job that receives input for a calculator REST service and outputs the result of a simple calculation:

"REST_Service_Job": {
  "Type": "Job:WebServices",
  "Location": "http://www.dneonline.com",
  "Service": "/restAPI/calculator.asmx",
  "Operation": "PUT",
  "RequestType": "Parameter",
  "OverrideContentType": "*/*",
  "HttpConnectionTimeout": "2345",
  "PreemptiveHttpAuthentication": abc@bmc.com,
  "IncludeTitleInOutput": true,
  "ExcludeJobOutput": false,
  "ConnectionProfile": "CALCULATOR_REST",
  "Host": "host1",
  "OutputParameters": [
	{
	  "Element": "$AddResponse.AddResult",
	  "HttpCode": "*",
	  "Destination": "testResultAdd",
	  "Type": "string"
	}
  ],
  "InputParameters": [
	{
	  "Name": "intA",
	  "Value": "97",
	  "Type": "string"
	},
	{
	  "Name": "intB",
	  "Value": "345",
	  "Type": "string"
	},
	{
	  "Name": "accept-encoding",
	  "Value": "*/*",
	  "Type": "header"
	}
  ]
}
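
The following additional sketch is illustrative only (the endpoint, payload, and connection profile are assumptions, not taken from the product documentation); it shows a RESTful call that submits a free-text JSON body by setting RequestType to FreeText and passing the payload in the Request parameter:

"REST_FreeText_Job": {
  "Type": "Job:WebServices",
  "Location": "http://www.dneonline.com",
  "Service": "/restAPI/calculator.asmx",
  "Operation": "POST",
  "RequestType": "FreeText",
  "ConnectionProfile": "CALCULATOR_REST",
  "Host": "host1",
  "Request": [
	"{ \"intA\": \"97\", \"intB\": \"345\" }"
  ]
}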

The Web Services job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to the web service.

SoapHeaderFile

Web service only: The path to a file that contains a predefined SOAP Header to add to the invocation of the target web service SOAP message.

LocationFor a web service: a URL or a fully qualified filename that points to the WSDL of the web service. For a REST service: the base URL of the service.
Service

A service provided by the company or business.

For a Local File System, this means any service specified in the WSDL file.

For a Web Service URL, this means any service specified in the WSDL URL.

For a REST service, this means the path to the specific REST API.

Operation

For a Web service: An operation available for the specified service.

For a REST service: The HTTP method for REST job execution (GET, POST, PUT, DELETE, HEAD, or OPTIONS)

RequestType

The source of the payload request to submit. The payload can be either a SOAP request (in the case of a web service) or a JSON/XML string (in the case of a REST service). The source can be one of the following:

  • FreeText - Complete request entered in free text. See the Request parameter below.
  • Parameter - Request parameters (for a web service) or query parameters (for a REST service) provided through the InputParameters parameter.
  • InputFile - Complete request provided through a local file. See the InputFile parameter below.
OverrideUrlEndpoint

Web service only: The URL endpoint at the job definition level.

Upon job submission Control-M for Web Services uses the job definition's Endpoint URL, rather than the address location in the WSDL.

OverrideContentTypeA preferred HTTP header Content-Type to be used to execute the job.
HttpConnectionTimeoutThe maximum number of seconds to wait for the web service to respond before disconnecting.
PreemptiveHttpAuthentication

HTTP Basic Authentication information in the format of <user>@<realm>

This information must match the HTTP Basic Authentication information defined through the connection profile (not including the password).

IncludeTitleInOutput

Whether an Output banner is written to the Output at the end of job execution.

Values are true or false. The default is true.

ExcludeJobOutput

Whether to exclude information about job output from the Output at the end of job execution.

Values are true or false. The default is false.

OutputParameters

Details of the outcome of selected output parameters. For each output parameter, define the following subparameters:

  • Element - An element in the job output response, specified by its name or using an XPath or JSONPath expression.
  • HttpCode - HTTP code of the job response. Capture of the element to the destination is performed only for the specified HTTP code.
  • Destination - A fully qualified path in URI format or an AutoEdit variable to be assigned to the element.
  • Type - Format type of the element value (for example, string or integer)
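
For illustration, the following hedged snippet (the element paths, variable name, and file URI are hypothetical) captures one element into an AutoEdit variable for any HTTP code and writes another element to a file only when HTTP 200 is returned:

  "OutputParameters": [
	{
	  "Element": "AddResponse.AddResult",
	  "HttpCode": "*",
	  "Destination": "myResultVar",
	  "Type": "string"
	},
	{
	  "Element": "$.result.total",
	  "HttpCode": "200",
	  "Destination": "file:///home/user1/ws_output/total.txt",
	  "Type": "string"
	}
  ]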
InputParameters

Details of the input parameters required by the web service.

For each input parameter, define the following subparameters:

  • Name
  • Value
  • Type - Format type of the parameter value. For an HTTP header, enter header. For any other type, enter the type name that you defined (such as string or integer).
Request

Free-text request to submit to the service, one of the following:

  • For a web service: Full SOAP request
  • For a RESTful web service: JSON or XML string

This parameter is relevant only if RequestType is set to FreeText.

InputFile

The fully qualified path to an input file that contains a complete request to submit to the service — either a full SOAP request (for a web service), or a JSON or XML string (for a RESTful service).

This parameter is relevant only if RequestType is set to InputFile.

Back to top

Job:SLAManagement

SLA Management jobs enable you to identify a chain of jobs that comprise a critical service and must complete by a certain time. The SLA Management job is always defined as the last job in the chain of jobs.

To manage SLA Management jobs, you must have the SLA Management add-on (previously known as Control-M Batch Impact Manager) installed in your Control-M environment.

The following example shows the JSON code of a simple chain of jobs that ends with an SLA Management job. In this chain of jobs:

  • The first job is a Command job that prints Hello and then adds an event named Hello-TO-SLA_Job_for_SLA-GOOD.
  • The second (and last) job is an SLA Management job for a critical service named SLA-GOOD. This job waits for the event added by the first job and then deletes it.
{
  "SLARobotTestFolder_Good": {
    "Type": "SimpleFolder",
    "ControlmServer": "LocalControlM",
    "Hello": {
      "Type": "Job:Command",
      "CreatedBy": "emuser",
      "RunAs": "controlm",
      "Command": "echo \"Hello\"",
      "eventsToAdd": {
        "Type": "AddEvents",
        "Events": [
          {
            "Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
          }
        ]
      }
    },
    "SLA": {
      "Type": "Job:SLAManagement",
      "ServiceName": "SLA-GOOD",
      "ServicePriority": "1",
      "CreatedBy": "emuser",
      "RunAs": "DUMMYUSR",
      "JobRunsDeviationsTolerance": "2",
      "CompleteIn": {
        "Time": "00:01"
      },
      "eventsToWaitFor": {
        "Type": "WaitForEvents",
        "Events": [
          {
            "Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
          }
        ]
      },
      "eventsToDelete": {
        "Type": "DeleteEvents",
        "Events": [
          {
            "Event": "Hello-TO-SLA_Job_for_SLA-GOOD"
          }
        ]
      }
    }
  }
}

The following table lists the parameters that can be included in an SLA Management job:

ParameterDescription
ServiceName

A logical name, from a user or business perspective, for the critical service. BMC recommends that the service name be unique.

Names can contain up to 64 alphanumeric characters.

ServicePriority

The priority level of this service, from a user or business perspective.

Values range from 1 (highest priority) to 5 (lowest priority).

Default: 3

CreatedByThe Control‑M/EM user who defined the job.
RunAsThe operating system user that will run the job.
JobRunsDeviationsTolerance

Extent of tolerated deviation from the average completion time for a job in the service, expressed as a number of standard deviations based on percentile ranges.

If the run time falls within the tolerance set, it is considered on time, otherwise it has run too long or ended too early.

Select one of the following values:

  • 2 — 95.5% (highest confidence in the completion time)
  • 3 — 99.73%
  • 4 — 99.99% (lowest confidence)

Note: The JobRunsDeviationsTolerance parameter and the AverageRunTimeTolerance parameter are mutually exclusive. Specify only one of these two parameters.

AverageRunTimeTolerance

Extent of tolerated deviation from the average completion time for a job in the service, expressed as a percentage of the average time or as the number of minutes that the job can be early or late.

If the run time falls within the tolerance set, it is considered on time, otherwise it has run too long or ended too early.

The following example demonstrates how to set this parameter based on a percentage of the average run time:

      "AverageRunTimeTolerance": {
        "Units": "Percentage",
        "AverageRunTime": "94"

The following example demonstrates how to set this parameter based on a number of minutes:

      "AverageRunTimeTolerance": {
        "Units": "Minutes",
        "AverageRunTime": "10"

Note: The AverageRunTimeTolerance parameter and the JobRunsDeviationsTolerance parameter are mutually exclusive. Specify only one of these two parameters.

CompleteBy

Defines by what time (in HH:MM) and within how many days the critical service must be completed to be considered on time.

In the following example, the critical service must complete by 11:51 PM, 3 days since it began running.

     "CompleteBy": {
       "Time": "23:51",
       "Days": "3"
     }

The default number of days is 0 (that is, on the same day).

Note: The CompleteBy parameter and the CompleteIn parameter are mutually exclusive. Specify only one of these two parameters.

CompleteIn

Defines the number of hours and minutes for the critical service to complete and be considered on time, as in the following example:

      "CompleteIn": {
        "Time": "15:21"
      }

Note: The CompleteIn parameter and the CompleteBy parameter are mutually exclusive. Specify only one of these two parameters.

ServiceActions

Defines automatic interventions (actions, such as rerunning a job or extending the service due time) in response to specific occurrences (If statements, such as a job finished too quickly or a service finished late).

For more information, see Service Actions.

Service Actions

The following example demonstrates a series of Service Actions that are triggered in response to specific occurrences (If statements). Note that this example includes only a select group of If statements and a select group of actions; for the full list, see the tables that follow.

      "ServiceActions": {
        "If:SLA:ServiceIsLate_0": {
          "Type": "If:SLA:ServiceIsLate",
          "Action:SLA:Notify_0": {
            "Type": "Action:SLA:Notify",
            "Severity": "Regular",            
            "Message": "this is a message"    
          },
          "Action:SLA:Mail_1": {
            "Type": "Action:SLA:Mail",
            "Email": "email@okmail.com",       
            "Subject": "this is a subject",   
            "Message": "this is a message"    
          }
        },
        "If:SLA:JobFailureOnServicePath_1": {
          "Type": "If:SLA:JobFailureOnServicePath",
          "Action:SLA:Order_0": {
            "Type": "Action:SLA:Order",       
            "Server": "LocalControlM",        
            "Folder": "folder",               
            "Job": "job",                     
            "Date": "OrderDate",              
            "Library": "library"              
          }
        },
        "If:SLA:ServiceEndedNotOK_5": {
          "Type": "If:SLA:ServiceEndedNotOK",
          "Action:SLA:Set_0": {
            "Type": "Action:SLA:Set",
            "Variable": "varname",            
            "Value": "varvalue"               
          },
          "Action:SLA:Increase_2": {
            "Type": "Action:SLA:Increase",
            "Time": "04:03"                  
          }
        },
        "If:SLA:ServiceLatePastDeadline_6": {
          "Type": "If:SLA:ServiceLatePastDeadline",
          "Action:SLA:Event:Add_0": {
            "Type": "Action:SLA:Event:Add",
            "Server": "LocalControlM",        
            "Name": "addddd",                 
            "Date": "AnyDate"          
          }
        }
      }

The following If statements can be used to define occurrences for which you want to take action:

If statementDescription
If:SLA:ServiceIsLateThe service will be late according to SLA Management calculations.
If:SLA:JobFailureOnServicePath

One or more of the jobs in the service failed and caused a delay in the service.

An SLA Management service is considered OK even if one of its jobs fails, provided that another job, with an Or relationship to the failed job, runs successfully.

If:SLA:JobRanTooLong

One of the jobs in the critical service is late. Lateness is calculated according to the average run time and Job Runtime Tolerance settings.

A service is considered on time even if one of its jobs is late, provided that the service itself is not late.

If:SLA:JobFinishedTooQuickly

One of the jobs in the critical service is early. The end time is calculated according to the average run time and Job Runtime Tolerance settings.

A service is considered on time even if one of its jobs is early.

If:SLA:ServiceEndedOKThe service ended OK.
If:SLA:ServiceEndedNotOKThe service ended late, after the deadline.
If:SLA:ServiceLatePastDeadlineThe service is late, and passed its deadline.

For each If statement, you define one or more actions to be triggered. The following table lists the available Service Actions:

ActionDescriptionSub-parameters
Action:SLA:NotifySend notification to the Alerts Window
  • Severity — (optional) severity level: Regular (default), Urgent, or VeryUrgent
  • Message — notification text
    You can include any of the following variables in your message:
    • %%PROBLEMATIC_JOBS
    • %%SERVICE_DUE_TIME
    • %%SERVICE_EXPECTED_END_TIME
    • %%SERVICE_NAME
    • %%SERVICE_PRIORITY
Action:SLA:MailSend an email to a specific email recipient.
  • Email — email address
  • Subject — subject line
  • Message — (optional) message body text
    You can include any of the following variables in your message:
    • %%PROBLEMATIC_JOBS
    • %%SERVICE_DUE_TIME
    • %%SERVICE_EXPECTED_END_TIME
    • %%SERVICE_NAME
    • %%SERVICE_PRIORITY
Action:SLA:RemedyOpen a ticket in the Remedy Help Desk.
  • Urgency — (optional) urgency level: Low (default), Medium, High, or Urgent
  • Summary — summary line
  • Message — message body text
    You can include any of the following variables in your message:
    • %%PROBLEMATIC_JOBS
    • %%SERVICE_DUE_TIME
    • %%SERVICE_EXPECTED_END_TIME
    • %%SERVICE_NAME
    • %%SERVICE_PRIORITY
Action:SLA:OrderRun a job, regardless of its scheduling criteria.
  • Server — Control-M/Server
  • Folder — name of folder that contains the job
  • Job — name of job
  • Date — (optional) when to run, one of the following:
    NextOrderDate, PrevOrderDate, NoDate, OrderDate (default), AnyDate, or a specific date in mm/dd format
  • Library — (z/OS job only) name of the z/OS library that contains the job
Action:SLA:SetToOKSet the job's completion status to OK, regardless of its actual completion status.
  • Server — Control-M/Server
  • Folder — name of folder that contains the job
  • Job — name of job
  • Date — (optional) schedule for setting to OK, one of the following:
    NextOrderDate, PrevOrderDate, NoDate, OrderDate (default), AnyDate, or a specific date in mm/dd format
Action:SLA:SetToOK:ProblematicJobSet the completion status to OK for a job that is not running on time and will impact the service.No parameters
Action:SLA:RerunRerun the job, regardless of its scheduling criteria
  • Server — Control-M/Server
  • Folder — name of folder that contains the job
  • Job — name of job
  • Date — (optional) when to rerun, one of the following:
    NextOrderDate, PrevOrderDate, NoDate, OrderDate (default), AnyDate, or a specific date in mm/dd format
Action:SLA:Rerun:ProblematicJobRerun a job that is not running on time and will impact the service.No parameters
Action:SLA:KillKill a job while it is still executing.
  • Server — Control-M/Server
  • Folder — name of folder that contains the job
  • Job — name of job
  • Date — (optional) when to kill the job, one of the following:
    NextOrderDate, PrevOrderDate, NoDate, OrderDate (default), AnyDate, or a specific date in mm/dd format
Action:SLA:Kill:ProblematicJobKill a problematic job (a job that is not running on time in the service) while it is still executing.No parameters
Action:SLA:SetAssign a value to a variable for use in a rerun of the job.
  • Variable — name of variable
  • Value — value to assign to the variable
Action:SLA:SIMSend early warning notification regarding the critical service to BMC Service Impact Manager.
  • ConnectTo — target ProactiveNet Server/Cell, defined as hostname[:port]
    The default port is 1828.
  • Message — notification text, up to 211 characters.
    You can include any of the following variables in your message:
    • %%PROBLEMATIC_JOBS
    • %%SERVICE_DUE_TIME
    • %%SERVICE_EXPECTED_END_TIME
    • %%SERVICE_NAME
    • %%SERVICE_PRIORITY
Action:SLA:IncreaseAllow the job or critical service to continue running by extending (by hours and/or minutes) the deadline until which the job or service can run and still be considered on time.
  • Time — amount of time to add to the service, in HH:MM format


Action:SLA:Event:AddAdd an event.
  • Server — Control-M/Server
  • Name — name of the event
  • Date — (optional) when to add the event, one of the following:
    NextOrderDate, PrevOrderDate, NoDate, OrderDate (default), AnyDate, or a specific date in mm/dd format
Action:SLA:Event:DeleteDelete an event.
  • Server — Control-M/Server
  • Name — name of the event
  • Date — (optional) when to delete the event, one of the following:
    NextOrderDate, PrevOrderDate, NoDate, OrderDate (default), AnyDate, or a specific date in mm/dd format
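
For illustration, the following sketch (not taken from the product documentation) attaches a Kill of the problematic job and an urgent notification to the If:SLA:JobRanTooLong statement, following the same naming convention as the ServiceActions example above:

      "ServiceActions": {
        "If:SLA:JobRanTooLong_0": {
          "Type": "If:SLA:JobRanTooLong",
          "Action:SLA:Kill:ProblematicJob_0": {
            "Type": "Action:SLA:Kill:ProblematicJob"
          },
          "Action:SLA:Notify_1": {
            "Type": "Action:SLA:Notify",
            "Severity": "Urgent",
            "Message": "A job in service %%SERVICE_NAME is running too long"
          }
        }
      }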


Back to top

Job:UI Path

The following example shows how to define a UiPath job, which performs robotic process automation (RPA).

To deploy and run a UiPath job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the UiPath integration using the deploy jobtype command.

"UI Path_Job": {
  "Type": "Job:UI Path",
  "ConnectionProfile": "UIPATH_Connect",
  "Folder Name": "Default",
  "Folder Id": "374999",
  "Process Name": "control-m-process",
  "packagekey": "209c467e-1704-4b6y-b613-6c5a2c9acbea",
  "Robot Name": "abc-ctm-bot",
  "Robot Id": "153999",
  "Optional Input Parameters": "{
     "parm1": "Value1",
     "parm2": "Value2", 
     "parm3": "Value3"
     }",
  "Status Polling Frequency": "30",
  "Host": "host1"
  }

The UiPath job object uses the following parameters :

ConnectionProfile

Name of a connection profile to use to connect to the UiPath Robot service

Folder Name

Name of the UiPath folder where UiPath projects are stored

Folder IdIdentification number for the UiPath folder
Process NameName of a UiPath process associated with the UiPath folder
packagekeyUiPath package published from the UiPath Studio to the UiPath Orchestrator
Robot Name

UiPath Robot name

Robot IdUiPath Robot identification number
Optional Input Parameters

(Optional) Input parameters to be passed on to job execution, in the following format:
{"parm1": "val1", "parm2": "val2", "parm3": "val3"}

Status Polling Frequency

(Optional) Number of seconds to wait before checking the status of the job.

Default: 15

Host

Name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.


Back to top

Job:Automation Anywhere

The following example shows how to define an Automation Anywhere job, which performs robotic process automation (RPA).

To deploy and run an Automation Anywhere job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Automation Anywhere integration using the deploy jobtype command.

"Automation Anywhere_Job_2": {
  "Type": "Job:Automation Anywhere",
  "ConnectionProfile": "AACONN",
  "Automation Type": "Bot",
  "Bot to run": "bot123",
  "Bot Input Parameters": "{
	"Param1":{
	   "type": "STRING",
	   "string": "Hello world"
	   },
	"NumParam":{
	   "type": "NUMBER",
	   "integer": 11
	   }
	},
  "Connection timeout": 10,
  "Status Polling Frequency": 5
}
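
The following sketch shows a Process automation job. It is illustrative only: the process name is hypothetical, and the URI path reuses the example from the Process URI Path parameter description below.

"Automation Anywhere_Process_Job": {
  "Type": "Job:Automation Anywhere",
  "ConnectionProfile": "AACONN",
  "Automation Type": "Process",
  "Process to run": "process123",
  "Process URI Path": "Bots/TEST/Folder1",
  "Connection timeout": 10,
  "Status Polling Frequency": 5
}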

The Automation Anywhere job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to Automation Anywhere.

Automation TypeDetermines the type of automation to run, either a Bot or a Process.
Bot to run(For Bot automation) Defines the Bot name.
Process to run(For Process automation) Defines the Process name.
Process URI Path

(For Process automation) Defines the URI path of the folder that contains the process to run.

Use the slash character (/) as the separator in this path (not the backslash). 

Example:   Bots/TEST/Folder1

Connection timeout

Defines the maximum number of seconds to wait for REST API requests to respond, before disconnecting.

Default: 10 seconds

Status Polling Frequency

(Optional) Defines the number of seconds to wait before checking the status of the job.

Default: 5 seconds

Bot Input Parameters

(Optional, for Bot automation) Defines optional input parameters to use during bot execution, defined in JSON format.

You can define a variety of types of parameters (STRING, NUMBER, BOOLEAN, LIST, DICTIONARY, or DATETIME). For more information about the syntax of the JSON-format input parameters, see the description of the botInput element in a Bot Deploy request in the Automation Anywhere API documentation.

For no parameters, specify {}.


Back to top

Job:Google DataFlow

The following example shows how to define a Google Dataflow job, which performs cloud-based data processing for batch and real-time data streaming applications.

To deploy and run a Google Dataflow job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Google Dataflow integration using the deploy jobtype command.

"Google DataFlow_Job_1": { 
  "Type": "Job:Google DataFlow", 
  "ConnectionProfile": "GCPDATAFLOW", 
  "Project ID": "applied-lattice-11111", 
  "Location": "us-central1", 
  "Template Type": "Classic Template", 
  "Template Location (gs://)": "gs://dataflow-templates-us-central1/latest/Word_Count", 
  "Parameters (JSON Format)": { 
    "jobName": "wordcount", 
    "parameters": { 
        "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt", 
        "output": "gs://controlmbucket/counts" 
        } 
    },
  "Verification Poll Interval (in seconds)": "10", 
  "output Level": "INFO",
  "Host": "host1" 
}
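
The following sketch is illustrative only (the spec file path is hypothetical, and the parameter block is assumed to follow the same structure as the Classic Template example above); it shows the same word-count job defined with a Flex Template, where the template location points to the Flex Template spec file saved in Cloud Storage:

"Google DataFlow_Flex_Job": { 
  "Type": "Job:Google DataFlow", 
  "ConnectionProfile": "GCPDATAFLOW", 
  "Project ID": "applied-lattice-11111", 
  "Location": "us-central1", 
  "Template Type": "Flex Template", 
  "Template Location (gs://)": "gs://controlmbucket/templates/wordcount_flex_spec.json", 
  "Parameters (JSON Format)": { 
    "jobName": "wordcount-flex", 
    "parameters": { 
        "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt", 
        "output": "gs://controlmbucket/counts" 
        } 
    },
  "Verification Poll Interval (in seconds)": "10", 
  "output Level": "INFO",
  "Host": "host1" 
}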

The Google Dataflow job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to Google Cloud Platform.

Project ID

Defines the project ID for your Google Cloud project.

Location Defines the Google Compute Engine region to create the job.
Template Type

Defines one of the following types of Google Dataflow templates:

  • Classic Template - Developers run the pipeline and create a template. The Apache Beam SDK stages files in Cloud Storage, creates a template file (similar to job request), and saves the template file in Cloud Storage.
  • Flex Template - Developers package the pipeline into a Docker image and then use the Google Cloud CLI to build and save the Flex Template spec file in Cloud Storage.
Template Location (gs://)

Defines the path for temporary files. This must be a valid Google Cloud Storage URL that begins with gs://.

The pipeline option tempLocation is used as the default value, if it has been set.

Parameters (JSON Format)

Defines input parameters to be passed on to job execution, in JSON format (name:value pairs).

This JSON must include the jobName and parameters elements.

Verification Poll Interval (in seconds)

(Optional) Determines the number of seconds to wait before checking the status of the job.

Default: 10

output Level

Determines one of the following levels of details to retrieve from the GCP outputs in the case of job failure:

  • TRACE
  • DEBUG
  • INFO
  • WARN
  • ERROR
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.


Back to top

Job:Google Dataproc

The following examples show how to define a Google Dataproc job, which performs cloud-based big data processing and machine learning.

To deploy and run a Google Dataproc job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Google Dataproc integration using the deploy jobtype command.

The following example shows a job for a Dataproc task of type Workflow Template:

"Google Dataproc_Job": {  
  "Type": "Job:Google Dataproc",
  "ConnectionProfile": "GCPDATAPROC",
  "Project ID": "gcp_projectID",
  "Account Region": "us-central1",
  "Dataproc task type": "Workflow Template",
  "Workflow Template": "Template2",
  "Verification Poll Interval (in seconds)": "20",
  "Tolerance": "2"
}

The following example shows a job for a Dataproc task of type Job:

"Google Dataproc_Job": { 
      "Type": "Job:Google Dataproc", 
      "ConnectionProfile": "GCPDATAPROC", 
      "Project ID": "gcp_projectID", 
      "Account Region": "us-central1",  
      "Dataproc task type": "Job", 
      "Parameters (JSON Format)": {
		"job": { 
		  "placement": {}, 
		  "statusHistory": [], 
		  "reference": { 
		  "jobId": "job-e241f6be", 
		  "projectId": "gcp_projectID" 
		   }, 
		  "labels": { 
            "goog-dataproc-workflow-instance-id": "44f2b59b-a303-4e57-82e5-e1838019a812", 
            "goog-dataproc-workflow-template-id": "template-d0a7c" 
           }, 
          "sparkJob": { 
            "mainClass": "org.apache.spark.examples.SparkPi", 
            "properties": {}, 
            "jarFileUris": [ 
               "file:///usr/lib/spark/examples/jars/spark-examples.jar" 
            ], 
            "args": [ 
               "1000" 
            ] 
          } 
       } 
   },
   "Verification Poll Interval (in seconds)": "20",
   "Tolerance": "2" 
} 

The Google Dataproc job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to Google Cloud Platform.

Project ID

Defines the project ID for your Google Cloud project.

Account Region Defines the Google Compute Engine region to create the job.
Dataproc task type

Defines one of the following Dataproc task types to execute:

  • Workflow Template - a reusable workflow configuration that defines a graph of jobs with information on where to run those jobs
  • Job - a single Dataproc job 
Workflow Template

(For a Workflow Template task type)  Defines the ID of a Workflow Template.

Parameters (JSON Format)

(For a Job task type) Defines input parameters to be passed on to job execution, in JSON format. 

You retrieve this JSON content from the GCP Dataproc UI, using the  EQUIVALENT REST  option in job settings.

Verification Poll Interval (in seconds)

(Optional) Determines the number of seconds to wait before checking the status of the job.

Default: 20

Tolerance

Defines the number of call retries during the status check phase.

Default:  2 times


Back to top

Job:GCP BigQuery

The following examples show how to define a GCP BigQuery job. GCP BigQuery is a Google Cloud Platform computing service that you can use for data storage, processing, and analysis.

To deploy and run a GCP BigQuery job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the GCP BigQuery integration using the deploy jobtype command.

The following example shows a job for a Query action in GCP BigQuery:

"GCP BigQuery_query": {
	"Type": "Job:GCP BigQuery",
	"ConnectionProfile": "BIGQSA",
	"Action": "Query",
	"Project Name": "proj",
	"Dataset Name": "Test",
	"Run Select Query and Copy to Table": "checked",
	"Table Name": "IFTEAM",
	"SQL Statement": "select user from IFTEAM2",
	"Query Parameters": {
		"name": "IFteam",
		"paramterType": { 
			"type": "STRING"
		},
		"parameterValue": {
			"value": "BMC"
		}
	},   
	"Job Timeout": "30000",
	"Connection Timeout": "10",
	"Status Polling Frequency": "5"
}
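
The following sketch shows a job for an Extract action. It is illustrative only: the table, bucket URI, and the quoting of the bucket URI value are assumptions based on the parameter descriptions below.

"GCP BigQuery_extract": {
	"Type": "Job:GCP BigQuery",
	"ConnectionProfile": "BIGQSA",
	"Action": "Extract",
	"Project Name": "proj",
	"Dataset Name": "Test",
	"Table Name": "IFTEAM",
	"Destination/Source Bucket URIs": "\"gs://controlmbucket/extracts/ifteam.csv\"",
	"Extract As": "CSV",
	"Job Timeout": "30000",
	"Connection Timeout": "10",
	"Status Polling Frequency": "5"
}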

The GCP BigQuery job object uses the following parameters:

ParameterActions

Description

ConnectionProfile
All Actions

Determines which connection profile to use to connect to GCP BigQuery.

ActionN/A

Determines one of the following GCP BigQuery actions to perform:

  • Query:  Runs one or more SQL statements that are supported by GCP BigQuery.
  • Copy: Creates a copy of an existing table.
  • Load: Loads source data into an existing table.
  • Extract: Exports data from an existing table into Google Cloud Storage.
  • Routine: Runs a stored procedure, table function, or previously defined function.
Project NameAll Actions

Determines the project that the job uses.

Dataset Name
  • Query
  • Extract
  • Routine

Determines the database that the job uses.

Run Select Query and Copy to TableQuery

(Optional) Determines whether to paste the results of a SELECT statement into a new table.

Table Name
  • Query
  • Extract

Defines the new table name.

SQL StatementQuery

Defines one or more  SQL statements supported by GCP BigQuery.

Rule: It must be written in a single line, with character strings separated by one space only.

Query ParametersQuery

Defines the query parameters, which enables you to control the presentation of the data.

Example
{
	"name": "IFteam",
	"paramterType": { 
		"type": "STRING"
	},
	"parameterValue": {
		"value": "BMC"
	}
}
Copy Operation TypeCopy

Determines one of the following copy operations:

  • Clone: Creates a copy of a base table that has write access.
  • Snapshot: Creates a read-only copy of a base table.
  • Copy: Creates a copy of a snapshot.
  • Restore:  Creates a writable table from a snapshot.
Source Table PropertiesCopy

Defines the properties of the table that is cloned, backed up, or copied, in JSON format.

You can copy or back up one or more tables at a time.

Example
[
	{
		"datasetID": "Test1", 
		"projectID": "SomeProj1",
		"tableID": "IFteam1"
	},
	{
		"datasetID": "Test2", 
		"projectID": "SomeProj2", 
		"tableID": "IFteam2"
	}
]
Destination Table Properties
  • Copy
  • Load

Defines the properties of a new table, in JSON format.

Example
{ 
  "datasetID": "Test3", 
  "projectID": "SomeProj3", 
  "tableID": "IFteam3" 
}
Destination/Source Bucket URIs
  • Load
  • Extract

Defines the source or destination data URI for the table that you are loading or extracting.

You can load or extract multiple tables.

Rule: Use commas to distinguish elements from each other.

Example: "gs://source1_site1/source1.json"

Show Load OptionsLoad

Determines whether to add more fields to a table that you are loading.

Load OptionsLoad

Defines additional fields for the table that you are loading.

Example
"schema": 
	{
		"fields": 
		[
			{
				"name": "name1",
				"type": "STRING1"
			}
			{
				"name": "name2",
				"type": "STRING2"
			}
			{
				"name": "name3",
				"type": "STRING3"
			}
		]
	}
Extract AsExtract

Determines one of the following file formats to export the data to:

  • CSV
  • JSON
RoutineRoutine

Defines a routine and the values that it must run.

Example
Call new_r('value1')
Job TimeoutAll Actions

Determines the maximum number of milliseconds to run the GCP BigQuery job.

Connection TimeoutAll Actions

Determines the number of seconds to wait before the job ends NOT OK.

Default: 10

Status Polling FrequencyAll Actions

Determines the number of seconds to wait before checking the status of the job.

Default: 5 seconds


Back to top

Job:GCP VM

The following example shows how to define a job that performs operations on a Google Virtual Machine (VM).

To deploy and run a Google VM job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Google VM integration using the deploy jobtype command.

"GCP VM_create": {
     "Type": "Job:GCP VM",
     "ConnectionProfile": "GCPVM",
     "Project ID": "applied-lattice",
     "Zone": "us-central1-f",     
     "Operation": "Create",
     "Parameters": "{ \"key\": \"value\"}”,
     "Instance Name": "tb-mastercluster-m", 
     "Get Logs": "checked", 
     "Verification Poll Interval": "20",
     "Tolerance": "3"
}
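
For comparison, the following minimal sketch stops an existing instance; the values reuse the Create example above and are illustrative only:

"GCP VM_stop": {
     "Type": "Job:GCP VM",
     "ConnectionProfile": "GCPVM",
     "Project ID": "applied-lattice",
     "Zone": "us-central1-f",
     "Operation": "Stop",
     "Instance Name": "tb-mastercluster-m",
     "Get Logs": "checked",
     "Verification Poll Interval": "20",
     "Tolerance": "3"
}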

The Google VM job object uses the following parameters :

ConnectionProfile

Defines the name of a connection profile to use to connect to Google VM.

Project ID

Defines the project ID of the  Google Cloud Project Virtual Machine.

ZoneDefines the name of the zone for the request.
Operation

Determines one of the following operations to perform on the Google Virtual Machine:

  • Create: Create a virtual machine
  • Create from template: Create a new virtual machine based on a template.
  • Start: Start an existing virtual machine.
  • Stop: Stop a running virtual machine.
  • Reset: Reset a virtual machine.
  • Delete: Delete an existing virtual machine.
Template NameDefines the name of a template for creation of a new Google Virtual Machine from a template. 
Instance Name

Defines the name of the VM instance where you want to run the operation.

This parameter is available for all operations except for the Create operations.

Parameters

Defines the input parameters in JSON format for a Create operation.

Format: {"param1":"value1", "param2", "value2", …}

Get Logs

Determines whether to display logs from Google VM at the end of the job output.

This parameter is available for all operations except for the Delete operation.

Values: checked|unchecked

Default: unchecked

Verification Poll Interval

Determines the number of seconds to wait before job status verification.

Default: 15 seconds

Tolerance

Determines the number of retries to rerun a job.

Default: 2 times


Back to top

Job:Boomi

The following example shows how to define a Boomi job, which enables the integration of Boomi processes with your existing Control-M workflows.

To deploy and run a Boomi job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Boomi integration using the deploy jobtype command.

"Boomi_Job_2": {
  "Type": "Job:Boomi",
  "ConnectionProfile": "BOOMI_CCP",
  "Atom Name": "Atom1",
  "Process Name": "New Process",
  "Polling Intervals": "20",
  "Tolerance": "3"
}

The Boomi job object uses the following parameters:

ConnectionProfile

Defines the name of a connection profile to use to connect to the Boomi endpoint.

Atom NameDefines the name of a Boomi Atom associated with the Boomi process.
Process NameDefines the name of a Boomi process associated with the Boomi Atom.
Polling Intervals

(Optional) Number of seconds to wait before checking the status of the job.

Default: 20 seconds

Tolerance

Defines the number of API call retries during the status check phase. If the API call that checks the status fails due to the Boomi limitation of a maximum of 5 calls per second, it will retry again according to the number in the Tolerance field.

Default: 3 times

Back to top

Job:Databricks

The following example shows how to define a Databricks job, which enables the integration of jobs created in the Databricks environment with your existing Control-M workflows.

To deploy and run a Databricks job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Databricks integration using the deploy jobtype command.

"Databricks_Job": {
  "Type": "Job:Databricks",
  "ConnectionProfile": "DATABRICKS",
  "Databricks Job ID": "91",
  "Parameters": "\"notebook_params\":{\"param1\":\"val1\", \"param2\":\"val2\"}",
  "Idempotency Token": "Control-M-Idem_%%ORDERID", 
  "Status Polling Frequency": "30"
}
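
A similar definition can pass jar_params instead of notebook_params. The following sketch is illustrative only (the job ID and parameter values are hypothetical; the quoting follows the notebook_params example above):

"Databricks_Jar_Job": {
  "Type": "Job:Databricks",
  "ConnectionProfile": "DATABRICKS",
  "Databricks Job ID": "92",
  "Parameters": "\"jar_params\": [\"param1\", \"param2\"]",
  "Idempotency Token": "Control-M-Idem_%%ORDERID",
  "Status Polling Frequency": "30"
}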

The Databricks job object uses the following parameters:

ConnectionProfile

Determines which connection profile to use to connect to the Databricks workspace.

Databricks Job IDDetermines the job ID created in your Databricks workspace.
Parameters

Defines task parameters to override when the job runs, according to the Databricks convention. The list of parameters must begin with the name of the parameter type. For example:

  • "notebook_params":{"param1":"val1", "param2":"val2"}
  •  "jar_params": ["param1", "param2"]

For more information about the parameter types, review the properties of RunParameters in the OpenAPI specification provided through the Azure Databricks documentation.

For no parameters, specify a value of "params": {}. For example:
"Parameters": "params": {}

Idempotency Token

(Optional) Defines a token to use to rerun job runs that timed out in Databricks.

Values:

  • Control-M-Idem_%%ORDERID — With this token, upon rerun, Control-M invokes the monitoring of the existing job run in Databricks. Default.
  • Any other value  — Replaces the Control-M idempotency token. When you rerun a job using a different token, Databricks creates a new job run with a new unique run ID.
Status Polling Frequency

(Optional) Determines the number of seconds to wait before checking the status of the job.

Default: 30


Back to top

Job:Microsoft Power BI

The following examples show how to define a Power BI job, which enables integration of Power BI workflows with your existing Control-M workflows.

To deploy and run a Power BI job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Power BI integration using the deploy jobtype command.

The following example shows a job for refreshing a dataset in Power BI:

"Microsoft Power BI_Job_2": { 
  "Type": "Job:Microsoft Power BI", 
  "ConnectionProfile": "POWERBI",
  "Dataset Refresh/ Pipeline Deployment": "Dataset Refresh",
  "Workspace Name": "Demo", 
  "Workspace ID": "a7989345-8cfe-44e7-851d-81560e67973f", 
  "Dataset ID": "9976ce6c-e21a-4c33-9b8c-37c8303231cf",
  "Parameters": "{\"type\":\"Full\",\"commitMode\":\"transactional\",\"maxParallelism\":20,\"retryCount\":2}",
  "Connection Timeout": "10", 
  "Status Polling Frequency": "10"
}

The following example shows a job for deploying a Power BI Pipeline from dev to test to production:

"Microsoft Power BI_Job_2": { 
  "Type": "Job:Microsoft Power BI", 
  "ConnectionProfile": "POWERBI",
  "Dataset Refresh/ Pipeline Deployment": "Pipeline Deployment",
  "Pipeline ID": "83f36385-4e38-43g4-8263-10aa12e3175c",
  "Connection Timeout": "10", 
  "Status Polling Frequency": "10"  
}

The Power BI job object uses the following parameters:

ConnectionProfile

Defines the name of a connection profile to use to connect to the Power BI endpoint.

Dataset Refresh/ Pipeline Deployment

Determines one of the following options for execution in Power BI: 

  • Dataset Refresh
  • Pipeline Deployment 
Workspace Name

(For Dataset) Defines a Power BI workspace where you want to refresh data.

Workspace ID

(For Dataset) Defines the ID for the specified Power BI workspace (defined in Workspace Name).

Dataset ID

Defines a Power BI data set that you want to refresh under the specified workspace.

Parameters

(For Dataset) Defines specific parameters to pass when the job runs, defined as JSON pairs of parameter name and value.

For more information about available parameters, see  Datasets - Refresh Dataset  in the Microsoft Power BI documentation.

To specify parameters, the dataset must be in Premium group.

Format: {"param1":"value1", "param2":"value2"}

For no parameters, specify {}.

Example:

{
  "type":"Defragment",
  "commitMode":"transactional",
  "maxParallelism":20,
  "retryCount":2,
  "objects":[
      {"table":"Contributions"},
	  {"table":"Traffic_Popular"}
	]
}
Connection Timeout

(Optional) Determines the maximum number of seconds to wait for REST API requests to respond, before disconnecting. 

Default: 10 seconds 

Status Polling Frequency

(Optional) Determines the number of seconds to wait before checking the status of the job.  

Default: 10 seconds

Pipeline ID

Defines the ID of a Power BI pipeline that you want to deploy from dev to test and then to production. 


Back to top

Job:Qlik Cloud

The following example shows how to define a Qlik Cloud job, which enables integration with Qlik Cloud Data Services for data visualization through Qlik Sense. 

To deploy and run a Qlik Cloud job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Qlik Cloud integration using the deploy jobtype command.

"Qlik Cloud_Job": {
  "Type": "Job:Qlik Cloud",
  "ConnectionProfile": "QLIK-TEST",
  "Reload Type": "Full",
  "App Name": "Demo1",
  "Print Log to Output": "Yes",
  "Status Polling Frequency": "10",
  "Tolerance": "2"
}

The Qlik Cloud job object uses the following parameters:

ConnectionProfile

Defines the name of a connection profile to use to connect to the Qlik endpoint.

Reload Type

Determines one of the following options to load data into the environment:

  • Full: Deletes all the tables in the current environment and reloads the data from the data sources.
  • Partial: Keeps the tables in the current environment and executes the Load and Select statements, preceded by an Add, Merge, or Replace prefix.
App NameDefines the Qlik Sense app name, which contains one or more workspaces, called sheets.
Print Log to Output

Determines whether the job logs are included in the Control-M output.

Values: Yes|No

Default: Yes

Status Polling Frequency

(Optional) Determines the number of seconds to wait before checking the status of the job.

Default: 10 seconds

Tolerance 

Determines the number of retries to rerun a job.

Default: 2 times


Back to top

Job:Snowflake

The following examples show how to define a Snowflake job, which enables integration with Snowflake, a cloud computing platform that you can use for data storage, processing, and analysis. 

To deploy and run a Snowflake job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Snowflake integration using the deploy jobtype command.

The following example shows a job for a SQL Statement action in Snowflake:

"Snowflake_Job": { 
  "Type": "Job:Snowflake", 
  "ConnectionProfile": "SNOWFLAKE_CONNECTION_PROFILE", 
  "Database": "FactoryDB", 
  "Schema": "Public", 
  "Action": "SQL Statement", 
  "Snowflake SQL Statement": "Select * From Table1", 
  "Statement Timeout": "60", 
  "Show More Options": "unchecked",
  "Show Output": "unchecked",
  "Polling Interval": "20"
}
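
The following sketch shows a job for a Stored Procedure action; it is illustrative only, and the procedure name and argument are hypothetical values:

"Snowflake_Stored_Procedure_Job": { 
  "Type": "Job:Snowflake", 
  "ConnectionProfile": "SNOWFLAKE_CONNECTION_PROFILE", 
  "Database": "FactoryDB", 
  "Schema": "Public", 
  "Action": "Stored Procedure", 
  "Stored Procedure Name": "new_r", 
  "Procedure Argument": "value1", 
  "Statement Timeout": "60", 
  "Show More Options": "unchecked",
  "Show Output": "unchecked",
  "Polling Interval": "20"
}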

The Snowflake job object uses the following parameters:

ParameterActions

Description

ConnectionProfile
All Actions

Determines which connection profile to use to connect to Snowflake.

DatabaseAll ActionsDetermines the database that the job uses.
SchemaAll Actions

Determines the schema that the job uses.

A schema is an organizational model that describes layout and definition of the fields and tables, and their relationships to each other, in a database.

ActionN/A

Determines one of the following Snowflake actions to perform:  

  • SQL Statement: Runs any number of Snowflake-supported  SQL commands, such as queries, calling or creating procedures, database maintenance tasks, and creating and editing tables.
  • Copy from Query: Copies a queried database and schema into an existing or new file in cloud storage.      
  • Copy from Table: Copies from an existing table.
  • Create Table and Query: Creates a table, populated by a query, in the specified database and schema.
  • Create Snowpipe: Creates a Snowpipe and saves it to a file in cloud storage.
  • Start or Pause Snowpipe: Starts or pauses an existing Snowpipe.
  • Stored Procedure: Calls an existing procedure and its arguments.
  • Snowpipe Load Status: Monitors the status of a Snowpipe for a set period of time.
Snowflake SQL StatementSQL Statement

Determines one or more Snowflake-supported SQL commands.

Rule: Must be written in a single line, with strings separated by one space only.

Query to LocationCopy from QueryDefines the cloud storage location.
Query InputCopy from QueryDefines the query used for copying the data.
Storage Integration
  • Copy from Query
  • Copy from Table
Defines the storage integration object.
Overwrite
  • Copy from Query
  • Copy from Table

Determines whether to overwrite an existing file in the cloud storage, as follows:

  • Yes
  • No
File Format
  • Copy from Query
  • Copy from Table
  • Create Snowpipe

Determines one of the following file formats for the saved file:

  • JSON
  • CSV
Copy DestinationCopy from Table

Defines where the JSON or CSV file is saved.

You can save to Amazon Web Services, Google Cloud Platform, or Microsoft Azure. 

Example: s3://<bucket name>/

From TableCopy from TableDefines the name of the copied table.
Create Table NameCreate Table and QueryDefines the name of the new or existing table where the data is queried.
QueryCreate Table and QueryDefines the query used for the copied data.
Snowpipe Name
  • Create Snowpipe
  • Start or Pause Snowpipe
  • Snowpipe Load Status

Defines the name of the Snowpipe.

A Snowpipe loads data from files when they are ready, or staged.

Copy into TableCreate SnowpipeDefines the table that the data is copied into.
Copy Data from StageCreate SnowpipeDefines the stage from where the data is copied.
Start or Pause SnowpipeStart or Pause Snowpipe

Determines whether to start or pause the Snowpipe, as follows:

  • Start Snowpipe
  • Pause Snowpipe
Stored Procedure NameStored ProcedureDefines the name of the stored procedure. 
Procedure ArgumentStored ProcedureDefines the value of the argument in the stored procedure.
Table NameSnowpipe Load StatusDefines the table that is monitored when loaded by the Snowpipe.
Stage LocationSnowpipe Load Status

Defines the cloud storage location.

A stage is a pointer that indicates where data is stored, or staged.

Example: s3://CloudStorageLocation/

Days BackSnowpipe Load StatusDetermines the number of days to monitor the Snowpipe load status.
Status File Cloud Location PathSnowpipe Load Status

Defines the cloud storage location where a CSV file log is created. 

The CSV file log details the load status for each Snowpipe.

Storage IntegrationSnowpipe Load Status

Defines the Snowflake configuration for the cloud storage location (as defined in the previous parameter, Status File Cloud Location Path).

Example: S3_INT

Statement TimeoutAll ActionsDetermines the maximum number of seconds to run the job in Snowflake.
Show More OptionsAll Actions

Determines whether the following parameters are included in the job definitions:

  • Parameters
  • Role
  • Bindings
  • Warehouse
ParametersAll Actions

Defines Snowflake-provided parameters that let you control how data is presented.

Format: {"param1":"value1", "param2":"value2"}

RoleAll Actions

Determines the Snowflake role used for this Snowflake job. 

A role is an entity that can be assigned privileges on secure objects. You can be assigned one or more roles from a limited selection.

BindingsAll Actions

Defines the values to bind to the variables used in the Snowflake job, in JSON format. For more information about bindings, see the Snowflake documentation.

Example:

The following JSON defines two binding variables:

"1": { 
      "type": "FIXED", 
      "value": "123" 
    } 
"2": { 
      "type": "TEXT", 
      "value": "String" 
    }
WarehouseAll Actions

Determines the warehouse used in the Snowflake job.

A warehouse is a cluster of virtual machines that processes a Snowflake job.

Show OutputAll Actions

Determines whether to show a full JSON response in the log output.

Values: checked|unchecked

Default: unchecked

Status Polling FrequencyAll Actions

Determines the number of seconds to wait before checking the status of the job.

Default: 20 seconds


Back to top

Job:Talend Data Management

The following examples show how to define a Talend Data Management job, which enables the integration of data management and data integration tasks or plans from Talend with your existing Control-M workflows.

To deploy and run a Talend Data Management job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Talend Data Management integration using the deploy jobtype command.

The following example shows a job for a Talend task:

"Talend Data Management": {
  "Type": "Job: Talend Data Management",
  "ConnectionProfile": "TALENDDATAM",
  "Task/Plan Execution": "Execute Task",
  "Task Name": "GetWeather job",
  "Parameters": "{"parameter_city":"London","parameter_appid":"43be3fea88g092d9226eb7ca"}"
  "Log Level": "Information",
  "Bring logs to output": "checked",
  "Task Polling Intervals" : "10"
}

The following example shows a job for a Talend plan:

"Talend Data Management": {
  "Type": "Job: Talend Data Management",
  "ConnectionProfile": "TALENDDATAM",
  "Task/Plan Execution": "Execute Plan",
  "Plan Name": "Sales Operation Plan",
  "Plan Polling Intervals" : "10"
}

The Talend Data Management job object uses the following parameters:

ConnectionProfile

Determines which connection profile to use to connect to the Talend Data Management Platform.

Task/Plan Execution

Determines one of the following options for execution in Talend: 

  • Execute Task
  • Execute Plan

Task Name /
Plan Name

Defines the name of the Talend task or plan to execute, as defined in the Tasks and Plans page in the Talend Management Console.

Parameters

(For a task) Defines specific parameters to pass when the Talend job runs, defined as JSON pairs of parameter name and value. All parameter names must contain the parameter_ prefix.

Format: {"parameter_param1":"value1", "parameter_param2":"value2"}

For no parameters, specify {}.

Log Level

(For a task) Determines one of the following levels of detail in log messages for the triggered task in the Talend Management Console:

  • Information — All logs available
  • Warning — Only warning logs
  • Error — Only Error logs
  • Off — No logs
Bring logs to output

(For a task) Determines whether to show Talend log messages in the job output.

Values: checked|unchecked

Default: unchecked

Task Polling Intervals /
Plan Polling Intervals

Determines the number of seconds to wait before checking the status of the triggered task or plan.

Default: 10 seconds


Back to top

Job:Trifacta

The following example shows how to define a Trifacta job. Trifacta is a data-wrangling platform that allows you to discover, organize, edit, add to, and publish data in different formats and to multiple cloud platforms and services, including AWS, Azure, Google, Snowflake, and Databricks.

To deploy and run a Trifacta job, ensure that you have the Control-M Application Integrator plug-in installed and have deployed the Trifacta integration using the deploy jobtype command.

"Trifacta_Job_2": {
	"Type": "Job:Trifacta",
	"ConnectionProfile": "TRIFACTA",
	"Flow Name": "Flow",
	"Rerun with New Idempotency Token": "checked",
	"Idempotent Token": "Control-M-Idem_%%ORDERID'",
	"Retrack Job Status": "checked",
	"Run ID": "Run_ID",
	"Status Polling Frequency": "15"
}

The Trifacta job object uses the following parameters:

ConnectionProfile

Determines which connection profile to use to connect to the Trifacta platform.

Flow NameDetermines which Trifacta flow the job runs.
Rerun with New Idempotency Token

Determines whether to allow rerun of the job in Trifacta with a new idempotency token (for example, when the job run times out).

Values: checked|unchecked

Default: unchecked

Idempotent Token

Defines the idempotency token that guarantees that the job run is executed only once.

To allow rerun of the job with a new token, replace the default value with a unique ID that has not been used before. Use the RUN_ID, which can be retrieved from the job output.

Default: Control-M-Idem_%%ORDERID — job run cannot be executed again.

Retrack Job Status

Determines whether to track job run status as the job run progresses and the status changes (for example, from in-progress to failed or to completed).

Values: checked|unchecked

Default: unchecked

Run ID

Defines the RUN_ID number for the job run to be tracked.

The RUN_ID is unique to each job run and it can be found in the job output.

Status Polling Frequency

Determines the number of seconds to wait before checking the status of the Trifacta job.

Default: 10 seconds


Back to top

Job:Dummy

The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands. 

"DummyJob" : {
   "Type" : "Job:Dummy"
}

Job:zOS:Member

The following example shows how to use Job:zOS:Member to run jobs on a z/OS system.

Note: Current support is for unique z/OS Job names only.

{
  "ZF_DOCS" : {
    "Type" : "Folder",
    "ControlmServer" : "M2MTROLM",
    "FolderLibrary" : "IOAA.CCIDM2.CTM.OPR.SCHEDULE",
    "RunAs" : "emuser",
    "CreatedBy" : "emuser",
    "When" : {
      "RuleBasedCalendars" : {
        "Included" : [ "EVERYDAY" ],
        "EVERYDAY" : {
          "Type" : "Calendar:RuleBased",
          "When" : {
            "DaysRelation" : "OR",
            "WeekDays" : [ "NONE" ],
            "MonthDays" : [ "ALL" ]
          }
        }
      }
    },
    "ZJ_DATA" : {
      "Type" : "Job:zOS:Member",
      "SystemAffinity" : "ABCD",
      "SchedulingEnvironment" : "PLEX8ALL",
      "ControlDCategory" : "SEQL_FILE",
      "PreventNCT2" : "Yes",
      "MemberLibrary" : "IOA.WORK.JCL",
      "SAC" : "Prev",
      "CreatedBy" : "emuser",
      "RequestNJENode" : "NODE3",
      "RunAs" : "emuser",
	  "StatisticsCalendar": "CALPERIO",
      "TaskInformation" : {
        "EmergencyJob" : true,
		"RunAsStartedTask" : true,
		"Cyclic": true,
      },
      "OutputHandling" : {
	    "Operation" : "Copy"
        "FromClass" : "X",
        "Destination" : "NODE3",
      },
      "History" : {
		"RetentionDays": "05",
        "RetentionGenerations" : "07"
      },
      "Archiving" : {
        "JobRunsToRetainData" : "4",
        "DaysToRetainData" : "1",
        "ArchiveSysData" : true
      },
      "Scheduling" : {
        "MinimumNumberOfTracks" : "5",
        "PartitionDataSet" : "fgf"
      },
      "RerunLimit" : {
        "RerunMember" : "JOBRETRY",
        "Units" : "Minutes",
        "Every" : "7"
      },
	  "MustEnd" : {
  		"Minutes" : "16",
  		"Hours" : "17",
  		"Days" : "0"
	  },
      "When" : {
        "WeekDays" : [ "NONE" ],
        "Months" : [ "NONE" ],
        "MonthDays" : [ "NONE" ],
        "DaysRelation" : "OR"
      },
      "CRS" : {
        "Type" : "Resource:Lock",
        "IfFail" : "Keep",
        "LockType" : "Shared"
      },
      "QRS" : {
        "Type" : "Resource:Pool",
        "IfFail" : "Keep",
        "IfOk" : "Discard",
        "Quantity" : "1"
      },
      "Demo" : {
        "Type" : "StepRange",
        "FromProgram" : "STEP1",
        "FromProcedure" : "SMPIOA",
        "ToProgram" : "STEP8",
        "ToProcedure" : "CTBTROLB"
      },
      "IfCollection:zOS_0" : {
        "Type" : "IfCollection:zOS",
        "Ifs" : [ {
          "Type" : "If:zOS:AnyProgramStep",
          "ReturnCodes" : [ "OK" ],
          "Procedure" : "SMPIOA"
        }, "OR", {
          "Type" : "If:zOS:EveryProgramStep",
          "ReturnCodes" : [ "*$EJ", ">S002" ],
          "Procedure" : "SMPIOA"
        } ],
        "CtbRuleData_2" : {
          "Type" : "Action:ControlMAnalyzerRule",
          "Name" : "RULEDEMO",
          "Arg" : "3"
        }
      },
      "IfCollection:zOS_1" : {
        "Type" : "IfCollection:zOS",
        "Ifs" : [ {
          "Type" : "If:zOS:SpecificProgramStep",
          "Program" : "Demo",
          "ReturnCodes" : [ "*****" ],
          "Procedure" : "SMPIOA"
        }, "OR", {
          "Type" : "If:zOS:SpecificProgramStep",
          "Program" : "STEP5",
          "ReturnCodes" : [ ">U0002" ],
          "Procedure" : "SMPIOA"
        } ],
        "IfRerun_2" : {
          "Type" : "Action:Restart",
          "FromProgram" : "STEP1",
          "FromProcedure" : "SMPIOA",
		  "ToProgram" : "STEP5",
	      "ToProcedure" : "CTBTROLB"
	      "Confirm" : false,           
        }
      }
    }
  }
}


FolderLibrary

Defines the location of the Member that contains the job folder.

Rules:

  • 1 - 44 characters
  • Built of qualifiers of 1-8 characters each, separated by '.' (each qualifier must contain at least 1 character and cannot begin with a digit).
  • Invalid characters: Non-English characters; the name cannot begin with "."
  • Valid characters: A-Z, 0-9, @, #, $
SystemAffinity

Defines the identity of the system in which the Job must be initiated and executed (in JES2).

Rules:

  • 1 - 5 alpha-numeric characters
  • The alpha-numeric characters can be preceded by a "/". "/" as a first character indicates NOT in JES3.
  • Invalid characters: Non-English characters
SchedulingEnvironment

Defines the JES2 workload management scheduling environment that is to be associated with the Job.

Rules:

  • 1 - 16 characters
  • Case Sensitive
  • Invalid Characters: Non-English characters, blanks
ControlDCategory

Defines the name of the Control-D Report Decollating Mission Category. If specified, the report decollating mission is scheduled whenever the Job is scheduled under Control-M.

Rules:

  • 1 - 20 characters
  • Invalid characters: blanks
PreventNCT2

Determines whether to perform data set cleanup before the original job runs.

Values: yes | no

MemberLibrary

Defines the location of the Member that contains the JCL, started task procedure, or warning message.

Rules:

  • 1 - 44 characters
  • Built of qualifiers of 1-8 characters each, separated by '.' (each qualifier must contain at least 1 character and cannot begin with a digit).
  • Invalid characters: Non-English characters
  • Valid characters: A-Z, 0-9, @, #, $
SAC

(Optional) Determines whether to adjust the logical date for a job converted from a scheduling product other than Control‑M.

Valid values:

  • Blank: No adjustment is made. The SMART folder and its jobs are scheduled according to the regular criteria. This is the default.
  • Prev:
    • SMART folder: The SMART folder is scheduled on the day indicated by the regular scheduling criteria and the preceding day.
    • Job: The job is scheduled on the day that precedes the day indicated by the regular scheduling criteria.
  • Next:
    • SMART folder: The SMART folder is scheduled on the day indicated by the regular scheduling criteria and the following day.
    • Job: The job is scheduled on the day that follows the day indicated by the regular scheduling criteria.
RequestNJENode

Defines the node in the JES network where the Job executes.

Rules:

  • 1 - 8 characters
  • Invalid Characters: Single quotation marks, "$", "/", "*", "?", " "
StatisticsCalendar

(Optional) Defines the Control-M periodic calendar used to collect statistics relating to the job. This provides more precise statistical information about the job execution. If the StatisticsCalendar parameter is not defined, the statistics are based on all run times of the job.

Rules:

  • 1 - 8 alphanumeric characters
  • Case sensitive
  • Invalid Characters: Blanks and non-English characters
TaskInformation

Defines additional optional settings for the job.
  EmergencyJob

Determines whether to run the job as an emergency job.

Values: true | false

  RunAsStartedTask

Determines whether to run the job as a started task.

Values: true | false

  Cyclic

Determines if the job is run as a cyclic job.

Values: true | false

Default: false

OutputHandling

Defines how the job output is handled.

  Operation

Defines the output handling action.

Valid values:

  • None: Default
  • Delete: Deletes the output
  • Copy: Copies the output to a selected filename
  • Move: Moves the output to a new selected path
  • ChangeJobClass: Changes the class of the job output
  FromClass

Defines the previous class name.

  Destination

Defines the output name and full path to move the output.

Note: Do not use an internal Control-M directory or subdirectory.

Mandatory if the value for Operation is Copy, Move, or ChangeJobClass.

An asterisk (*) indicates the original MSGCLASS for the job output.

History

(Optional) Determines how long to retain the job in the History Jobs file

Note: Retention Days and Retention Generations are mutually exclusive. A value can be specified for either, but not both.

  RetentionDays

Number of days 

Valid values: 001 - 999

  RetentionGenerations

Number of generations

Valid values: 000 - 999
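
Because the two retention settings are mutually exclusive, a job definition would normally specify only one of them. The following fragment is an illustrative sketch (the value shown is an assumption, not taken from the sample job above):

      "History" : {
        "RetentionGenerations" : "07"
      }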

Archiving

Determines how long Control-M Workload Archiving retains the job output.

  JobRunsToRetainData

Determines the number of job runs for which the job run data is retained.

  DaysToRetainData

Determines the number of days for which the job run data is retained.
  ArchiveSysData

Determines whether to archive the job SYSDATA.

Values: true | false

Scheduling

Defines scheduling parameters that determine when or how often the job is scheduled for submission.

  MinimumNumberOfTracks

Determines the minimum number of free partitioned data set tracks required by the library specified for the PartitionDataSet parameter.

  PartitionDataSet

Defines the name of a partitioned data set to check for free space. If PartitionDataSet has fewer free tracks than the minimum specified in the MinimumNumberOfTracks parameter, the job executes.
RerunLimit

Determines the maximum number of reruns that can be performed for the job. 

When a z/OS job reruns, the job status is set to NOTOK, even if it was previously specified as OK.

  RerunMember

Defines the name of the JCL member to use when the job automatically reruns. 

Rules:

  • 1 - 8 characters
  • Case sensitive
  • Invalid Characters: Blanks and non-English characters
  Units

Defines the unit of measurement to wait between reruns.

Valid values:

  • Seconds
  • Minutes
  • Hours
  Every

Determines the number of Units to wait between reruns.

MustEnd

Defines the time of day and days offset when the folder must finish executing.

  Hours

Hour of the day

Format: HH

Valid values: 00 - 23

  Minutes

Minutes of the hour

Format: MM

Valid values: 00 - 59

  Days

Number of days

Format: DDD

Valid values: 000 - 120

LockType

Determines whether a lock resource is shared or exclusive. For more information, see Resources.
IfFail

Determines what happens to the lock or pool resource if the job fails to use the resource. For more information, see Resources.

Valid values:

Release | Keep

IfOk

Determines what happens to a pool resource if the job successfully uses the resource. For more information, see Resources.

Valid values:

Discard | Release

Quantity

Determines the number of lock or pool resources to allocate to the job. For more information, see Resources.

StepRange

Determines the job steps to execute during restart of a job.

Parameters:

  • FromProgram: First program step within the job stream

  • FromProcedure: First procedure step in the range

  • ToProgram: Last program step within the job stream

  • ToProcedure: Last procedure step in the range

Parameter rules:

  • 1 - 8 characters
  • Invalid Characters: Blanks and non-English characters

IfCollection:zOS - Ifs

The following unique If objects apply to a z/OS job:

  • If:zOS:AnyProgramStep: Indicates that the Do Statements must be performed if the specified codes are found in any program step.

    • Procedure: Defines the name of the procedure step
    • ReturnCodes: Assigns a completion code for the entire job based on the completion codes of its steps. Enter more than one ReturnCode in an array separated by commas. For example: [ "<value1", "value2" ]. Valid completion codes are as follows:
      • *****: Job ended
      • OK: Job ended OK 
      • NOTOK: Job ended not OK 
      • Cnnnn: Condition return code (may be preceded by: < > = !)
      • Sxxx: System abend code  (may be preceded by: < > = !)
      • Unnnn: User abend code (may be preceded by: < > = !)
      • $EJ: Job queued for execution
      • *NCT2: File allocation problem 
      • *REC0: Maximum reruns number reached 
      • *TERM: Job terminated by CMEM 
      • *UNKW: Unknown error occurred 
      • FLUSH: Step not executed 
      • FORCE: Job was Forced OK 
      • JFAIL: Job failed due to JCL error
      • JNSUB: Job not submitted 
      • JSECU: Job failed due to security requirements 
      • JLOST: Job output was lost 
      • JNRUN: Job was cancelled during execution 
  • If:zOS:EveryProgramStep: Indicates that the Do Statements must be performed if the specified codes are found in every program step.
  • If:zOS:NumberOfFailures: Determines whether the accompanying DO statements are performed if the job's number of failures is satisfied.
    • NumberOfFailures: Determines the number of times a job can fail before an action is taken.
  • If:zOS:SpecificRangeName: Specifies a range of steps in the steps of a PGMST statement.
    • Program: Defines the name of a specific program step
  • If:zOS:SpecificProgramStep: Name of a specific program step. If a specific program step is specified, only program steps from the invoked program are checked to see if they satisfy the code criteria. Program steps directly from the job are not checked.
    • Program: Defines the name of a specific program step
  • If:zOS:JOBRCCodes: Assigns a completion code for the entire job based on the completion codes of its steps.
  • If:zOS:OutputPattern: Indicates that the DO statements must be performed if the specified pattern is found in the output (see the sketch after this list).
    • OutputPattern: Defines a string to search for in the output. 1-40 characters.
    • FromColumn: Defines the first column to look for the pattern. 3-digit number.
    • ToColumn: Defines the last column to look for the pattern. 3-digit number.
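
The following fragment is an illustrative sketch and is not part of the sample job above; the object names IfCollection:zOS_2 and IfRerun_3, the output pattern, the column range, and the failure count are assumptions chosen for the example. It combines If:zOS:OutputPattern and If:zOS:NumberOfFailures, using the same structure as the IfCollection:zOS objects in the example, with an automatic restart action:

      "IfCollection:zOS_2" : {
        "Type" : "IfCollection:zOS",
        "Ifs" : [ {
          "Type" : "If:zOS:OutputPattern",
          "OutputPattern" : "DISK FULL",
          "FromColumn" : "001",
          "ToColumn" : "040"
        }, "OR", {
          "Type" : "If:zOS:NumberOfFailures",
          "NumberOfFailures" : "2"
        } ],
        "IfRerun_3" : {
          "Type" : "Action:Restart",
          "FromProgram" : "STEP1",
          "FromProcedure" : "SMPIOA",
          "Confirm" : true
        }
      }
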
IfCollection:zOS - Actions

The following unique Action objects apply to a z/OS job:

  • Action:Restart: An object that determines whether to restart the job automatically if the job fails, the step to start from, and the step to end at. 
    • Confirm: Determines if the job restarts automatically. Values: true | false (The default is false)
  • Action:ControlMAnalyzerRule: An object that defines the Control‑M/Analyzer rule that Control-M executes.
    • Name: Defines the name of the Control‑M/Analyzer rule or mission. Mandatory.
    • Arg: Defines the arguments to be passed to the Control‑M/Analyzer step. Optional.

Job:zOS:InStreamJCL

The following example shows how to use Job:zOS:InStreamJCL to create an in-stream JCL job that runs an embedded script on a z/OS system:

{
  "ZF_ROBOT" : {
    "Type" : "SimpleFolder",
    "ControlmServer" : "R2MTROLM",
    "FolderLibrary" : "CTMP.V900.SCHEDULE",
    "OrderMethod" : "Manual",
    "Z_R1" : {
      "Type" : "Job:zOS:InStreamJCL", 
	  "JCL" : "//ROASMCL JOB ,ASM,CLASS=A,REGION=0M\\n//JCLLIB ORDER=IOAP.V900.PROCLIB\\n//INCLUDE MEMBER=IOASET\\n//S1 EXEC IOATEST,PARM='TERM=C0000'\\n//S2 EXEC IOATEST,PARM='TERM=C0000'",
      "CreatedBy" : "emuser",
      "RunAs" : "emuser",
      "When" : {
        "WeekDays" : [ "NONE" ],
        "MonthDays" : [ "ALL" ],
        "DaysRelation" : "OR"
      },
      "Archiving" : {
        "ArchiveSysData" : true
      }
    }
  }
}
JCL

Defines the JCL script, as it would be specified in a terminal for the specific computer, as part of the job definition. Each line begins with // and ends with \\n, for example:
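
A hypothetical two-statement JCL stream (the job and step names here are illustrative; IEFBR14 is the standard z/OS utility program that does nothing):

//MYJOB   JOB ,CLASS=A
//STEP1   EXEC PGM=IEFBR14

would be written in the JCL property, with JSON escaping, as:

"JCL" : "//MYJOB   JOB ,CLASS=A\\n//STEP1   EXEC PGM=IEFBR14"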

Other Job Types

The following job types can also be deployed by Control-M Automation API.

Tandem and VMware jobs are not supported on Control-M Web. Mapping and OS2200 jobs are not supported at all.

Note: These job types are currently available to enable migration of any job type across environments with minimal effort using Deploy API commands, and are not intended for creating new jobs. BMC is continuously developing these job types to enable the creation of all job types using the API. If you have specific requirements, contact Support or the BMC Community so that we can prioritize development.


Job:Messaging:FreeText

Enables you to send or receive JMS or IBM WebSphere MQ messages to or from the message queue of another application, using a free-text message.

Job:Messaging:WaitForReply

Enables you to wait for and consume a message from the reply queue or topic, according to the Connection Profile.

Job:Messaging:PreDefined

Enables you to send or receive JMS or IBM WebSphere MQ messages to or from the message queue of another application, using a predefined message.

Job:SAP:BW:InfoPackage

Enables you to run pre-defined SAP Process Chains or SAP InfoPackages, and monitor their completion status.

Job:SAP:DataArchiving

Enables you to automate Data Archiving sessions. There are three data archiving job types:

  • Job:SAP:DataArchiving:Write: Spawns Delete jobs in the SAP system that then spawn Store jobs.
  • Job:SAP:DataArchiving:Delete: The Delete jobs that were created by the Write job in SAP are ordered automatically into Control-M by the Extractor process.
  • Job:SAP:DataArchiving:Store: The Store jobs that were created by the Delete jobs in SAP are ordered automatically into Control-M by the Extractor process.
Job:SAP:R3:PredefinedSapJob

Enables you to copy an existing SAP R3 job.

Job:SAP:R3:MonitorSapJob

Enables you to monitor a SAP R3 job.

Job:SAP:R3:BatchInputSession

Enables you to run a SAP R3 job from a specific Batch Input Session.

Job:SAP:R3:SapProfile:Activate

Enables you to activate SAP profiles.

Job:SAP:R3:SapProfile:Deactivate

Enables you to deactivate SAP profiles.

Job:SAP:R3:TriggerSapEvent

Enables you to trigger a SAP event.

Job:SAP:R3:WatchSapEvent

Enables you to watch a SAP event.

Job:OEBS

Enables you to introduce all Control-M capabilities to Oracle E-Business Suite.

Job:IBMDataStage

Enables you to monitor or create a DataStage job.

Job:Java

Enables you to schedule a Java class or J2EE Enterprise Java Beans (EJBs) running on a J2EE application server, such as IBM WebSphere, BEA WebLogic, JBoss, or SAP NetWeaver.

Job:IBMCognos

Enables you to automate report and job generation for pre-defined IBM Cognos reports and jobs.

Job:NetBackup

Enables you to monitor or create a NetBackup job.

Job:Tandem:TACLScript

Enables you to run TACL scripts on an HPE NonStop (AKA Guardian) operating system.

Job:Tandem:Program

Enables you to execute a program on an HPE NonStop (AKA Guardian) operating system.

Job:Tandem:Command

Enables you to run a command on an HPE NonStop (AKA Guardian) operating system.

Job:Tandem:EmbeddedTACLScript

Enables you to run a TACL script, exactly as it is specified in a terminal for the specific computer, on an HPE NonStop (AKA Guardian) operating system.

Job:Tandem:ExternalProcess

Enables you to attach an external process to a Control-M Active job on an HPE NonStop (AKA Guardian) operating system.

Job:VMware:Snapshot:Take

Enables you to create a snapshot of a virtual machine (VM).

Job:VMware:Snapshot:Revert

Enables you to change the execution state of a VM to the state of the selected snapshot.

Job:VMware:Snapshot:RevertToCurrent

Enables you to change the execution state of a VM to the state of the current snapshot.

Job:VMware:Snapshot:Remove

Enables you to remove a snapshot of a VM.

Job:VMware:Snapshot:RemoveAll

Enables you to remove all snapshots that are associated with a VM.

Job:VMware:Power:On

Enables you to start up a VM.

Job:VMware:Power:Off

Enables you to shut down a VM.

Job:VMware:Power:Suspend

Enables you to suspend execution capabilities on a VM.

Job:VMware:Power:Reset

Enables you to reset a VM.

Job:VMware:Power:Reboot

Enables you to restart a VM.

Job:VMware:Power:Shutdown

Enables you to shut down a guest VM.

Job:VMware:Power:Standby

Enables you to switch a guest VM to standby state.

Job:VMware:Configuration:CloneVirtualMachine

Enables you to clone a VM.

Job:VMware:Configuration:DeployTemplate

Enables you to create a VM from a selected template.

Job:VMware:Configuration:ReconfigureVirtualMachine

Enables you to edit the settings of a VM.

Job:VMware:Configuration:MigrateVirtualMachine

Enables you to migrate a VM's execution to a specific resource pool or host.

Job:OS400:MultipleCommands

Enables you to execute multiple commands in a single job on OS/400 using the Control-M command line interpreter. Creating multiple commands eliminates the need to use pre- and post-commands and enables an easier conversion from ROBOT job schedulers.

Job:OS400:VirtualTerminal

Enables you to define and execute Virtual Terminal types of jobs on OS/400. A Virtual Terminal job emulates an operator's activities on a physical terminal, while playing the recorded activity as a batch process, with the ability to inject input keystrokes onto the screens as well as validate the screen output.

Job:OS400:ExternalJob

Enables you to monitor an external job that is submitted to OS/400 by another job scheduler or process.

Job:OS400:ExternalSubSystem

Enables you to monitor an external subsystem that is submitted to OS/400 by another job scheduler or process.

Job:OS400:Full:ScriptFile

Enables you to create a job that executes a script file in a native OS/400, QShell, or S/38 environment.

Job:OS400:Full:CommandLine

Enables you to execute a script file in a native OS/400, QShell, or S/38 environment.

Job:OS400:Full:SubSystem

Enables you to create a job that starts a subsystem and monitors the active subsystem until it completes.

Job:OS400:Full:DescriptionJob

Enables you to create a job that starts a job description and monitors it until it completes.

Job:OS400:Full:RestrictedStateAction

Enables you to execute a job while setting the OS/400 system into restricted state.

Job:OS400:Full:Program

Enables you to define and execute an IBM i (AS/400) native program in a library, an S/38 program, or a QShell program.

Job:OS400:Full:MultipleCommands

Enables you to execute multiple commands in a single job on OS/400 using the Control-M command line interpreter. Creating multiple commands eliminates the need to use pre- and post-commands and enables an easier conversion from ROBOT job schedulers.

Job:OS400:Full:VirtualTerminal

Enables you to define and execute Virtual Terminal types of jobs on OS/400. A Virtual Terminal job emulates an operator's activities on a physical terminal, while playing the recorded activity as a batch process, with the ability to inject input keystrokes onto the screens as well as validate the screen output.

Job:OS400:Full:ExternalJob

Enables you to monitor an external job that is submitted to OS/400 by another job scheduler or process.

Job:OS400:Full:ExternalSubSystem

Enables you to monitor an external subsystem that is submitted to OS/400 by another job scheduler or process.

Back to top
