Job types

The following sections provide details about the available types of jobs that you can define and the parameters that you use to define each type of job.

Job:Command

The following example shows how to use Job:Command to run operating system commands.

	"JobName": {
		"Type" : "Job:Command",
    	"Command" : "echo hello",
        "PreCommand": "echo before running main command",
        "PostCommand": "echo after running main command",
    	"Host" : "myhost.mycomp.com",
    	"RunAs" : "user1"  
	}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.

Job:Script

 The following example shows how to use Job:Script to run a script from a specified script file.

    "JobWithPreAndPost": {
        "Type" : "Job:Script",
        "FileName" : "task1123.sh",
        "FilePath" : "/home/user1/scripts",
        "PreCommand": "echo before running script",
        "PostCommand": "echo after running script",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1"   
    }
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.
FileName together with FilePath

Indicates the location of the script. 

NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts", as illustrated in the sketch after this parameter list.

PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.
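
For example, the following minimal sketch (the host, file, and path names here are illustrative) shows a Job:Script definition that runs a script from a Windows path, with each backslash doubled:

    "WindowsScriptJob": {
        "Type" : "Job:Script",
        "FileName" : "task1123.bat",
        "FilePath" : "c:\\tmp\\scripts",
        "Host" : "mywinhost.mycomp.com",
        "RunAs" : "user1"
    }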

Job:EmbeddedScript

The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.

    "EmbeddedScriptJob":{
        "Type":"Job:EmbeddedScript",
        "Script":"#!/bin/bash\\necho \"Hello world\"",
        "Host":"myhost.mycomp.com",
        "RunAs":"user1",
        "FileName":"myscript.sh",
        "PreCommand": "echo before running script",
        "PostCommand": "echo after running script"
    }
Script

Full content of the script, up to 64 kilobytes.

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.
FileName

Name of a script file. This property is used for the following purposes:

  • The file extension provides an indication of how to interpret the script. If this is the only purpose of this property, the file does not have to exist.
  • If you specify an alternative script override using the OverridePath job property, the FileName property indicates the name of the alternative script file.
PreCommand

(Optional) A command to execute before the job is executed.

PostCommand

(Optional) A command to execute after the job is executed.
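
Assuming the general OverridePath job property is specified at the job level, a minimal sketch of FileName used together with a script override might look as follows (the override directory is illustrative):

    "EmbeddedScriptWithOverride":{
        "Type":"Job:EmbeddedScript",
        "Script":"#!/bin/bash\\necho \"Hello world\"",
        "Host":"myhost.mycomp.com",
        "RunAs":"user1",
        "FileName":"myscript.sh",
        "OverridePath":"/home/user1/override_scripts"
    }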

Job:FileTransfer

The following example shows a Job:FileTransfer.

{
  "FileTransferFolder" :
  {
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTP" :
	{
		"Type" : "Job:FileTransfer",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
		"Host": "AgentHost",
		"FileTransfers" :
		[
			{
				"Src" : "/home/controlm/file1",
				"Dest" : "/home/controlm/file2",
				"TransferType": "Binary",
				"TransferOption": "SrcToDest"
			},
			{
				"Src" : "/home/controlm/otherFile1",
				"Dest" : "/home/controlm/otherFile2",
				"TransferOption": "DestToSrc"
			}
		]
	}
  }
}

Where:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host.
In addition, Control-M File Transfer plugin version 8.0.00 or later must be installed.

ConnectionProfileSrc

The connection profile to use as the source

ConnectionProfileDest

The connection profile to use as the destination

ConnectionProfileDualEndpoint

If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest (see the sketch after this parameter list).

For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint.

FileTransfers

A list of file transfers to perform during job execution, each with the following properties:

   Src

Full path to the source file

   Dest

Full path to the destination file
   TransferType

(Optional) FTP transfer mode, either Ascii (for a text file) or Binary (non-textual data file).

Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file.

Default: "Binary"

   TransferOption

(Optional) The following is a list of the transfer options:

  • SrcToDest - transfer file from source to destination
  • DestToSrc - transfer file from destination to source
  • SrcToDestFileWatcher - watch the file on the source and transfer to destination only when all criteria are met
  • DestToSrcFileWatcher - watch the file on the destination and transfer to source only when all criteria are met
  • FileWatcher - Watch a file. If successful, the succeeding job will run.

Default: "SrcToDest"

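For example, a minimal sketch of the same kind of transfer defined with a dual-endpoint connection profile (the profile name DualEndpointConn is a placeholder for a profile that you have defined):

	"TransferWithDualEndpoint" :
	{
		"Type" : "Job:FileTransfer",
		"ConnectionProfileDualEndpoint" : "DualEndpointConn",
		"Host" : "AgentHost",
		"FileTransfers" :
		[
			{
				"Src" : "/home/controlm/file1",
				"Dest" : "/home/controlm/file2",
				"TransferType": "Binary",
				"TransferOption": "SrcToDest"
			}
		]
	}
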
The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:

{
  "FileTransferFolder" :
  {
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTPBasedOnEvent" :
	{
		"Type" : "Job:FileTransfer",
		"Host" : "AgentHost",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
		"FileTransfers" :
		[
			{
				"Src" : "/home/sftp/file1",
				"Dest" : "/home/sftp/file2",
				"TransferType": "Binary",
				"TransferOption" : "SrcToDestFileWatcher",
				"PreCommandDest" :
				{
					"action" : "rm",
					"arg1" : "/home/sftp/file2"
				},
				"PostCommandDest" :
				{
					"action" : "chmod",
					"arg1" : "700",
					"arg2" : "/home/sftp/file2"
				},
				"FileWatcherOptions":
				{
					"MinDetectedSizeInBytes" : "200",
					"TimeLimitPolicy" : "WaitUntil",
					"TimeLimitValue" : "2000",
					"MinFileAge" : "3Min",
					"MaxFileAge" : "10Min",
					"AssignFileNameToVariable" : "FileNameEvent",
					"TransferAllMatchingFiles" : true
				}
			}
		]
	}
  }
}

This example contains the following additional optional parameters: 

PreCommandSrc

PreCommandDest

PostCommandSrc

PostCommandDest

Defines commands that occur before and after job execution.
Each command can run only one action at a time. The available actions and their arguments are the following:

  • chmod: Change file access permission (arg1: mode, arg2: file name)
  • mkdir: Create a new directory (arg1: directory name)
  • rename: Rename a file/directory (arg1: current file name, arg2: new file name)
  • rm: Delete a file (arg1: file name)
  • rmdir: Delete a directory (arg1: directory name)

FileWatcherOptions

Additional options for watching the transferred file using the File Watcher utility:

    MinDetectedSizeInBytes

Defines the minimum number of bytes transferred before checking if the file size is static

    TimeLimitPolicy/
    TimeLimitValue

Defines the time limit to watch a file:

TimeLimitPolicy options: "WaitUntil", "MinutesToWait"

TimeLimitValue: If TimeLimitPolicy is WaitUntil, the TimeLimitValue is the specific time to wait until, for example 04:22 would be 4:22 AM.
If TimeLimitPolicy is MinutesToWait, the TimeLimitValue is the number of minutes to wait.

    MinFileAge

Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    MaxFileAge

Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    AssignFileNameToVariable

Defines the variable name that contains the detected file name

    TransferAllMatchingFiles

Whether to transfer all matching files (value of True) or only the first matching file (value of False) after waiting until the watching criteria are met.

Valid values: True | False
Default value: False

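For example, a FileWatcherOptions fragment that waits up to a fixed number of minutes rather than until a specific time could look as follows (values are illustrative):

"FileWatcherOptions":
{
	"MinDetectedSizeInBytes" : "200",
	"TimeLimitPolicy" : "MinutesToWait",
	"TimeLimitValue" : "30"
}
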
Job:FileWatcher

A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete:

    "FWJobCreate" : {
	    "Type" : "Job:FileWatcher:Create",
		"RunAs":"controlm",
 	    "Path" : "C:/path*.txt",
	    "SearchInterval" : "45",
	    "TimeLimit" : "22",
	    "StartTime" : "201705041535",
	    "StopTime" : "201805041535",
	    "MinimumSize" : "10B",
	    "WildCard" : true,
	    "MinimalAge" : "1Y",
	    "MaximalAge" : "1D2H4MIN"
    },
    "FWJobDelete" : {
        "Type" : "Job:FileWatcher:Delete",
        "RunAs":"controlm",
        "Path" : "C:/path.txt",
        "SearchInterval" : "45",
        "TimeLimit" : "22",
        "StartTime" : "201805041535",
        "StopTime" : "201905041535"
    }

This example contains the following parameters:

Path

Path of the file to be detected by the File Watcher

You can include wildcards in the path — * for any number of characters, and ? for any single character.

SearchInterval

Interval (in seconds) between successive attempts to detect the creation/deletion of a file
TimeLimit

Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code.

Default: 0 (no time limit)

StartTime

The time at which to start watching the file

The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM.
Alternatively, to specify a time on the current date, use the HHMM format.

StopTime

The time at which to stop watching the file.

Format: yyyymmddHHMM or HHMM (for the current date)

MinimumSize

Minimum file size to monitor for, when watching a created file

Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes.

If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true.

Wildcard

Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards

Values: true | false
Default: false

MinimalAge

(Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. 

For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched.

MaximalAge

(Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified.

For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched.

Job:Database

The following types of database jobs are available:

Job:Database:SQLScript

The following example shows how to create a database job that runs a SQL script from a file system.

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "AgentHost",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
			"Parameters": [
				{"firstParamName": "firstParamValue"},
				{"secondParamName": "secondParamValue"}
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Parameters

Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair.
Autocommit

(Optional) Automatically commits statements to the database when they complete successfully

Default: N

OutputExcecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

Another example:

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "ORACLE_CONNECTION_PROFILE"
		}
	}
}

Job:Database:StoredProcedure

The following example shows how to create a database job that runs a program that is stored in the database.

{
	"storeFolder": {
		"Type": "Folder",
		"jobStoredProcedure": {
			"Type": "Job:Database:StoredProcedure",
			"Host": "myhost.mycomp.com",
			"StoredProcedure": "myProcedure",
			"Parameters": [ "value1","variable1",["value2","variable2"]],
			"ReturnValue":"RV",
			"Schema": "public",
			"ConnectionProfile": "DB-PG-CON",
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine.  See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

StoredProcedure

Name of stored procedure that the job runs
Parameters

A comma-separated list of values and variables for all parameters in the procedure, in the order of their appearance in the procedure.

The value that you specify for any specific parameter in the procedure depends on the type of parameter:

  • For an In parameter, specify an input value.
  • For an Out parameter, specify an output variable.
  • For an In/Out parameter, specify a pair of input value + output variable, enclosed in brackets: [value,variable]

In the example above, three parameters are listed, in the following order: [In,Out,Inout]

ReturnValue

A variable for the Return parameter (if the procedure contains such a parameter)
Schema

The database schema where the stored procedure resides

Package

(Oracle only) Name of a package in the database where the stored procedure resides

The default is "*", that is, any package in the database.

ConnectionProfile

Name of a connection profile that contains the details of the connection to the database
Autocommit

(Optional) Automatically commits statements to the database when they complete successfully

Default: N

OutputExcecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

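For example, a minimal sketch of a job that runs a procedure stored in a specific Oracle package (the schema, package, procedure, and connection profile names are illustrative):

	"jobOraclePackagedProcedure": {
		"Type": "Job:Database:StoredProcedure",
		"Host": "myhost.mycomp.com",
		"ConnectionProfile": "ORACLE_CONNECTION_PROFILE",
		"Schema": "HR",
		"Package": "PKG_PAYROLL",
		"StoredProcedure": "CALC_SALARIES",
		"Parameters": [ "2019" ]
	}
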
Job:Hadoop

Various types of Hadoop jobs are available for you to define using the Job:Hadoop objects:

Job:Hadoop:Spark:Python

The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.

    "ProcessData": {
        "Type": "Job:Hadoop:Spark:Python",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",

        "SparkScript": "/home/user/processData.py"
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
    "Type": "Job:Hadoop:Spark:Python",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",
    "SparkScript": "/home/user/processData.py",            
    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

Job:Hadoop:Spark:ScalaJava

The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.

 "ProcessData": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",

    "ProgramJar": "/home/user/ScalaProgram.jar",
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName" 
}
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",

    "ProgramJar": "/home/user/ScalaProgram.jar"
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",

    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

Job:Hadoop:Pig

The following example shows how to use Job:Hadoop:Pig to run a Pig script.

"ProcessDataPig": {
    "Type" : "Job:Hadoop:Pig",
    "Host" : "edgenode",
    "ConnectionProfile": "DEV_CLUSTER",

    "PigScript" : "/home/user/script.pig" 
}
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "ProcessDataPig1": {
        "Type" : "Job:Hadoop:Pig",
        "ConnectionProfile": "DEV_CLUSTER",
        "PigScript" : "/home/user/script.pig",            
        "Host" : "edgenode",
        "Parameters" : [
            {"amount":"1000"},
            {"volume":"120"}
        ],            
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

Job:Hadoop:Sqoop

The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.

    "LoadDataSqoop":
    {
      "Type" : "Job:Hadoop:Sqoop",
	  "Host" : "edgenode",
      "ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",

      "SqoopCommand" : "import --table foo --target-dir /dest_dir" 
    }

 

ConnectionProfile

See Sqoop ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "LoadDataSqoop1" :
    {
        "Type" : "Job:Hadoop:Sqoop",
        "Host" : "edgenode",
        "ConnectionProfile" : "SQOOP_CONNECTION_PROFILE",

        "SqoopCommand" : "import --table foo",
		"SqoopOptions" : [
			{"--warehouse-dir":"/shared"},
			{"--default-character-set":"latin1"}
		],
 
        "SqoopArchives" : "",
        
        "SqoopFiles": "",
        
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

SqoopOptions

These options are passed as arguments to the specific Sqoop tool.
SqoopArchives

Indicates the location of the Hadoop archives.

SqoopFiles

Indicates the location of the Sqoop files.

Job:Hadoop:Hive

The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.

    "ProcessHive":
    {
      "Type" : "Job:Hadoop:Hive",
      "Host" : "edgenode",
      "ConnectionProfile" : "HIVE_CONNECTION_PROFILE",

      "HiveScript" : "/home/user1/hive.script"
    }

 

ConnectionProfile

See Hive ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "ProcessHive1" :
    {
        "Type" : "Job:Hadoop:Hive",
        "Host" : "edgenode",
        "ConnectionProfile" : "HIVE_CONNECTION_PROFILE",


        "HiveScript" : "/home/user1/hive.script", 
        "Parameters" : [
            {"ammount": "1000"},
            {"topic": "food"}
        ],

        "HiveArchives" : "",
        
        "HiveFiles": "",
        
        "HiveOptions" : [
            {"hive.root.logger": "INFO,console"}
        ],

        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

HiveSciptParameters

Passed to beeline as --hivevar "name"="value".

HiveProperties

Passed to beeline as --hiveconf "key"="value".

HiveArchives

Passed to beeline as --hiveconf mapred.cache.archives="value".

HiveFiles

Passed to beeline as --hiveconf mapred.cache.files="value".

Job:Hadoop:DistCp

The following example shows how to use Job:Hadoop:DistCp to run a DistCp job.  DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.

        "DistCpJob" :
        {
            "Type" : "Job:Hadoop:DistCp",
            "Host" : "edgenode",
            "ConnectionProfile": "DEV_CLUSTER",
         
            "TargetPath" : "hdfs://nns2:8020/foo/bar",
            "SourcePaths" :
            [
                "hdfs://nn1:8020/foo/a"
            ]
        }  
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "DistCpJob" :
    {
        "Type" : "Job:Hadoop:DistCp",
        "Host" : "edgenode",
        "ConnectionProfile" : "HADOOP_CONNECTION_PROFILE",
        "TargetPath" : "hdfs://nns2:8020/foo/bar",
        "SourcePaths" :
        [
            "hdfs://nn1:8020/foo/a",
            "hdfs://nn1:8020/foo/b"
        ],
        "DistcpOptions" : [
            {"-m":"3"},
            {"-filelimit ":"100"}
        ]
    }

TargetPath, SourcePaths and DistcpOptions

Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>.

Job:Hadoop:HDFSCommands

The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.

        "HdfsJob":
        {
            "Type" : "Job:Hadoop:HDFSCommands",
            "Host" : "edgenode",
            "ConnectionProfile": "DEV_CLUSTER",

            "Commands": [
                {"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Job:Hadoop:HDFSFileWatcher

The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.

    "HdfsFileWatcherJob" :
    {
        "Type" : "Job:Hadoop:HDFSFileWatcher",
        "Host" : "edgenode",
        "ConnectionProfile" : "DEV_CLUSTER",

        "HdfsFilePath" : "/inputs/filename",
        "MinDetecedSize" : "1",
        "MaxWaitTime" : "2"
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

HdfsFilePath

Specifies the full path of the file being watched.

MinDetecedSize

Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file.

MaxWaitTime

Defines the maximum number of minutes to wait for the file to meet the watching criteria. If criteria are not met (file did not arrive, or minimum size was not reached), the job fails after this maximum number of minutes.

Job:Hadoop:Oozie

The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.

    "OozieJob": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",


        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"},
          {"outputDir":"/usr/tucu/outputdir"}
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobPropertiesFile

The path to the job properties file.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

OozieOptions

Set or override values for given job properties.

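For example, a minimal sketch of an Oozie job that adds PreCommands and PostCommands, following the same pattern as the other Hadoop job types above (paths and names are illustrative):

    "OozieJob1": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"}
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
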
Job:Hadoop:MapReduce

 The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.

    "MapReduceJob" :
    {
       "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"]
    }
ConnectionProfile

See ConnectionProfile:Hadoop  

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "MapReduceJob1" :
    {
        "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }    
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

Job:Hadoop:MapredStreaming

The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.

     "MapredStreamingJob1": {
        "Type": "Job:Hadoop:MapredStreaming",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",


        "InputPath": "/user/robot/input/*",
        "OutputPath": "/tmp/output",
        "MapperCommand": "mapper.py",
        "ReducerCommand": "reducer.py",
        "GeneralOptions": [
            {"-D": "fs.permissions.umask-mode=000"},
            {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}              
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post-commands.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

GeneralOptions

Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options.

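For example, a minimal sketch of a MapredStreaming job with the optional PreCommands and PostCommands, following the same pattern as the other Hadoop job types above (paths and names are illustrative):

    "MapredStreamingJob2": {
        "Type": "Job:Hadoop:MapredStreaming",
        "Host" : "edgenode",
        "ConnectionProfile": "DEV_CLUSTER",
        "InputPath": "/user/robot/input/*",
        "OutputPath": "/tmp/output",
        "MapperCommand": "mapper.py",
        "ReducerCommand": "reducer.py",
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
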
Job:SAP

SAP-type jobs enable you to manage SAP processes through the Control-M environment. To manage SAP-type jobs, you must have the Control-M for SAP plugin installed in your Control-M environment.

The following JSON objects are available for creating SAP-type jobs:

Job:SAP:R3:COPY

This job type enables you to create a new SAP R3 job by duplicating an existing job. The following example shows how to use Job:SAP:R3:COPY:

"JobSapR3Copy" : {
    "Type" : "Job:SAP:R3:COPY",
    "ConnectionProfile":"SAP-CON",
    "SapJobName" : "CHILD_1",
    "Exec": "Server",
    "Target" : "Server-name",
    "JobCount" : "SpecificJob",
    "JobCountSpecificName" : "sap-job-1234",
    "NewJobName" : "My-New-Sap-Job",
    "StartCondition" : "AfterEvent",
    "AfterEvent" : "HOLA",
    "AfterEventParameters" : "parm1 parm2",
    "RerunFromPointOfFailure": true,
    "CopyFromStep" : "4",
    "PostJobAction" : {
        "Spool" : "CopyToFile",
        "SpoolFile": "spoolfile.log",
        "SpoolSaveToPDF" : true,
        "JobLog" : "CopyToFile",
        "JobLogFile": "Log.txt",
        "JobCompletionStatusWillDependOnApplicationStatus" : true
    },
    "DetectSpawnedJob" : {
        "DetectAndCreate": "SpecificJobDefinition",
        "JobName" : "Specific-Job-123",
        "StartSpawnedJob" : true,
        "JobEndInControlMOnlyAftreChildJobsCompleteOnSap" : true,
        "JobCompletionStatusDependsOnChildJobsStatus" : true
    }
}

This SAP job object uses the following parameters:

ConnectionProfile

Name of the SAP connection profile to use for the connection

SapJobName

Name of SAP job to copy
Exec

Type of execution target where the SAP job will run, one of the following:

  • Server — an SAP application server
  • Group — an SAP group
Target

The name of the SAP application server or SAP group (depending on the value specified in the previous parameter)

JobCount

How to define a unique ID number for the SAP job, one of the following options:

  • FirstScheduled (the default)
  • LastScheduled
  • First
  • Last
  • SpecificJob

If you specify SpecificJob, you must provide the next parameter.

JobCountSpecificName

A unique SAP job ID number for a specific job (that is, for when JobCount is set to SpecificJob)

NewJobName

Name of the newly created job

StartCondition

Specifies when the job should run, one of the following:

  • ASAP — Job runs as soon as a background work process is available for it in SAP (the default). If the job cannot start immediately, it is transformed in SAP into a time-based job.
  • Immediate — Job runs immediately. If there are no work processes available to run it, the job fails.
  • AfterEvent — Job waits for an event that you specify (in the next two parameters) to be triggered.

AfterEvent

The name of the SAP event that the job waits for (if you set StartCondition to AfterEvent)

AfterEventParameters

Parameters in the SAP event to watch for.

Use space characters to separate multiple parameters.

RerunFromPointOfFailure

Whether to rerun the SAP R/3 job from its point of failure, either true or false (the default)

Note: If RerunFromPointOfFailure is set to false, use the CopyFromStep parameter to set a specific step from which to rerun.

CopyFromStep

The number of a specific step in the SAP R/3 job from which to rerun or copy

The default is step 1 (that is, the beginning of the job).

Note: If RerunFromPointOfFailure is set to true, the CopyFromStep parameter is ignored.

PostJobAction

This object groups together several parameters that control post-job actions for the SAP R/3 job.
Spool

How to manage spool output, one of the following options:

  • DoNotCopy (the default)
  • CopyToOutput
  • CopyToFile
SpoolFile

The file to which to copy the job's spool output (if Spool is set to CopyToFile)

SpoolSaveToPDF

Whether to save the job's spool output in PDF format (if Spool is set to CopyToFile)
JobLog

How to manage job log output, one of the following options:

  • DoNotCopy
  • CopyToOutput (the default)
  • CopyToFile
JobLogFile

The file to which to copy the job's log output (if JobLog is set to CopyToFile)
JobCompletionStatusWillDependOnApplicationStatus

Whether job completion status depends on SAP application status, either true or false (the default)

DetectSpawnedJob

This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job
DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default)
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify
JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

StartSpawnedJob

Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)

JobEndInControlMOnlyAftreChildJobsCompleteOnSap

Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)
JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.

Job:SAP:BW:ProcessChain

This job type runs and monitors a Process Chain in SAP Business Warehouse (SAP BW).

NOTE: For the job that you define through Control-M Automation API to work properly, ensure that the Process Chain defined in the SAP BW system has Start Using Meta Chain or API as the start condition for the trigger process (Start Process) of the Process Chain. To configure this parameter, from the SAP transaction RSPC, right-click the trigger process and select Maintain Variant.

The following example shows how to use Job:SAP:BW:ProcessChain:

"JobSapBW": {
    "Type": "Job:SAP:BW:ProcessChain",
    "ConnectionProfile": "PI4-BW",
    "ProcessChainDescription": "SAP BW Process Chain",
    "Id": "123456",
    "RerunOption": "RestartFromFailiurePoint",
    "EnablePeridoicJob": true,
    "ConsiderOnlyOverallChainStatus": true,
    "RetrieveLog": false,
    "DetectSpawnedJob": {
        "DetectAndCreate": "SpecificJobDefinition",
        "JobName": "ChildJob",
        "StartSpawnedJob": false,
        "JobEndInControlMOnlyAftreChildJobsCompleteOnSap": false,
        "JobCompletionStatusDependsOnChildJobsStatus": false
        }
    }

This SAP job object uses the following parameters:

ConnectionProfile

Name of the SAP connection profile to use for the connection.
ProcessChainDescription

The description of the Process Chain that you want to run and monitor, as defined in SAP BW.

Maximum length of the textual description: 60 characters

Id

ID of the Process Chain that you want to run and monitor.
RerunOption

The rerun policy to apply to the job after job failure, one of the following values:

  • RestartFromFailiurePoint — Restart the job from the point of failure (the default)
  • RerunFromStart — Rerun the job from the beginning
EnablePeridoicJob

Whether the first run of the Process Chain prepares for the next run and is useful for reruns when big Process Chains are scheduled.

Values are either true (the default) or false.

ConsiderOnlyOverallChainStatus

Whether to view only the status of the overall Process Chain.

Values are either true or false (the default).

RetrieveLog

Whether to add the Process Chain logs to the job output.

Values are either true (the default) or false.

DetectSpawnedJob

This object groups together several parameters that you specify if you want to detect and monitor jobs that are spawned by the current SAP job
    DetectAndCreate

How to determine the properties of detected spawned jobs:

  • CurrentJobDefinition — Properties of detected spawned jobs are identical to the current (parent) job properties (the default).
  • SpecificJobDefinition — Properties of detected spawned jobs are derived from a different job that you specify.
    JobName

Name of an SAP-type job to use for setting properties in spawned jobs of the current job (if DetectAndCreate is set to SpecificJobDefinition)

Note: The specified job must exist in the same folder as the current job, and the connection profile should be the same. BMC recommends that it should have the same Application and Sub Application values.

    StartSpawnedJob

Whether to start spawned jobs that have a status of Scheduled (either true or false, where false is the default)

    JobEndInControlMOnlyAftreChildJobsCompleteOnSap

Whether to allow the job to end in Control-M only after all child jobs complete in the SAP environment (either true or false, where false is the default)
    JobCompletionStatusDependsOnChildJobsStatus

Whether Control-M should wait for all child jobs to complete (either true or false, where false is the default)

When set to true, the parent job does not end OK if any child job fails.

This parameter is relevant only if JobEndInControlMOnlyAftreChildJobsCompleteOnSap is set to true.


Job:ApplicationIntegrator

Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see the Control-M Application Integrator Help.

The following example shows the JSON code used to define a job type named Monitor Remote Job:

"JobFromAI" : {
    "Type": "Job:ApplicationIntegrator:Monitor Remote Job",
    "ConnectionProfile": "AI_CONNECTION_PROFILE",
    "AI-Host": "Host1",
    "AI-Port": "5180",
    "AI-User Name": "admin",
    "AI-Password": "*******",
    "AI-Remote Job to Monitor": "remoteJob5",
    "RunAs": "controlm"
}

In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties will be created in the Control-M Application Integrator, and they must be prefixed with "AI-" in the .json code.

For reference, the corresponding settings in the Control-M Application Integrator are as follows:

  • The name of the job type appears in the Name field in the job type details.
  • Job properties appear in the Job Type Designer, in the Connection Profile View and the Job Properties View.
    When defining these properties through the .json code, you prefix them with "AI-", except for the property that specifies the name of the connection profile.


Job:Informatica

Informatica-type jobs enable you to automate Informatica workflows through the Control-M environment. To manage Informatica-type jobs, you must have the Control-M for Informatica plugin installed in your Control-M environment.

The following example shows the JSON code used to define an Informatica job.

"InformaticaApiJob": {
    "Type": "Job:Informatica",
    "ConnectionProfile": "INFORMATICA_CONNECTION",
    "RepositoryFolder": "POC",
    "Workflow": "WF_Test",
    "InstanceName": "MyInstamce",
    "OsProfile": "MyOSProfile",
    "WorkflowExecutionMode": "RunSingleTask",
    "RunSingleTask": "s_MapTest_Success",
    "WorkflowRestartMode": "ForceRestartFromSpecificTask",
    "RestartFromTask": "s_MapTest_Success",
    "WorkflowParametersFile": "/opt/wf1.prop",
}

This Informatica job object uses the following parameters:

ConnectionProfile

Name of the Informatica connection profile to use for the connection

RepositoryFolder

The Repository folder that contains the workflow that you want to run

Workflow

The workflow that you want to run in Control-M for Informatica

InstanceName

(Optional) The specific instance of the workflow that you want to run

OsProfile

(Optional) The operating system profile in Informatica

WorkflowExecutionMode

The mode for executing the workflow, one of the following:

  • RunWholeWorkflow — run the whole workflow
  • StartFromTask — start running the workflow from a specific task, as specified by the StartFromTask parameter
  • RunSingleTask — run a single task in the workflow, as specified by the RunSingleTask parameter
  StartFromTask

The task from which to start running the workflow

This parameter is required only if you set WorkflowExecutionMode to StartFromTask.

  RunSingleTask

The workflow task that you want to run

This parameter is required only if you set WorkflowExecutionMode to RunSingleTask.

Depth

The number of levels within the workflow task hierarchy for the selection of workflow tasks

Default: 10 levels

EnableOutput

Whether to include the workflow events log in the job output (either True or False)

Default: True

EnableErrorDetails

Whether to include a detailed error log for a workflow that failed (either True or False)

Default: True

WorkflowRestartMode

The operation to execute when the workflow is in a suspended status, one of the following:

  • Recover — recover the suspended workflow
  • ForceRestart — force a restart of the suspended workflow
  • ForceRestartFromSpecificTask — force a restart of the suspended workflow from a specific task, as specified by the RestartFromTask parameter
  RestartFromTask

The task from which to restart a suspended workflow

This parameter is required only if you set WorkflowRestartMode to ForceRestartFromSpecificTask

WorkflowParametersFile

(Optional) The path and name of the workflow parameters file

This enables you to use the same workflow for different actions.

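For example, a minimal sketch of a job that starts the workflow from a specific task and recovers a suspended workflow (the folder, workflow, and task names are illustrative):

"InformaticaStartFromTaskJob": {
    "Type": "Job:Informatica",
    "ConnectionProfile": "INFORMATICA_CONNECTION",
    "RepositoryFolder": "POC",
    "Workflow": "WF_Test",
    "WorkflowExecutionMode": "StartFromTask",
    "StartFromTask": "s_MapTest_Success",
    "WorkflowRestartMode": "Recover",
    "EnableOutput": "True"
}
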
Job:Dummy

The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands. 

"DummyJob" : {
   "Type" : "Job:Dummy"
}
