Code Reference

Introduction

The code samples below describe how to define Control-M objects using JSON notation. 

Each Control-M object definition begins with the object's "Name", followed by "Type" as the first property. All object names are written in PascalCase notation, with each word starting with a capital letter. In the examples below, the "Name" of the object is "ObjectName" and the "Type" is "ObjectType".

	"ObjectName" : {
		"Type" : "ObjectType"
	}

The following object types are the most basic objects in Control-M job management:

Folder

A container of jobs. The following types of folders are available:

  • Regular folder — Groups together a collection of jobs and enables you to configure definitions at the folder level and have these definitions inherited by the jobs within the folder. For example, you can set schedules and manage events, resources, and notifications at the Folder level, to be applied to all jobs in the folder.
  • SubFolder — A folder nested within another regular folder or subfolder. Subfolders offer many (but not all) of the capabilities that are offered by regular folders.
  • Simple folder — Groups together a collection of jobs. Folder definitions are not inherited by jobs within the folder.

You can use a Flow to define order dependency between jobs in a folder.

Job

A business process that you schedule and run in your enterprise environment.

Connection Profile

Access methods and security credentials for a specific application that runs jobs.

Defaults

Definitions of default parameter values that you can apply to multiple objects at once.
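For instance, a minimal sketch of a Defaults object (values illustrative) that applies a common RunAs user and weekly schedule to the jobs defined alongside it:

	"Defaults" : {
		"RunAs" : "controlm",
		"When" : {
			"WeekDays" : ["SUN"]
		}
	}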

Folder

A folder is a container of jobs and subfolders. The default type of folder (as opposed to a simple folder) enables you to configure various settings such as scheduling, event management, adding resources, or adding notifications on the folder level. Folder-level definitions are inherited by the jobs or subfolders within the folder.

For example, you can specify scheduling criteria at the folder level instead of defining the criteria for each job in the folder. All jobs in the folder take on the folder's rules, which reduces repetition in the job definition code.

    "FolderSample": {
        "Type": "Folder",

         "Job1": {
           "Type": "Job:Command",
                "Command": "echo I am a Job",
                "RunAs": "controlm"
         },
         "Job2": {
           "Type": "Job:Command",
                "Command": "echo I am a Job",
                "RunAs": "controlm"
         }
    }

Optional parameters:

    "FolderSampleAll": {
        "Type": "Folder",
        "AdjustEvents": true,
        "ControlmServer": "controlm",
        "SiteStandard": "",
        "OrderMethod": "Manual",
        "Application": "ApplicationName",
        "SubApplication" : "SubApplicationName",
        "RunAs" : "controlm",
        "When" : {
            "WeekDays": ["SUN"]
        },
        "ActiveRetentionPolicy": "KeepAll",
        "DaysKeepActiveIfNotOk" : "41",
        "mut1" : {
            "Type": "Resource:Mutex",
            "MutexType": "Exclusive"
        },
        "Notify1": {
            "Type": "Notify:ExecutionTime",
            "Criteria": "LessThan",
            "Value": "3",
            "Message": "Less than expected"
        }
    }

AdjustEvents

Whether a job in a folder or subfolder should start running and not wait for an event from a predecessor job that was not scheduled.

Values: true | false
Default: false

ControlmServer Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to.
SiteStandard Enforces the defined Site Standard to the folder and all jobs contained within the folder. See Control-M in a nutshell.
OrderMethod

Options are:

  • (Default) Automatic: The folder and its jobs are ordered automatically on the days specified in the 'When' property
  • Manual: The 'When' property is ignored. To order such a folder, use the "ctm run order" command or an action of type Action:Run
  • Any other value: See the description of the <user daily> method and other methods under Order Method in the Control-M Online Help.
RunAs

The name of the user responsible for running the jobs in the folder or subfolder. For more details, see RunAs.

When Defines scheduling for all jobs in the folder or subfolder, using various scheduling parameters or rule-based calendars. For more details, see When.
ActiveRetentionPolicy

The retention policy for jobs in the folder or subfolder, one of the following options:

  • (Default) KeepAll: All jobs wait for the folder or subfolder to complete and are removed at the same time as the folder or subfolder.
  • CleanEndedOK: Jobs in the folder or subfolder that end OK are removed automatically from the database.
DaysKeepActiveIfNotOk

Defines the number of days to keep all jobs in the folder active after the folder is set to NOT OK.

This parameter is relevant only when ActiveRetentionPolicy=KeepAll.

Valid values are 0-99 (where 99 is forever). The default is 0.

Resource:Mutex See Resource:Mutex for detailed information about the resource.
Notification

Issues notifications for various scenarios that occur before, during, or after the execution of jobs within the folder or subfolder. For more details, see Notification.

If See If and If Actions for detailed information
Job See Job for detailed information
Events See Events for detailed information
Flow See Flow for detailed information

The following example shows a description and time zone defined for the folder Folder8, as well as for a job within the folder.

"Folder8": {
	"Type": "Folder",
	"Description": "folder desc",
	"Application" : "Billing",
	"SubApplication" : "Payable",
	"TimeZone":"HAW",
	"SimpleCommandJob": {
		"Type": "Job:Command", 
		"Description": "job desc", 
		"Application" : "BillingJobs", 
		"SubApplication" : "PayableJobs", 
		"TimeZone":"MST", 
		"Host":"agent8", 
		"RunAs":"owner8", 
		"Command":"ls" 
	}
}


SubFolder

A subfolder is a folder contained within another (parent) folder or subfolder. A subfolder can contain a group of jobs or a next-level subfolder, and it can also contain a flow. Subfolders offer many (but not all) of the capabilities that are offered by regular folders.

The following example shows a folder that contains two subfolders with the most basic definitions:

    "FolderWithSubFolders":{
        "Type":"Folder",
        "SubFolder1":{
            "Type":"SubFolder",
            "job1": {
                "Type": "Job:Script",
                "FileName": "scriptname.sh",
                "FilePath": "/home/user/scripts",
                "RunAs": "em900cob"
            }
        },
        "SubFolder2":{
            "Type":"SubFolder",
            "job1": {
                "Type": "Job:Script",
                "FileName": "scriptname2.sh",
                "FilePath": "/home/user/scripts",
                "RunAs": "em900cob"
            }
        }
    }

The following example shows a more complex hierarchy of subfolders, with scheduling properties:

"FolderWithComplexSubFolders" : {
	"Type" : "Folder",
	"ControlmServer" : "LocalControlM",
	"When" : {
		"RuleBasedCalendars" : {
			"Included" : [ "glob1", "cal2","cal1" ],
			"Excluded" : [ "glob2" ],
			"cal1" : {
				"Type" : "Calendar:RuleBased",
				"When" : {
					"WeekDays" : [ "MON" ],
					"MonthDays" : [ "NONE" ]
				}
			}
		}
	},
	"subF1" : {
		"Type" : "SubFolder",
		"Application" : "application",
		"When" : {
			"RuleBasedCalendars" : {
				"Included" : [ "USE PARENT" ]
			}
		}
	},
	"subF2" : {
		"Type" : "SubFolder",
		"Application" : "application",
		"When" : {
			"FromTime":"1211",
			"ToTime":"2211",
			"RuleBasedCalendars" : {
				"Included" : [ "cal1" ]
			}
		},
		"subF2a" : {
			"Type" : "SubFolder",
			"Application" : "application",
			"job3" : {
				"Type" : "Job:Script",
				"FileName" : "scriptname.sh",
				"FilePath" : "/home/user/scripts",
				"RunAs" : "em900cob",
				"Application" : "application"
				}
		}
	}
}

Subfolders support the use of many of the same properties and parameters as regular folders, as described throughout this reference.

Simple Folder

Simple Folder is a container of jobs. A Simple Folder does not enable configuration of job definitions at the folder level. The following example shows how to use a simple folder.

{
  "SimpleFolderName": {
    "Type": "SimpleFolder",
    "ControlmServer": "ec2-54-191-85-182",
    "job1": {
      "Type": "Job:Command",
      "Command": "echo 123",
      "RunAs": "controlm"
    },
    "job2": {
      "Type": "Job:Command",
      "Command": "echo 123",
      "RunAs": "controlm"
    },
    "Flow": {
      "Type": "Flow",
      "Sequence": ["job1", "job2"]
    }
  }
}

The following example shows optional parameters for SimpleFolder:

{
    "FolderSampleAll": {
        "Type": "SimpleFolder",
        "ControlmServer": "controlm",
        "SiteStandard": "myStandards",
        "OrderMethod": "Manual"
    }
}


ControlmServer Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to.
SiteStandard Enforces the defined Site Standard to the folder and all jobs contained within the folder. See Control-M in a nutshell.
OrderMethod

Options are:

  • (Default) Automatic: The folder and its jobs are ordered automatically on the days specified in the 'When' property
  • Manual: The 'When' property is ignored. To order such a folder, use the "ctm run order" command or an action of type Action:Run
  • Any other value: See the description of the <user daily> method and other methods under Order Method in the Control-M Online Help.


Flow

The Flow object type allows you to define order dependency between jobs in folders and subfolders. A job must end successfully for the next job in the flow to run.

    "flowName": {
      "Type":"Flow",
      "Sequence":["job1", "job2", "job3"]
    }

The following example shows how one job can be part of multiple flows. Job3 executes if either Job1 or Job2 ends successfully.

    "FlowSamples" :
    {
        "Type" : "Folder",

        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },    
        "Job2": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },    
        "Job3": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }, 
        "flow1": {
          "Type":"Flow",
          "Sequence":["Job1", "Job3"]
        },
        "flow2": {
          "Type":"Flow",
          "Sequence":["Job2", "Job3"]
        }

    }

The following example shows how to create flow sequences with jobs contained within different folders and subfolders.

    "FolderA" :
    {
        "Type" : "Folder",
        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }
    },    
    "FolderB" :
    {
        "Type" : "Folder",
        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },
		"SubFolderB1" : {
			"Type" : "SubFolder",
			"Job2": {
				"Type" : "Job:Command",
				"Command" : "echo hello again from subjob",
            	"RunAs" : "user1"  
        	}
		}
 	},    
    "CrossFoldersFlowSample": {
        "Type":"Flow",
          "Sequence":["FolderA:Job1", "FolderB:Job1", "FolderB:SubFolderB1:Job2]
    }            	

The following example shows a flow defined within a subfolder and referencing jobs within next-level subfolders.

"FolderWithSubFoldersAndFlow":{
      "Type":"Folder",
      "SubFolderA":{
            "Type":"SubFolder",
            "SubFolder1":{
                  "Type":"SubFolder",
                  "job1": {
                        "Type": "Job:Script",
                        "FileName": "scriptname.sh",
                        "FilePath": "/home/user/scripts",
                        "RunAs": "em900cob"
                  }
            },
            "SubFolder2":{
                  "Type":"SubFolder",
                  "job1": {
                        "Type": "Job:Script",
                        "FileName": "scriptname2.sh",
                        "FilePath": "/home/user/scripts",
                        "RunAs": "em900cob"
                  }
            },
            "flowInSubFolderA": {
                  "Type": "Flow",
                  "Sequence": [
                  "SubFolder1:job1",
                  "SubFolder2:job1"
                  ]
            }
        }
    }



Job Properties

Below is a list of job properties for Control-M objects.

Type

Defines the type of job. For example:

    "CommandJob1": {
        "Type" : "Job:Command",
        "Command" : "echo hello",
        "RunAs" : "user1"
        }

Many of the other properties that you include in the job's definitions depend on the type of job that you are running. For a list of supported job types and more information about the parameters that you use in each type of job, see Job types.

Application, SubApplication 

Supplies a common descriptive name to a set of related Jobs, Folders, or SubFolders. The jobs do not necessarily have to run at the same time.

    "Job1": {
	    "Type": "Job:Command",
 		"Application": "ApplicationName",
		"SubApplication": "SubApplicationName",
        "Command": "echo I am a Job",
        "RunAs": "controlm"
    }


Comment

Allows you to write a comment on an object. Comments are not uploaded to Control-M.

    "JobName": {
        "Type" : "Job:Command",
        "Comment" : "code reviewed by tom",
        "Command" : "echo hello",
        "RunAs" : "user1"
        }


When

Enables you to define scheduling parameters for Jobs, Folders and SubFolders, including the option of using calendars. If When is used in a Folder or SubFolder, those parameters apply to all Jobs in the Folder or Subfolder.

When working in a Control-M Workbench environment, jobs do not wait for time constraints and run in an ad-hoc manner. Once deployed to a Control-M instance, all time constraints are obeyed.

The following example defines scheduling based on a combination of date and time constraints:

      "When" : {
                "Schedule":"Never",
				"Months": ["JAN", "OCT", "DEC"],
                "MonthDays":["22","1","11"],
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"      
            }

The following example defines scheduling based on specific dates that you specify explicitly:

      "When" : {
                "WeekDays" : [ "NONE" ],
                "Months" : [ "NONE" ],
                "MonthDays" : [ "NONE" ],
                "SpecificDates" : [ "03/01", "03/10" ],
                "FromTime":"1500",
                "ToTime":"1800" 
            }

The following date/time constraints are available for use in the definitions of a Job, Folder, or SubFolder:

WeekDays

One or more of the following: "SUN", "MON", "TUE", "WED", "THU", "FRI", "SAT"

For all days of the week, use "ALL" (the default value).

Applies to: Job, Folder

Months

One or more of the following: "JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"

For all months of the year, use "ALL" (the default value).

Applies to: Job, Folder

MonthDays

One or more days in the range of 1 to 31.

For all days of the month, use "ALL" (the default value).

Applies to: Job, Folder

FromTime

Specifies that the job will not start before this time.

Format: HHMM

Applies to: Job, Folder, SubFolder

ToTime

Specifies that the job will not start after this time.

Format: HHMM

To allow the job to be submitted even after its original scheduling date (if it was not submitted on the original date), specify a value of ">".

Applies to: Job, Folder, SubFolder

Schedule

One of the following options:

  • "Everyday" - scheduling is applied every day, provided that the running criteria are met
  • "Never" - no scheduling is defined, and the job must be ordered manually

Applies to: Job, Folder

SpecificDates

Specific dates for running jobs.

For each date, use the format "MM/DD" (enclosed in quotes). Separate multiple dates with commas.

Note: The SpecificDates option cannot be used in combination with the WeekDays, Months, or MonthDays options. However, since the default for these options is "ALL", you must specify these options with a value of "NONE".

Applies to: Job, Folder
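As a minimal sketch of two of these options (job and command names are illustrative): the first job below may be submitted even after its original scheduling date, and the second is never ordered automatically and must be ordered manually.

    "LateSubmissionJob": {
        "Type": "Job:Command",
        "Command": "echo may run after its order date",
        "RunAs": "user1",
        "When": {
            "ToTime": ">"
        }
    },
    "ManuallyOrderedJob": {
        "Type": "Job:Command",
        "Command": "echo ordered manually",
        "RunAs": "user1",
        "When": {
            "Schedule": "Never"
        }
    }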

MonthDays additional parameters

You can specify the days of the month the job will run by referencing a predefined calendar set up in Control-M. 

"When": {
	"MonthDaysCalendar": “Summer2017”
}

You can specify the days of the month the job will run by referencing a predefined calendar set up in Control-M, and also using advanced rules specified in the MonthDays parameter:

"When": {
	"MonthDaysCalendar": “Summer2017”,
    “MonthDays":[“1”,”+2”,”-3”,”>4”,”<5”,”D6”,”L7”]
}

Where:

MonthDays syntax - Description

1 - Day 1 included only if defined in the calendar
+2 - Day 2 included regardless of calendar
-3 - Day 3 excluded regardless of calendar
>4 - Day 4 or the next closest calendar working day
<5 - Day 5 or the previous closest calendar working day
D6 - The 6th calendar working day
L7 - The 7th from the last calendar working day
D6PA or D6P* - If MonthDaysCalendar is of type periodical, you can use PA or P* to specify a calendar period name such as A, B, or C, or you can use * for any period
-D6 or -L6P* - D and L can also take an exclude specifier

WeekDays additional parameters 

You can specify the days of the week that the job will run by referencing a predefined calendar set up in Control-M, and also using advanced rules specified in the WeekDays parameter:

"When" : {
	"WeekDaysCalendar" : "Summer2017",
	"WeekDays" : ["SUN","+MON","-TUE",">WED","<THU"]
}

Where:

WeekDays syntax - Description

SUN - Sunday included only if defined in the calendar
+MON - Monday included regardless of calendar
-TUE - Tuesday excluded regardless of calendar
>WED - Wednesday or the next closest calendar working day
<THU - Thursday or the previous closest calendar working day

Specifying start/end dates of a job run

You can add start and end dates for a job run in addition to the other date/time elements. 

 "When": { 
            "StartDate":"20160322", 
            "EndDate":"20160325" 
         }

Where:

StartDate - First date that a job can run
EndDate - Last date that a job can run

Relationship between MonthDays and WeekDays

"When" : {
    "Months": ["JAN", "OCT", "DEC"], 
	"MonthDays":["22","1","11"],
	“DaysRelation” : “OR”,
    "WeekDays":["MON","TUE"]
}


Parameter - Description

DaysRelation

Defines the logical relationship between the MonthDays and WeekDays constraints:

AND - the job runs only if both the WeekDays and MonthDays constraints are met
OR - the job runs if either the WeekDays or MonthDays constraints are met

Default: AND

Using rule-based calendars in job scheduling

You can base scheduling on predefined rule-based calendars (RBC). Under the When parameter for a specific Job, Folder, or SubFolder, you can specify the RBCs to include and the RBCs to exclude from scheduling. For more information, see Rule-based Calendar and Excluded Rule-based Calendar lists in the Control-M Online Help.

"RuleBasedCalJobSimple" : {
  "Type" : "Job:Command",
  "RunAs" : "controlm",
  "Command" : "ls -l",
  "When" : {
	"RuleBasedCalendars":{
		"Included": ["calendar1"],
		"Excluded": ["calendar2"]
	}
  }
}        

You can combine the use of rule-based calendars with standard scheduling parameters. In such a case, the Relationship parameter enables you to define the logical relationship (AND/OR) between the criteria defined by the calendars and all other basic scheduling criteria. In other words, you can decide whether either set of criteria, or both sets of criteria, must be satisfied. The default relationship is OR.

"RuleBasedCalJobComplex" : {
  "Type" : "Job:Command",
  "RunAs" : "controlm",
  "Command" : "ls -l",
  "When" : {
	"RuleBasedCalendars":{
		"Included": ["weekdays"],
		"Excluded": ["endOfQuarter"],
		"Relationship": "AND"
	},
	"Months":["JAN","FEB","MAR"],
	"WeekDays":["TUE","WED"]
  }
 }

Rule-based calendars can also be used with SubFolders. In the following example, two different SubFolders are defined in the parent folder. For one of these SubFolders, RBCs to include and exclude are explicitly specified. These RBCs are either global or from the parent folder. For the other SubFolder, the "USE PARENT" value is specified instead of the name of an actual RBC, so that all scheduling is inherited from the parent Folder (as shown in the following example).

	"subF1" : {
		"Type" : "SubFolder",
		"When" : {
			"FromTime":"1211",
			"ToTime":"2211",
			"RuleBasedCalendars" : {
				"Included" : [ "calendar1" ],
				"Excluded" : [ "calendar2" ]
			}
		}
	},
	"subF2" : {
		"Type" : "SubFolder",
		"When" : {
			"RuleBasedCalendars" : {
				"Included" : [ "USE PARENT" ]
			}
		}
	}

For folders, you can define Folder RBCs in addition to using predefined RBCs or other basic scheduling criteria. Folder RBCs are specific to a single folder and are applied to all jobs within the folder. The following example shows how to define RBCs under the folder's When parameter:

"RuleBasedTestFolder": {
	"Type": "Folder",
	"ControlmServer": "LocalControlM",
	"When": {
		"RuleBasedCalendars": {
			"endOfQ":{
				"Type": "Calendar:RuleBased",
				"When": {
					"Months": ["MAR","JUN","SEP","DEC"],
					"MonthDays": ["29","30","31"]
				}
			},
			"winterWeekendDays": {
				"Type": "Calendar:RuleBased",
				"When": {
					"Months": ["DEC","JAN","FEB"],
					"WeekDays": ["SAT","SUN"]
				}
			},
			"Included": ["winterWeekendDays","calendar1"],
			"Excluded": ["endOfQ"]
		},
		"Months": ["APR"],
		"WeekDays": ["FRI","MON"]
	},
	"TestJobInTestFolder": {
		"Type": "Job:Command",
		"RunAs": "controlm",
		"Command": "ls -l",
		"When": {
			"RuleBasedCalendars": {
				"Included": ["calendar3"],
				"Excluded": ["calendar2"]
			}
		}
	}	
}

Note the following guidelines:

  • Each Folder RBC that you define has its own When parameter, with scheduling parameters under it. The scheduling parameters under this When parameter are similar to the scheduling parameters under a When parameter of a job or folder, with the following exceptions:
    • The When parameter of an RBC does not support the FromTime and ToTime parameters.
    • For the When parameter of an RBC, you can also use the DaysKeepActive parameter.
    • You cannot nest another RuleBasedCalendars parameter under the When parameter of an RBC.
  • To apply the defined Folder RBCs to the jobs within the folder, you must list each of the defined Folder RBCs in either the Included parameter or the Excluded parameter.
  • You can list additional predefined Control-M RBCs in the Included and Excluded parameters. In the example above, a predefined RBC named calendar1 is listed along with the Folder RBC winterWeekendDays.
  • You can combine the use of RBCs with other scheduling parameters. In the example above, the additional Months and WeekDays settings were added after the RuleBasedCalendars definitions. These further scheduling parameters are combined with the RBCs based on a logical AND.
  • The folder's scheduling parameters are inherited by each of the jobs in the folder. In addition, for any specific job in the folder, you can add further scheduling parameters or RBCs. In the example above, further RBCs (calendar3 and calendar2) are associated with a job named TestJobInTestFolder.


Events

Events can be generated by Control-M or used to trigger jobs. An event is defined by a name and a date.

Events can be used in the following ways:

  1. A job can wait for events before running, add events after running, or delete events after running. See WaitForEvents, AddEvents, and DeleteEvents.
  2. Jobs can add or remove events in Control-M. See Event:Add or Event:Delete.
  3. You can add or remove events in Control-M through an API call. See Event Management.

You can set events for a Job, Folder, or SubFolder.

For "OrderDate", you can use the following values:

Date Type - Description

AnyDate - Any scheduled date
OrderDate - Control-M scheduled date. If you do not specify an OrderDate value, this is the default.
PreviousOrderDate - Previous Control-M scheduled date
NextOrderDate - Next Control-M scheduled date
MMDD - Specific date (for example, "0511")

WaitForEvents

The following example shows how to define events that the job must wait for before running:

"Wait1": {
          "Type": "WaitForEvents",
          "Events": [
              {"Event":"e1"}, 
              {"Event":"e2"}, 
              {"Event":"e3", "OrderDate":"AnyDate"}
          ]
}

You can specify the logical relationship between events, using logical operators (AND/OR) and parentheses. The default relationship is AND. Note that nesting of parentheses within parentheses is not supported.

"Wait2": {
           "Type": "WaitForEvents",
           "Events": [
               "(",
               {"Event":"ev1"},
               "OR",
               {"Event":"ev2"},
               ")",
               "OR",
               "(",
               {"Event":"ev3"},
               {"Event":"ev4"},
               ")"
            ]
}

AddEvents

The following example shows how to specify events for the job to add after running:

"add1" :
{
          "Type": "AddEvents",
          "Events": [
              {"Event":"a1"}, 
              {"Event":"a2"}, 
              {"Event":"a3", "OrderDate":"1112"}
          ]
}

DeleteEvents

The following example shows how to specify events for the job to remove after running:

"del1" :
{
          "Type": "DeleteEvents",
          "Events": [
              {"Event":"d1"},
              {"Event":"d2", "OrderDate":"1111"},
              {"Event":"d3"}
          ]
}


If

If statements trigger one or more actions when job-related criteria are fulfilled (for example, the job ended with a specific status or the job failed several times).

The following sections describe the If statements that are available for specifying the job-related criteria that must occur for action to be taken.

For descriptions of the various actions that can be triggered in response to an If statement that is fulfilled, see If Actions.

If:CompletionStatus

The following example shows an If statement that triggers actions based on job completion status. In this example, if the job runs unsuccessfully, it sends an email and runs another job. You can set this property for a Job, Folder, or SubFolder.

    "JobName": {
        "Type" : "Job:Command",
        "Command" : "echo hello",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1",  
        
        "ActionIfFailure" : {
            "Type": "If",        
            "CompletionStatus": "NOTOK",
            
            "mailToTeam": {
              "Type": "Action:Mail",
              "Message": "Job %%JOBNAME failed",
              "To": "team@mycomp.com"
            },
            "CorrectiveJob": {
              "Type": "Action:Run",
              "Folder": "FolderName",
              "Job": "JobName"
            }
        }
    }

If actions can be triggered based on one of the following CompletionStatus values:

Value - Action

NOTOK - When the job fails
OK - When the job completes successfully
ANY - When the job completes, regardless of success or failure
value - When the completion status equals value (for example, value=10)
Even - When the completion status is an even number
Odd - When the completion status is an odd number
">=5", "<=5", "<5", ">5", "!=5" - When the completion status comparison is true

If:NumberOfReruns

The following example shows how to trigger an action based on number of job reruns. You can set this property for a Job, Folder, or SubFolder.

"ActionByNumberOfReruns" : {
	"Type": "If:NumberOfReruns", 
	"NumberOfReruns": ">=4",
    "RunJob": {
    	 "Type": "Action:Run",
   		 "Folder": "Folder1",
   		 "Job": "job1"
	}
}

Where:

Parameter - Description

NumberOfReruns

Performs an action if the condition on the number of job reruns is met.

Possible values: "Even", "Odd", "!=value", ">=value", "<=value", ">value", "<value", "value"

If:NumberOfFailures

The following example shows how to trigger an action based on number of job failures. You can set this property for a Job, Folder, or SubFolder.

"ActionByNumberOfFailures" : {
	"Type": "If:NumberOfFailures", 
	"NumberOfFailures": "1",
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

Where:

Parameter - Description

NumberOfFailures

Performs an action if the condition on the number of job failures is met.

Possible values: "value"

If:JobNotSubmitted

The following example shows how to trigger an action when the job is not submitted.

"ActionByJobNotSubmitted" : {
	"Type": "If:JobNotSubmitted"
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

If:JobOutputNotFound

The following example shows how to trigger an action when the job output is not found.

"ActionByOutputNotFound" : {
	"Type": "If:JobOutputNotFound"
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

If:NumberOfExecutions

The following example shows how to trigger an action based on number of job executions. You can set this property for a Job, Folder, or SubFolder.

"ActionByNumberExecutions" : {
	"Type": "If:NumberOfExecutions", 
	"NumberOfExecutions": ">=5"
    "RunJob": {
          "Type": "Action:Run",
          "Folder": "Folder1",
          "Job": "job1"   
    }
}

Where:

Parameter - Description

NumberOfExecutions

Performs an action if the condition on the number of job executions is met.

Possible values: "Even", "Odd", "!=value", ">=value", "<=value", ">value", "<value", "value"

If:Output

The following example shows how to trigger an action based on whether a specified string is found within the job output. You can set this property for a Job, Folder, or SubFolder.

"OutputFound":{
    "Type": "If:Output",
    "Code": "myfile.sh",
    "Statement": "ls -l",
    "RunJob":{
          "Type":"Action:Run",
          "Folder":"Folder1",
          "Job":"job1"
    }
}

Where:

Parameter Description
Code

The string to search for in the output.

You can include wildcards in the code — * for any number of characters, and $ or ? for any single character.

Statement

(Optional) Limits the search to a specific statement within the output. If no statement is specified, all statements in the output are searched.

You can include wildcards in the statement — * for any number of characters, and $ or ? for any single character.
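For instance, a minimal sketch (names illustrative) that searches the entire output for any occurrence of the word ERROR, using the * wildcard, and sends a notification when it is found:

"ErrorInOutput": {
    "Type": "If:Output",
    "Code": "*ERROR*",
    "NotifyOnError": {
        "Type": "Action:Notify",
        "Message": "Error found in job output",
        "Destination": "JobLog",
        "Urgency": "Urgent"
    }
}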


If Actions

The following actions can be triggered in response to an If statement that is fulfilled:

You can set any of these properties for a Job, Folder, or SubFolder.

For descriptions of the various If statements that you can specify to trigger these actions, see If.

Action:Mail 

The following example shows an action that sends an email.

    "mailToTeam": {
      "Type": "Action:Mail",
      "Message": "%%JOBNAME failed",
      "To": "team@mycomp.com"
    }

The following example shows that you can add optional parameters to the email action.

    "mailToTeam": {
      "Type": "Action:Mail",
      "Urgency": "Urgent", 
      "Subject" : "Completion Email",
      "Message": "%%JOBNAME just completed", 
      "To": "team@mycomp.com",
      "CC": "other@mycomp.com",
      "AttachOutput": true
 }

The following table describes the parameters of the email action:

Parameter - Description

Urgency

Level of urgency of the message: Regular, Urgent, or VeryUrgent. The default is Regular.

Subject

A subject line for the message.

Message

The message text.

To

A list of email recipients to whom the message is directed. Use a semicolon (;) to separate multiple email addresses.

CC

A list of email recipients who receive a copy of the message. Use a semicolon (;) to separate multiple email addresses.

AttachOutput

Whether to include the job output as an email attachment, either true or false. If no value is specified, the default follows the configuration of the Control-M/Server.

Action:Rerun

The following example shows an action that reruns the job.

"RerunActionName": {
      "Type": "Action:Rerun"  
}

Action:Set

The following example shows an action that sets a variable.

"SetVariable": {
   "Type": "Action:Set",
    "Variable": "var1",
    "Value": "1"
}

Action:SetToOK

The following example shows an action that sets the job status to OK.

"SetToOKActionName": {
      "Type": "Action:SetToOK"
}

Action:SetToNotOK

The following example shows an action that sets the job status to not OK.

"SetToNotOKActionName": {
      "Type": "Action:SetToNotOK"
}

Action:StopCyclicRun

The following example shows an action that disables the cyclic attribute of the job.

"CyclicRunActionName": {
      "Type": "Action:StopCyclicRun"
}

Action:Run

The following example shows an action that runs another job.

"CorrectiveJob": {
        "Type": "Action:Run",
        "Folder": "FolderName",
        "Job": "JobName",
        "ControlmServer":"RemoteControlM",
        "Date":"010218",
        "Variables":[{"Cvar1":"val1"}, {"Cvar2":"val2"}]
}

The run action has the following optional properties:

Property - Description

ControlmServer

The Control-M Scheduling Server for the run action. By default, this is the same server as defined for the folder. You can use this property to specify a different, remote server.

Date

Value to be used as the original scheduling date for the job. The default is OrderDate (that is, the Control-M scheduled date). For any other date, specify the date in the relevant 4-character or 6-character format (mmdd, ddmm, yymmdd, or yyddmm), depending on the site standard.

Variables

If the run action is defined for a remote Control-M Scheduling Server, you can define variables for the run action.

Action:Notify

The following example shows an action that sends a notification.

"Notifying": {
   "Type": "Action:Notify",
   "Message": "job1 just ran",
   "Destination": "JobLog",
   "Urgency": "VeryUrgent"
}

Event:Add

The following example shows an action that adds an event for the current date.

"setEvent1": {
    "Type": "Event:Add",
    "Event": "e1"
}

Optional parameters:

"setEvent1": {
    "Type": "Event:Add",
    "Event": "e1",
    "OrderDate": "1010"
}

Date Type - Description

AnyDate - Any scheduled date
NoDate - Not date specific
OrderDate - Control-M scheduled date
PreviousOrderDate - Previous Control-M scheduled date
NextOrderDate - Next Control-M scheduled date
MMDD - Specific date (for example, "0511")

Event:Delete

The following example shows an action that deletes an event.

"unsetEvent2": {
    "Type": "Event:Delete",
    "Event": "e2",
    "OrderDate": "PreviousOrderDate"
}

OrderDate possible values:

  • "AnyDate"
  • "OrderDate"
  • "PreviousOrderDate"
  • "NextOrderDate"
  • "0511" - (MMDD)

Action:Output

The Output action supports the following operations:

  • Copy
  • Move
  • Delete
  • Print

The following example shows an action that copies the output to the specified destination.

"CopyOutput": {
         "Type": "Action:Output",
         "Operation": "Copy",
         "Destination": "/home/copyHere"
}


Confirm

Allows you to define a job or subfolder that requires user confirmation before it runs. Confirmation is given by running the run confirm command.

 "JobName": {
        "Type" : "Job:Command",
        "Comment" : "this job needs user confirmation to start execution",
        "Command" : "echo hello",
        "RunAs" : "user1",
		"Confirm" : true
        }


CreatedBy

Allows you to specify the Control‑M user responsible for job definitions. You can define this property for a Job object or Folder object.

    "SimpleJob": {
        "Type": "Job:Command",
        "Command": "echo I am a Job.",
        "RunAs": "controlm",
        "CreatedBy":"username"
    }

The behavior of this property depends on the security policy defined in the Control-M environment, as controlled by the AuthorSecurity parameter in Control-M/Enterprise Manager:

Security level - Allowed values - Default value

Permissive - Any user name - ctmdk (an internal user)
Restrictive - User currently logged in - User currently logged in



Critical

Allows you to set a critical job. A critical job receives higher priority when reserving the resources that it needs to run.

Default: false

"Critical": true


DaysKeepActive

Allows you to define the number of days to keep a job if the job did not run at its scheduled date. You can set this property for a Job, Folder, or SubFolder.

Jobs in a folder are kept until the maximum DaysKeepActive value for any of the jobs in the folder has passed. This enables you to retrieve job status of all the jobs in the folder. 

"DaysKeepActiveFolder": {
       "Type" : "Folder",
       "Defaults": {
         "RunAs":"owner8" 
       },
       "keepForeverIfDidNotRun": { 
         "Type": "Job:Command", 
         "Command":"ls",
         "DaysKeepActive": "Forever" 
       },
       "keepForThreeDaysIfDidNotRun": { 
         "Type": "Job:Command", 
         "Command":"ls",
         "DaysKeepActive": "3" 
       }
}

Where:

DaysKeepActive

Valid values:

  • 0-98
  • Forever

Default: 0


Description

Enables you to add a description to a Job, Folder, or SubFolder.

 "DescriptionFolder":
    {
       "Type" : "Folder",
       "Description":"folder description",
       "SimpleCommandJob": { 
         "Type": "Job:Command", 
         "Description":"job description",
         "RunAs":"owner8", 
         "Command":"ls"
       }
       
    }


Documentation

Allows you to add the location and name of a file that contains the documentation for the Job, Folder, or SubFolder.

"DocumentationFile":{
	"Path": "C://temp",
	"FileName": "job.txt"
	}
}

Allows you to add the URL location of a file that contains the documentation for the Job, Folder, or SubFolder.

"DocumentationUrl":{
	"Url": "http://bmc.com"
	}
}


EndFolder

Enables you to specify which job is the end point in a folder. After this job completes, no additional jobs in the folder will run, unless they have already started running. The folder is complete once all jobs that are still running complete. Remaining jobs that have not yet started running change to status WAIT SCHEDULE.

Values: true | false
Default: false

"EndFolder": {
        "Type": "Folder",
        "EndFolderJob": {
            "Type": "Job:Command",
            "Command": "echo When this job ends, the folder is complete",
            "RunAs": "controlm",
            "EndFolder": true
        }


Notification

Allows you to create a notification for certain scenarios before, during and after job execution. You can set notifications for a Job, Folder, or SubFolder.

The following example shows a notification sent to the JobLog upon critical job failure.

"NotifyCriticalJobFailure": {
   "Type":"Notify:NotOK",
   "Message": "Critical job failed, details in job output",
   "Urgency": "Urgent",
   "Destination": "JobLog"
}

The following parameters are relevant to all notification types:

Parameters - Description

Message

The message to display.

Destination

The message is sent to one of the following destinations:

  • (Default) Alerts - the Control-M Alerts window
  • JobLog - the Control-M job log; to retrieve the job log, use the run job:log::get command
  • Console - the operating system console

Alternatively, you can specify a predefined destination value (for example, FinanceGroup).

Urgency

The message urgency is logged as one of the following:

  • (Default) Regular
  • Urgent
  • VeryUrgent

Notify:OK

When the notification type is set to OK and the job executes with no errors, the notification "Job run OK" is sent to the JobLog.

"Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:OK",
        "Message": "Job run OK",
        "Destination": "JobLog"
      }
    }
}

Notify:NotOK

When the notification type is set to NotOK and the job executes with errors, the notification "Job run not OK" is sent to the JobLog.

"Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:NotOK",
        "Message": "Job run not OK",
        "Destination": "JobLog"
      }
    }
}

Notify:DoesNotStart

If the job has not started by 15:10, a notification is immediately sent to the email address defined in the job, with the message that the job has not started.

{
"Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
	  "Notify3": {
        "Type": "Notify:DoesNotStart",
        "By": "1510",
        "Message": "Job has not started",
        "Destination": "mail",
        "Urgency": "VeryUrgent"
	    }
	}
  }
}	

Parameters - Description

By

Format: HHMM

A notification is sent when the job does not start by the specified time.

Notify:ExecutionTime

When the notification type ExecutionTime is set with the LessThan criteria and a value of 3, if the job completes in less than 3 minutes, the notification "Less than expected" is sent to the Alerts destination (the default).

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:ExecutionTime",
        "Criteria": "LessThan",
        "Value": "3",
        "Message": "Less than expected"
      }
	}
  }
}

Criteria

Value - Description

LessThan

Value in minutes (for example, 3). If the job runs for less than the defined value, a notification is sent to the defined destination.

GreaterThan

Value in minutes (for example, 5). If the job runs for longer than the defined value, a notification is sent to the defined destination.

LessThanAverage

Value in minutes or percentage (for example, 10%). If the job runs for less than the defined value of the average execution time of the job, a notification is sent to the defined destination.

GreaterThanAverage

Value in minutes or percentage (for example, 10%). If the job runs for longer than the defined value of the average execution time of the job, a notification is sent to the defined destination.

Notify:DoesNotEnd

When the notification type is set to DoesNotEnd and the job does not end by the specified time, the message is sent to the JobLog.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:DoesNotEnd",
        "By": "1212",
		"Message": "Job does not end",	
		"Destination": "JobLog"
      }
	}
  }
}

Parameters - Description

By

Format: HHMM

A notification is sent when the job does not end by the specified time.

Notify:ReRun

When the notification type is set to ReRun and the job reruns, the message is sent to the Console.

{
  "Folder1": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "user1",
      "Notify1": {
        "Type": "Notify:ReRun",
		"Message": "Job5 ReRun",	
		"Destination": "Console"
      }
	}
  }
}


OverridePath

Allows you to specify an alternative path for the script used by the job.

The following example demonstrates the use of the OverridePath property:

"JobName": {
    "Type": "Job:Script",
    "FileName": "task1123.sh",
    "FilePath": "/home/user1/scripts",
    "Host": "myhost.mycomp.com",
    "RunAs": "user1",
    "Priority": "XB",
    "OverridePath":"/usr/lib/overridepath" ,
    "RunAsDummy": true
}



Priority

Allows you to define the priority that a job has over other jobs. You can set this property for a Job, Folder, or SubFolder.

The following options are supported:

  • Very High

  • High

  • Medium

  • Low

  • Very Low (default)

{
	"Folder8": {
		"Type": "Folder",
		"Description": "folder desc",
		"Application": "Billing",
		"SubApplication": "Payable",
		"SimpleCommandJob": {
			"Type": "Job:Command",
			"Description": "job desc",
			"Application": "BillingJobs",
			"Priority": "High",
			"SubApplication": "PayableJobs",
			"TimeZone": "MST",
			"Host": "agent8",
			"RunAs": "owner8",
			"Command": "ls"
		}
	}
}


Rerun

Allows you to define cyclic jobs.

The following example shows how to define a cyclic job that runs every 2 minutes indefinitely.

    "Rerun" : {
        "Every": "2"
    }

The following example shows how to run a job four times where each run starts three days after the previous run ended.

    "Rerun" : {
        "Every": "3",
        "Units":  "Days",                     
        "From": "End",                               
        "Times": "4"
    }

Units

One of the following: "Minutes", "Hours", or "Days". The default is "Minutes".

From

One of the following values:

  • Start - the next run time is calculated as N Units from the start time of the current run
  • End - the next run time is calculated as N Units from the end time of the current run
  • Target - a run starts every N Units

The default is "Start".

Times

Number of cycles to run. To run forever, define 0.

The default is to run forever.


RerunIntervals

Allows you to define a set of time intervals at which the job reruns.

The following example shows how to define RerunIntervals so that the job reruns at each of the specified intervals, measured from the end of the previous run.

"RerunIntervals": {
           "Intervals" : ["12m","11h","12d","1m"],
           "From": "End"
       }

Intervals

The time intervals at which the job runs again, in months, hours, days, and minutes.

From

One of the following values:

  • Start - the next run time is calculated as N Units from the start time of the current run
  • End - the next run time is calculated as N Units from the end time of the current run
  • Target - a run starts every N Units

The default is "Start".


RerunSpecificTimes

Allows you to rerun a job at specific times.

The following example shows how to define RerunSpecificTimes so that the job reruns at the specified times.

"CyclicExactTimesJob": {
    "Type": "Job:Command",
    "Command":"ls",
    "RunAs": "user1",
    "RerunSpecificTimes": {
        "At" : ["0900","1100","1230","1710"],
        "Tolerance": "20"
   }
}

At

One or more times of day, in the format HHMM.

Tolerance

Maximum delay in minutes permitted for a late submission at a specific time.


RerunLimit

Allows you to set a limit to the number of times a non-cyclic job can rerun.

"jobWithRerunLimit": {
        "Type":"Job:Command",
		"Command":"ls",
		"RunAs":"user1",
		"RerunLimit": {
			"Times":"5"
		}
     }

Times

Maximum number of times a non-cyclic job can rerun.

Default: 0 (no limit to the number of reruns).


Resources 

Resource:Semaphore

Allows you to set the Semaphore (also known as a quantitative resource) quantity for the job, used to control access to a resource that is concurrently shared by other jobs. For API command information on resources, see Resource Management.

The following example shows how to add a semaphore parameter to a job. 

{
  "FolderRes": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "pok",
      "Critical": true,
      "sem1": {
        "Type": "Resource:Semaphore",
        "Quantity": "3"
      }      
    }
  }
}

Resource:Mutex

Allows you to set a Mutex (also known as control resource) as shared or exclusive. If the resource is shared, other jobs can use the resource concurrently. If set to exclusive, the job has to wait until the resource is available before it can run. You can set a Mutex for a Job, Folder, or SubFolder.

The following example shows how to add a Mutex parameter to a job.

{
  "FolderRes": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls", 
      "RunAs": "pok",
      "Critical": true,
      "mut1": {
        "Type": "Resource:Mutex",
        "MutexType": "Exclusive"
      }
    }
  }
}
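By contrast, a minimal sketch of a shared Mutex (names illustrative), which allows other jobs to use the resource concurrently:

"mutShared": {
    "Type": "Resource:Mutex",
    "MutexType": "Shared"
}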


RetroactiveOrder

Enables you to order a job retroactively to make up for days on which the job did not run. For example, Control-M was down for two days due to a hardware issue; as soon as jobs can run again, this job is scheduled retroactively to run an additional two times, to make up for the days that Control-M was inactive.

Values: true | false
Default: false

"RetroactiveJob": {
            "Type": "Job:Command",
            "Command": "echo I am a retroactive order Job, I will be ordered even after my date",
            "RunAs": "controlm",
            "RetroactiveOrder": true
        }


RunAs

Enables you to define the OS user responsible for running a job (or Folder or SubFolder).

By default, jobs are run by the user account where the Control-M/Agent is installed. To specify a different user, the agent must be running as root.

	"Job1": {
		"Type": "Job:Command",
		"Command": "echo I am a Job",
		"RunAs": "controlm",
	}


RunAsDummy

Enables you to run a job of any type (other than Dummy) as a dummy job.

This is useful, for example, when a job is temporarily not in use but is still included in a flow. You can temporarily set this job to run as a dummy job, so that there is no impact to the flow.

Values: true | false
Default: false

	"Job1": {
		"Type": "Job:Command",
		"Command": "echo I am a Job",
		"RunAs": "controlm",
		"RunAsDummy": true
    }


RunOnAllAgentsInGroup

Allows you to set jobs to run on all agents in the group. 

The following example shows how to define RunOnAllAgentsInGroup:

"jobOnAllAgents": {
         "Type": "Job:Dummy",
         "RunOnAllAgentsInGroup" : true,
         "Host" : "dummyHost"
       }

RunOnAllAgentsInGroup

true | false

Default: false


Time Zone

Allows you to add a time zone to a Job, Folder, or SubFolder. Time zones should be defined at least 48 hours before the intended execution date. We recommend defining the same time zone for all jobs in a folder.

"TimeZone":"MST"

Time zone possible values:

HNL (GMT-10:00)
HAW (GMT-10:00)
ANC (GMT-09:00)
PST (GMT-08:00)
MST (GMT-07:00)
CST (GMT-06:00)
EST (GMT-05:00)
ATL (GMT-04:00)
RIO (GMT-03:00)
GMT (GMT+00:00)
WET (GMT+01:00)
CET (GMT+02:00)
EET (GMT+03:00)
DXB (GMT+04:00)
KHI (GMT+05:00)
DAC (GMT+06:00)
BKK (GMT+07:00)
HKG (GMT+08:00)
TYO (GMT+09:00)
TOK (GMT+09:00)
SYD (GMT+10:00)
MEL (GMT+10:00)
NOU (GMT+11:00)
AKL (GMT+12:00)
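A minimal sketch of the property within a job definition (names illustrative):

"TimeZoneJob": {
    "Type": "Job:Command",
    "Command": "echo hello",
    "RunAs": "user1",
    "TimeZone": "HKG"
}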


Variables

Allows you to use job level variables with %% notation in job fields. You can set this property for a Job, Folder, or SubFolder.

"job1": {
     "Type": "Job:Script",

     "FileName": "scriptname.sh",
     "FilePath":"%%ScriptsPath",
     "RunAs": "em900cob",

     "Arguments":["--date", "%%TodayDate" ],

     "Variables": [
       {"TodayDate": "%%$DATE"},
       {"ScriptsPath": "/home/em900cob"}
     ]
 }

For specifications of system defined variables such as %%$DATE see Control-M system variables in the Control-M Online Help.

Named pools of variables can be used to share data between jobs, using the syntax "\\poolname\variable". Note that due to JSON character escaping, each backslash in the pool name must be doubled. For example, "\\\\pool1\\date".

        "job1": {
           "Type": "Job:Dummy",
	       "Variables": [

	         {"\\\\pool1\\date": "%%$DATE"}
	       ]
	    },
 
        "job2": {
           "Type": "Job:Script",

           "FileName": "scriptname.sh",
	       "FilePath":"/home/user/scripts",
	       "RunAs": "em900cob",

	       "Arguments":["--date", "%%\\\\pool1\\date" ]
	    }

Jobs in a folder can share variables at the folder level, using the syntax "\\variableName" to set a variable and %%variableName to use it.

"Folder1"   : {
     "Type" : "Folder", 
 
     "Variables": [
	    {"TodayDate": "%%$DATE"}
	 ],
 
     "job1": {
           "Type": "Job:Dummy",

           "Variables": [
              {"\\\\CompanyName": "compName"}
           ]
	  },
	  
      "job2": {
           "Type": "Job:Script",

           "FileName": "scriptname.sh",
	       "FilePath":"/home/user/scripts",
	       "RunAs": "em900cob",

	       "Arguments":["--date", "%%TodayDate", "--comp", "%%CompanyName" ]
	    }
}


Job types

The following series of sections provide details about the available types of jobs that you can define and the parameters that you use to define each type of job.

Job:Command

The following example shows how to use the Job:Command to run operating system commands.

	"JobName": {
		"Type" : "Job:Command",
    	"Command" : "echo hello",
        "PreCommand": "echo before running main command",
        "PostCommand": "echo after running main command",
    	"Host" : "myhost.mycomp.com",
    	"RunAs" : "user1"  
	}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs Identifies the operating system user that will run the job.
PreCommand (Optional) A command to execute before the job is executed.
PostCommand (Optional) A command to execute after the job is executed.


Job:Script

 The following example shows how to use Job:Script to run a script from a specified script file.

    "JobWithPreAndPost": {
        "Type" : "Job:Script",
        "FileName" : "task1123.sh",
        "FilePath" : "/home/user1/scripts",
        "PreCommand": "echo before running script",
        "PostCommand": "echo after running script",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1"   
    }
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs Identifies the operating system user that will run the job.
FileName together with FilePath

Indicates the location of the script. 

NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts".

PreCommand (Optional) A command to execute before the job is executed.
PostCommand (Optional) A command to execute after the job is executed.


Job:EmbeddedScript

The following example shows how to use Job:EmbeddedScript to run a script that you include in the JSON code. Control-M deploys this script to the Agent host during job submission. This eliminates the need to deploy scripts as part of your application stack.

    "EmbeddedScriptJob":{
        "Type":"Job:EmbeddedScript",
        "Script":"echo hello",
        "Host":"myhost.mycomp.com",
        "RunAs":"user1",
        "FileName":"myscript.sh",
        "PreCommand": "echo before running script",
        "PostCommand": "echo after running script",
    }
Script Full content of the script.
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs Identifies the operating system user that will run the job.
FileName

(Optional) Name of a script file. This property is used for the following purposes:

  • The file extension provides an indication of how to interpret the script. If this is the only purpose of this property, the file does not have to exist.
  • If you specify an alternative script override using the OverridePath job property, the FileName property indicates the name of the alternative script file.
PreCommand (Optional) A command to execute before the job is executed.
PostCommand (Optional) A command to execute after the job is executed.


Job:FileTransfer

The following example shows a Job:FileTransfer.

"FileTransferFolder" :
{
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTP" :
	{
		"Type" : "Job:FileTransfer",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
		"Host": "AgentHost",
		"FileTransfers" :
		[
			{
				"Src" : "/home/controlm/file1",
				"Dest" : "/home/controlm/file2",
				"TransferType": "Binary",
				"TransferOption": "SrcToDest",
			},
			{
				"Src" : "/home/controlm/otherFile1",
				"Dest" : "/home/controlm/otherFile2",
				"TransferOption": "DestToSrc"
			}
		]
	}
}

Where:

Parameter Description
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host.
In addition, Control-M File Transfer plugin version 8.0.00 or later must be installed.

ConnectionProfileSrc The connection profile to use as the source
ConnectionProfileDest The connection profile to use as the destination
ConnectionProfileDualEndpoint

If you need to use a dual-endpoint connection profile that you have set up, specify the name of the dual-endpoint connection profile instead of ConnectionProfileSrc and ConnectionProfileDest.

For more information about dual-endpoint connection profiles, see ConnectionProfile:FileTransfer:DualEndPoint.

FileTransfers A list of file transfers to perform during job execution, each with the following properties:
   Src Full path to the source file
   Dest Full path to the destination file
   TransferType

(Optional) FTP transfer mode, either Ascii (for a text file) or Binary (for a non-textual data file).

Ascii mode handles the conversion of line endings within the file, while Binary mode creates a byte-for-byte identical copy of the transferred file.

Default: "Binary"

   TransferOption

(Optional) The following is a list of the transfer options:

  • SrcToDest - transfer the file from source to destination
  • DestToSrc - transfer the file from destination to source
  • SrcToDestFileWatcher - watch the file on the source and transfer it to the destination only when all watching criteria are met
  • DestToSrcFileWatcher - watch the file on the destination and transfer it to the source only when all watching criteria are met
  • FileWatcher - watch a file; if the file is detected successfully, the succeeding job will run

Default: "SrcToDest"

The following example presents a File Transfer job in which the transferred file is watched using the File Watcher utility:

"FileTransferFolder" :
{
	"Type" : "Folder",
	"Application" : "aft",
	"TransferFromLocalToSFTPBasedOnEvent" :
	{
		"Type" : "Job:FileTransfer",
		"Host" : "AgentHost",
		"ConnectionProfileSrc" : "LocalConn",
		"ConnectionProfileDest" : "SftpConn",
		"FileTransfers" :
		[
			{
				"Src" : "/home/sftp/file1",
				"Dest" : "/home/sftp/file2",
				"TransferType": "Binary",
				"TransferOption" : "SrcToDestFileWatcher",
				"PreCommandDest" :
				{
					"action" : "rm",
					"arg1" : "/home/sftp/file2"
				},
				"PostCommandDest" :
				{
					"action" : "chmod",
					"arg1" : "700",
					"arg2" : "/home/sftp/file2"
				},
				"FileWatcherOptions":
				{
					"MinDetectedSizeInBytes" : "200",
					"TimeLimitPolicy" : "WaitUntil",
					"TimeLimitValue" : "2000",
					"MinFileAge" : "3Min",
					"MaxFileAge" : "10Min",
					"AssignFileNameToVariable" : "FileNameEvent",
					"TransferAllMatchingFiles" : true
				}
			}
		]
	}
}

This example contains the following additional optional parameters: 

PreCommandSrc / PreCommandDest / PostCommandSrc / PostCommandDest

Defines commands that occur before and after job execution. Each command can run only one action at a time. The available actions are listed below, followed by a sketch of the command format.

Action Description
chmod Change file access permission (arg1: mode, arg2: file name)
mkdir Create a new directory (arg1: directory name)
rename Rename a file/directory (arg1: current file name, arg2: new file name)
rm Delete a file (arg1: file name)
rmdir Delete a directory (arg1: directory name)
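For example, a sketch of these commands as they would appear within a FileTransfers item, following the format of the example above (paths are illustrative):

			"PreCommandDest" :
			{
				"action" : "mkdir",
				"arg1" : "/home/sftp/archive"
			},
			"PostCommandDest" :
			{
				"action" : "rename",
				"arg1" : "/home/sftp/file2",
				"arg2" : "/home/sftp/archive/file2.done"
			}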

FileWatcherOptions

Additional options for watching the transferred file using the File Watcher utility:

    MinDetectedSizeInBytes Defines the minimum number of bytes transferred before checking whether the file size is static
    TimeLimitPolicy /
    TimeLimitValue

Defines the time limit to watch a file:
TimeLimitPolicy options: "WaitUntil", "MinutesToWait"

TimeLimitValue: If TimeLimitPolicy is WaitUntil, TimeLimitValue is the specific time until which to watch, for example 04:22 means 4:22 AM.
If TimeLimitPolicy is MinutesToWait, TimeLimitValue is the number of minutes to wait.

    MinFileAge

Defines the minimum number of years, months, days, hours, and/or minutes that must have passed since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    MaxFileAge

Defines the maximum number of years, months, days, hours, and/or minutes that can pass since the watched file was last modified

Valid values: 9999Y9999M9999d9999h9999Min

For example: 2y3d7h

    AssignFileNameToVariable Defines the variable name that contains the detected file name
    TransferAllMatchingFiles

Whether to transfer all matching files (value of True) or only the first matching file (value of False) after waiting until the watching criteria are met.

Valid values: True | False
Default value: False


Job:FileWatcher

A File Watcher job enables you to detect the successful completion of a file transfer activity that creates or deletes a file. The following example shows how to manage File Watcher jobs using Job:FileWatcher:Create and Job:FileWatcher:Delete

    "FWJobCreate" : {
	    "Type" : "Job:FileWatcher:Create",
		"RunAs":"controlm",
 	    "Path" : "C:/path*.txt",
	    "SearchInterval" : "45",
	    "TimeLimit" : "22",
	    "StartTime" : "201705041535",
	    "StopTime" : "201805041535",
	    "MinimumSize" : "10B",
	    "WildCard" : true,
	    "MinimalAge" : "1Y",
	    "MaximalAge" : "1D2H4MIN"
    },
    "FWJobDelete" : {
        "Type" : "Job:FileWatcher:Delete",
        "RunAs":"controlm",
        "Path" : "C:/path.txt",
        "SearchInterval" : "45",
        "TimeLimit" : "22",
        "StartTime" : "201805041535",
        "StopTime" : "201905041535"
    }

This example contains the following parameters:

Path

Path of the file to be detected by the File Watcher

You can include wildcards in the path: * for any number of characters, and ? for a single character.

SearchInterval Interval (in seconds) between successive attempts to detect the creation/deletion of a file
TimeLimit

Maximum time (in minutes) to run the process without detecting the file at its minimum size (for Job:FileWatcher:Create) or detecting its deletion (for Job:FileWatcher:Delete). If the file is not detected/deleted in this specified time frame, the process terminates with an error return code.

Default: 0 (no time limit)

StartTime

The time at which to start watching the file

The format is yyyymmddHHMM. For example, 201805041535 means that the File Watcher will start watching the file on May 4, 2018 at 3:35 PM.
Alternatively, to specify a time on the current date, use the HHMM format.

StopTime

The time at which to stop watching the file.

Format: yyyymmddHHMM or HHMM (for the current date)

MinimumSize

Minimum file size to monitor for, when watching a created file

Follow the specified number with the unit: B for bytes, KB for kilobytes, MB for megabytes, or GB for gigabytes.

If the file name (specified by the Path parameter) contains wildcards, minimum file size is monitored only if you set the Wildcard parameter to true.

Wildcard

Whether to monitor minimum file size of a created file if the file name (specified by the Path parameter) contains wildcards

Values: true | false
Default: false

MinimalAge

(Optional) The minimum number of years, months, days, hours, and/or minutes that must have passed since the created file was last modified. 

For example: 2Y10M3D5H means that 2 years, 10 months, 3 days, and 5 hours must pass before the file will be watched. 2H10Min means that 2 hours and 10 minutes must pass before the file will be watched.

MaximalAge

(Optional) The maximum number of years, months, days, hours, and/or minutes that can pass since the created file was last modified.

For example: 2Y10M3D5H means that after 2 years, 10 months, 3 days, and 5 hours have passed, the file will no longer be watched. 2H10Min means that after 2 hours and 10 minutes have passed, the file will no longer be watched.


Job:Database

Job:Database:SQLScript

The following example shows how to create a database job that runs a SQL script.

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "AgentHost",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "OracleConnectionProfileSample",
			"Parameters": [
				{"firstParamName": "firstParamValue"},
				{"secondParamName": "secondParamValue"}
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

This example contains the following parameters:

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host, as well as Control-M Databases plugin version 9.0.00 or later.

Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Parameters Pairs of names and values. Every occurrence of a parameter name in the SQL script is replaced by its paired value. For example, with the Parameters defined above, each occurrence of firstParamName in the script is replaced by firstParamValue.
Autocommit

(Optional) Determines whether to automatically commit statements to the database upon successful completion

Default: N

OutputExcecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

The following example shows a minimal version of the same job, without the optional parameters:

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOracle": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "OracleConnectionProfileSample"
		}
	}
}


Job:Hadoop

Various types of Hadoop jobs are available for you to define using the Job:Hadoop object types, as described in the following sections.

Job:Hadoop:Spark:Python

The following example shows how to use Job:Hadoop:Spark:Python to run a Spark Python program.

    "ProcessData": {
        "Type": "Job:Hadoop:Spark:Python",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",

        "SparkScript": "/home/user/processData.py"
    }
ConnectionProfile See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
    "Type": "Job:Hadoop:Spark:Python",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",
    "SparkScript": "/home/user/processData.py",            
    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Spark:ScalaJava

The following example shows how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.

 "ProcessData": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "ProgramJar": "/home/user/ScalaProgram.jar",
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName" 
}
ConnectionProfile See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "ProgramJar": "/home/user/ScalaProgram.jar"
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",

    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Pig

The following example shows how to use Job:Hadoop:Pig to run a Pig script.

"ProcessDataPig": {
    "Type" : "Job:Hadoop:Pig",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "PigScript" : "/home/user/script.pig" 
}
ConnectionProfile See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "ProcessDataPig1": {
        "Type" : "Job:Hadoop:Pig",
        "ConnectionProfile": "DevCluster",
        "PigScript" : "/home/user/script.pig",            
        "Host" : "edgenode",
        "Parameters" : [
            {"amount":"1000"},
            {"volume":"120"}
        ],            
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:Sqoop

The following example shows how to use Job:Hadoop:Sqoop to run a Sqoop job.

    "LoadDataSqoop":
    {
      "Type" : "Job:Hadoop:Sqoop",
	  "Host" : "edgenode",
      "ConnectionProfile" : "SqoopConnectionProfileSample",

      "SqoopCommand" : "import --table foo --target-dir /dest_dir" 
    }

 

ConnectionProfile See the Sqoop example in ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "LoadDataSqoop1" :
    {
        "Type" : "Job:Hadoop:Sqoop",
        "Host" : "edgenode",
        "ConnectionProfile" : "SqoopConnectionProfileSample",

        "SqoopCommand" : "import --table foo",
		"SqoopOptions" : [
			{"--warehouse-dir":"/shared"},
			{"--default-character-set":"latin1"}
		],
 
        "SqoopArchives" : "",
        
        "SqoopFiles": "",
        
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

SqoopOptions Additional arguments passed to the specific Sqoop tool.
SqoopArchives

Indicates the location of the Hadoop archives.

SqoopFiles Indicates the location of the Sqoop files.


Job:Hadoop:Hive

The following example shows how to use Job:Hadoop:Hive to run a Hive beeline job.

    "ProcessHive":
    {
      "Type" : "Job:Hadoop:Hive",
      "Host" : "edgenode",
      "ConnectionProfile" : "HiveConnectionProfileSample",

      "HiveScript" : "/home/user1/hive.script"
    }

 

ConnectionProfile See the Hive example in ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "ProcessHive1" :
    {
        "Type" : "Job:Hadoop:Hive",
        "Host" : "edgenode",
        "ConnectionProfile" : "HiveConnectionProfileSample",


        "HiveScript" : "/home/user1/hive.script", 
        "Parameters" : [
            {"ammount": "1000"},
            {"topic": "food"}
        ],

        "HiveArchives" : "",
        
        "HiveFiles": "",
        
        "HiveOptions" : [
            {"hive.root.logger": "INFO,console"}
        ],

        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

Parameters Passed to beeline as --hivevar "name"="value".

HiveOptions Passed to beeline as --hiveconf "key"="value".

HiveArchives Passed to beeline as --hiveconf mapred.cache.archives="value".

HiveFiles Passed to beeline as --hiveconf mapred.cache.files="value".


Job:Hadoop:DistCp

The following example shows how to use Job:Hadoop:DistCp to run a DistCp job. DistCp (distributed copy) is a tool used for large inter/intra-cluster copying.

        "DistCpJob" :
        {
            "Type" : "Job:Hadoop:DistCp",
            "Host" : "edgenode",
            "ConnectionProfile": "DevCluster",
         
            "TargetPath" : "hdfs://nns2:8020/foo/bar",
            "SourcePaths" :
            [
                "hdfs://nn1:8020/foo/a"
            ]
        }  
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "DistCpJob" :
    {
        "Type" : "Job:Hadoop:DistCp",
        "Host" : "edgenode",
        "ConnectionProfile" : "ConnectionProfileSample",
        "TargetPath" : "hdfs://nns2:8020/foo/bar",
        "SourcePaths" :
        [
            "hdfs://nn1:8020/foo/a",
            "hdfs://nn1:8020/foo/b"
        ],
        "DistcpOptions" : [
            {"-m":"3"},
            {"-filelimit ":"100"}
        ]
    }

TargetPath, SourcePaths and DistcpOptions

Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>.


Job:Hadoop:HDFSCommands

The following example shows how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.

        "HdfsJob":
        {
            "Type" : "Job:Hadoop:HDFSCommands",
            "Host" : "edgenode",
            "ConnectionProfile": "DevCluster",

            "Commands": [
                {"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.


Job:Hadoop:HDFSFileWatcher

The following example shows how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.

    "HdfsFileWatcherJob" :
    {
        "Type" : "Job:Hadoop:HDFSFileWatcher",
        "Host" : "edgenode",
        "ConnectionProfile" : "DevCluster",

        "HdfsFilePath" : "/inputs/filename",
        "MinDetecedSize" : "1",
        "MaxWaitTime" : "2"
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

HdfsFilePath Specifies the full path of the file being watched.
MinDetecedSize Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file.
MaxWaitTime Defines the maximum number of minutes to wait for the file to meet the watching criteria. If criteria are not met (file did not arrive, or minimum size was not reached) the job fails after this maximum number of minutes.


Job:Hadoop:Oozie

The following example shows how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.

    "OozieJob": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"},
          {"outputDir":"/usr/tucu/outputdir"}
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobPropertiesFile

The path to the job properties file.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

OozieOptions Set or override values for the given job properties.
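The following sketch shows an Oozie job that uses pre-commands, following the same PreCommands structure as the other Hadoop job types above (values are illustrative):

    "OozieJob1": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",
        "JobPropertiesFile" : "/home/user/job.properties",
        "PreCommands": {
            "FailJobOnCommandFailure" : false,
            "Commands" : [
                {"rm" : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        }
    }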


Job:Hadoop:MapReduce

 The following example shows how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.

    "MapReduceJob" :
    {
       "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"]
    }
ConnectionProfile

See ConnectionProfile:Hadoop  

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "MapReduceJob1" :
    {
        "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }    
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.


Job:Hadoop:MapredStreaming

The following example shows how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.

     "MapredStreamingJob1": {
        "Type": "Job:Hadoop:MapredStreaming",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "InputPath": "/user/robot/input/*",
        "OutputPath": "/tmp/output",
        "MapperCommand": "mapper.py",
        "ReducerCommand": "reducer.py",
        "GeneralOptions": [
            {"-D": "fs.permissions.umask-mode=000"},
            {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}              
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M in a nutshell.

NOTE : If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

Determines whether the job fails if a pre- or post-command fails.

The default for PreCommands is true, that is, the job will fail if any pre-command fails.

The default for PostCommands is false, that is, the job will complete successfully even if any post-command fails.

GeneralOptions Additional Hadoop command options passed to the hadoop-streaming.jar, including generic options and streaming options.


Job:ApplicationIntegrator

Use Job:ApplicationIntegrator:<JobType> to define a job of a custom type using the Control-M Application Integrator designer. For information about designing job types, see the Control-M Application Integrator Help.

The following example shows the JSON code used to define a job type named Monitor Remote Job:

"JobFromAI" : {
    "Type": "Job:ApplicationIntegrator:Monitor Remote Job",
    "ConnectionProfile": "ConnectionProfileForJob",
    "AI-Host": "Host1",
    "AI-Port": "5180",
    "AI-User Name": "admin",
    "AI-Password": "*******",
    "AI-Remote Job to Monitor": "remoteJob5",
    "RunAs": "controlm"
}	

In this example, the ConnectionProfile and RunAs properties are standard job properties used in Control-M Automation API jobs. The other job properties will be created in the Control-M Application Integrator, and they must be prefixed with "AI-" in the .json code.

For reference, the corresponding settings in the Control-M Application Integrator are as follows:

  • The name of the job type appears in the Name field in the job type details.
  • Job properties appear in the Job Type Designer, in the Connection Profile View and the Job Properties View.
    When defining these properties through the .json code, you prefix them with "AI-", except for the property that specifies the name of the connection profile.



Job:Dummy

The following example shows how to use Job:Dummy to define a job that always ends successfully without running any commands. 

"DummyJob" : {
   "Type" : "Job:Dummy"
}


Connection Profile

Connection profiles are used to define access methods and security credentials for a specific application. They can be referenced by multiple jobs. To do this, you must deploy the connection profile definition before running the relevant jobs.

ConnectionProfile:Hadoop

These examples show how to use connection profiles for the various types of Hadoop jobs.

Job:Hadoop

These are the required parameters for all Hadoop job types.

"HadoopConnectionProfileSample":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost"
}
Parameter Description
TargetAgent The Control-M/Agent to which to deploy the connection profile.
TargetCTM The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.

 These are the optional parameters for defining the user running the Hadoop job types.

"HadoopConnectionProfileSample":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
	"TargetCTM" : "CTMHost",
    "RunAs": "",
    "KeyTabPath":""
}
RunAs

Defines the user of the account on which to run Hadoop jobs.

Leave this field empty to run Hadoop jobs using the user account where the agent was installed.

If you define a specific RunAs user, the Control-M/Agent must run as root.

In the case of Kerberos security:

RunAs Principal name of the user
KeyTabPath Keytab file path for the target user
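For example, a sketch of a connection profile for a Hadoop environment secured with Kerberos (the principal and keytab path are illustrative):

"HadoopKerberosConnectionProfileSample":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "RunAs": "hdfsuser@EXAMPLE.COM",
    "KeyTabPath": "/home/hdfsuser/hdfsuser.keytab"
}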

Job:Hadoop:Oozie

 The following example shows a connection profile that defines access to an Oozie server.

 "OozieConnectionProfileSample" :
 {
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "hdp-ubuntu",
    "Oozie" :
    {
      "SslEnabled"     : false,
      "Host" : "hdp-centos",
      "Port" : "11000"
    }
  }
Parameter Description
Host Oozie server host
Port

Oozie server port

Default: 11000

SslEnabled

true | false

Default: false

Job:Hadoop:Sqoop

The following example shows a connection profile that defines a Sqoop data source and access credentials.

 "SqoopConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Sqoop" :
    {
      "User"     : "username",
      "Password" : "userpassword",
      "ConnectionString" : "jdbc:mysql://mysql.server/database",
      "DriverClass" : "com.mysql.jdbc.Driver"
    }
}

Job:Hadoop:Hive

The following example shows a connection profile that defines a Hive beeline endpoint and access credentials. The parameters in the example translate to this beeline command: 

beeline  -u jdbc:hive2://<Host>:<Port>/<DatabaseName>

 "HiveConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Hive" :
    {
       "Host" : "hive_host_name",
       "Port" : "10000",
       "DatabaseName" : "hive_database",
    }
}

The following shows how to use optional parameters for a Hadoop Hive job type connection profile. 

The parameters in the example translate to this beeline command:  

beeline  -u jdbc:hive2://<Host>:<Port>/<DatabaseName>;principal=<Principal> -n <User> -p <Password> 

 "HiveConnectionProfileSample1":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Hive" :
    {
       "Host" : "hive_host_name",
       "Port" : "10000",
       "DatabaseName" : "hive_database",
       "User" : "user_name",
       "Password" : "user_password",
       "Principal" : "Server_Principal_of_HiveServer2@Realm"
    }
}


ConnectionProfile:FileTransfer 

The following examples show you how to define a connection profile for File Transfers. File Transfer connection profiles are divided into two types, depending on the number of hosts for which they contain connection details:

  • Single endpoint: Each connection profile contains the connection details of a single host. Such a connection profile can be used for either the source host or the destination host in a file transfer.
  • Dual endpoint: The connection profile contains connection details of two hosts, both the source host and the destination host, in a file transfer.

Connection details can be based on the FTP or SFTP communication protocols or can be to a local file system.

ConnectionProfile:FileTransfer:FTP

The following examples show a connection profile for a file transfer to a single endpoint using the FTP communication protocol.

Simple ConnectionProfile:FileTransfer:FTP

"FTPConn" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "FTPServer",
   "User" : "FTPUser",
   "Password" : "ftp password"
}

ConnectionProfile:FileTransfer:FTP with optional parameters

"FTPConn" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "WorkloadAutomationUsers":["john","bob"],
   "HostName": "FTPServer",
   "Port": "21",
   "User" : "FTPUser",
   "Password" : "ftp password",
   "HomeDirectory": "/home/FTPUser",
   "OsType": "Unix"
}
TargetAgent The Control-M/Agent to which to deploy the connection profile.
TargetCTM The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE: You can use "*" as a wildcard. For example, "e*"

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False

OsType

(Optional) FTP server operating system type

Default: Unix

Types: Unix, Windows

Password (Optional) Password for the FTP server account. Use Secrets in code to avoid exposing the password in the code.
HomeDirectory (Optional) User home directory
Passive

(Optional) Defines the FTP client mode: True for passive mode, False for active mode. Passive mode is recommended for servers behind a firewall.

True | False

Default: False
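For example, a sketch of an FTP connection profile for a server behind a firewall, with checksum verification enabled (the string form of the True values follows the valid values listed above and is an assumption):

"FTPConnFirewall" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "FTPServer",
   "User" : "FTPUser",
   "Password" : "ftp password",
   "Passive": "True",
   "Checksum": "True"
}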

ConnectionProfile:FileTransfer:SFTP

The following examples show a connection profile for a file transfer to a single endpoint using the SFTP communication protocol. 

Simple ConnectionProfile:FileTransfer:SFTP

"sFTPconn": {
   "Type": "ConnectionProfile:FileTransfer:SFTP",
   "TargetAgent": "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "SFTPServer",
   "Port": "22",
   "User" : "SFTPUser",
   "Password" : "sftp password"
}

ConnectionProfile:FileTransfer:SFTP with optional parameters

"sFTPconn": {
   "Type": "ConnectionProfile:FileTransfer:SFTP",
   "TargetAgent": "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "SFTPServer",
   "Port": "22",
   "User" : "SFTPUser",
   "HomeDirectory": "/home/SFTPUser",  
   "PrivateKeyName": "/home/controlm/ctm_agent/ctm/cm/AFT/data/Keys/sFTPkey",
   "Passphrase": "passphrase"
}
TargetAgent The Control-M/Agent to which to deploy the connection profile.
TargetCTM The Control-M/Server to which to deploy the connection profile. When there is only one Control-M/Server, this will be the default.
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE : You can use "*" as a wildcard. For example, "e*"

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False

PrivateKeyName

(Optional) Private key full file path

Passphrase

(Optional) Password for the private key. Use Secrets in code to avoid exposing the password in the code.

Password (Optional) Password for the SFTP server account. Use Secrets in code to avoid exposing the password in the code.
HomeDirectory (Optional) User home directory

ConnectionProfile:FileTransfer:Local

The following example shows a connection profile for a file transfer to a single endpoint on a Local File System. 

"LocalConn" : {
   "Type" : "ConnectionProfile:FileTransfer:Local",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "User" : "controlm",
   "Password" : "local password"
}
TargetAgent The Control-M/Agent to which to deploy the connection profile.
TargetCTM The Control-M/Server to which to deploy the connection profile. When there is only one Control-M/Server, this will be the default. 
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE : You can use "*" as a wildcard. For example, "e*"

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False

OsType

(Optional) Local server operating system type

Default: Unix

Types: Unix, Windows

Password (Optional) Password for the local account. Use Secrets in code to avoid exposing the password in the code.

ConnectionProfile:FileTransfer:DualEndPoint

In a dual-endpoint connection profile, you specify connection details for the source host and for the destination host of the file transfer. Connection details can be based on the FTP or SFTP communication protocols or can be to a local file system.

The following example shows a dual-endpoint connection profile. One endpoint uses the FTP communication protocol and the other endpoint uses the SFTP communication protocol.

"DualEpConn" : {
	"Type" : "ConnectionProfile:FileTransfer:DualEndPoint",
	"WorkloadAutomationUsers" : [ "emuser1" ],
	"TargetAgent" : "AgentHost",
	"src_endpoint" : {
		"Type" : "Endpoint:Src:FTP",
		"User" : "controlm",
 		"Port" : "10023",
		"HostName" : "localhost",
		"Password" : "password",
		"HomeDirectory" : "/home/controlm/"
	},
	"dest_endpoint" : {
		"Type" : "Endpoint:Dest:SFTP",
		"User" : "controlm",
		"Port" : "10023",
		"HostName" : "host2",
		"Password" : "password",
		"HomeDirectory" : "/home/controlm/"
	}
}

The dual-endpoint connection profile can have the following parameters:

TargetAgent The Control-M/Agent to which to deploy the connection profile.
TargetCTM The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
WorkloadAutomationUsers

(Optional) Users that are allowed to access the connection profile.

NOTE: You can use "*" as a wildcard. For example, "e*"

Checksum

(Optional) Enable or disable error detection on file transfer

True | False

Default: False

Endpoint

Two endpoint objects, one for the source host and one for the destination host. Each endpoint can be based on FTP, SFTP, or a local file system.

Here are all the possible types of Endpoint objects:

  • Endpoint:Src:FTP
  • Endpoint:Src:SFTP
  • Endpoint:Src:Local
  • Endpoint:Dest:FTP
  • Endpoint:Dest:SFTP
  • Endpoint:Dest:Local

Parameters under the Endpoint object are the same as the remaining parameters for a single-endpoint connection profile, depending on the type of connection, as shown in the sketch below.
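For example, a sketch of the endpoint objects for a transfer from a local file system to an FTP server, reusing the single-endpoint parameters shown earlier (values are illustrative):

	"src_endpoint" : {
		"Type" : "Endpoint:Src:Local",
		"User" : "controlm",
		"Password" : "local password"
	},
	"dest_endpoint" : {
		"Type" : "Endpoint:Dest:FTP",
		"User" : "FTPUser",
		"Port" : "21",
		"HostName" : "FTPServer",
		"Password" : "ftp password",
		"HomeDirectory" : "/home/FTPUser"
	}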


ConnectionProfile:Database

The database connection profile enables you to connect to the following database types: MSSQL, Oracle, DB2, Sybase, PostgreSQL, and custom database types defined through JDBC.

The following example shows how to define an MSSQL database connection profile. 

 {
	"MSSqlConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:MSSQL",
		"TargetAgent": "AgentHost",
		"Host": "MSSQLHost",
		"User": "db user",
		"Port":"1433",
		"Password": "db password",
		"DatabaseName": "master",
		"DatabaseVersion": "2005",
		"MaxConcurrentConnections": "9",
		"ConnectionRetryTimeOut": "34",
		"ConnectionIdleTime": "45"
	},
	"MSsqlDBFolder": {
		"Type": "Folder",
		"testMSSQL": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectArgs.sql",
			"ConnectionProfile": "MSSqlConnectionProfileSample",
			"Parameters": [ 
				{ "firstParamName": "firstParamValue" }, 
				{ "second": "secondParamValue" } 
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

This example contains the following parameters:

Parameter Description
Port

The database port number.

If the port is not specified, the following default values are used for each database type:

  • MSSQL - 1433
  • Oracle - 1521
  • DB2 - 50000
  • Sybase - 4100
  • PostgreSQL - 5432
Password Password to the database account. Use Secrets in code to avoid exposing the password in the code.
DatabaseName The name of the database
DatabaseVersion

The version of the database. The following database drivers are supported in Control-M for database V9:

  • MSSQL - 2005, 2008, 2012, 2014
  • Oracle - 9i, 10g, 11g, 12c
  • DB2 - 9, 10
  • Sybase - 12, 15
  • PostgreSQL - 8, 9

The default version for each database is the earliest version listed above.

MaxConcurrentConnections

The maximum number of connections that the database can process at the same time.

Allowed values: 1–512
Default value: 100

ConnectionRetryTimeOut

The number of seconds to wait before attempting to connect again.

Allowed values: 1–300
Default value: 5 seconds

ConnectionIdleTime

The number of seconds that the database connection profile can remain idle before disconnecting.

Default value: 300 seconds

ConnectionRetryNum

The number of times to attempt to reconnect after a connection failure.

Allowed values: 1–24
Default value: 5

AuthenticationType

(Optional) The authentication type for the database connection.

Possible values are:

  • NTLM2 Windows Authentication
  • Windows Authentication
  • SQL Server Authentication

Default: SQL Server Authentication

ConnectionProfile:Database:DB2

The following example shows how to define a connection profile for DB2. Additional available parameters are described in the table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:DB2",
    "TargetAgent": "AgentHost",
    "Host": "DB2Host",
    "Port":"50000",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "db2"
  }
} 

ConnectionProfile:Database:Sybase

The following example shows how to define a connection profile for Sybase. Additional available parameters are described in the table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:Sybase",
    "TargetAgent": "AgentHost",
    "Host": "SybaseHost",
    "Port":"4100",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "Master"
  }
} 

ConnectionProfile:Database:PostgreSQL

The following example shows how to define a connection profile for PostgreSQL. Additional available parameters are described in the table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:PostgreSQL",
    "TargetAgent": "AgentHost",
    "Host": "PostgreSQLHost",
    "Port":"5432",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "postgres"
  }
} 

ConnectionProfile:Database:Oracle

Oracle connection profiles can be defined in three ways: by SID, by service name, or by connection string.

ConnectionProfile:Database:Oracle:SID

The following example shows how to define a connection profile for an Oracle database using the SID identifier. Additional available parameters are described in the table above.

"OracleConnectionProfileSample": {   
	"Type": "ConnectionProfile:Database:Oracle:SID",   
	"TargetCTM": "controlm",   
	"Port": "1521",   
	"TargetAgent": "AgentHost",
	"Host": "OracleHost",
	"User": "db user",   
	"Password": "db password",
	"SID": "ORCL" 
}
ConnectionProfile:Database:Oracle:ServiceName

The following example shows how to define a connection profile for an Oracle database using a single service name. Additional available parameters are described in the table above.

"OracleConnectionProfileSample": {   
	"Type": "ConnectionProfile:Database:Oracle:ServiceName",   
	"TargetCTM": "controlm",   
	"Port": "1521",   
	"TargetAgent": "AgentHost",
	"Host": "OracleHost",
	"User": "db user",   
	"Password": "db password",
	"ServiceName": "ORCL" 
}
ConnectionProfile:Database:Oracle:ConnectionString

The following example shows how to define a connection profile for an Oracle database using a connection string that contains text from your tnsnames.ora file. Additional available parameters are described in the table above.

{
	"OracleConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:Oracle:ConnectionString",
		"TargetCTM":"CTMHost",
		"ConnectionString":"OracleHost:1521:ORCL",
		"TargetAgent": "AgentHost",
		"User": "db user",
		"Password": "db password"
	}
}

ConnectionProfile:Database:JDBC

The following example shows how to define a connection profile using a custom defined database type created using JDBC. Additional available parameters are described in the table above.

{
	"OracleConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:JDBC",
		"User":"db user",
		"TargetCTM":"CTMHost",
		"Host": "PGSQLHost",
		"Driver":"PGDRV",
		"Port":"5432",
		"TargetAgent": "AgentHost",
		"Password": "db password",
		"DatabaseName":"dbname"
	}
}
Parameter Description
Driver

JDBC driver name as defined in Control-M or as defined using the Driver object

Driver:JDBC:Database

You can define a driver to be used by a connection profile. The following example shows the parameters that you use to define a driver:

{
  "MyDriver": {
    "Type": "Driver:Jdbc:Database",
    "TargetAgent":"app-redhat",
    "StringTemplate":"jdbc:sqlserver://<HOST>:<PORT>/<DATABASE>",
    "DriverJarsFolder":"/home/controlm/ctm/cm/DB/JDBCDrivers/PostgreSQL/9.4/",
    "ClassName":"org.postgresql.Driver",
    "LineComment" : "--",
    "StatementSeparator" : ";"
 }
}
Parameter Description
TargetAgent The Control-M/Agent to which to deploy the driver.
StringTemplate The structure according to which a connection profile string is created.
DriverJarsFolder The path to the folder where the database driver jars are located.
ClassName The name of the driver class.
LineComment  The syntax used for line comments in the scripts that run on the database.
StatementSeparator The syntax used for statement separator in the scripts that run on the database.
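A connection profile can then reference the driver by its name. For example, a sketch that pairs the MyDriver definition above with a JDBC connection profile (values are illustrative):

{
	"MyJdbcConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:JDBC",
		"TargetAgent": "app-redhat",
		"TargetCTM": "CTMHost",
		"Host": "PGSQLHost",
		"Port": "5432",
		"User": "db user",
		"Password": "db password",
		"DatabaseName": "dbname",
		"Driver": "MyDriver"
	}
}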


ConnectionProfile:ApplicationIntegrator

The following example shows how to define a connection profile for a job type defined in the Control-M Application Integrator. For information about the Control-M Application Integrator, see the Control-M Application Integrator Help.

Properties defined for the connection profile in Control-M Application Integrator are all prefixed with "AI-" in the .json code, as shown in the following example.

	"ConnectionProfileForJob": {
		"Type": "ConnectionProfile:ApplicationIntegrator:<JobType>",
		"TargetAgent": "AgentHost",
		"TargetCTM":"CTMHost",
		"AI-Param03": "45",
		"AI-Param04": "group"
		}


Secrets in Code

You can use the Secret object in your JSON code when you do not want to expose confidential information in the source (for example, the password field in a Connection Profile). The syntax below enables you to reference a named secret as defined in the Control-M vault. To learn how to manage secrets, see section Config Secrets. The value of the secret is resolved during deployment.

The following syntax is used to reference a secret. 

"<parameter>" :  {"Secret": "<secret name>"}

The following example shows how to use secrets in code:

{
    "Type": "ConnectionProfile:Hadoop",
    "Hive": {
        "Host": "hiveServer",
        "Principal": "a@bc",
        "Port": "1024",
        "User": "emuser",
        "Password": {"Secret": "hive_dev_secret"}
    }
}


Defaults

Allows you to define default parameter values for all objects at once.

The following example shows how to define scheduling criteria using the When parameter. This configures all jobs to run according to the same scheduling criteria. Note that if you also set a specific value at the job level, the job-level value overrides the value in the global-level Defaults section.

{
    "Defaults" : {
        "Host" : "HOST",
        "When" : {
            "WeekDays":["MON","TUE"],
            "FromTime":"1500",
            "ToTime":"1800"       
        }
    }
}

The following example shows how to define defaults for all objects of type Job:*.

{
    "Defaults" : {
        "Job": {
            "Host" : "HOST",
            "When" : {
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"       
            }
        }
    }
 
}

The following example shows how to define defaults at the folder level that override defaults at the global level. 

{
	"Folder1": {
         "Type": "Folder",
         "Defaults" : {
          "Job:Hadoop": {
              "Host" : "HOST1",
              "When" : {
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"       
             }
           }
         }
	}
}

The following example shows how to define defaults that are user-defined objects such as actionIfSuccess. For each job that succeeds, an email is sent.

{
	"Defaults" : {
        "Job": {
            "Host" : "HOST",
            "actionIfSuccess" : {
                "Type": "If",
                "CompletionStatus":"OK",
                "mailTeam": {
                  "Type": "Mail",
                  "Message": "Job %%JOBNAME succeeded",
                  "Subject": "Success",
                  "To": "team@mycomp.com"
                }
            }
        }
    }
}

