
 Where to find more information

For more information about Control-M, see the BMC channel on YouTube.


Introduction

The code samples below describe how to define Control-M objects using JSON notation. 

Each Control-M object begins with a "Name", followed by a "Type" specifier as its first property. All object names are written in PascalCase notation, with the first letter of each word capitalized. In the examples below, the "Name" of the object is "ObjectName" and the "Type" is "ObjectType".

{
	"ObjectName" : {
		"Type" : "ObjectType"
	}
}

Job Properties

Below is a list of job properties for Control-M objects.

Application, SubApplication 

Supplies a common descriptive name to a set of related jobs. The jobs do not necessarily have to run at the same time.

    "Job1": {
	    "Type": "Job:Command",
 		"Application": "ApplicationName",
		"SubApplication": "SubApplicationName",
        "Command": "echo I am a Job",
        "RunAs": "controlm"
    }

Comment

Allows you to write a comment on an object. Comments are not uploaded to Control-M.

    "JobName": {
        "Type" : "Job:Command",
        "Comment" : "code reviewed by tom",
        "Command" : "echo hello",
        "RunAs" : "user1"
        }
    } 

Confirm

Allows you to define a job that requires user confirmation before it can start. Confirmation is given by running the run confirm command.

 "JobName": {
        "Type" : "Job:Command",
        "Comment" : "this job needs user confirmation to start execution",
        "Command" : "echo hello",
        "RunAs" : "user1",
		"Confirm" : true
        }
    } 

Critical

Allows you to mark a job as critical. A critical job is given higher priority when reserving the resources it needs to run.

Default: false

"Critical": true

DaysKeepActive

Allows you to define the number of days to keep a job if it did not run at its scheduled date. 

Valid values: 0-98, Forever. Default: 0

Jobs in a folder are kept until the largest DaysKeepActive value of any job in the folder has passed. This enables you to retrieve the job status of all the jobs in the folder.

"DaysKeepActiveFolder": {
       "Type" : "Folder",
       "Defaults": {
         "RunAs":"owner8" 
       },
       "keepForeverIfDidNotRun": { 
         "Type": "Job:Command", 
         "Command":"ls",
         "DaysKeepActive": "Forever" 
       },
       "keepForThreeDaysIfDidNotRun": { 
         "Type": "Job:Command", 
         "Command":"ls",
         "DaysKeepActive": "3" 
       }
}

Defaults

Allows you to define default parameter values for all objects. The following example includes scheduling criteria using the When parameter, which configures all jobs to run according to the same scheduling criteria. The scope of the defaults in the following example is all objects in this file. A specific value defined at the job level overrides the value defined in the Defaults section.

{
    "Defaults" : {
        "Host" : "HOST",
        "When" : {
            "WeekDays":["MON","TUE"],
            "FromTime":"1500",
            "ToTime":"1800"       
        }
    }
}

The following example shows you how to define defaults for all objects of type Job:*.

{
    "Defaults" : {
        "Job": {
            "Host" : "HOST",
            "When" : {
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"       
            }
        }
    }
 
}

The following example shows you how to define defaults at the folder level that override defaults at the file level. 

{
	"Folder1": {
         "Type": "Folder",
         "Defaults" : {
          "Job:Hadoop": {
              "Host" : "HOST1",
              "When" : {
                "WeekDays":["MON","TUE"],
                "FromTime":"1500",
                "ToTime":"1800"       
             }
           }
         }
	}
}

The following example shows you how to define defaults that are user-defined objects such as actionIfSuccess. For each job that succeeds, an email is sent.

{
	"Defaults" : {
        "Job": {
            "Host" : "HOST",
            "actionIfSuccess" : {
                "Type": "If",
                "CompletionStatus":"OK",
                "mailTeam": {
                  "Type": "Mail",
                  "Message": "Job %%JOBNAME succeeded",
                  "Subject": "Success",
                  "To": "team@mycomp.com"
                }
            }
        }
    }
}

Description

Allows you to add a description to jobs and folders.

 "DescriptionFolder":
    {
       "Type" : "Folder",
       "Description":"folder description",
       "SimpleCommandJob": { 
         "Type": "Job:Command", 
         "Description":"job description",
         "RunAs":"owner8", 
         "Command":"ls"
       }
       
    }

Events

Events can be generated by Control-M or can trigger jobs. Events are defined by a name and a date.

Here is a list of the various capabilities of event usages:

  1. A job can wait for events before it runs, and can add or delete events. See WaitForEvents, AddEvents, and DeleteEvents.
  2. Jobs can add or remove events from Control-M. See Event:Add or Event:Delete.
  3. You can add or remove events from Control-M with an API call. See Event Management.

   

The following options are available for "OrderDate":

Date Type            Description
AnyDate              Any scheduled date
OrderDate            Control-M scheduled date
PreviousOrderDate    Previous Control-M scheduled date
NextOrderDate        Next Control-M scheduled date
MMDD                 Specific date, for example "0511"

WaitForEvents

"Wait1":
{
          "Type": "WaitForEvents",
          "Events": [
              {"Event":"e1"}, 
              {"Event":"e2"}, 
              {"Event":"e3", "OrderDate":"AnyDate"}
          ]
}

AddEvents

"add1" :
{
          "Type": "AddEvents",
          "Events": [
              {"Event":"a1"}, 
              {"Event":"a2"}, 
              {"Event":"a3", "OrderDate":"1112"}
          ]
}

DeleteEvents

"del1" :
{
          "Type": "DeleteEvents",
          "Events": [
              {"Event":"d1"},
              {"Event":"d2", "OrderDate":"1111"},
              {"Event":"d3"}
          ]
}

Flow

Allows you to define order dependencies between jobs using an object of type Flow. A job must end successfully for the next job in the flow to run.

    "flowName": {
      "Type":"Flow",
      "Sequence":["job1", "job2", "job3"]
    }

The following example shows how one job can be part of multiple flows. Job3 executes if either Job1 or Job2 ends successfully.

    "FlowSamples" :
    {
        "Type" : "Folder",

        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },    
        "Job2": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        },    
        "Job3": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }, 
        "flow1": {
          "Type":"Flow",
          "Sequence":["Job1", "Job3"]
        },
        "flow2": {
          "Type":"Flow",
          "Sequence":["Job2", "Job3"]
        }

    }

The following example shows you how you can create flow sequences with jobs contained within different folders.

    "FolderA" :
    {
        "Type" : "Folder",
        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }
    },    
    "FolderB" :
    {
        "Type" : "Folder",
        "Job1": {
            "Type" : "Job:Command",
            "Command" : "echo hello",
            "RunAs" : "user1"  
        }
    },    
    "CrossFoldersFlowSample": {
        "Type":"Flow",
          "Sequence":["FolderA:Job1", "FolderB:Job1"]
    }

Resources 

Resource:Semaphore

Allows you to set the quantity of a semaphore (also known as a quantitative resource) required by the job. Semaphores are used to control access to a resource that is shared concurrently by other jobs. For API command information on resources, see Resource Management.

The following example shows you how to add a semaphore parameter to a job. 

{
  "FolderRes": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls",
      "RunAs": "pok",
      "Critical": true,
      "sem1": {
        "Type": "Resource:Semaphore",
        "Quantity": "3"
      }      
    }
  }
}

Resource:Mutex

Allows you to set a mutex (also known as a control resource) as shared or exclusive. If the resource is shared, other jobs can use the resource concurrently. If it is set to exclusive, the job has to wait until the resource is available before it can run.

The following example shows you how to add a Mutex parameter to a job.

{
  "FolderRes": {
    "Type": "Folder",
    "job1": {
      "Type": "Job:Command",
      "Command": "ls", 
      "RunAs": "pok",
      "Critical": true,
      "mut1": {
        "Type": "Resource:Mutex",
        "MutexType": "Exclusive"
      }
    }
  }
}

Priority

Allows you to define the priority a job has over other jobs.  

The following options are supported:

  • Very High

  • High

  • Medium

  • Low

  • Very Low (default)

{
	"Folder8": {
		"Type": "Folder",
		"Description": "folder desc",
		"Application": "Billing",
		"SubApplication": "Payable",
		"SimpleCommandJob": {
			"Type": "Job:Command",
			"Description": "job desc",
			"Application": "BillingJobs",
			"Priority": "High",
			"SubApplication": "PayableJobs",
			"TimeZone": "MST",
			"Host": "agent8",
			"RunAs": "owner8",
			"Command": "ls"
		}
	}
}

Time Zone

Allows you to add a time zone to jobs and folders. Time zones should be defined at least 48 hours before the intended execution date. We recommend defining the same time zone for all jobs in a folder.

"TimeZone":"MST"
Time zone possible values:
HNL (GMT-10:00)
HAW (GMT-10:00)
ANC (GMT-09:00)
PST (GMT-08:00)
MST (GMT-07:00)
CST (GMT-06:00)
EST (GMT-05:00)
ATL (GMT-04:00)
RIO (GMT-03:00)
GMT (GMT+00:00)
WET (GMT+01:00)
CET (GMT+02:00)
EET (GMT+03:00)
DXB (GMT+04:00)
KHI (GMT+05:00)
DAC (GMT+06:00)
BKK (GMT+07:00)
HKG (GMT+08:00)
TYO (GMT+09:00)
TOK (GMT+09:00)
SYD (GMT+10:00)
MEL (GMT+10:00)
NOU (GMT+11:00)
AKL (GMT+12:00)

When

Allows you to define scheduling parameters for jobs and folders. If When is used in a folder, those parameters apply to all jobs in the folder.

When working in a Control-M Workbench environment, jobs do not wait for time constraints and run in an ad-hoc manner. Once deployed to a Control-M instance, all time constraints are obeyed.

      "When" : {
                "Schedule":"Never",
				"Months": ["JAN", "OCT", "DEC"],
                "MonthDays":["22","1","11"],
                "WeekDays":["MON","TUE
                "FromTime":"1500",
                "ToTime":"1800"      
            }

One or more of the date/time constraints can be defined.

WeekDays     One or more of the following: "SUN", "MON", "TUE", "WED", "THU", "FRI", "SAT"
MonthDays    One or more days in the range of 1 to 31
Months       One or more of the following: "JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"
FromTime     Specifies that the job will not start before this time, in the format HHMM
ToTime       Specifies that the job will not start after this time, in the format HHMM
Schedule     One of the following options: "Everyday", "Never"

You can add start and end dates in addition to the other date/time elements. 

 "When": { 
            "StartDate":"20160322", 
            "EndDate":"20160325" 
         }

StartDate    First date that a job can run
EndDate      Last date that a job can run

Rerun

Allows you to define cyclic jobs.

The following example shows you how to define a cyclic job that runs every 2 minutes indefinitely.

    "Rerun" : {
        "Every": "2"
    }

The following example shows you how to run a job four times where each run starts three days after the previous run ended.

    "Rerun" : {
        "Every": "3",
        "Units":  "Days",                     
        "From": "End",                               
        "Times": "4"
    }

Units

Can be one of the following: "Minutes", "Hours", or "Days". The default is "Minutes".

From

Can be one of the following: "Start", "End", or "Target". The default is "Start".

Start - the next run time is calculated as N Units from the start time of the current run

End - the next run time is calculated as N Units from the end time of the current run

Target - a new run starts every N Units

Times

Number of cycles to run. To run forever, define 0.

The default is 0 (run forever).
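
For illustration, a minimal sketch of a Rerun definition that starts a new run every hour regardless of how long each run takes (the interval is arbitrary; Times is omitted, so the job cycles forever):

    "Rerun" : {
        "Every": "1",
        "Units": "Hours",
        "From": "Target"
    }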

IF

IF can trigger one or more actions conditional on the job completion status. In the following example, if the job runs unsuccessfully, it sends an email and runs another job.

    "JobName": {
        "Type" : "Job:Command",
        "Command" : "echo hello",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1",  
        
        "ActionIfFailure" : {
            "Type": "If",        
            "CompletionStatus": "NOTOK",
            
            "mailToTeam": {
              "Type": "Mail",
              "Message": "%%JOBNAME failed",
              "To": "team@mycomp.com"
            },
            "CorrectiveJob": {
              "Type": "Run",
              "Folder": "FolderName",
              "Job": "JobName"
            }
        }
    }

IF can be triggered based on one of the following CompletionStatus values:

Value                              Action
NOTOK                              When the job fails
OK                                 When the job completes successfully
ANY                                When the job completes, regardless of success or failure
10                                 When the completion status equals the value
Even                               When the completion status is an even number
Odd                                When the completion status is an odd number
">=5", "<=5", "<5", ">5", "!=5"    When the completion status comparison is true

IF Actions

Mail 

 The following example shows an action that sends an e-mail.

    "mailToTeam": {
      "Type": "Mail",
      "Message": "%%JOBNAME failed",
      "To": "team@mycomp.com"
    }

The following example shows that you can add optional parameters to the email action.

    "mailToTeam": {
      "Type": "Mail",
      "Urgency": "Urgent", 
      "Subject" : "Completion Email",
      "Message": "%%JOBNAME just compleated", 
      "To": "team@mycomp.com",
      "CC": "other@mycomp.com"
 }

Urgency can be one of the following: Regular | Urgent | VeryUrgent. The default is Regular.

Action:Rerun

The following example shows an action that reruns the job.

"RerunActionName": {
      "Type": "Action:Rerun"
    },

Action:SetToOK

The following example shows an action that sets the job status to OK.

    "SetToOKActionName": {
      "Type": "Action:SetToOK"
    },

Action:SetToNotOK

The following example shows an action that sets the job status to not OK.

    "SetToNotOKActionName": {
      "Type": "Action:SetToNotOK"
    },

Action:StopCyclicRun

The following example shows an action that disables the cyclic attribute of the job.

    "CyclicRunActionName": {
      "Type": "Action:StopCyclicRun"
    }

Run

The following example shows an action that runs another job.
   "CorrectiveJob": {
      "Type": "Run",
      "Folder": "FolderName",
      "Job": "JobName"
    }

Event:Add

The following example shows you an action that adds an event for the current date.

 "setEvent1": {
    "Type": "Event:Add",
    "Event": "e1"
    },

Optional parameters:

 "setEvent1": {
    "Type": "Event:Add",
    "Event": "e1",
    "OrderDate": "1010"
},
Date Type            Description
AnyDate              Any scheduled date
NoDate               Not date specific
OrderDate            Control-M scheduled date
PreviousOrderDate    Previous Control-M scheduled date
NextOrderDate        Next Control-M scheduled date
MMDD                 Specific date, for example "0511"


Event:Delete

The following example shows you an action that deletes an event.

OrderDate possible values:

  • "AnyDate"
  • "OrderDate"
  • "PreviousOrderDate"
  • "NextOrderDate"
  • "0511" - (MMDD) 
"unsetEvent2": {
    "Type": "Event:Delete",
    "Event": "e2",
    "OrderDate": "PreviousOrderDate"
},

Output

The Output action supports the following operations:

  • Copy
  • Move
  • Delete
  • Print

The following example shows you an action that copies the output to the specified destination.

"CopyOutput": {
         "Type": "Output",
         "Operation": "Copy",
         "Destination": "/home/copyHere"
       }
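
Similarly, a sketch of an action that moves the output to a specified destination (the path is a placeholder):

"MoveOutput": {
         "Type": "Output",
         "Operation": "Move",
         "Destination": "/home/moveHere"
       }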

Variables

Allows you to use job level variables with %% notation in job fields.

"job1": {
     "Type": "Job:Script",

     "FileName": "scriptname.sh",
     "FilePath":"%%ScriptsPath",
     "RunAs": "em900cob",

     "Arguments":["--date", "%%TodayDate" ],

     "Variables": [
       {"TodayDate": "%%$DATE"},
       {"ScriptsPath": "/home/em900cob"}
     ]
 }

For specifications of system-defined variables such as %%$DATE, see https://documents.bmc.com/supportu/ctrlm9/help/Main_help/en-US/index.htm#1211.htm

Named pools of variables can share data between jobs using the syntax "\\poolname\variable". NOTE that due to JSON character escaping, each backslash in the pool name must be doubled. For example, "\\\\pool1\\date".

        "job1": {
           "Type": "Job:Dummy",
	       "Variables": [

	         {"\\\\pool1\\date": "%%$DATE"}
	       ]
	    },
 
        "job2": {
           "Type": "Job:Script",

           "FileName": "scriptname.sh",
	       "FilePath":"/home/user/scripts",
	       "RunAs": "em900cob",

	       "Arguments":["--date", "%%\\\\pool1\\date" ]
	    }

Jobs in a folder can share variables at the folder level, using the syntax "\\variableName" to set a variable and %%variableName to use it.

"Folder1"   : {
     "Type" : "Folder", 
 
     "Variables": [
	    {"TodayDate": "%%$DATE"}
	 ],
 
     "job1": {
           "Type": "Job:Dummy",

           "Variables": [
              {"\\\\CompanyName": "compName"}
           ]
	  },
	  
      "job2": {
           "Type": "Job:Script",

           "FileName": "scriptname.sh",
	       "FilePath":"/home/user/scripts",
	       "RunAs": "em900cob",

	       "Arguments":["--date", "%%TodayDate", "--comp", "%%CompanyName" ]
	    }
}

Folder

A folder is a container of jobs. 

    "FolderSample": {
        "Type": "Folder",

         "Job1": {
           "Type": "Job:Command",
                "Command": "echo I am a Job",
                "RunAs": "controlm"
         },
         "Job2": {
           "Type": "Job:Command",
                "Command": "echo I am a Job",
                "RunAs": "controlm"
         }
    }

Optional parameters:

     "FolderSampleAll": {
        "Type": "Folder",
        "ControlmServer": "controlm",
        "SiteStandard": "",
        "OrderMethod": "Manual",
        "Application": "ApplicationName",
        "SubApplication" : "SubApplicationName",
		"RunAs" : "controlm",
        "When" : {
            "WeekDays": ["SUN"]
        }
    }
ControlmServer    (Optional) Specifies a Control-M Scheduling Server. If more than one Control-M Scheduling Server is configured in the system, you must define the server that the folder belongs to.
SiteStandard      Enforces the defined Site Standard on the folder and all jobs contained within the folder. See Control-M Automation API - Getting Started Guide.
OrderMethod       "Automatic" by default. Determines how the job is loaded into the Control-M job queue. For other options, see: https://documents.bmc.com/supportu/ctrlm9/help/Main_help/en-US/index.htm#JefFolderOrderMethod.htm
RunAs             The Control-M security mechanism uses this parameter for deployment authorization of the folder.

The following example shows a description and time zone defined for the folder object Folder8.

"Folder8": {
	"Type": "Folder",
	"Description": "folder desc",
	"Application" : "Billing",
	"SubApplication" : "Payable",
	"TimeZone":"HAW",
	"SimpleCommandJob":
		{ "Type": "Job:Command", "Description": "job desc", "Application" : "BillingJobs", "SubApplication" : "PayableJobs", "TimeZone":"MST", "Host":"agent8", "RunAs":"owner8", "Command":"ls" }
}

Secrets in Code

You can use the Secret object in your JSON code when you do not want to expose confidential information in the source (e.g. the password field in a Connection Profile). The syntax below enables you to reference a named secret as defined in the Control-M vault. To learn how to manage secrets, see section Config Secrets. The value of the secret is resolved during deployment.

The following syntax is used to reference a secret. 

<parameter>" :  {"Secret": "<secret name>"}

The following example shows you how to use secrets in code:

{
    "Type": "ConnectionProfile:Hadoop",
    "Hive": {
        "Host": "hiveServer",
        "Principal": "a@bc",
        "Port": "1024",
        "User": "emuser",
        "Password": {"Secret": "hive_dev_secret"}
    }
}

Driver

Allows you to define a driver definition to be used by the connection profile.

Driver:JDBC:Database

The following example shows how to use the parameters for the object MyDriver:

{
  "MyDriver": {
    "Type": "Driver:Jdbc:Database",
    "TargetAgent":"app-redhat",
    "StringTemplate":"jdbc:sqlserver://<HOST>:<PORT>/<DATABASE>",
    "DriverJarsFolder":"/home/controlm/ctm/cm/DB/JDBCDrivers/PostgreSQL/9.4/",
    "ClassName":"org.postgresql.Driver",
    "LineComment" : "--",
    "StatementSeparator" : ";"
 }
}
Parameter             Description
TargetAgent           The Control-M/Agent to which to deploy the driver.
StringTemplate        The structure according to which a connection profile string is created.
DriverJarsFolder      The path to the folder where the database driver jars are located.
ClassName             The name of the driver class.
LineComment           The syntax used for line comments in the scripts that run on the database.
StatementSeparator    The syntax used for the statement separator in the scripts that run on the database.

Connection Profile

Connection profiles are used to define access methods and security credentials for a specific application. They can be referenced by multiple jobs. To do this, you must deploy the connection profile definition before running the relevant jobs.

 

ConnectionProfile:Hadoop

These examples show how to use connection profiles for the various types of Hadoop jobs.

Job:Hadoop

These are the required parameters for all Hadoop job types.

"HadoopConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost"
}
Parameter      Description
TargetAgent    The Control-M/Agent to which to deploy the connection profile.
TargetCTM      The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.

 

These are the optional parameters for defining the user running the Hadoop job types.

"HadoopConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode"
	"TargetCTM" : "CTMHost",
    "RunAs": "",
    "KeyTabPath":""
}
RunAs

Defines the user account under which Hadoop jobs run.

Leave this field empty to run Hadoop jobs using the user account under which the agent was installed.

The Control-M/Agent must run as root if you define a specific RunAs user.

In the case of Kerberos security:

RunAs         Principal name of the user
KeyTabPath    Keytab file path for the target user
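
For illustration, a sketch of the same connection profile with Kerberos credentials filled in (the principal and keytab path are placeholders):

"HadoopKerberosConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "RunAs": "hdfsuser@MYREALM.COM",
    "KeyTabPath": "/home/hdfsuser/hdfsuser.keytab"
}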

 

Job:Hadoop:Sqoop

 The following example shows a connection profile that defines a Sqoop data source and access credentials.

 "SqoopConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Sqoop" :
    {
      "User"     : "username",
      "Password" : "userpassword",
      "ConnectionString" : "jdbc:mysql://mysql.server/database",
      "DriverClass" : "com.mysql.jdbc.Driver"
    }
}

Job:Hadoop:Hive

The following example shows a connection profile that defines a Hive beeline endpoint and access credentials. The parameters in the example translate to this beeline command: 

beeline  -u jdbc:hive2://<Host>:<Port>/<DatabaseName>

 

 "HiveConnectionProfileSample" :
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Hive" :
    {
       "Host" : "hive_host_name",
       "Port" : "10000",
       "DatabaseName" : "hive_database",
    }
}

The following example shows you how to use optional parameters for a Hadoop Hive job type connection profile.

The parameters in the example translate to this beeline command:  

beeline  -u jdbc:hive2://<Host>:<Port>/<DatabaseName>;principal=<Principal> -n <User> -p <Password> 

 "HiveConnectionProfileSample1":
{
    "Type" : "ConnectionProfile:Hadoop",
    "TargetAgent" : "edgenode",
    "TargetCTM" : "CTMHost",
    "Hive" :
    {
       "Host" : "hive_host_name",
       "Port" : "10000",
       "DatabaseName" : "hive_database",
       "User" : "user_name",
       "Password" : "user_password",
       "Principal" : "Server_Principal_of_HiveServer2@Realm"
    }
}

ConnectionProfile: File Transfer 

The following examples show you how to define a connection profile for the different File Transfer types.  

ConnectionProfile:FileTransfer:FTP

Simple ConnectionProfile:FileTransfer:FTP

"FTPConn" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "FTPServer",
   "User" : "FTPUser",
   "Password" : "ftp password"
}

 

ConnectionProfile:FileTransfer:FTP with optional parameters

"FTPConn" : {
   "Type" : "ConnectionProfile:FileTransfer:FTP",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "FTPServer",
   "Port": "21",
   "User" : "FTPUser",
   "Password" : "ftp password",
   "HomeDirectory": "/home/FTPUser",
   "OsType": "Unix",   
   "WorkloadAutomationUsers":["john","bob"]
}
Parameter                  Description
TargetAgent                The Control-M/Agent to which to deploy the connection profile.
TargetCTM                  The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
OsType                     (Optional) FTP server operating system type. Default: Unix. Types: Unix, Windows
Password                   (Optional) Password for the FTP server account. Use Secrets in code to avoid exposing the password in the code.
HomeDirectory              (Optional) User home directory
WorkloadAutomationUsers    (Optional) Users that are allowed to access the connection profile. NOTE: You can use "*" as a wildcard, e.g. "e*"

ConnectionProfile:FileTransfer:SFTP

The following examples show a connection profile for SFTP communication protocol. 

Simple ConnectionProfile:FileTransfer:SFTP

"sFTPconn": {
   "Type": "ConnectionProfile:FileTransfer:SFTP",
   "TargetAgent": "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "SFTPServer",
   "Port": "22",
   "User" : "SFTPUser",
   "Password" : "sftp password"
}


ConnectionProfile:FileTransfer:SFTP with optional parameters

"sFTPconn": {
   "Type": "ConnectionProfile:FileTransfer:SFTP",
   "TargetAgent": "AgentHost",
   "TargetCTM" : "CTMHost",
   "HostName": "SFTPServer",
   "Port": "22",
   "User" : "SFTPUser",
   "HomeDirectory": "/home/SFTPUser",  
   "PrivateKeyName": "/home/controlm/ctm_agent/ctm/cm/AFT/data/Keys/sFTPkey",
   "Passphrase": "passphrase"
}
Parameter                  Description
TargetAgent                The Control-M/Agent to which to deploy the connection profile.
TargetCTM                  The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
PrivateKeyName             (Optional) Private key full file path
Passphrase                 (Optional) Password for the private key. Use Secrets in code to avoid exposing the password in the code.
Password                   (Optional) Password for the SFTP server account. Use Secrets in code to avoid exposing the password in the code.
HomeDirectory              (Optional) User home directory
WorkloadAutomationUsers    (Optional) Users that are allowed to access the connection profile. NOTE: You can use "*" as a wildcard, e.g. "e*"

 

ConnectionProfile:FileTransfer:Local

The following example shows a connection profile for Local File System. 

"LocalConn" : {
   "Type" : "ConnectionProfile:FileTransfer:Local",
   "TargetAgent" : "AgentHost",
   "TargetCTM" : "CTMHost",
   "User" : "controlm",
   "Password" : "local password"
}
Parameter                  Description
TargetAgent                The Control-M/Agent to which to deploy the connection profile.
TargetCTM                  The Control-M/Server to which to deploy the connection profile. If there is only one Control-M/Server, that is the default.
OsType                     (Optional) Operating system type. Default: Unix. Types: Unix, Windows
Password                   (Optional) Password for the local account. Use Secrets in code to avoid exposing the password in the code.
WorkloadAutomationUsers    (Optional) Users that are allowed to access the connection profile. NOTE: You can use "*" as a wildcard, e.g. "e*"

ConnectionProfile:Database (DB2, Sybase, PostgreSQL, MSSQL, Oracle, JDBC)

The connection profile for database allows you to connect to the following database types:

  • ConnectionProfile:Database:DB2
  • ConnectionProfile:Database:Sybase
  • ConnectionProfile:Database:PostgreSQL
  • ConnectionProfile:Database:MSSQL
  • ConnectionProfile:Database:Oracle
  • ConnectionProfile:JDBC 

The following example shows you how to define an MSSQL database connection profile. 

 {
	"MSSqlConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:MSSQL",
		"TargetAgent": "AgentHost",
		"Host": "MSSQLHost",
		"User": "db user",
		"Port":"1433",
		"Password": "db password",
		"DatabaseName": "master",
		"DatabaseVersion": 
		"2005",
		"MaxConcurrentConnections": "9",
		"ConnectionRetryTimeOut": "34",
		"ConnectionIdleTime": "45"
	},
	"MSsqlDBFolder": {
		"Type": "Folder",
		"testMSSQL": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectArgs.sql",
			"ConnectionProfile": "MSSqlConnectionProfileSample",
			"Parameters": [ 
				{ "firstParamName": "firstParamValue" }, 
				{ "second": "secondParamValue" } 
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}

Port

If a port is not specified, the following default values are used for each of the database types:

  • DB2 - 50000
  • MSSQL - 1433
  • PostgreSQL - 5432
  • Sybase - 4100

Password

Password for the database account. Use Secrets in code to avoid exposing the password in the code.

DatabaseVersion

Default: 2005

Supported drivers in Control-M for database V9 are:

  • MSSQL - 2005, 2008, 2012, 2014
  • Oracle - 9i, 10g, 11g, 12c
  • DB2 - 9, 10
  • Sybase - 12, 15
  • PostgreSQL - 8, 9

MaxConcurrentConnections    Default: 100
ConnectionRetryTimeOut      Default: 5
ConnectionIdleTime          Default: 300
ConnectionRetryNum          Default: 5

AuthenticationType

Default: SQL Server Authentication

Possible values are:

  • NTLM2 Windows Authentication
  • Windows Authentication
  • SQL Server Authentication
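
For illustration only, a minimal sketch of an MSSQL connection profile that sets AuthenticationType explicitly (the profile name and connection details are placeholders; this assumes AuthenticationType is specified at the connection profile level, using one of the values listed above):

{
	"MSSqlAuthConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:MSSQL",
		"TargetAgent": "AgentHost",
		"Host": "MSSQLHost",
		"User": "db user",
		"Password": "db password",
		"DatabaseName": "master",
		"AuthenticationType": "SQL Server Authentication"
	}
}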

 

ConnectionProfile:Database:DB2

The following example shows you how to define a connection profile for DB2. 

The parameters used in the example are described in the parameter table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:DB2",
    "TargetAgent": "AgentHost",
    "Host": "DB2Host",
    "Port":"50000",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "db2"
  }
} 

 

ConnectionProfile:Database:Sybase

The following example shows you how to define a connection profile for Sybase.

The parameters used in the example are described in the parameter table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:Sybase",
    "TargetAgent": "AgentHost",
    "Host": "SybaseHost",
    "Port":"4100",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "Master"
  }
} 

ConnectionProfile:Database:PostgreSQL

The following example shows you how to define a connection profile for PostgreSQL:

The parameters used in the example are described in the parameter table above.

{
  "DB2ConnectionProfileSample": {
    "Type": "ConnectionProfile:Database:PostgreSQL",
    "TargetAgent": "AgentHost",
    "Host": "PostgreSQLHost",
    "Port":"5432",
    "User": "db user",
    "Password": "db password",
    "DatabaseName": "postgres"
  }
} 

ConnectionProfile:Database:Oracle

Oracle includes three types of database definition types:

  • SID
  • ServiceName
  • ConnectionString

ConnectionProfile:Database:Oracle:SID

The following example shows you how to define a SID: 

"OracleConnectionProfileSample": {   
	"Type": "ConnectionProfile:Database:Oracle:SID",   
	"TargetCTM": "controlm",   
	"Port": "1521",   
	"TargetAgent": "AgentHost",
	"Host": "OracleHost",
	"User": "db user",   
	"Password": "db password",
	"SID": "ORCL" 
}
SID     Defines the name of the service ID
Port    Default is 1521

ConnectionProfile:Database:Oracle:ServiceName

The following example shows you how to define a ServiceName: 

"OracleConnectionProfileSample": {   
	"Type": "ConnectionProfile:Database:Oracle:ServiceName",   
	"TargetCTM": "controlm",   
	"Port": "1521",   
	"TargetAgent": "AgentHost",
	"Host": "OracleHost",
	"User": "db user",   
	"Password": "db password",
	"ServiceName": "ORCL" 
}
ServiceName    Defines the service name
Port           Default is 1521

ConnectionProfile:Database:Oracle:ConnectionString

The following example shows you how to define a ConnectionString: 

{
	"OracleConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:Oracle:ConnectionString",
		"TargetCTM":"CTMHost",
		"ConnectionString":"OracleHost:1521:ORCL",
		"TargetAgent": "AgentHost",
		"User": "db user",
		"Password": "db password"
	}
}

ConnectionProfile:Database:JDBC

The following example shows you how to define a connection profile using a custom defined database type created using JDBC.

The parameters used in the example are described in the table.

{
	"OracleConnectionProfileSample": {
		"Type": "ConnectionProfile:Database:JDBC",
		"User":"db user",
		"TargetCTM":"CTMHost",
		"Host": "PGSQLHost",
		"Driver":"PGDRV",
		"Port":"5432",
		"TargetAgent": "AgentHost",
		"Password": "db password",
		"DatabaseName":"dbname"
	}
}
Parameter    Description
Driver       JDBC driver name as defined in Control-M

Job Types 

Job:Command

The following example shows you how to use the Job:Command to run operating system commands.

	"JobName": {
		"Type" : "Job:Command",
    	"Command" : "echo hello",
    	"Host" : "myhost.mycomp.com",
    	"RunAs" : "user1"  
	}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.

Job:Script

 The following example shows you how to use Job:Script to run a script.

    "JobName": {
        "Type" : "Job:Script",
        "FileName" : "task1123.sh",
        "FilePath" : "/home/user1/scripts",
        "Host" : "myhost.mycomp.com",
        "RunAs" : "user1"   
    }
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

RunAs

Identifies the operating system user that will run the job.
FileName together with FilePath

Indicates the location of the script. 

NOTE: Due to JSON character escaping, each backslash in a Windows file system path must be doubled. For example, "c:\\tmp\\scripts".
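
For illustration, a sketch of a script job on a Windows host using a doubled-backslash path (the host, file names, and user are placeholders):

    "WinScriptJob": {
        "Type" : "Job:Script",
        "FileName" : "task1123.bat",
        "FilePath" : "c:\\tmp\\scripts",
        "Host" : "winhost.mycomp.com",
        "RunAs" : "user1"
    }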


Job:FileTransfer

The following example shows a Job:FileTransfer.

"AftFolder" :
{
   "Type" : "Folder",
   "Application" : "aft",
   "MyAftJob" :
   {
      "Type" : "Job:FileTransfer",
      "ConnectionProfileSrc" : "LocalConn",
      "ConnectionProfileDest" : "sFTPconn",
      "FileTransfers" :
      [
         {
            "Src" : "/home/sFTP/file",
            "Dest" : "/home/sFTP/file2",
            "TransferOption": "SrcToDest",
			"TransferType": "Binary",
            "PreCommandDest": {
               "action": "rm",
               "arg1": "/home/sFTP/file2"
            },
            "PostCommandDest": {
               "action": "chmod",
               "arg1": "700",
               "arg2": "/home/sFTP/file2"
            }
         },
         {
            "Src" : "/home/sFTP/otherFile",
            "Dest" : "/home/sFTP/otherFile2",
            "TransferOption": "SrcToDestFileWatcher"
         }
      ]
   }
}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host.
In addition, the Control-M File Transfer plugin version 9.0.00 or higher should be installed.
Optionally, you can define a host group instead of a host machine. See Control-M-Terms.
NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

ConnectionProfileSrc     The connection profile to use as the source
ConnectionProfileDest    The connection profile to use as the destination
Src                      Full path to the source file
Dest                     Full path to the destination file

TransferType

(Optional) Type of transfer

Options: ASCII, Binary

Default: Binary

TransferOption

(Optional)

The following is a list of the transfer options:

SrcToDest - transfer the file from source to destination

DestToSrc - transfer the file from destination to source

SrcToDestFileWatcher - watch the file on the source and transfer it to the destination only when all criteria are met

DestToSrcFileWatcher - watch the file on the destination and transfer it to the source only when all criteria are met

FileWatcher - watch a file. If successful, the succeeding job will run.

Default: "SrcToDest"

PreCommandSrc, PreCommandDest, PostCommandSrc, PostCommandDest

(Optional) Defines the commands that occur before and after job execution. Each command can only run one action at a time.

action    Description
chmod     Change file access permissions. arg1: mode, arg2: file name
mkdir     Create a new directory. arg1: directory name
rename    Rename a file/directory. arg1: current file name, arg2: new file name
rm        Delete a file. arg1: file name
rmdir     Delete a directory. arg1: directory name
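
For illustration, a sketch of a single transfer entry that renames the destination file after the transfer completes (the paths are placeholders):

         {
            "Src" : "/home/sFTP/report",
            "Dest" : "/home/sFTP/report.tmp",
            "TransferOption": "SrcToDest",
            "PostCommandDest": {
               "action": "rename",
               "arg1": "/home/sFTP/report.tmp",
               "arg2": "/home/sFTP/report.done"
            }
         }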

Job:Database

Job:Database:SQLScript

The following example shows you how to create a database job that runs an SQL script.

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOrcle": {
			"Type": "Job:Database:SQLScript",
			"Host": "AgentHost",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "OracleConnectionProfileSample",
			"Parameters": [
				{"firstParamName": "firstParamValue"},
				{"secondParamName": "secondParamValue"}
			],
			"Autocommit": "N",
			"OutputExcecutionLog": "Y",
			"OutputSQLOutput": "Y",
			"SQLOutputFormat": "XML"
		}
	}
}
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host.
In addition, the Control-M Databases plugin version 9.0.00 or higher should be installed.
Optionally, you can define a host group instead of a host machine. See Control-M-Terms.
NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Parameters

Parameters are pairs of name and value. Every name that appears in the SQL script will be replaced by its value pair.
Autocommit

(Optional) Commits statements to the database that complete successfully

Default: N

OutputExcecutionLog

(Optional) Shows the execution log in the job output

Default: Y

OutputSQLOutput

(Optional) Shows the SQL sysout in the job output

Default: N

SQLOutputFormat

(Optional) Defines the output format as either Text, XML, CSV, or HTML

Default: Text

Another example:

{
	"OracleDBFolder": {
		"Type": "Folder",
		"testOrcle": {
			"Type": "Job:Database:SQLScript",
			"Host": "app-redhat",
			"SQLScript": "/home/controlm/sqlscripts/selectOrclParm.sql",
			"ConnectionProfile": "OracleConnectionProfileSample"
		}
	}
}

Job:Hadoop:Spark:Python

The following example shows you how to use Job:Hadoop:Spark:Python to run a Spark Python program.

    "ProcessData": {
        "Type": "Job:Hadoop:Spark:Python",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",

        "SparkScript": "/home/user/processData.py"
    }
ConnectionProfile

See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
    "Type": "Job:Hadoop:Spark:Python",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",
    "SparkScript": "/home/user/processData.py",            
    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

Job:Hadoop:Spark:ScalaJava

The following example shows you how to use Job:Hadoop:Spark:ScalaJava to run a Spark Java or Scala program.

 "ProcessData": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "ProgramJar": "/home/user/ScalaProgram.jar",
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName" 
}
ConnectionProfile

See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

 "ProcessData1": {
  	"Type": "Job:Hadoop:Spark:ScalaJava",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "ProgramJar": "/home/user/ScalaProgram.jar"
	"MainClass" : "com.mycomp.sparkScalaProgramName.mainClassName",

    "Arguments": [
        "1000",
        "120"
    ],            

    "PreCommands": {
        "FailJobOnCommandFailure" :false,
        "Commands" : [
            {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
            {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
        ]
    },

    "PostCommands": {
        "FailJobOnCommandFailure" :true,
        "Commands" : [
            {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
        ]
    },

    "SparkOptions": [
        {"--master": "yarn"},
        {"--num":"-executors 50"}
    ]
 }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

Job:Hadoop:Pig

The following example shows you how to use Job:Hadoop:Pig to run a Pig script.

"ProcessDataPig": {
    "Type" : "Job:Hadoop:Pig",
    "Host" : "edgenode",
    "ConnectionProfile": "DevCluster",

    "PigScript" : "/home/user/script.pig" 
}
ConnectionProfile

See ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "ProcessDataPig1": {
        "Type" : "Job:Hadoop:Pig",
        "ConnectionProfile": "DevCluster",
        "PigScript" : "/home/user/script.pig",            
        "Host" : "edgenode",
        "Parameters" : [
            {"amount":"1000"},
            {"volume":"120"}
        ],            
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
 
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

Job:Hadoop:Sqoop

The following example shows you how to use Job:Hadoop:Sqoop to run a Sqoop job.

    "LoadDataSqoop":
    {
      "Type" : "Job:Hadoop:Sqoop",
	  "Host" : "edgenode",
      "ConnectionProfile" : "SqoopConnectionProfileSample",

      "SqoopCommand" : "import --table foo --target-dir /dest_dir" 
    }

 

ConnectionProfile

See Sqoop ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "LoadDataSqoop1" :
    {
        "Type" : "Job:Hadoop:Sqoop",
        "Host" : "edgenode",
        "ConnectionProfile" : "SqoopConnectionProfileSample",

        "SqoopCommand" : "import --table foo",
		"SqoopOptions" : [
			{"--warehouse-dir","/shared"},
			{"--default-character-set","latin1"}
		],
 
        "SqoopArchives" : "",
        
        "SqoopFiles": "",
        
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

SqoopOptions     These are passed as the specific sqoop tool args.
SqoopArchives    Indicates the location of the Hadoop archives.
SqoopFiles       Indicates the location of the Sqoop files.

Job:Hadoop:Hive

The following example shows you how to use Job:Hadoop:Hive to run a Hive beeline job.

    "ProcessHive":
    {
      "Type" : "Job:Hadoop:Hive",
      "Host" : "edgenode",
      "ConnectionProfile" : "HiveConnectionProfileSample",

      "HiveScript" : "/home/user1/hive.script"
    }

 

ConnectionProfile

See Hive ConnectionProfile:Hadoop
Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "ProcessHive1" :
    {
        "Type" : "Job:Hadoop:Hive",
        "Host" : "edgenode",
        "ConnectionProfile" : "HiveConnectionProfileSample",


        "HiveScript" : "/home/user1/hive.script", 
        "Parameters" : [
            {"ammount": "1000"},
            {"topic": "food"}
        ],

        "HiveArchives" : "",
        
        "HiveFiles": "",
        
        "HiveOptions" : [
            {"hive.root.logger": "INFO,console"}
        ],

        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

HiveScriptParameters    Passed to beeline as --hivevar "name"="value".
HiveProperties          Passed to beeline as --hiveconf "key"="value".
HiveArchives            Passed to beeline as --hiveconf mapred.cache.archives="value".
HiveFiles               Passed to beeline as --hiveconf mapred.cache.files="value".

Job:Hadoop:DistCp

The following example shows you how to use Job:Hadoop:DistCp to run a DistCp job. DistCp (distributed copy) is a tool used for large inter-cluster and intra-cluster copying.

        "DistCpJob" :
        {
            "Type" : "Job:Hadoop:DistCp",
            "Host" : "edgenode",
            "ConnectionProfile": "DevCluster",
         
            "TargetPath" : "hdfs://nns2:8020/foo/bar",
            "SourcePaths" :
            [
                "hdfs://nn1:8020/foo/a"
            ]
        }  
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

    "DistCpJob" :
    {
        "Type" : "Job:Hadoop:DistCp",
        "Host" : "edgenode",
        "ConnectionProfile" : "ConnectionProfileSample",
        "TargetPath" : "hdfs://nns2:8020/foo/bar",
        "SourcePaths" :
        [
            "hdfs://nn1:8020/foo/a",
            "hdfs://nn1:8020/foo/b"
        ],
        "DistcpOptions" : [
            {"-m":"3"},
            {"-filelimit ":"100"}
        ]
    }

TargetPath, SourcePaths and DistcpOptions

Passed to the distcp tool in the following way: distcp <Options> <TargetPath> <SourcePaths>.

Job:Hadoop:HDFSCommands

The following example shows you how to use Job:Hadoop:HDFSCommands to run a job that executes one or more HDFS commands.

        "HdfsJob":
        {
            "Type" : "Job:Hadoop:HDFSCommands",
            "Host" : "edgenode",
            "ConnectionProfile": "DevCluster",

            "Commands": [
                {"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Job:Hadoop:HDFSFileWatcher

The following example shows you how to use Job:Hadoop:HDFSFileWatcher to run a job that waits for HDFS file arrival.

    "HdfsFileWatcherJob" :
    {
        "Type" : "Job:Hadoop:HDFSFileWatcher",
        "Host" : "edgenode",
        "ConnectionProfile" : "DevCluster",

        "HdfsFilePath" : "/inputs/filename",
        "MinDetecedSize" : "1",
        "MaxWaitTime" : "2"
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

HdfsFilePath      Specifies the full path of the file being watched.
MinDetecedSize    Defines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives but the size is not met, the job continues to watch the file.
MaxWaitTime       Defines the maximum number of minutes to wait for the file to meet the watching criteria. If the criteria are not met (the file did not arrive, or the minimum size was not reached), the job fails after this maximum number of minutes.

Job:Hadoop:Oozie

The following example shows you how to use Job:Hadoop:Oozie to run a job that submits an Oozie workflow.

    "OozieJob": {
        "Type" : "Job:Hadoop:Oozie",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "JobPropertiesFile" : "/home/user/job.properties",
        "OozieOptions" : [
          {"inputDir":"/usr/tucu/inputdir"},
          {"outputDir":"/usr/tucu/outputdir"}
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

JobPropertiesFile

The path to the job properties file.

 

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

OozieOptions

Set or override values for a given job property.

Job:Hadoop:MapReduce

 The following example shows you how to use Job:Hadoop:MapReduce to execute a Hadoop MapReduce job.

    "MapReduceJob" :
    {
       "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

   "MapReduceJob1" :
    {
        "Type" : "Job:Hadoop:MapReduce",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "ProgramJar" : "/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar",
        "MainClass" : "pi",
        "Arguments" :[
            "1",
            "2"
        ],
        "PreCommands": {
            "FailJobOnCommandFailure" :false,
            "Commands" : [
                {"get" : "hdfs://nn.example.com/user/hadoop/file localfile"},
                {"rm"  : "hdfs://nn.example.com/file /user/hadoop/emptydir"}
            ]
        },
        "PostCommands": {
            "FailJobOnCommandFailure" :true,
            "Commands" : [
                {"put" : "localfile hdfs://nn.example.com/user/hadoop/file"}
            ]
        }    
    }
PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

Job:Hadoop:MapredStreaming

The following example shows you how to use Job:Hadoop:MapredStreaming to execute a Hadoop MapReduce Streaming job.

     "MapredStreamingJob1": {
        "Type": "Job:Hadoop:MapredStreaming",
        "Host" : "edgenode",
        "ConnectionProfile": "DevCluster",


        "InputPath": "/user/robot/input/*",
        "OutputPath": "/tmp/output",
        "MapperCommand": "mapper.py",
        "ReducerCommand": "reducer.py",
        "GeneralOptions": [
            {"-D": "fs.permissions.umask-mode=000"},
            {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}              
        ]
    }
ConnectionProfile

See ConnectionProfile:Hadoop

Host

Defines the name of the host machine where the job runs. A Control-M/Agent must be installed on this host. Optionally, you can define a host group instead of a host machine. See Control-M-Terms.

NOTE: If this parameter is left blank, the job is submitted for execution on the Control-M Scheduling Server host.

Optional parameters:

PreCommands and PostCommands

Allows you to define HDFS commands to perform before and after running the job. For example, you can use them for preparation and cleanup.

FailJobOnCommandFailure

This parameter is used to ignore failure in the pre- or post- commands.

The default for PreCommands is true meaning the job will fail if any pre-command fails.

The default for PostCommands is false meaning that the job will complete successfully even if any post-command fails.

GeneralOptions

Additional [genericOptions] [streamingOptions] passed to the hadoop-streaming.jar.

 

Job:Dummy

The following example shows you how to use Job:Dummy to define a job that always ends successfully without running any commands. 

"DummyJob" : {
   "Type" : "Job:Dummy"
}