Installation Documentation

Installation requirements and instructions for Windows, Linux, Mac OS, and other platforms, plus Control-M Automation API videos.

Where to find more information

For more information about Control-M, use the following resources:

  • Support videos
  • Online help

Introduction

Control-M integrates the management of batch workload processes into a single point of control. With cross-application and cross-platform scheduling capabilities, this powerful solution accelerates delivery of digital services and increases the quality of service delivered to users and the business.

With Control-M, you can:

  • Gain a faster, more cost-effective way to manage workload with a unique architecture that supports growth and provides unmatched integration
  • Reduce the number of failure points and delays caused by manual processes with a single, unified scheduling interface — regardless of platform or application
  • Eliminate your reliance on multiple toolsets and staff resources with automated scheduling processes that help you manage priorities according to business needs
  • Accelerate delivery of digital services by connecting Applications Development and IT Operations using a collaborative web application

Control-M Automation API is a set of programmatic interfaces that give developers and DevOps engineers access to the capabilities of Control-M within the modern application release process. Job flows and related configuration objects are built in JSON and managed together with other application artifacts in any source code management solution, such as Git. This approach enables capabilities like sophisticated scheduling, flow control, and SLA management to be built in right from inception and used while batch applications run, as they are automatically deployed in dev, test, and production environments.

Control-M Automation API provides an automated interface to Control-M in addition to the graphical user interfaces that have traditionally been used. With Control-M Automation API, you manage job workflows as code, similar to the way modern configuration management solutions manage infrastructure as code. The actual execution of batch jobs is managed by Control-M.

You can use a Command Line Interface or the REST API to build, run and test jobs against an existing Control-M. A local workbench provides initial job flow unit testing functions which are a subset of the complete set of Control-M capabilities. Once your flows are ready, you can check them into your source control system with the rest of your project files. The Automation API provides services to package and deploy job flows to the Control-M environment. Control-M schedules and operates on the most recently deployed flows.

Additionally, Control-M Automation API provides services to provision Control-M/Agent machines, configure host groups, and more.

No prior experience with Control-M is required to write jobs. Work through this guide and its tutorials to get started. The guide introduces the concepts of Control-M Automation API with example flows. To learn how to create jobs, follow the steps in the Creating Your First Job Flow tutorial. For details on accessing Control-M Automation API services to build, test, and deploy your jobs, see the Installation page.

Control-M in a Nutshell

To use Control-M Automation API effectively, you need to be familiar with the following Control-M concepts:

  • Jobs
  • Folders
  • Scheduling
  • Control-M/Server
  • Control-M/Agent
  • Host groups
  • Site standards

A job is the basic execution unit of Control-M. Its attributes define, among other things:

  • What the job type is: script, command, Hadoop/Spark, file transfer, etc.
  • Where the job runs: host
  • Who runs the job: connection profile or run as
  • When the job should run: scheduling criteria
  • What the job's dependencies are: a specific job must complete, a file must arrive, etc.

Jobs are contained in folders, which can also have scheduling criteria, dependencies, and other instructions that apply to all jobs in the folder. You define these entities in JSON format. 
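For example, a minimal folder with a single command job might look like the following sketch (the names and values here are illustrative, not part of the tutorial samples):

    {
      "SampleFolder": {
        "Type": "Folder",
        "SampleJob": {
          "Type": "Job:Command",
          "Command": "echo hello",
          "RunAs": "controlm",
          "Host": "myhost"
        }
      }
    }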

The scheduling engine of Control-M is the Control-M/Server. Once a day it reads job definitions and starts monitoring all jobs scheduled to start on that day. These are then submitted to run on Control-M/Agents when scheduling criteria and dependencies are met.

Control-M/Agents reside on the application hosts. You can group them into a host group. Specifying that a job runs on a host group causes Control-M/Server to balance the load by directing jobs to the various hosts in the group.

Site standards are sets of naming rules for the different job fields. When they are used, only jobs that match the relevant site standard can be built and deployed to that Control-M. Site standards are available as part of the Control-M Workload Change Manager add-on.

For more information on Control-M solutions, refer to the Control-M documentation.

Control-M Automation API decentralizes access to Control-M entities and provides them directly to developers. When using the Control-M Automation API, you access Control-M via a REST API HTTP endpoint.

 

Installation

You can install Control-M Workbench or access Control-M REST API from a Control-M instance.

Control-M Workbench provides a development environment that enables you to build, run, and test your job flows. You can make and validate changes without affecting the work of other stakeholders. To install Control-M Workbench, follow the steps described in Installation of Control-M Workbench.

For information on Control-M instance, refer to the Control-M documentation.

You can run Control-M Automation API commands via the CLI or the REST API to build and manage your Control-M workflows. To install the CLI, follow the steps described in Installation of Control-M Automation CLI.

 

Tutorials

The following tutorials introduce common best practices for using Control-M Automation API.

Before you start with the tutorials, you must follow the steps in Setting up the prerequisites to set up your preferred environment.

  • Creating Your First Job Flow - explains how to write jobs that execute OS commands and scripts
  • Running your applications and programs in your environment - explains how to run Hadoop/Spark job flows, script and command job flows, and file transfer and database query job flows
  • How to Automate Code Deployment - explains how a DevOps engineer can automate code deployment
  • Building a docker container for batch applications - explains how to build a docker container for Control-M/Agent and run some basic API commands

Setting up the prerequisites

The following steps describe how to set up your environment to run each of the tutorials.

Step 1 - Access to Control-M Automation API

Control-M Automation API is a REST API interface over HTTPS. You can either install Control-M Workbench or access a Control-M instance, for which you need Control-M credentials.

Step 2 - Install Control-M Automation CLI

To run the tutorials, you need a working Control-M Automation CLI. Refer to the Control-M Automation CLI installation guide, select the relevant platform, and follow the steps.

Step 3 - Clone GitHub repository

Fetch the tutorial samples from GitHub by running git clone to create a local copy of the source code:
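The quickstart repository referenced throughout these tutorials can be cloned as follows:

    git clone https://github.com/controlm/automation-api-quickstart.git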

Step 4 - Setting up a Control-M Environment

The Control-M Automation CLI requires an environment to run commands against. An environment is a combination of an endPoint, username, and password.

An endPoint is the URI for Control-M Automation API.

Here is an example of the default URI for Control-M Automation API:
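    https://<controlmEndPointHost>:8443/automation-api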


If you installed Control-M Workbench, you do not need to add an environment. By default, an environment named "workbench" is created when you install the Control-M Automation CLI. Continue to the next step.

If you accessed a Control-M instance, use the following command to add an environment:
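    ctm environment add devEnvironment "https://<controlmEndPointHost>:8443/automation-api" "<controlmUser>" "<controlmPassword>"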

Step 5 - Make the Environment Default

All commands you invoke run against the default environment.

For Control-M Workbench:
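    ctm environment set-default workbench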

For a Control-M instance:

The following command shows how to set devEnvironment as the default:
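    ctm environment set-default devEnvironment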

Step 6 - Accessing documentation

You can access the REST API documentation for your environment, as well as this Getting Started guide, from the Control-M Automation API online help.

Tutorial - Creating Your First Job Flow

This example shows you how to write command and script jobs that run in sequence. 

Step 1 - Access the tutorial samples

Go to the directory where the tutorial sample is located:
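The samples live in the cloned quickstart repository. The directory name below is an assumption based on the repository's naming convention; adjust it if your copy differs:

    cd automation-api-quickstart/101-create-first-job-flow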

Step 2 - Verify the Code is Valid for Control-M

Let's take AutomationAPISampleFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command:
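    ctm build AutomationAPISampleFlow.json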

If the code is not valid, an error is returned.

Step 3 - Run the Source Code

Use the run command to run the jobs on the Control-M environment. The returned runId is used to check the job status:
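    ctm run AutomationAPISampleFlow.json

A successful response includes the runId; abbreviated, it looks like:

    {
      "runId": "7cba67de-9e0d-409d-8d93-1b8229432eee",
      "statusURI": "..."
    }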

This code ran successfully and returned the runId of "7cba67de-9e0d-409d-8d93-1b8229432eee".

Step 4 - Check the Job Status Using the runId

The following command shows how to check job status using the runId. Note that when there is more than one job in the flow, the statuses of all the jobs are checked and returned.
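    ctm run status "7cba67de-9e0d-409d-8d93-1b8229432eee"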

Step 5 - Examine the Source Code

Let's look inside the source code. You'll learn about the structure of the job flow and what it should contain.
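The sample is similar in structure to the following sketch (abbreviated; the shipped file may differ in details):

    {
      "Defaults": {
        "RunAs": "USERNAME",
        "Host": "HOST",
        "When": {
          "FromTime": "0300",
          "ToTime": "2100"
        },
        "ActionIfFailure": {
          "Type": "If",
          "CompletionStatus": "NOTOK",
          "mailToTeam": {
            "Type": "Mail",
            "Message": "%%JOBNAME failed",
            "To": "team@mycompany.com"
          }
        }
      },
      "AutomationAPISampleFlow": {
        "Type": "Folder",
        "CommandJob": {
          "Type": "Job:Command",
          "Command": "echo my first job"
        },
        "ScriptJob": {
          "Type": "Job:Script",
          "FileName": "SCRIPT_NAME",
          "FilePath": "SCRIPT_PATH"
        },
        "Flow": {
          "Type": "Flow",
          "Sequence": ["CommandJob", "ScriptJob"]
        }
      }
    }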

The first object is called "Defaults". It allows you to define a parameter once for all objects. For example, it includes scheduling using the When parameter which configures all jobs to run according to the same scheduling criteria. The "ActionIfFailure" object determines what action is taken if a job ends unsuccessfully.

This example contains two jobs: CommandJob and ScriptJob. These jobs are contained within a folder named AutomationAPISampleFlow.  To define the sequence of job execution, use the Flow object.  

Step 6 - Modify the Code to Run in Your Environment

In the code above, the following parameters need to be set to run the jobs in your environment: 

RunAs identifies the operating system user that will execute the jobs.

Host defines the machine where the jobs will run. This machine should have a Control-M/Agent installed. 

FilePath and FileName define the location and name of the file that contains the script to run.

Change the values to suit your Control-M environment. In a Control-M Workbench environment, change the values of the following parameters as shown in the example below. The script used in the Workbench example returns a communication report.
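A sketch of possible Workbench values. ag_diag_comm is the Control-M/Agent diagnostics script that prints a communication report; the user, host, and path shown are assumptions for a default Workbench, so locate the script under your Agent installation:

    "RunAs": "workbench",
    "Host": "workbench",
    "FileName": "ag_diag_comm",
    "FilePath": "<path to the Agent scripts directory>"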


Step 7 - Rerun the Code Sample

Now that we've modified the source code in the previous step, let's rerun the sample:
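    ctm run AutomationAPISampleFlow.json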

 

Each time you run the code, a new runId is generated. Let's take the new runId and check the job statuses again:
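    ctm run status "<new runId>"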

You can now see that the status of both jobs is EndedOK.

Let's view the output of CommandJob. You need the jobId, returned by the status command, to get this information:
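    ctm run job:output::get "<jobId>"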

Step 8 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or "-i" for short) to the end of the command.
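    ctm run AutomationAPISampleFlow.json --interactive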

A browser window opens where you can view and manage your jobs.


To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.


In the next tutorial you will learn how to run your applications and programs in your development environment.

 

 

Tutorial - Running your applications and programs in your environment

Jobs run where the Control-M/Agent resides. You need a Control-M/Agent on your application host. The provision service enables you to install and set up a Control-M/Agent.

Select the relevant tutorial for your application:

  • Running Hadoop Spark Job Flow
  • Running Script and Command Job Flow
  • Running File Transfer and Database Queries Job Flow

Running Hadoop Spark Job Flow

This example walks you through writing Hadoop and Spark jobs that run in sequence. To complete this tutorial, you need a Hadoop edge node where the Hadoop client software is installed.

Let's verify that Hadoop and HDFS are operational using the following commands:
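Typical checks (standard Hadoop client commands):

    hadoop version
    hadoop fs -ls /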

 

Step 1 - Finding the right image to provision

The provision images command lists the images available to install.
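    ctm provision images Linux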

As you can see, there are three available Linux images:

  • Agent.Linux - provides the ability to run scripts, programs and commands. 
  • ApplicationsAgent.Linux - in addition to Agent.Linux, it adds plugins to run file transfer jobs and database SQL scripts.
  • BigDataAgent.Linux - in addition to Agent.Linux, it adds a plugin to run Hadoop and Spark jobs.

In this example, we will provision the BigDataAgent.Linux image.

Step 2 - Provision the BigDataAgent Image 
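Run the provision install command on the Hadoop edge node:

    ctm provision install BigDataAgent.Linux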


After provisioning the BigDataAgent successfully, you now have a running instance of Control-M/Agent on your Hadoop edge node.

Now let's access the tutorial samples code.

Step 3 - Access the tutorial samples

Go to the directory where the tutorial sample is located:
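    cd automation-api-quickstart/101-running-hadoop-spark-job-flow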

Step 4 - Verify the Code is Valid for Control-M

Let's take AutomationAPISampleHadoopFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command:
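    ctm build AutomationAPISampleHadoopFlow.json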

If the code is not valid, an error is returned.


Step 5 - Examine the Source Code

Let's look inside the source code. You'll learn about the structure of the job flow and what it should contain.

This example contains two jobs: ProcessData, a Spark job, followed by CopyOutputData, an HDFS Commands job. These jobs are contained within a folder named AutomationAPIHadoopSampleFlow. The "Flow" object defines the sequence in which the jobs run.

Notice that in the Spark job we use the "PreCommands" object to clean up output from any previous Spark job runs.

The "SampleConnectionProfile" object is used to define the connection parameters to the Hadoop cluster. Note that for Sqoop and Hive, it is used to set data sources and credentials.
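As a sketch, a Hadoop connection profile can be as simple as the following (fields beyond Type and TargetAgent vary by cluster setup; see the Code Reference for the full schema):

    "SampleConnectionProfile": {
      "Type": "ConnectionProfile:Hadoop",
      "TargetAgent": "<edge node host>"
    }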

The sample directory also includes processData.py, the Spark script used by this flow.

Step 6 - Modify the Code to Run in Your Environment

You need to modify the path to the samples directory for them to run successfully in your environment. Replace the URI file:///home/[USER]/automation-api-quickstart/101-running-hadoop-spark-job-flow/ with the location of the samples you installed on your machine.

For example: file:///home/user1/automation-api-quickstart/101-running-hadoop-spark-job-flow/

 

In the code sample, replace the values of "TargetAgent" and "Host" with the host name of the machine where you provisioned the Control-M/Agent.

 

Step 7 - Run the Sample

Now that we've modified the source code, let's run the sample:
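    ctm run AutomationAPISampleHadoopFlow.json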

Each run generates a new runId. Take the runId from the response and check the job statuses:
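    ctm run status "<runId>"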

You can see that now the status of both jobs is "EndedOK".

Let's view the output of "CopyOutputData". We need the jobId to get this information:
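    ctm run job:output::get "<jobId>"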

Step 8 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or "-i" for short) to the end of the command.

A browser window opens where you can view and manage your jobs.

 

To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.


In the next tutorial you will learn how to automate code deployments.

Running Script and Command Job Flow

This example walks you through running scripts and commands in sequence. You need a Windows 64-bit or Linux 64-bit machine with access to the scripts and programs that you would like to run.

Step 1 - Finding the right image to provision

The provision images command lists the images available to install.

As you can see, there are three available Linux images and two Windows images:

  • Agent.Linux/Agent.Windows - provides the ability to run scripts and commands. 
  • ApplicationsAgent.Linux/ApplicationsAgent.Windows - in addition to Agent.Linux/Agent.Windows, adds plugins to run file transfer jobs and database SQL scripts.
  • BigDataAgent.Linux - in addition to Agent.Linux, adds a plugin to run Hadoop and Spark jobs.

In this example, you will provision Agent.Windows or Agent.Linux according to the jobs that you would like to run. 

Step 2 - Provision the image 
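On the target machine, install the image that matches your platform:

    ctm provision install Agent.Linux

or:

    ctm provision install Agent.Windows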


After provisioning the Agent successfully, you now have a running instance of Control-M/Agent on your host.

Step 3 - Access the tutorial code

Go to the directory where the tutorial sample is located:

Step 4 - Verify the Code is Valid for Control-M

Let's take AutomationAPISampleFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command:
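    ctm build AutomationAPISampleFlow.json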

If the code is not valid, an error is returned.

Step 5 - Run the Source Code

Use the run command to run the jobs on the Control-M environment. The returned runId is used to check the job status:
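    ctm run AutomationAPISampleFlow.json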

This code ran successfully and returned the runId of "7cba67de-9e0d-409d-8d93-1b8229432eee".

Step 6 - Check the Job Status Using the runId

The following command shows how to check job status using the runId. Note that when there is more than one job in the flow, the statuses of all the jobs are checked and returned.
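    ctm run status "7cba67de-9e0d-409d-8d93-1b8229432eee"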

Step 7 - Examine the Source Code

Let's look inside the source code. You'll learn about the structure of the job flow and what it should contain.

The first object is called "Defaults". It allows you to define a parameter once for all objects. For example, it includes scheduling using the When parameter which configures all jobs to run according to the same scheduling criteria. The "ActionIfFailure" object determines what action is taken if a job ends unsuccessfully.

This example contains two jobs: CommandJob and ScriptJob. These jobs are contained within a folder named AutomationAPISampleFlow. To define the sequence of job execution, use the Flow object.

Step 8 - Modify the Code to Run in Your Environment

In the code above, the following parameters need to be set to run the jobs in your environment: 

RunAs identifies the operating system user that will execute the jobs.

Host defines the machine where you provisioned the Control-M/Agent. 

Command defines the command to run according to your operating system.

FilePath and FileName define the location and name of the file that contains the script to run.

NOTE: In JSON, the backslash character must be doubled when used in a Windows file path - \\.
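For example, the Windows path C:\Users\controlm\scripts (path hypothetical) is written as:

    "FilePath": "C:\\Users\\controlm\\scripts"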

Step 9 - Rerun the Code Sample

Now that we've modified the source code in the previous step, let's rerun the sample:
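    ctm run AutomationAPISampleFlow.json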

 

Each time you run the code, a new runId is generated. Let's take the new runId and check the job statuses again:
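    ctm run status "<new runId>"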

You can now see that the status of both jobs is EndedOK.

Let's view the output of CommandJob. We need the jobId to get this information:
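    ctm run job:output::get "<jobId>"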

Verify the output contains your script or command details.

Step 10 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or "-i" for short) to the end of the command.

A browser window opens where you can view and manage your jobs.

 

To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.

 

In the next tutorial you will learn how to automate code deployments.

Running File Transfer and Database Queries Job Flow

This example walks you through running file transfer and database query jobs in sequence. To complete this tutorial, you need a PostgreSQL database (other databases can also be used) and an SFTP server. For this example, install the Agent on a machine that has a network connection to these servers.

Step 1 - Finding the right image to provision

The provision images command lists the images available to install.

 

As you can see, there are three available Linux images and two Windows images:

  • Agent.Linux/Agent.Windows - provides the ability to run scripts and commands. 
  • ApplicationsAgent.Linux/ApplicationsAgent.Windows - in addition to Agent.Linux/Agent.Windows, adds plugins to run file transfer jobs and database SQL scripts.
  • BigDataAgent.Linux - in addition to Agent.Linux, adds a plugin to run Hadoop and Spark jobs.

In this example, you will provision ApplicationsAgent.Windows or ApplicationsAgent.Linux, according to the machine that you will use to run the jobs.

Step 2 - Provision the image 
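On the target machine, install the image that matches your platform:

    ctm provision install ApplicationsAgent.Linux

or:

    ctm provision install ApplicationsAgent.Windows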


After provisioning the Agent successfully, you now have a running instance of Control-M/Agent on your host.

Step 3 - Access the tutorial samples

Go to the directory where the tutorial sample is located:
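    cd automation-api-quickstart/101-running-file-transfer-and-database-query-job-flow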

Step 4 - Verify the Code is Valid for Control-M

Let's take AutomationAPIFileTransferDatabaseSampleFlow.json, which contains the job definitions, and verify that the code is valid. To do so, use the build command:
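    ctm build AutomationAPIFileTransferDatabaseSampleFlow.json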

If the code is not valid, an error is returned.

Step 5 - Examine the Source Code

Let's look inside the source code. You'll learn about the structure of the job flow and what it should contain.

The first object is called "Defaults". It allows you to define a parameter once for all objects. For example, it includes scheduling using the When parameter which configures all jobs to run according to the same scheduling criteria. The Defaults also includes Variables that are referenced several times in the jobs. 

The sample contains two jobs: GetData and UpdateRecords. GetData transfers files from the SFTP server to the host machine; UpdateRecords performs a SQL query on the database. The jobs are contained within a folder named AutomationAPIFileTransferDatabaseSampleFlow. To define the sequence of job execution, use the Flow object.

The sample also includes the following three connection profiles:

  • SFTP-CP is used to define access and security credentials for the SFTP server
  • DB-CP is used to define access and security credentials for the database
  • Local-CP is used to define access and security credentials for files that are transferred to the local machine
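A sketch of what these three profiles can look like (the Type values are standard Automation API connection profile types, but the remaining field names vary by profile type and version; check the Code Reference before use):

    "SFTP-CP": {
      "Type": "ConnectionProfile:FileTransfer:SFTP",
      "TargetAgent": "<agent host>",
      "HostName": "<sftp server>",
      "User": "<sftp user>",
      "Password": "<password>"
    },
    "DB-CP": {
      "Type": "ConnectionProfile:Database:PostgreSQL",
      "TargetAgent": "<agent host>",
      "Host": "<database server>",
      "Port": "5432",
      "DatabaseName": "<database>",
      "Username": "<db user>",
      "Password": "<password>"
    },
    "Local-CP": {
      "Type": "ConnectionProfile:FileTransfer:Local",
      "TargetAgent": "<agent host>",
      "User": "<os user>",
      "Password": "<password>"
    }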

Step 6 - Modify the Code to Run in Your Environment

In the code sample, replace the values of "TargetAgent" and "Host" with the host name of the machine where you provisioned the Control-M/Agent.

 

Replace the value of "SrcDataFile" with the file that is transferred from the SFTP server and the value of "DestDataFile" with the path of the transferred file on the host machine.

 

You need to modify the path to the samples directory for them to run successfully in your environment. Replace the path /home/<USER>/automation-api-quickstart/101-running-file-transfer-and-database-query-job-flow with the location of the samples you installed on your machine.

 

Replace these parameters with the credentials used to log in to the SFTP server.

 

Replace these parameters with the credentials used to access the database server. 

Replace these parameters with the credentials used to read and write files on the host machine.


Step 7 - Rerun the Code Sample

Now that we've modified the source code in the previous step, let's rerun the sample:
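    ctm run AutomationAPIFileTransferDatabaseSampleFlow.json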

 

Each time you run the code, a new runId is generated. Let's take the new runId and check the job statuses again:
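    ctm run status "<new runId>"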

You can now see that the status of both jobs is EndedOK.

Let's view the output of GetData and then of UpdateRecords. We need each job's jobId to get this information:
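    ctm run job:output::get "<jobId of GetData>"
    ctm run job:output::get "<jobId of UpdateRecords>"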

Step 8 - Using the Workbench Debugger

When using Control-M Workbench, you can view the run commands in an interactive user interface by adding "--interactive" (or "-i" for short) to the end of the command.

A browser window opens where you can view and manage your jobs.


To learn more about what you can do with the Control-M Automation API, read through the Code Reference and Automation API Services.


In the next tutorial you will learn how to automate code deployments.

 

Tutorial - How to Automate Code Deployment

This tutorial teaches you how to use a script to automate DevOps tasks. In order to complete this tutorial, you need a valid Control-M endPoint, username and password.

Step 1 - Setting the Control-M Environment

The first task when starting to work with Control-M Automation API is to configure the Control-M environment you are going to use. An environment is a combination of an endPoint, username, and password.

An endPoint looks like the following: 
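    https://<controlmEndPointHost>:8443/automation-api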

Let's add an environment and name it ciEnvironment.

The command below shows how to do this:
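    ctm environment add ciEnvironment "https://<controlmEndPointHost>:8443/automation-api" "<controlmUser>" "<controlmPassword>"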

You can also deploy to a Workbench environment: the endpoint can point to the Workbench on localhost, with workbench as both the username and password.

Step 2 - Access the tutorial samples

Go to the directory where the tutorial sample is located:

Step 3 - Deploy to the ciEnvironment

You can use the following command to deploy the code to a specific environment. The "-e" is used to specify the destination environment.
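    ctm deploy <definitionsFile>.json -e ciEnvironment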

Step 4 - Automate Deploy

Let's automate the deployment of Control-M object definitions from the source directory to ciEnvironment.
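A minimal shell sketch (it assumes every .json file in the source directory is a definitions file):

    for f in *.json; do
      ctm deploy "$f" -e ciEnvironment
    done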

 

 

This code can be used in Jenkins to push Git changes to Control-M.

Step 5 - Automate Deploy with Python Script

You can automate the deployment of Control-M object definitions from the source directory to ciEnvironment with a Python script using the REST API.
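A sketch using the third-party requests library. The /session/login and /deploy endpoints, the Bearer token header, and the multipart field name "definitionsFile" follow the REST API documentation, but verify them against your version; the file name is illustrative:

    import requests

    ENDPOINT = "https://<controlmEndPointHost>:8443/automation-api"

    # Log in to obtain a session token.
    login = requests.post(
        ENDPOINT + "/session/login",
        json={"username": "<controlmUser>", "password": "<controlmPassword>"},
        verify=False,  # only for self-signed certificates, e.g. Workbench
    )
    login.raise_for_status()
    token = login.json()["token"]

    # Deploy a definitions file as a multipart upload.
    with open("AutomationAPISampleFlow.json", "rb") as f:
        deploy = requests.post(
            ENDPOINT + "/deploy",
            headers={"Authorization": "Bearer " + token},
            files={"definitionsFile": f},
            verify=False,
        )
    deploy.raise_for_status()
    print(deploy.json())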

 

Step 6 - Learn More about Writing Code

The next step is to read through the Code Reference or about the available Automation API Services. You'll learn much more about what you can do with the Control-M Automation API.


 

Building a docker container for batch applications

Jobs in Control-M have a host attribute: either a host where the application and Control-M/Agent reside, or a host group, which is a logical name for a collection of hosts.

Control-M/Agents reside on the application hosts. You can group them into a host group. Specifying that a job runs on a host group causes Control-M/Server to balance the load by directing jobs to the various hosts in the host group.

The container provided includes a Control-M/Agent. When running the container instance, the Control-M/Agent self-registers and is added to the host group. 

When a job is defined to run on the host group, it waits for at least one registered container in the host group before it starts to run inside a container instance.

Stopping the container unregisters the Control-M/Agent and removes it from the host group.

To start the tutorial, you need a Linux account with Docker installed. You also need a valid Control-M endpoint, username, and password.

Step 1 - Building the container image

Type the following command to fetch the docker example from the GitHub repository:
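    git clone https://github.com/controlm/automation-api-quickstart.git

If you already cloned the repository in Setting up the prerequisites, skip the clone and change into your existing copy.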

 


Review the docker file to see how the provisioning of a Control-M/Agent within a docker container is set up.
Type the following commands on the machine to build a docker container image of Control-M, filling in a valid Control-M endpoint, username, and password:
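A sketch of the build command. The image tag and the build-argument names for the endpoint and credentials are assumptions; check the Dockerfile in the sample for the exact parameter names:

    docker build -t controlm \
      --build-arg AAPI_ENDPOINT="https://<controlmEndPointHost>:8443/automation-api" \
      --build-arg AAPI_USER="<controlmUser>" \
      --build-arg AAPI_PASS="<controlmPassword>" \
      .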

Step 2 - Running a container instance 

Type the following commands where:

CTM_SERVER - the Control-M/Server name. You can use ctm conf servers::get to find the exact name.

CTM_HOSTGROUP - a host group name. The container adds itself to this host group. The name must not contain spaces.

CTM_AGENT_PORT - the Control-M/Agent port number. This port must be free on the docker host.
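A sketch of the run command, passing the variables above as environment variables (the image tag matches the build step; check run_register_controlm.sh for how the variables are consumed):

    docker run -d \
      -e CTM_SERVER="<server name>" \
      -e CTM_HOSTGROUP="<host group>" \
      -e CTM_AGENT_PORT="<port>" \
      -p <port>:<port> \
      controlm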

Review the run_register_controlm.sh script to see how the Control-M/Agent is registered to the host group.

Step 3 - Viewing self-registered container instances in Control-M

This command allows you to see that an agent with the name <docker host>:AGENT_PORT was added to the list of agents of the Control-M/Server.
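    ctm config server:agents::get "<server name>"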

This command allows you to see that an agent with the name <docker host>:AGENT_PORT was added to the host group.
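    ctm config server:hostgroup:agents::get "<server name>" "<host group>"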

Step 4 - Running a simple job flow in the docker container  

View the JobsRunOnDockerSample.json file to see the jobs that will run, and whose output you will inspect, inside the container.

Edit the file ../JobsRunOnDockerSample.json and replace "Host": "<HOSTGROUP>" with the host group name created in Step 2.

Run the jobs in the docker container:
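    ctm run JobsRunOnDockerSample.json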

You can then inspect how the jobs ran inside the container, for example by checking the run status and the job output as in the earlier tutorials.

Step 5 - Decommissioning the container instance

To remove and un-register the container Control-M/Agent from the host group and Control-M, type the following command:
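Stopping the container triggers the self-unregistration described above; the container name or ID depends on your docker setup:

    docker stop <container-id>
    docker rm <container-id>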


© Copyright 2015 BMC Software, Inc.