How to reuse Control-M jobs and workspaces for different environments


I am new to Control-M, but my team has recently decided to integrate it into our workflow. We currently have a workspace with a set of OS jobs that run commands on Unix servers, each scheduled at a different time of day, e.g.:

cd /path/to/snapshot && docker compose up    # job1 at 6:00 am
cd /path/to/snapshot && docker compose down  # job2 at midnight
cp /path/to/some/datafiles-$(date +%Y%m%d) /path/to/snapshot/data  # job3 at 5:45 am
cd /path/to/snapshot && mv data/* archive    # job4 at 12:30 am

The jobs are very similar, but each environment/server (UAT, prod, dev, testing) uses different paths. How can I reuse the jobs while substituting a different path for each environment? I would also like to be able to order the workspace once per environment, without having to create a separate set of jobs preset with each environment's variables.

Edit: I'm thinking this might have something to do with setting up a Host Group and executing a job on all hosts in the host group, but I would need a place on each host to define variables that the executing job can read.

Edit 2: the awk one-liner script I am using:

hostname | awk '$1=="servername"{ path="/app/appname/snapshot/" } END{ "echo $(ls -td "path"* | head -1)" | getline result; print result }'

I'm capturing this as an output in the job's actions, so I think my other jobs should be able to reference it.

Edit 3: the output is being captured as a local variable and is used in my successor jobs, so it sort of works. I would like to try LIBMEMSYM as mentioned in the answers, but the Control-M installation my company has seems really primitive and lacking a lot of features, and I do not have access to the server Control-M is installed on to go and put a file there.

There also seem to be a lot of Control-M bugs, e.g. the local variable truncates the captured value so it does not work properly, so I am waiting on the vendor, who is coincidentally on leave :/


2 Answers

VonC (Best Answer)

In Control-M, to reuse jobs and workspaces across different environments without having to create new jobs for each environment, you can leverage parameterization features like variables or built-in parameters that allow you to change certain values dynamically based on the environment where the job is running.

Begin by setting up user-defined variables within Control-M to hold the dynamic parts of your job scripts, such as file paths. For instance, create variables like %%PATH_TO_SNAPSHOT%% to hold the values that will differ between environments.

Determine the most appropriate scope for each of your variables based on where they need to be accessible:

  • Local Scope: Choose this scope for variables that are pertinent to a single job.
  • Named Pool Scope: Utilize this scope for variables that will be shared across a select group of jobs. The jobs can reference variables from a named pool they are associated with.
  • SMART Folder Scope: Opt for this scope when a set of jobs and sub-folders will share the same variables, facilitating an organized structure where all entities within the SMART folder can reference the defined variables.

Also take advantage of system variables to make your user-defined variables more dynamic. These predefined variables can be used within user-defined variables to bring system- or environment-specific details into your job definitions automatically.

Then, adapt your existing job scripts to use user-defined variables instead of hard-coded values. For instance:

cd %%PATH_TO_SNAPSHOT%% && docker compose up
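
Applied to the four jobs from the question, the same variable would replace every hard-coded path (a sketch, reusing the %%PATH_TO_SNAPSHOT%% name suggested above):

cd %%PATH_TO_SNAPSHOT%% && docker compose up                           # job1 at 6:00 am
cd %%PATH_TO_SNAPSHOT%% && docker compose down                         # job2 at midnight
cp /path/to/some/datafiles-$(date +%Y%m%d) %%PATH_TO_SNAPSHOT%%/data  # job3 at 5:45 am
cd %%PATH_TO_SNAPSHOT%% && mv data/* archive                           # job4 at 12:30 am

Only the variable's value then needs to differ per environment; the job definitions stay identical.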

As you order jobs to the AJF (Active Job File), make sure to specify the right user-defined variable or named pool to guide the variable values during the execution of the job. That setup enables the flexible and appropriate assignment of variable values depending on the specific run instance.

Utilize the variable simulation functionality to preview the resolved values of variables in your jobs without executing them. That feature allows you to confirm the correct resolution of variables ahead of the actual job execution, helping to prevent issues during real runs.


Does the system (environment?) variable get stored on each Host?

Let's say I have server A and server B.
I define %%PATH_TO_SNAPSHOT%% as a user-defined variable whose value is a system variable, i.e. %%ENV_PATH%% (/path/A on server A and /path/B on server B).
When it resolves, does it read /path/A and /path/B accordingly?

In Control-M, environment system variables are indeed determined based on the host where the job is running. They reflect attributes or properties of the environment and are not specific to a job. However, they are predefined by Control-M and cannot be set to custom values for each host through the Control-M interface; they are meant to automatically reflect details of the system where the job is being executed.

Regarding your specific setup where you have user-defined variable %%PATH_TO_SNAPSHOT%% taking a value based on a system variable %%ENV_PATH%%, you are aiming to use a custom environment variable (ENV_PATH) that will have different values on different servers.

In typical UNIX or Linux environments, environment variables like ENV_PATH can indeed be set on each host individually, and Control-M jobs running on those hosts can reference those variables. But such variables have to be set up outside of Control-M, directly on the hosts themselves, using the operating system's methods for defining environment variables (e.g., in startup scripts, user profiles, or system-wide environment settings).
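
For example, a short per-host profile script could carry the value (a sketch; the file name is a placeholder, and each server's copy would export its own path):

# /etc/profile.d/controlm_env.sh on server A
# (the same file on server B would export /path/B instead)
export ENV_PATH=/path/A

Note that whether a Control-M job's shell actually sources these profile files depends on how the agent spawns the shell, so verify this on your hosts.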

To have your Control-M jobs reference the ENV_PATH variable and use its value as the effective %%PATH_TO_SNAPSHOT%%, your job definitions would include script commands that read ENV_PATH from the system environment at runtime. That might look something like:

export PATH_TO_SNAPSHOT="$ENV_PATH"
cd "$PATH_TO_SNAPSHOT" && docker compose up

ENV_PATH is read from the system's environment variables and its value is assigned to a shell variable PATH_TO_SNAPSHOT, which is then used in the script commands.

To validate that the ENV_PATH variable is being correctly read and used in your jobs, you might use Control-M's variable simulation feature to view the resolved values of variables in the job definitions before running the jobs.

This approach means managing the ENV_PATH variable directly at the operating-system level on each host, separate from the Control-M environment. It is a workaround that brings host-specific values into Control-M job definitions by leveraging each host's environment variables.


We decided not to set the environment variable on the host machines.
For now we are sticking with Unix wildcards and globbing to match the paths (so we can't set them arbitrarily), plus host groups, running the job on all hosts in the host group. It's not a perfect solution, as I would really love to be able to set a path for each host :/ but I guess I'll have to make do.

I was able to make a user-defined variable, i.e. %%PATH_VAR, have a value of, let's say, %%PATH_VALUE, but wasn't able to change PATH_VALUE dynamically for each environment.

You are correct: Control-M does not inherently support assigning different values to a single variable based on the environment or host it runs on, at least not directly within the job definition or SMART folders where user-defined variables are typically defined.

Since you are using Unix wildcards and glob patterns to match the paths, you might have to employ certain strategies to distinguish between environments, perhaps leveraging the naming conventions or directory structures that are specific to each environment. It can still offer a workable solution given the limitations.
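
A glob-based selection in that spirit might look like this (a sketch assuming the environments share a common directory layout; the pattern is a placeholder, and the ls -td | head -1 trick mirrors the OP's awk script):

# Pick the most recently modified directory matching the pattern.
SNAPSHOT_DIR=$(ls -td /app/*/snapshot/ | head -1)
cd "$SNAPSHOT_DIR" && docker compose up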

However, there might be another way around this limitation by creating a preliminary script job in Control-M that runs first to determine the correct path dynamically based on the host it is running on, and then stores that path in a file or a Control-M named pool variable that subsequent jobs can reference.

You would need to:

  • Create a script that is executed on the respective server. That script would contain logic to determine the current host and set PATH_VALUE accordingly. For instance, it might look up the hostname and then use a series of if-else (or case) statements to set PATH_VALUE to the correct value for that host (see the sketch after this list).

  • Have this script output the determined PATH_VALUE to a file, or assign it to a Control-M named pool variable using the ctmvar utility or a similar mechanism.

  • Read, in the jobs that need to use PATH_VALUE, the value from the file or the named pool variable to get the dynamically determined path.

  • Set your jobs in a flow where this preliminary script job runs first, before the other jobs that depend on PATH_VALUE.
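
A minimal sketch of such a preliminary script, assuming hypothetical hostnames and paths (the ctmvar line is left commented out because its availability and exact syntax depend on your installation):

#!/bin/sh
# Map the current host to its snapshot path.
case "$(hostname)" in
  uat-server)  PATH_VALUE=/app/uat/snapshot ;;
  prod-server) PATH_VALUE=/app/prod/snapshot ;;
  dev-server)  PATH_VALUE=/app/dev/snapshot ;;
  *)           echo "unknown host: $(hostname)" >&2; exit 1 ;;
esac

# Option 1: write the value to a file that successor jobs read.
echo "$PATH_VALUE" > /tmp/path_value.txt

# Option 2: publish it as a Control-M variable via the ctmvar utility,
# if that utility is installed on this host.
# ctmvar -action set -var "%%\ENV_PATH" -varexpr "$PATH_VALUE"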

That way, while you still would not be defining different values for a variable directly within Control-M, you would be setting up a dynamic mechanism to determine the correct value at runtime based on the execution environment.
It is more complex than setting the variable directly, but it is a workaround that can fulfill your requirements given the constraints.


The OP is using an awk one-liner script:

hostname | awk '$1=="servername"{ path="/app/appname/snapshot/" } END{ "echo $(ls -td "path"* | head -1)" | getline result; print result }'

Using the output from one job in another job within Control-M can indeed be accomplished using out conditions and the %%OUTCOND%% system variable.

  • In the job where the awk one-liner script is executed, set up an 'out condition' to capture the output. You will do this by specifying the out condition in the job definition, setting it to capture the output from the script.

  • In the subsequent jobs where you want to use this output, you would reference this out condition using the %%OUTCOND%% system variable in Control-M. The variable would be used to dynamically get the value from the output of the previous job.

You would structure your jobs in a flow such that the job capturing the path with the awk script is a prerequisite for the subsequent jobs. That ensures that the path is determined before the jobs that need to use it are executed.

In your subsequent jobs, you would use the %%OUTCOND%% variable to get the path determined by the first job. It might look something like this in a script command:

cd %%OUTCOND(name_of_out_condition)%%
#... (rest of your script)

Where name_of_out_condition would be replaced with the actual name of the out condition you defined in the first job.


From the discussion:

I think that the Control-M version we have has some differences from what you may be used to.
The out condition is only a condition and does not capture any output; there is a separate section for that.
I did capture the output into a global variable, but when I try to access it in any of the dependent jobs, echo %%\ENV_PATH resolves to echo CTMERRENV_PATH. I also tried echo ${ENV_PATH}, but that's undefined and becomes "".

The ENV_PATH global variable is created, though.
It worked when I changed it to a local variable, and my other jobs could use %%ENV_PATH.

  • Setting Up the Job with the AWK Script: In the first job where the AWK script is executed, instead of setting an 'out condition', you should focus on capturing the output correctly into a variable. The script outputs the desired path, and this output is then captured into a Control-M local variable named ENV_PATH.

  • Capturing the Output into a Local Variable: After running the AWK script, the output (which is the path) is captured into a local variable named ENV_PATH. To do this, you must set up an output capture in the job definition to catch the output of the script and store it in the ENV_PATH variable.

  • Using the Local Variable in Subsequent Jobs: In the subsequent jobs that depend on the path determined by the first job, this ENV_PATH local variable can be referenced directly. When referencing it in script commands in the subsequent jobs, the variable should be called with the %%ENV_PATH syntax, as you have discovered.

    cd %%ENV_PATH%%
    #... (rest of your script)
    
  • Setting Job Dependencies: Ensure that the job dependencies are correctly configured so that the job determining the ENV_PATH runs and successfully captures the output before the subsequent jobs that use the ENV_PATH variable are executed (see the sketch below).
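
Putting the working approach together (a sketch; the awk line is the OP's own, and the output-capture step is the one described above):

# Job 1 command - prints the newest snapshot path; an output-capture
# action in the job definition stores the printed value in ENV_PATH.
hostname | awk '$1=="servername"{ path="/app/appname/snapshot/" } END{ "echo $(ls -td "path"* | head -1)" | getline result; print result }'

# Any successor job - Control-M substitutes %%ENV_PATH%% before the shell runs.
cd %%ENV_PATH%% && docker compose up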

user22511941

You need to look at LIBMEMSYM. These are text files that reside on the Control-M Server (or a location with such access). Create your jobs (i.e. the ones listed in your original post) using variables, and in each job define the appropriate LIBMEMSYM entry pointing at the appropriate environment (e.g. PROD, SYSTEM TEST, UAT, DEV, etc.).

You can also define the %%LIBMEMSYM value itself using Control-M variables.

So on your Control-M Server you have a txt file path like this -

/home/controlm/cntrlm_server/%%APPLGROUP..txt

The job will take whatever is in the Sub Application field of the job definition and look for a txt file in that location. E.g. if sub application = UAT then define a UAT.txt on your Control-M Server (in /home/controlm/cntrlm_server/).

/home/controlm/cntrlm_server/UAT.txt should be something like -

%%CM_ORA_SERVR=UAT
%%CM_ORA_DBASE=UA01
%%CM_ORA_SCRPT=/opt/oracle/local/custom/bin

When you use this for your UAT jobs it will resolve to those values.
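
For instance, a UAT job's command line could reference those variables directly (a sketch; run_export.sh and its arguments are hypothetical, used only for illustration):

# With Sub Application = UAT, these resolve from UAT.txt above.
%%CM_ORA_SCRPT%%/run_export.sh %%CM_ORA_SERVR%% %%CM_ORA_DBASE%%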

However, PROD jobs (so long as "PROD" is in the Sub Application field) would have a LIBMEMSYM defined like so -

/home/controlm/cntrlm_server/PROD.txt should be something like -

%%CM_ORA_SERVR=PRD
%%CM_ORA_DBASE=PR01
%%CM_ORA_SCRPT=/opt/oracle/local/custom/bin

but the Control-M job definitions would be the same (apart from the fields that tag it as prod).

See https://www.youtube.com/watch?v=WWkmEfow5iQ for more details.

Also consider using POOLSYM if you want to get a little more funky.