Product deployment for infrastructure components
Product deployment provides a quick way to deploy BMC products to other environments, such as another sysplex, an LPAR in the same sysplex, or another DB2 subsystem.
The generated JCL uses $$INC members, which make it easy to change values that are specific to certain environments. Job names and job categories make it easy to identify which jobs are needed for your scenario.
Consider the following scenarios when deploying products that use infrastructure components.
Product deployment scenarios involving infrastructure
Assume that the following conditions apply to this scenario:
The sandbox area has two LPARs (S1 and S2) in sysplex X1, and you want these two LPARs to share the same runtime data sets.
In the development environment, sysplexes X1 and X2 share DASD. You want development LPARs D1 and D2 to use different runtime data sets from the sandbox so that you can control when maintenance is applied.
The test and production environments have two LPARs each (T1 and T2 for test, and P1 and P2 for production) in sysplex X3. Because X3 does not share DASD with the sandbox, the runtime data sets will need to be transported to X3. The LPARs in X3 share DASD with each other (that is, T1, T2, P1, and P2 share DASD), but you want to use different runtime data sets between test and production so that you can control when maintenance is applied. Data sets for test and production will need to be transported separately and given different HLQs to keep them distinct.
On sysplexes X1 and X2, each sysplex uses its own RTCS system registry.
For this environment, the values for the runtime customization instances could be as shown in the following table:
Example of runtime customization values
Runtime data set HLQ
For the environment in the example shown in this section, the infrastructure data sets should be set up as shown in the following table.
| Infrastructure component | Sandbox | Development | Test | Production |
| --- | --- | --- | --- | --- |
| RTCS system registry | Shared between S1 and S2 | Shared between D1 and D2 | Shared between T1, T2, P1, and P2 | Shared between T1, T2, P1, and P2 |
| LGC registry | Shared between S1 and S2 | Shared between D1 and D2 | Shared between T1 and T2 | Shared between P1 and P2 |
| NGL | Unique per PIID | Unique per PIID | Unique per PIID | Unique per PIID |
The LGC product-specific registry is shared within each of the sandbox, development, test, and production environments. The installation process configures the DBCENV data set name to include &SYSCLONE so that it is, by default, unique per LPAR.
The following table shows an example of names of components and members of the infrastructure components. The example shows DBC SSIDs, DBC groups, DBCENV data sets, and shared LGC repositories.
Example of infrastructure component values
| DBC SSID | DBC group | DBCENV data set name | LGC repository |
- The DBC group name is the same within the sandbox, development, test, and production environments, respectively. This organization allows communication between the DBC subsystems on LPARs within the same environment, but not across DBC groups.
- The DBCENV data set contains the SYSCLONE value to make it unique per LPAR. This allows you to deploy maintenance in a controlled manner, one LPAR at a time.
- The LGC registry is shared per sandbox, development, test, and production environment, respectively. This organization allows sharing option sets in each of those environments, but not between environments. For example, option sets for production will not be shared with the test, development, or sandbox environments.
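As an illustration of the &SYSCLONE convention, a DBCENV reference in the DBC started-task JCL might look like the following sketch. The DD name and the data set prefix are hypothetical; use the names from your own installation.

```jcl
//* Hypothetical DD statement referencing the DBCENV data set.
//* &SYSCLONE resolves differently on each LPAR, so each LPAR
//* reads its own copy (for example, BMC.DBCENV.T1 on an LPAR
//* whose SYSCLONE value is T1).
//DBCENV   DD DISP=SHR,DSN=BMC.DBCENV.&SYSCLONE.
```

Because the symbol resolves at allocation time on each system, deploying maintenance to one LPAR's DBCENV copy does not affect the other LPARs.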
To use product deployment with the infrastructure components
Use this procedure to deploy infrastructure components from one DBC to another by replicating the generated jobs and changing only the values that are necessary for the next environment (for example, when deploying from sandbox to development, from development to test, or from test to production).
- Edit and run the $$JCLCPY member in the installation JCL data set to make a copy of the original JCL to be used for deployment. This job changes all references to the original JCL data set name to the new name in all members.
- Review the $$INC members, and copy the SET statements for any values that need to change for the new LPAR and sysplex (such as data set HLQs) to the $$INCUSR member. The $$INCINF member is specific to infrastructure components.
Tip: Instead of editing each $$INC member with new values, copy the SET statements for the variables that you want to change to the $$INCUSR member. Any values in the $$INCUSR member override the values in the other $$INC members. Specifying only the variables that must change from one system to another in the $$INCUSR member speeds up locating and changing the values and helps keep you organized.
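As a sketch, the $$INCUSR overrides for a new environment might look like the following. The symbol names and values shown here are hypothetical; use the symbol names that appear in your generated $$INC members.

```jcl
//* Hypothetical $$INCUSR overrides when deploying from
//* development to test. Symbol names and values are examples
//* only; copy the actual SET statements from your $$INC members.
//         SET RTHLQ=BMCT.RUNTIME     RUNTIME DATA SET HLQ
//         SET DBCSSID=DBCT           DBC SUBSYSTEM ID
//         SET DBCGRP=DBCGRPT         DBC GROUP NAME
```

Keeping only the changed symbols in $$INCUSR means the member doubles as a record of exactly what differs between the two environments.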
- Run the installation jobs beginning with the appropriate series and continuing to the end:
Run these jobs if you changed the runtime HLQ and you plan to create new runtime data sets populated from the original SMP/E target libraries. Otherwise, copy the current runtime libraries to the runtime libraries for the new environment. A new set of runtime libraries allows you to deploy maintenance in a controlled manner from one environment to another at separate times. Use your own process to transport the runtime data sets to the other environment if necessary.
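If the target sysplex does not share DASD with the source (X3 in this scenario), one common transport approach is a DFSMSdss logical dump and restore. The following sketch is not part of the generated installation JCL, and the data set names and filter are hypothetical:

```jcl
//* Hypothetical DFSMSdss job that dumps the runtime data sets
//* to a sequential file for transport to sysplex X3. Restore
//* them on the target with a matching RESTORE step, optionally
//* renaming to the new HLQ to keep test and production distinct.
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDS    DD DSN=XPORT.RUNTIME.DUMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(BMCT.RUNTIME.**)) -
       OUTDDNAME(OUTDS) TOLERATE(ENQFAILURE)
/*
```

Any equivalent site-standard process (for example, tape or network file transfer of the dump data set) works as well; the installation does not mandate a specific transport method.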
Run these jobs if you are deploying to a new sysplex environment. For System and SQL Performance products that use the DBC, run the $310VDOM job if you are deploying to a different DBC group that has its own runtime data sets.
Run these jobs if you are deploying infrastructure to a DBC:
- The $420INF job is generated either to create the DBCENV data set for the first time or to verify that it exists, if you indicated during installation that the DBC PROC references an existing one. By default, the Installation System allocates the DBCENV data set name containing the &SYSCLONE symbolic variable, so it is important to run this job on the target LPAR. Using SYSCLONE allows maintenance to be deployed by LPAR. The job also creates or copies the customized pppvvrm members (where ppp is the 3-character product code and vvrm is the version, release, and maintenance level of the product being installed) to a staging BMCENV runtime data set.
- The $445COPY job copies customized procedures and CLISTs from the JCL data set to user library data sets.
- The $450STRT job contains instructions for starting the DBC. Comments explain different upgrade scenarios. For example, the DBC does not need to be restarted if it already references the DBCENV data set and none of the products being installed need updated infrastructure maintenance (DBC, LGC, or NGL).
- Run the $490TRIG job when you are ready for the DBC agents to be upgraded. Everything prior to this job is staged in the BMCENV runtime data set. This job updates the DBCPRODS member in the DBCENV data set, which tells the DBC which product versions to define. After this job runs, the DBC begins defining or redefining the products that have been newly installed or upgraded and stops and starts the agents.
- $500-series (if generated)
- Replicate the installation to other DB2 subsystems using the $700 series of jobs.
For more information, see Replicating the installation to other DB2 subsystems.
- (For new installations of MainView for MQ and MainView Transaction Analyzer only) Run the $946xxxx (for MainView Transaction Analyzer) and $948xxxx (for MainView for MQ) jobs from the STGSAMP data set on each LPAR to define the NGL logset structure and the PIIDs.
Note: These jobs do not need to be regenerated or rerun if you already have a version of these products installed.
- Verify that the products are working correctly:
- Check the DBC log and DBCPRINT for messages to ensure that the product agents are running successfully.
- Invoke the product CLIST (if applicable) or run a batch job to verify that the products are working correctly.