Deploying BMC AMI Platform
This topic describes how to download and deploy BMC AMI Platform.
Before you begin
Make sure you've reviewed the System requirements topic before you begin this process.
Downloading BMC AMI Platform from EPD
- Log in to Electronic Product Distribution - BMC Mainframe.
- Select All/Trial Products.
- Search for and select BMC AMI Platform, and then download the BMC-AMI-PLATFORM-2.2.00.zip file.
Deployment Structure
The extracted package supports deployment on both Kubernetes and OpenShift.
The extracted directory contains:
│
├── kubernetes-deployment/
│   └── Install/
│
└── openshift-deployment/
    └── Install/
Select your deployment type:
- For Kubernetes, use the kubernetes-deployment/Install directory.
- For OpenShift, use the openshift-deployment/Install directory.
To deploy BMC AMI Platform on Kubernetes and OpenShift
- Log in to the Control Plane node of the Kubernetes cluster (or the OpenShift master node) as a user with sufficient permissions to run kubectl commands, and copy the downloaded zip file to your preferred folder, such as /home.
- Extract all deployment files into this directory by using the following command:
unzip BMC-AMI-PLATFORM-2.2.00.zip -d BMC-AMI-PLATFORM-2.2.00
The extracted path should be /<extracted_dir>/BMC-AMI-PLATFORM-2.2.00.
- Provide permission to the folder by using the following command:
chmod -R 777 /<extracted_dir>/BMC-AMI-PLATFORM-2.2.00
- Navigate to the correct deployment folder (kubernetes-deployment or openshift-deployment).
- Verify that the following files and directories are present and correctly structured:

Root structure:
Install/
├── ansible-playbook/
│   └── site.yml
├── config/
│   └── nfs_volume_paths.yaml
├── helm_charts/
│   ├── 01-helm-service-registry/
│   ├── 02-helm-data-service/
│   ├── 03-helm-milvusdb-service/
│   ├── 04-helm-security-service/
│   ├── 05-helm-zosconnector-service/
│   ├── 06-helm-core-common/
│   ├── 07-helm-amiai-chart/
│   ├── 08-helm-api-gateway/
│   ├── 09-helm-swarm-redis-cluster/
│   ├── 10-datastore-init-scripts/
│   ├── 11-helm-elasticsearch/
│   ├── 12-helm-core-notification/
│   ├── 13-helm-autocomplete/
│   └── 14-helm-kafka-audit-service/
├── static/
│   └── namespaces.yaml
├── secrets/
├── setup-script.sh
└── scripts/
    ├── code-conversion.sh
    ├── code-explain.sh
    └── oi-rc.sh
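As a quick sanity check, the layout above can be verified with a short shell loop. This is a sketch: run it from inside the Install directory (or set BASE to its path); the path list mirrors the documented structure.

```shell
# Sanity-check the Install directory layout; BASE defaults to the current dir.
BASE="${BASE:-.}"
for p in ansible-playbook/site.yml config/nfs_volume_paths.yaml \
         helm_charts static/namespaces.yaml secrets setup-script.sh scripts; do
  if [ -e "$BASE/$p" ]; then
    echo "OK      $p"
  else
    echo "MISSING $p"
  fi
done
```

Any MISSING line means the extraction is incomplete and the setup script is likely to fail partway through.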
To run the setup and deployment script
Run the following commands:
cd /<extracted_dir>/BMC-AMI-PLATFORM-2.2.00/<deployment_type>/Install
./setup-script.sh

Use the following fields to assist you during the configuration process:
Kubeconfig file
Select how to configure Kubernetes access:
- Use the current Kubernetes context (default: ~/.kube/config).
- Provide the path to a kubeconfig file.
- Paste kubeconfig YAML (end input with a single line containing: EOF).
To configure the kubeconfig file, you have the following three options:
- (Recommended) To use your current kubectl context, enter 1.
- If you’re using a different kubeconfig file, enter 2 and provide its full path.
- To paste the kubeconfig content manually, enter 3.
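If you are unsure which option applies, a quick check like the following can help; this is a sketch, and the default path comes from the prompt above.

```shell
# Suggest a kubeconfig option based on what is already on this machine.
CFG="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$CFG" ]; then
  echo "Found $CFG -- option 1 (current context) is usually simplest"
else
  echo "No kubeconfig at $CFG -- choose option 2 (file path) or option 3 (paste YAML)"
fi
```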
Registry and project
Based on where you plan to download the BMC AMI Platform images, select one of the following values:
- Y—Use the default values for the BMC container registry (distribution.bmc.com / bmcamiplatform) and the BMC project name.
- N—Add your own values to the following fields:
- Docker registry URL
- Project name
Docker image registry user name
Enter the Docker image registry user name. It is visible in the UI.

Docker image registry password
Enter the Docker image registry password. It is visible in the UI.

Docker Hub registry for the Uptrace image
To use the Uptrace image in BMC AMI Platform, you must provide the following Docker Hub credentials to pull it from the docker.io registry:
- Provide Docker Hub registry credentials (required for pulling the uptrace/uptrace:2.0.1 image).
- Configure Docker Hub credentials for Uptrace? [Y/N]:
- Docker Hub Registry URL [docker.io]:
- Docker Hub Username:
- Docker Hub Password:
TLS certificate path
Enter the TLS certificate path. For more information, see SSL and TLS requirements.

TLS key path
Enter the TLS key path. For more information, see SSL and TLS requirements.
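Before entering the paths, it can help to confirm that the certificate and key actually match. The snippet below is an illustrative sketch: it generates a throwaway self-signed pair so that it is self-contained; substitute your real certificate and key paths in practice.

```shell
# Generate a throwaway self-signed pair purely for illustration:
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
        -days 1 -subj "/CN=example.local" 2>/dev/null

# A certificate and key match when their public-key digests are identical:
cert_pub=$(openssl x509 -in tls.crt -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in tls.key -pubout 2>/dev/null | openssl sha256)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH: certificate and key do not belong together"
fi
```

A mismatched pair is a common cause of TLS failures after deployment, so this check is cheaper than debugging it later.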
Password for default users
Enter the password for all the default users in the system.
A summary table of the relevant fields is displayed for your confirmation.
NFS server (IP or host name)
Enter your NFS server host name or IP.

NFS server path
Enter your NFS server path details.

Local mounted path
Enter the local mounted path (such as /mnt/data).

BMC AMI Platform machine SSH key
Enter your SSH key (such as /root/.ssh/id_rsa).

BMC AMI Platform application host machine (IP or host name)
Enter the IP or host name of the machine where you run the deployment script.

BMC AMI Platform machine user
Enter the machine user name (such as root or ubuntu).
A summary table of the relevant fields is displayed for your confirmation.
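A quick way to sanity-check the local mounted path before confirming the summary; the path below is the documented example and is an assumption, so adjust it to your environment.

```shell
# Check that the local NFS mount point exists and is writable.
MOUNT="${MOUNT:-/mnt/data}"   # example path from the prompt above
if [ -d "$MOUNT" ] && [ -w "$MOUNT" ]; then
  echo "$MOUNT: ready"
else
  echo "$MOUNT: not present or not writable -- check the NFS mount"
fi
```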
Knowledge hub Embedding Service Configuration (URL and API key)
Enter the embedding service base URL. You must use the following format: <Protocol>://<Hostname>:<Port>. For example: http://127.1.164.139:7070
Enter the embedding service API key.
A summary table of the relevant fields is displayed for your confirmation.
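The required URL format can be validated before you enter it; the sketch below uses the example value from this topic and a simple pattern matching <Protocol>://<Hostname>:<Port>.

```shell
# Validate the embedding service base URL format: <Protocol>://<Hostname>:<Port>
URL="${URL:-http://127.1.164.139:7070}"   # example value from the docs
if printf '%s' "$URL" | grep -Eq '^[A-Za-z]+://[^:/]+:[0-9]+$'; then
  echo "URL format OK: $URL"
else
  echo "URL must match <Protocol>://<Hostname>:<Port>"
fi
```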
Cloud or On-prem deployment
For cloud deployment, enter 1. For On-prem deployment, enter 2.
If you enter 2, the following three prompts appear:
- Enter the address (DNS name, host name, or IP) for BMC AMI Platform UI access.
- Use a custom port instead of the default 443? [Y/N]
- Proceed with this BMC AMI Platform UI URL? [Y/N]
After you select your deployment type, the following question appears: Run Ansible playbook to deploy Helm releases? To complete the process, enter Y. To cancel the entire process, enter N.
After the script finishes running, the following message confirms that the task completed successfully:
Elapsed: <1050s>
Ansible log: /<extracted_dir>/BMC-AMI-PLATFORM-2.2.00/logs/ansible_deploy_20251026_150503.log
To verify the deployment
- Verify that the pods in the deployment namespaces are running successfully, as shown in the following example listings:
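Listings like the ones below can be produced with commands along these lines. This is a sketch: the first namespace name matches the kubectl logs command used later in this topic, while the observability namespace name is a hypothetical placeholder; run kubectl get namespaces to find the actual names in your cluster.

```shell
# List pods per namespace; namespace names here are assumptions --
# run `kubectl get namespaces` to find the ones created by the deployment.
for ns in bmcami-prod-amiai-services bmcami-prod-observability; do
  if command -v kubectl >/dev/null 2>&1; then
    kubectl get pods --namespace "$ns"
  else
    echo "kubectl not found; run on the control-plane node (namespace: $ns)"
  fi
done
```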
NAME READY STATUS RESTARTS AGE
ami-audit-service-5b97fcb8bb-56vcb 1/1 Running 0 22h
ami-audit-service-5b97fcb8bb-q6qgc 1/1 Running 0 22h
ami-core-api-gateway-54bbd97699-wx6wv 1/1 Running 0 22h
ami-core-notification-service-64f8cf9b55-8tf9b 1/1 Running 0 22h
ami-core-security-service-7c98674944-bth96 1/1 Running 0 22h
ami-core-zosconnector-service-6bcc4dcb96-nhcwc 1/1 Running 0 22h
amiai-autocomplete-service-f9f447d8f-9f64p 1/1 Running 0 23h
amiai-security-556c858769-dcrhp 1/1 Running 0 23h
assistant-55d5b9dd55-8k75n 1/1 Running 0 23h
bmcamiplatform-amiai-ocr-6695f44f84-vtsjb 1/1 Running 0 23h
code-conversion-69574bcd96-7sxxm 1/1 Running 0 143m
code-explain-7fcf88c49d-c6cgn 1/1 Running 0 144m
discovery-5679f6b967-2rzz7 1/1 Running 0 23h
docs-expert-b6f854756-qbldj 1/1 Running 0 5h49m
download-artifacts-l5qdm 0/1 Completed 0 23h
download-embeddings-pfvrb 0/1 Completed 0 23h
download-ms-marco-minilm-lfltb 0/1 Completed 0 23h
download-snowflake-arctic-ffhwc 0/1 Completed 0 23h
elasticsearch-0 1/1 Running 0 23h
eurekaserver-7557684d65-98nv8 1/1 Running 0 23h
eurekaserverpeer-5599cc68b-krz94 1/1 Running 0 23h
gateway-5967f5f957-4wzg8 1/1 Running 0 23h
kafka-7dd65687-qqz9m 1/1 Running 0 24h
knowledge-hub-7d45457cb8-gpptj 1/1 Running 4 (3h58m ago) 23h
load-embeddings-xrnlp 0/1 Completed 0 23h
milvus-stack-0 2/2 Running 0 24h
nginx-697456c6bc-k8z8c 1/1 Running 1 23h
ops-insight-root-cause-7cb5bc8994-26hbl 1/1 Running 0 142m
platform-5c4db69ff6-djhwg 1/1 Running 1 23h
postgresql-0 1/1 Running 0 23h
redis-master-0 1/1 Running 0 20h
redis-replica-0 1/1 Running 0 20h
redis-replica-1 1/1 Running 0 20h
redis-sentinel-0 1/1 Running 0 20h
redis-sentinel-1 1/1 Running 0 20h
redis-sentinel-2 1/1 Running 0 20h
NAME READY STATUS RESTARTS AGE
clickhouse-0 1/1 Running 0 23h
jaeger-5f9dcb68d4-pn6bc 1/1 Running 0 23h
kibana-78bb48458f-6kbhm 1/1 Running 0 23h
otel-collector-c96b5c897-9f2mw 1/1 Running 1 23h
uptrace-546647db84-2x7sc 1/1 Running 8 (23h ago) 23h
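Rather than scanning the listings by eye, the output of kubectl get pods can be piped through a small filter. The sketch below demonstrates the filter on an inline sample instead of live cluster output, so it is self-contained.

```shell
# Flag pods whose STATUS column is neither Running nor Completed.
check_pods() {
  awk 'NR>1 && $3!="Running" && $3!="Completed" {print "NOT READY:", $1}'
}

# Demo on a captured sample (same columns as `kubectl get pods`);
# in practice: kubectl get pods --namespace <ns> | check_pods
printf '%s\n' \
  'NAME READY STATUS RESTARTS AGE' \
  'elasticsearch-0 1/1 Running 0 23h' \
  'download-artifacts-l5qdm 0/1 Completed 0 23h' \
  'broken-pod-abc 0/1 CrashLoopBackOff 5 22h' | check_pods
```

An empty result means every pod is either Running or Completed; any NOT READY line points at a pod worth inspecting with kubectl describe pod.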
Before you proceed, make sure you have the following:
- Download jobs—Confirm that all the download jobs show a Completed status.
- Load Embedding Job—Verify that the load embedding job has finished successfully.
- Check the logs by running the following command:
kubectl logs load-embeddings-4pg6r --namespace bmcami-prod-amiai-services
- At the end of the log, look for the following indicators:
INFO - Started processing files 255 of 255
.
.
.
INFO - Completed processing file
/opt/bmc/app/src/resources/embeddings/DevX_BMC_AMI_DevX_File-AID_Common_Components_v23.01_bmc_docs_df.csv
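The two indicators can also be checked non-interactively by filtering the job log. The sketch below demonstrates the filter on an inline sample; in practice you would pipe the kubectl logs output through it (the pod name suffix will differ in your cluster).

```shell
# Filter a load-embeddings log for the documented completion indicators;
# in practice: kubectl logs <load-embeddings-pod> --namespace <ns> | check_log
check_log() { grep -E 'Started processing files|Completed processing file'; }

# Demo on sample log lines so the snippet is self-contained:
printf '%s\n' \
  'INFO - Started processing files 255 of 255' \
  'INFO - Completed processing file /opt/bmc/app/src/resources/embeddings/sample.csv' \
  | check_log && echo "embedding load finished"
```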
To configure the LLM for BMC AMI Assistant
You can configure the LLM for the BMC AMI Assistant chat from the UI.
To configure the LLM from the UI, follow the procedure in Deployed Large Language model (LLM).
Where to go from here
After you complete deploying BMC AMI Platform, proceed to Deploying LLM.