Deploying BMC AMI AI Services on a zCX or zLinux instance
The infrastructure required for optimal performance with BMC AMI AI Services is as follows:
- zCX/zLinux instance (1 CPU core, 8 GB RAM)
Task 1: To acquire the required infrastructure
You must acquire two different types of infrastructure to run BMC AMI AI Services, as described below.
To acquire a zCX/zLinux instance
Create a zCX/zLinux instance as per the required configuration. This instance will be used to deploy BMC AMI Platform services.
To acquire the infrastructure for BMC AMI AI Models
BMC AMI AI Services support the following three types of infrastructure:
- AWS—To create an AWS instance, see Creating-an-EC2-instance.
- Azure—To create an Azure virtual machine, see Creating-a-virtual-machine.
- On-premises x86 Linux distribution (Ubuntu)—To create an on-premises instance, see Task 1: To acquire the required infrastructure.
After acquiring the infrastructure, note the IP address or domain name of both machines; you will need them during application installation.
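On a Linux machine, for example, you can look up these values with standard utilities (output format varies by distribution):
hostname -f    # fully qualified domain name
hostname -I    # IP addresses assigned to the machine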
Task 2: To download the required scripts from EPD
- Before you begin, make sure that you complete the steps outlined in the Downloading the installation files topic and obtain the BMC-AMI-AI-zCX.zip file.
- Extract the contents from the BMC-AMI-AI-zCX.zip file and save the extracted files in a folder of your choice for future use.
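For example, on a Linux machine you can extract the archive with the unzip utility (the target folder name bmc-ami-ai is only an illustration; use any folder you like):
unzip BMC-AMI-AI-zCX.zip -d bmc-ami-ai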
Task 3: To configure BMC AMI AI Services for installation
Update the configuration in the Ansible scripts. The configuration is divided into the following parts, based on the service:
- BMC AMI AI Platform Service Configuration
- BMC AMI AI Model Configuration
BMC AMI AI Platform Service Configuration
You can find the BMC-AMI-AI-Platform.sh shell script file in the extracted folder. This script installs BMC AMI AI Platform services.
The following table includes the required fields you must configure and modify before you run the Ansible script.
Field | Description |
---|---|
gateway_host | IP address or domain name of the zCX/zLinux machine that you acquired. BMC AMI AI Platform services communicate internally by using the gateway_host field. |
ces_scheme | Protocol of the CES instance (for example, http or https) |
ces_host | Host name or IP address of the CES instance |
ces_port | Port of the CES instance |
(Optional) ces_username | If the CES instance requires a user name, provide it here. |
(Optional) ces_password | If the CES instance requires a password, provide it here. |
zos_host | Host name or IP address of the z/OS machine. Important: Set the mainframe machine's host name or IP address in zos_host. The zos_host value is used to authenticate users; all authenticated users can access the BMC AMI AI Manager console. |
admin_user_id | Administrator user name. Important: A default admin user is provided for access to the BMC AMI AI Manager console; use it if you cannot provide zos_host. If you have already set zos_host, reset the values set for the admin user. The default is admin. |
admin_password | Administrator user password. The default value is the Base64 encoding of amiaiadmin (YW1pYWlhZG1pbg==). |
To configure other properties, see Configuring the BMC AMI Platform Service.
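Because the default admin_password value is stored Base64 encoded, a custom password presumably needs to be encoded the same way. A minimal way to produce the encoded value on Linux, shown here with the default value for illustration:
echo -n 'amiaiadmin' | base64    # prints YW1pYWlhZG1pbg==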
BMC AMI AI Model Configuration
Based on the selected configuration type, you must update the model configuration.
Model type | Configuration |
---|---|
Mixtral8x7B-instruct Quantized | The extracted folder contains the BMC-AMI-AI-Mixtral-Service.yml Ansible file. This Ansible script installs the BMC AMI AI Mixtral model. |
Meta-Llama-3-8B-instruct 4K Quantized GPU | The extracted folder contains the BMC-AMI-AI-Llama3-GPU.yml Ansible file. This Ansible script installs the BMC AMI AI Llama3 GPU model. |
Meta-Llama-3-8B-instruct 4K Quantized CPU | The extracted folder contains the BMC-AMI-AI-Llama3-Service.yml Ansible file. This Ansible script installs the BMC AMI AI Llama3 CPU model. |
The following table includes the required fields you must configure and modify before you run the Ansible script for the selected model.
Field | Description |
---|---|
no_of_threads | Set no_of_threads to 42 for optimal performance. |
llm_host | IP address or host name of the BMC AMI AI Models instance. |
discovery_host | IP address or host name of the zCX/zLinux machine. |
To configure other properties, see Configuring the BMC AMI Platform Service.
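After you edit the Ansible file for your model, you can quickly confirm the values before deploying. A minimal check for the Llama3 CPU file (the key names are assumed to match the fields listed above; adjust the file name for your model):
grep -nE 'no_of_threads|llm_host|discovery_host' BMC-AMI-AI-Llama3-Service.yml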
Task 4: To copy the script and files to the machine
After you complete the configuration changes, copy the files to the acquired instances.
- Locate the folder where you stored the following files:
- BMC-AMI-AI-Platform.sh
- BMC-AMI-AI-Llama3-Service.yml
- BMC-AMI-AI-Llama3-GPU.yml
- BMC-AMI-AI-Mixtral-Service.yml
- BMC-AMI-AI-Llama.sh
- BMC-AMI-AI-Llama-GPU.sh
- BMC-AMI-AI-Mixtral.sh
- Move the BMC-AMI-AI-Platform.sh file to the zCX/zLinux instance and the rest of the files to the AWS/Azure/x86 Linux instance.
You can use any scp client to copy all files.
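For example, with the OpenSSH scp client (the user names, host addresses, and target directories below are placeholders; substitute your own):
scp BMC-AMI-AI-Platform.sh admin@<zcx-or-zlinux-host>:/home/admin/
scp BMC-AMI-AI-*.yml BMC-AMI-AI-Mixtral.sh BMC-AMI-AI-Llama.sh BMC-AMI-AI-Llama-GPU.sh ubuntu@<model-host>:/home/ubuntu/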
Task 5: To deploy BMC AMI AI Services
- BMC AMI Platform services deployment
- Connect to the newly acquired zCX/zLinux instance via SSH. Make sure that the credentials used for login have admin access.
Run the following commands to deploy the BMC AMI AI Platform services:
chmod +x BMC-AMI-AI-Platform.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Platform.sh
./BMC-AMI-AI-Platform.sh
- When prompted, enter your EPD portal's username and password.
- Model Deployment
- Connect to the newly acquired AWS/Azure/x86 Linux instance via SSH. Make sure that the credentials used for login have admin access.
- Install Python and its dependencies on the on-premises x86 machine.
Verify whether Python is already installed by entering the python3 command. If Python is not installed, then run the following commands:
sudo apt update
sudo apt install python3
Install pip by using the following commands:
sudo apt update
sudo apt-get -y install python3-pip
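A quick check that both Python and pip are available before you continue (version numbers vary):
python3 --version
pip3 --version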
Based on the selected configuration, run only the commands for your LLM. Do not run the commands for all models.
Mixtral8x7B-instruct Quantized:
chmod +x BMC-AMI-AI-Mixtral.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Mixtral.sh
./BMC-AMI-AI-Mixtral.sh
Meta-Llama-3-8B-instruct 4K Quantized (GPU):
chmod +x BMC-AMI-AI-Llama-GPU.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Llama-GPU.sh
./BMC-AMI-AI-Llama-GPU.sh
Meta-Llama-3-8B-instruct 4K Quantized (CPU):
chmod +x BMC-AMI-AI-Llama.sh
sed -i -e 's/\r$//' BMC-AMI-AI-Llama.sh
./BMC-AMI-AI-Llama.sh
- When prompted, enter your EPD portal's username and password.
Task 6: To verify the deployment of BMC AMI AI Services
For more information about how to verify the deployment of BMC AMI AI Services, see Verifying the installation.
Where to go from here