Setting up your environment to leverage agentic AI capabilities in BMC Helix AIOps
As a tenant administrator, perform these steps to set up your environment to leverage the generative AI capabilities available in BMC Helix AIOps.
BMC Helix AIOps connects to BMC HelixGPT, a generative artificial intelligence (AI) service that enables organizations to use autonomous agents, virtual assistants, and AI-driven insights for faster incident resolution, change risk analysis, intelligent chat responses, and automated operations.
BMC Helix provides the capability to bring your own GPU processing; however, you must use the fine-tuned model provided for BMC Helix AIOps.
Required BMC Helix products
Product | Licenses required |
---|---|
BMC Helix AIOps (includes the BMC HelixGPT for AIOps service) | BMC Helix AIOps & Observability |
BMC Helix ITSM (optional; required if you use BMC Helix ITSM for change and incident management) | BMC Helix ITSM Suite |
Supported cloud platforms
You can deploy the fine-tuned model for BMC Helix AIOps on the following cloud platforms:
- Google Cloud Platform (GCP) Vertex AI
- Microsoft Azure AI
Hardware and software requirements
 | Google Cloud Platform Vertex AI | Microsoft Azure AI |
---|---|---|
Machine type | a2-highgpu-1g (12 vCPUs, 85 GiB memory) | Standard_NC24ads_A100_v4 (Standard NCADSA100v4 Family Cluster Dedicated vCPUs) |
GPU | NVIDIA Tesla A100 | NVIDIA Tesla A100 |
Process overview
Setting up your environment involves the following tasks:
- Task 1: Obtain the fine-tuned model from BMC Helix.
- Task 2: Deploy the model in your cloud.
- Task 3: Configure model settings in BMC HelixGPT Manager.
- Task 4: Configure pass-through agents.
Before you begin
Perform the following steps before deploying the BMC Helix AIOps fine-tuned model in your cloud.
Google Cloud Platform requirements:
- You have an active Google Cloud Platform subscription and a GCP project in a Vertex AI-supported region. All resources and artifacts must be kept in the same region.
- You have the Identity and Access Management (IAM) permissions to perform the following tasks:
  - Write to the target Google Cloud Storage bucket (roles/storage.admin).
  - Register and deploy models: Vertex AI User (roles/aiplatform.user) or an equivalent role.
  - Access the Artifact Registry or Container Registry (if using custom containers stored in GCP).
Local host requirements:
- Google Cloud SDK is installed, and the project where you want to deploy the model is set.
- The gsutil tool is available. You need this tool to upload the model artifacts to the Google Cloud Storage bucket.
- Docker Engine is installed and running.
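A quick way to confirm these local host prerequisites is shown in the following sketch; the project ID is a placeholder for your own GCP project:
gcloud config set project <project-id>   # point the SDK at the project that will host the model
gsutil version                           # confirm that gsutil is available
docker info                              # confirm that Docker Engine is installed and running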
Microsoft Azure AI requirements:
- You have an active Microsoft Azure subscription.
- You are assigned the Contributor role, or another role that allows you to create and upload resources, in the Microsoft Azure CLI.
- You have the Standard_NC24ads_A100_v4 quota assigned to your region. To request the quota:
  - In the Azure Machine Learning studio, click Quota.
  - Click the name of the subscription where you want to host the model.
  - Select the region.
  - Search for and select Standard NCADSA100v4 Family Cluster Dedicated vCPUs.
  - Click Request quota.
    If the unused quota is 0 or less than 24, set the New cores limit to the current usage plus 24. For example, if the usage is 0, set the quota to 24. This quota is sufficient for a machine type with one A100 accelerator.
  - Click Submit.
  After your quota request is approved, you must assign the quota to the workspace later.
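Before requesting quota, you can check the current usage for the VM family from the CLI (a minimal sketch; az vm list-usage is a standard Azure CLI command, and the region is a placeholder):
az vm list-usage --location <region> --output table   # look for the NCADS A100 v4 family row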
Local host requirements:
- The Microsoft Azure CLI is installed with the Azure Machine Learning (ml) extension.
- Docker Engine is installed and running.
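A quick way to confirm these local host prerequisites is shown in the following sketch:
az extension add --name ml   # install the Azure Machine Learning extension if it is missing
docker info                  # confirm that Docker Engine is installed and running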
Task 1: To obtain a model from BMC Helix
Contact BMC Helix support to obtain the fine-tuned model for BMC Helix AIOps.
BMC Helix provides the fine-tuned model through one of the following approaches:
- A Docker image tarball file with all model artifacts.
- The credentials and details to access the container registry where the model is available.
After you obtain the latest model for BMC Helix AIOps, note details such as the model name, the model artifact path, and the model registry path. This information is required when you configure the model in your cloud environment.
Task 2: To deploy the model in your cloud
Depending on the cloud environment, perform the steps to deploy the BMC Helix AIOps fine-tuned model.
Google Cloud Platform Vertex AI
You import a model into the Vertex AI Model Registry and associate it with a container. From the model registry, you can deploy the imported model to an endpoint.
To import the model
- On a local host, extract the model artifacts provided by BMC Helix:
  tar -xzvf <helix_gpt_model_version>.tar.gz
- Upload the model artifacts to the Google Cloud Storage bucket:
  gsutil cp -r <helix_gpt_model_version> gs://<your-bucket>/model/
- Prepare the custom inference Docker image:
  - If you have a Docker image tarball, load the image file:
    docker load -i /path/to/model_container.tar
  - If you are using the container registry, log in with the credentials provided by BMC Helix, and pull the image by using the image tag provided by BMC Helix:
    docker login containers.bmc.com
    docker pull containers.bmc.com/bmc/lpade:helix-gpt-vllm-docker-<build_number>
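Before you can push to a Google registry, the local Docker client must be able to authenticate to it. The following sketch shows the standard gcloud commands; the Artifact Registry host is an example value:
gcloud auth configure-docker                              # for Container Registry (gcr.io) hosts
gcloud auth configure-docker us-central1-docker.pkg.dev   # for Artifact Registry; us-central1 is an example region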
- Push the Docker image to the Google Cloud container registry:
  docker tag <BMC Helix image> <Google Cloud Container Registry tag>
  docker push <Google Cloud Container Registry path>
  The model artifacts are now available in the Google Cloud Storage bucket, and the image is available in the Google Cloud Container Registry.
- Navigate to Model Registry from the Vertex AI navigation menu.
- Click Import, and then click Import as new model.
- On the Import Model page, provide the name of the model, select the region, and click Continue.
  Select the region that matches both your bucket's region and the Vertex AI regional endpoint that you are using.
- Navigate to the Model settings page and select Import an existing container.
- In the Custom container settings section, click Browse in the Container image field, and then click the Container Registry tab to select the container image.
- Click Browse in the Model artifact location field and select the Cloud Storage path to the directory that contains your model artifacts.
- In the Arguments section, specify the following parameters and click Continue:

Field | Description | Example value |
---|---|---|
Environment variables | The file name of the deployment spec (without the file extension) included in the model artifacts. | DEPLOYMENT_SPEC=zhp52uqvaxvacmt4u2tbezojfucjkf4f-helix-gpt-v6-instruct |
Prediction route | - | /predictions |
Health route | - | /ping |
Port | - | 8080 |

- On the Explainability options page, retain the default No explainability option, and click Import.
  After a few minutes, the model is displayed on the Models page.
For more information about importing models in GCP Vertex AI, see https://cloud.google.com/vertex-ai/docs/model-registry/import-model#custom-container.
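If you prefer to script the import instead of using the console, the gcloud CLI provides an equivalent command. The following is a minimal sketch; every value is a placeholder, and the routes and port mirror the Arguments table above:
gcloud ai models upload \
  --region=<region> \
  --display-name=<model-name> \
  --container-image-uri=<Google Cloud Container Registry path> \
  --container-env-vars=DEPLOYMENT_SPEC=<deployment-spec-file-name> \
  --container-predict-route=/predictions \
  --container-health-route=/ping \
  --container-ports=8080 \
  --artifact-uri=gs://<your-bucket>/model/<helix_gpt_model_version>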
To deploy the model and create an endpoint
- Select the model, and then click Deploy and test.
- Click Deploy to endpoint, and then click Create new endpoint.
- Type the name of the endpoint and make sure that its region is the same as that of the model.
- Retain the Standard access setting and click Continue.
- On the Model settings page, use the following values and keep the rest as the default:
  - Machine Type: a2-highgpu-1g, 12 vCPUs, 85 GiB Memory
  - Accelerator Type: NVIDIA Tesla A100
  - Accelerator Count: 1
- Click Continue, and then click Deploy.
After the model is deployed, note the following information. These parameters are required when you configure the model in BMC HelixGPT Manager in a later task.

Field | Description |
---|---|
ID | The endpoint ID. |
Region | The region where the model is deployed. For example, us-central1. |
Project ID | The project ID. For example, sso-gcp-dsom-sm-pub-cc39921. |
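You can optionally verify the deployment by sending a test request to the endpoint. The following sketch uses the standard Vertex AI predict REST API and assumes an authenticated gcloud CLI; the placeholders come from the table above, and the request body is illustrative and must match the input format expected by the fine-tuned model:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/endpoints/<endpoint-id>:predict" \
  -d '{"instances": [{"input": "your input here"}]}'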
To obtain the API key for Google
For Google, only the API key method of authentication is supported. You need the service account API key and other details to configure the model in BMC HelixGPT Manager in a later task.
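A minimal sketch for generating a service account key with the gcloud CLI follows; the service account name is a hypothetical example, and your organization's key-management policy may require a different procedure:
gcloud iam service-accounts keys create vertex-api-key.json \
  --iam-account=helixgpt-sa@<project-id>.iam.gserviceaccount.com   # hypothetical service account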
Microsoft Azure AI
You can deploy the model by using the Microsoft Azure Machine Learning studio; however, this section explains how to use the Azure command line interface (CLI), which gives you more control over the deployment.
Perform the following steps:
- On a local host, extract the model artifacts provided by BMC Helix:
  tar -xzvf <helix_gpt_model_version>.tar.gz
- Log in to the Microsoft Azure CLI:
  az login
- Create a resource group:
  az group create --name <resource-group-name> --location <azure-region>
- Create an Azure Container Registry:
  az acr create \
    --resource-group <resource-group-name> \
    --name <name of the registry> \
    --sku Basic \
    --admin-enabled true

Parameter | Description |
---|---|
resource-group | The name of the resource group created in the previous step. |
name | The name of the Azure Container Registry. |
sku | The pricing tier: Basic, Standard, or Premium. Most users start with Basic. |
admin-enabled | Specify true to make sure that the user can upload resources to the registry. |

- Tag the Docker image:
  docker tag <local-image>:<tag> helixgptreg.azurecr.io/vllm-vertex:<tag>
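Before you push the image, authenticate the local Docker client to the registry (a minimal sketch; uses the registry name created earlier):
az acr login --name <name of the registry>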
- Push the Docker image to the Azure Container Registry:
  docker push helixgptreg.azurecr.io/vllm-vertex:<tag>
- Create an Azure ML workspace:
  az ml workspace create \
    --name <workspace-name> \
    --resource-group <resource-group-name> \
    --location <region>
- Set the Azure CLI defaults:
  az configure --defaults workspace=<workspace-name> group=<resource-group-name> location=<region>
- Link the Azure Container Registry to the workspace:
  az ml workspace update \
    --name <workspace-name> \
    --resource-group <resource-group-name> \
    --container-registry /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/helixgptreg \
    --update-dependent-resources
- Create an online endpoint:
  az ml online-endpoint create --file <endpoint>.yml
  Sample <endpoint>.yml file:
  $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
  name: helix-gpt-v7-25-3-endpoint
  auth_mode: key
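After the endpoint is created, you can optionally confirm that it provisioned successfully (a minimal sketch; uses the endpoint name from the sample file):
az ml online-endpoint show \
  --name helix-gpt-v7-25-3-endpoint \
  --query "provisioning_state" \
  --output tsv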
- Create an online deployment:
  az ml online-deployment create --file azure.yml --all-traffic
  Sample azure.yml file:
  name: helix-gpt-v7-25-3-deploy
  endpoint_name: helix-gpt-v7-25-3-endpoint
  model:
    name: helix-gpt-v7-25-3
    path: ./helix-gpt-v7-25-3
    version: 1
  environment_variables:
    AIP_HEALTH_ROUTE: "/ping"
    AIP_PREDICT_ROUTE: "/score"
    MODEL_BASE_PATH: "/var/azureml-app/azureml-models/helix-gpt-v7-25-3/1/helix-gpt-v7-25-3"
    DEPLOYMENT_SPEC: "agaqnayhu2tstm7s3z5xmnmdugrzccsa-helix-gpt-v7_2"
    AIP_STORAGE_URI: "/var/azureml-app/azureml-models/helix-gpt-v7-25-3/1/helix-gpt-v7-25-3"
  environment:
    image: helixgptreg.azurecr.io/vllm-vertex:dfe4802-43
    inference_config:
      liveness_route:
        port: 8080
        path: /ping
      readiness_route:
        port: 8080
        path: /ping
      scoring_route:
        port: 8080
        path: /score
  request_settings:
    request_timeout_ms: 180000
  instance_type: Standard_NC24ads_A100_v4
  instance_count: 1
- (Optional) Get the container logs:
  az ml online-deployment get-logs \
    --endpoint-name <name of the endpoint> \
    --name <name of the deployment>
- Set the traffic:
  When you set the traffic to 100%, all requests sent to the endpoint are routed to that single deployment.
  az ml online-endpoint update \
    --name <name of the endpoint> \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --traffic "<deployment-name>=100"
- Get the scoring URI:
  The scoring URI is the REST API endpoint that you use to send data to, and get predictions from, your deployed model.
  az ml online-endpoint show \
    --name <name of the endpoint> \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --query "scoring_uri" \
    --output tsv
Summary

Component | Value |
---|---|
Workspace | <workspace-name> |
Endpoint Name | <endpoint-name> |
Deployment Name | <deployment-name> |
Region | <region> |
Scoring URI | https://<endpoint-name>.<region>.inference.ml.azure.com/score |

- Test the endpoint:
  az ml online-endpoint show --name helix-gpt-v7-25-3-endpoint
  az ml online-endpoint get-credentials --name helix-gpt-v7-25-3-endpoint
  To test, send a request to the scoring URI:
  curl -X POST <scoring-uri> \
    -H "Authorization: Bearer <key>" \
    -H "Content-Type: application/json" \
    -d '{"input": "your input here"}'
- Continue to configure the model in BMC HelixGPT Manager in the next task.
For more information about deploying models on Microsoft Azure AI, see the Microsoft Azure Machine Learning documentation.
Task 3: To configure model settings in BMC HelixGPT Manager
After a model is deployed, provide the details in BMC HelixGPT Manager.
- Log in to BMC Helix Innovation Studio.
- Select Workspace > HelixGPT Manager.
- Select the Model record definition, and click Edit data.
- On the Data editor (Model) page, click New and provide the following information about the model that you deployed in your cloud environment:

Field | Description | Default or recommended value |
---|---|---|
Auth Type | A unique authorization key to validate secure communication between BMC HelixGPT and the model. | Google Cloud Platform Vertex AI: API Key. Microsoft Azure AI: API Key. |
Company | The name of the customer company or business unit associated with this model configuration. | - |
Created By | The user name or ID of the individual who created this model record. | - |
Description | A brief overview of the model, its purpose, and usage within BMC HelixGPT. | - |
Name | The display name of the model. | - |
Status | Indicates the current operational state of the model. The available options are New, Assigned, Fixed, Rejected, and Close. | Select New. |
Vendor | The name of the organization or provider offering the model. | Supported providers: Google Cloud Platform Vertex AI or Microsoft Azure AI. |
API Endpoint Url | The specific URL to access the model. | GCP Vertex AI: Endpoint ID |
API Key | A unique key provided by the model vendor to authenticate API requests. | GCP Vertex AI: Service account API key generated in Task 2. |
Assignee | An individual responsible for managing or maintaining this model record. | - |
Auth Client ID | A unique identifier for the client application used during authentication. | - |
Auth Grant Type | The authentication method used to obtain access tokens. | - |
Auth Headers | The headers used for authentication. | - |
Auth Scopes | The permissions or access levels requested for the authentication. | - |
Auth Secret | The password used for authentication. | - |
Auth URL | The endpoint URL used to initiate the authentication process. | - |
Auth User Name | The user name required for authentication. | - |
Default Config | The predefined settings or parameters applied to the model. An administrator can modify the default configuration. | GCP Vertex AI: Provide the information in the format shown after this table. |
Max Prompt Tokens | The maximum number of tokens allowed in a single prompt. | - |
Version | The specific version or release number of the model. | - |

GCP Vertex AI Default Config format:
{
  "apiType": "vertexaimodelgarden",
  "deployedModelType": "HelixGPT-v6",
  "deploymentName": "<Google Cloud account name>",
  "location": "<Name of the Google Cloud region; example: us-central1>"
}

- Click Save.
  After the model is saved, it is displayed on the Data editor (Model) page.
- Click Close and continue with the next task to configure the pass-through agents for BMC Helix AIOps.
Task 4: To configure pass-through agents
Agents in BMC HelixGPT are intelligent, generative AI entities that can automate tasks, resolve queries, and streamline workflows.
- Log in to BMC Helix Innovation Studio.
- Click the Application launcher and select HelixGPT Manager.
- In BMC HelixGPT Manager, click Settings.
- Select HelixGPT > Agents > Pass-through Agents.
- Click Add Agent.
- From the list of agents, select one or more of the following options and click Add:
  - Best Action Recommender
  - Change Risk Advisor
  - Log Insights
- Configure the connection for the pass-through agents:
  - On the Edit Pass-through Agent panel, select the connection name.
  - Click Edit configuration.
  - Specify the configuration details based on the agent that you are editing.
    For BMC Helix ITSM, no configuration is required.
  - Click Save.
For more information about configuring pass-through agents for Best Action Recommender, Change Risk Advisor, and Log Insights, see Adding agents for BMC Helix AIOps.