Configuring settings to use the AI-powered capabilities in BMC Helix AIOps


Important

This topic is applicable only for BMC Helix AIOps SaaS subscribers.

BMC Helix AIOps connects with BMC HelixGPT, a generative AI capability available for BMC Helix applications, to provide autonomous agents that operators and SREs can use to get AI-driven insights for managing complex IT environments.

A fine-tuned model is available for using agentic AI capabilities in BMC Helix AIOps. As a tenant administrator, you deploy the fine-tuned model in one of the supported cloud platforms. 

License requirements

| Product | Licenses required |
| BMC Helix AIOps (includes the BMC HelixGPT for AIOps service) | BMC Helix AIOps & Observability |
| BMC Helix ITSM (optional; required only if you use BMC Helix ITSM for change and incident management) | BMC Helix ITSM Suite |
| BMC Helix Automation Console (optional; required only if you use BMC Helix Automation Console for vulnerability management) | BMC Helix Automation Console service |
Important

If you do not have a subscription to BMC Helix ITSM, you need some ITSM components (BMC Helix Innovation Studio and BMC Helix ITSM Insights) to configure model settings and agents in BMC Helix Agent Studio. These components are available as part of the BMC Helix ITSM CORE version. To obtain these components, contact BMC Helix Support.


Pricing considerations

The following considerations apply when you determine the additional cost of the agentic AI capabilities; these costs are not covered by BMC Helix:

  • Pricing is based on the instance type, rather than a token-based system.
  • Pricing depends on the GPU-enabled machines for the supported platforms.
    Learn about Google Cloud Platform Vertex AI pricing at Vertex AI pricing, and about Microsoft Azure pricing at Azure Machine Learning pricing.

Supported cloud platforms

You can deploy the BMC Helix AIOps fine-tuned model on one of the following cloud platforms:

| Cloud platforms | Model name and version | BMC Helix AIOps supported versions | BMC Helix ITSM supported versions |
| Google Cloud Platform Vertex AI, Microsoft Azure AI | HelixGPT-v7 | 25.3 and later | 25.3 and later |

Hardware and software requirements

The following table describes the minimum hardware and software requirements for deploying the fine-tuned model. BMC Helix does not require any additional configurations for deploying the model.  

| Parameter | Google Cloud Platform Vertex AI | Microsoft Azure AI |
| Machine type | a2-highgpu-1g (12 vCPUs, 85 GiB memory) | Standard_NCADSA100v4 Family Cluster Dedicated vCPUs |
| GPU | NVIDIA Tesla A100 | NVIDIA Tesla A100 |

Configuration options

Depending on your BMC Helix subscription, perform the steps to configure AI-driven capabilities by choosing one of the following options:


Process overview for BMC Helix AIOps and BMC Helix ITSM subscribers

The following graphic provides an overview of the steps required to configure AI-driven capabilities if you have both BMC Helix AIOps and BMC Helix ITSM subscriptions:

Process overview for setting up agentic AI capabilities for BMC Helix AIOps

  1. To obtain a model from BMC Helix
  2. To deploy the model in your cloud
  3. To update model settings in BMC Helix Agent Studio
  4. To verify the model ID in AI agents in BMC Helix Agent Studio
  5. To configure pass-through agents
  6. (Optional) To view information from BMC Helix AIOps by using the BMC Helix Ops Swarmer agent

Before you begin

Perform the following steps before deploying the fine-tuned model in your cloud:

Important

These are the minimum requirements to deploy the fine-tuned model in your cloud environment. For details, see the documentation for your selected cloud.

The process to deploy the model remains the same irrespective of your license entitlements. 

Google Cloud Platform requirements:

  • You have an active Google Cloud Platform subscription and a GCP project in a Vertex AI-supported region.
    All resources and artifacts must be kept in the same region. 
  • You have the Identity and Access Management (IAM) permissions to perform the following tasks:
    • Write to the target Google Cloud Storage bucket (roles/storage.admin).

    • Register and deploy models: Vertex AI User (roles/aiplatform.user) or an equivalent role.

    • Access the Artifact Registry or Container Registry (if using custom containers stored in GCP).

Local host requirements:

  • Google Cloud SDK is installed, and the project where you want to deploy the model is set.
  • The gsutil tool is available. You need this tool to upload the model artifacts to the Google Cloud Storage bucket.
  • Docker Engine is installed and running.
    A quick way to verify these requirements is sketched after this list.
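A minimal sketch for verifying these local host requirements, assuming a placeholder project ID:

  gcloud config set project <your-gcp-project>   # set the target GCP project for the deployment
  gsutil version                                 # confirm that the gsutil tool is available
  docker info                                    # confirm that Docker Engine is installed and running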

Microsoft Azure AI requirements:

  • You have an active Microsoft Azure subscription. 
  • You must have the following roles and permissions in Microsoft Azure:
    • Contributor OR ML Data Scientist and Compute Operator
    • Storage Blob Data Contributor
    • AcrPull/AcrPush
    • KeyVault Secrets User
    • Application Insights Contributor
    • Log Analytics Contributor ​​​​​​
  • You have the Standard_NC24ads_A100_v4 or Standard NCADSA100v4 quota assigned to your region:
    1. In the Azure Machine Learning studio, click Quota.
    2. Click the subscription name for the subscription where you want to host the model.
    3. Select the region.
    4. Search and select Standard NCADSA100v4 Family Cluster Dedicated vCPUs.
    5. Click Request quota.
       If the unused quota is less than 24 cores, click Request quota and set the New cores limit to the current Usage plus 24. For example, if Usage is 0, set the new limit to 24. This quota is enough for a machine type with one A100 accelerator.
    6. Click Submit.
      After your quota limit is approved, you must assign it to the workspace later. 

Local host requirements:

  • The Microsoft Azure CLI is installed with the Azure Machine Learning (ml) extension.
  • Docker Engine is installed and running.

Task 1: To obtain a model from BMC Helix

To obtain the latest fine-tuned model for BMC Helix AIOps, contact BMC Helix Support. You can select from the following fine-tuned foundation models:

  • QWEN
  • LLaMA

During the term of your license, you can request a change of the fine-tuned foundation model by contacting BMC Helix Support.

BMC Helix provides a Docker image tarball file with all model artifacts.


Task 2: To deploy the model in your cloud

Based on your cloud platform, perform the steps to deploy the fine-tuned model.  

These are the minimum steps required to deploy the fine-tuned model in your cloud environment. For details, see the documentation for your selected cloud. 

On Google Cloud Platform Vertex AI, you import the model into the Vertex AI Model Registry and associate it with a container. From the Model Registry, you can deploy the imported model to an endpoint.
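If you prefer to script the import and deployment rather than use the console steps that follow, the gcloud commands below sketch the same flow. This is a minimal sketch, not the documented console path; the display names are placeholders, and the routes, port, machine type, and accelerator match the values used in the steps below.

  # Import the model into the Vertex AI Model Registry with its custom container
  gcloud ai models upload \
    --region=us-central1 \
    --display-name=helix-gpt-inference \
    --container-image-uri=gcr.io/my-gcp-project/helix-gpt-inference:<version_number> \
    --artifact-uri=gs://<your-bucket>/model/<helix_gpt_model_version> \
    --container-predict-route=/predictions \
    --container-health-route=/ping \
    --container-ports=8080 \
    --container-env-vars=DEPLOYMENT_SPEC=<deployment-spec-file-name>

  # Create an endpoint in the same region and deploy the model to it
  gcloud ai endpoints create --region=us-central1 --display-name=helix-gpt-endpoint
  gcloud ai endpoints deploy-model <endpoint-id> \
    --region=us-central1 \
    --model=<model-id> \
    --display-name=helix-gpt-deployment \
    --machine-type=a2-highgpu-1g \
    --accelerator=type=nvidia-tesla-a100,count=1 \
    --traffic-split=0=100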

To import the model

  1. On a local host, extract the model artifacts provided by BMC Helix: 
    tar -xzvf <helix_gpt_model_version>.tar.gz
  2. Upload the model to the Google Cloud Storage bucket:
    gsutil cp -r <helix_gpt_model_version> gs://<your-bucket>/model/
  3. Prepare the Custom Inference Docker image by loading it on your host:
    docker load -i /path/to/model_container.tar
  4. Push the Docker image to the Google Cloud container registry:
    docker tag <bmc-helix-image> <Google Cloud Container Registry tag>

    Example: docker tag helix-gpt-inference:<version_number> gcr.io/my-gcp-project/helix-gpt-inference:<version_number>

    docker push <Google Cloud Container Registry path>

    Example: docker push gcr.io/my-gcp-project/helix-gpt-inference:<version_number>

    Now, the model and its artifacts are available in the Google Cloud Model Store and the Google Cloud Container Registry.
  5. Navigate to Model Registry from the Vertex AI navigation menu.
  6. Click Import and then click Import as new model.
  7. On the Import Model page, provide the name of the model, select the region, and click Continue.
    Select the region that matches both your bucket's region and the Vertex AI regional endpoint that you are using. 
  8. Navigate to the Model settings page and select Import an existing container.
  9. In the Custom container settings section, click Browse in the Container image field and then click the Container Registry tab to select the container image.
  10. Click Browse in the Model artifact location field and select the Cloud Storage path to the directory that contains your model artifacts.
  11. In the Arguments section, specify the following parameters and click Continue:
    | Field | Description | Example value |
    | Environment variables | The file name of the deployment spec (without the file extension) included in the model artifacts. | DEPLOYMENT_SPEC=zhp52uqvaxvacmt4u2tbezojfucjkf4f-helix-gpt-v6-instruct |
    | Prediction route | The HTTP path to send prediction requests to. | /predictions |
    | Health route | The HTTP path to send health checks to. | /ping |
    | Port | The port number to expose from the container. | 8080 |
  12. On the Explainability options page, retain the default No explainability option, and click Import.
    After a few minutes, the model is displayed on the Models page.
    For more information about importing models in GCP Vertex AI, see the online documentation https://cloud.google.com/vertex-ai/docs/model-registry/import-model#custom-container.  

To deploy the model and create an endpoint

  1. Select the model and then click Deploy and test.
  2. Click Deploy to endpoint and then click Create new endpoint.
  3. Type the name of the endpoint and make sure that the region is the same as that of the model.
  4. Retain the default access setting, Standard, and click Continue.
  5. On the Model Setting Page, specify the values for the following fields and continue with default values for other fields:
    • Machine Type: a2-highgpu-1g, 12 vCPUs, 85 GiB Memory
    • Accelerator Type: NVIDIA Tesla A100
    • Accelerator Count: 1
  6. Click Continue and then click Deploy.
    After the model is deployed, note the following information. These parameters are required when you configure the model in BMC Helix Agent Studio in a later task. One way to look up these values from the command line is sketched after this table.
    | Field | Description |
    | ID | Contains the endpoint ID. |
    | Region | The region where the model is deployed. For example, us-central1. |
    | Project ID | Contains the project ID. For example, sso-gcp-dsom-sm-pub-cc39921. |
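If you need to look up these values later, the following sketch shows one way to retrieve them with the gcloud CLI (the region is a placeholder):

  # List Vertex AI endpoints in the region; the output includes the endpoint ID
  gcloud ai endpoints list --region=us-central1

  # Show the project ID that the CLI is currently configured to use
  gcloud config get-value project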

To obtain the API key for Google

For Google, only the API Key method for authentication is supported. You need the service account API key and other details to configure the model in BMC HelixGPT Agent Studio in the next step. The service account must have the following Identity and Access Management (IAM) permissions:

  • aiplatform.endpoints.get
  • aiplatform.endpoints.predict

​​

Perform the following steps to obtain the API key for Google:
  1. Log on to the Google Cloud Console with the same credentials you used while deploying the model.
  2. From the main menu, select IAM & Admin > Service Accounts, and select the service account that you used to deploy the model.
  3. On the service account page, select the Keys tab, click Add key, and then select Create new key.
  4. Select the key type as JSON.
    The API key is downloaded.
    {
      "type": "service_account",
      "project_id": "redacted",
      "private_key_id": "redacted",
      "private_key": "redacted",
      "client_email": "redacted",
      "client_id": "redacted",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "redacted",
      "universe_domain": "googleapis.com"
    }
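As a quick sanity check, you can exchange the downloaded service account key for an access token and call the endpoint's predict route. This is a minimal sketch; the key file name, project ID, region, endpoint ID, and request payload are placeholders, and the payload shape must match what the deployed container expects:

  # Authenticate as the service account by using the downloaded JSON key
  gcloud auth activate-service-account --key-file=key.json

  # Call the Vertex AI predict route of the deployed endpoint
  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://us-central1-aiplatform.googleapis.com/v1/projects/<project-id>/locations/us-central1/endpoints/<endpoint-id>:predict" \
    -d '{"instances": [{"input": "your input here"}]}'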

 

To deploy the model on Microsoft Azure AI

You can deploy the model by using Microsoft Azure Machine Learning studio; however, the following steps use the Azure command-line interface (CLI), which gives you more control over the deployment.

Perform the following steps:

  1. On a local host, extract the model artifacts provided by BMC Helix:
    tar -xzvf <helix_gpt_model_version>.tar.gz
  2. Log in to the Microsoft Azure CLI: 
    az login
  3. Create a resource group:
    az group create --name <resource-group-name> --location <azure-region>

  4. Create an Azure Container Registry:
    az acr create \
      --resource-group <resource-group-name> \
      --name <registry-name> \
      --sku Basic \
      --admin-enabled true
    | Parameter | Description |
    | --resource-group | The name of the resource group created in the previous step. |
    | --name | The name of the Azure Container Registry. |
    | --sku | The pricing tier: Basic, Standard, or Premium. Most users start with Basic. |
    | --admin-enabled | Set to true so that you can upload resources to the registry. |
  5. Tag the Docker image (the examples use helixgptreg as the registry name):
    docker tag <local-image>:<tag> helixgptreg.azurecr.io/vllm-vertex:<tag>
  6. Push the Docker image to the Azure Container Registry:
    docker push helixgptreg.azurecr.io/vllm-vertex:<tag>
  7. Create an Azure ML workspace:
    az ml workspace create \
      --name <workspace-name> \
      --resource-group <resource-group-name> \
      --location <region>
  8. Set Azure CLI default values:
    az configure --defaults workspace=<workspace-name> group=<resource-group-name> location=<region>
  9. Link the Azure Container Registry to the workspace:
    az ml workspace update \
      --name <workspace-name> \
      --resource-group <resource-group-name> \
      --container-registry /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/helixgptreg \
      --update-dependent-resources
  10. Create an online endpoint:
    az ml online-endpoint create --file endpoint.yml
    Sample endpoint.yml:
    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
    name: helix-gpt-v7-25-3-endpoint
    auth_mode: key
  11. Create an online deployment:
    az ml online-deployment create --file azure.yml --all-traffic
    Sample azure.yml:
    name: helix-gpt-v7-25-3-deploy
    endpoint_name: helix-gpt-v7-25-3-endpoint
    model:
      name: helix-gpt-v7-25-3
      path: ./helix-gpt-v7-25-3
      version: 1
    environment_variables:
      AIP_HEALTH_ROUTE: "/ping"
      AIP_PREDICT_ROUTE: "/score"
      MODEL_BASE_PATH: "/var/azureml-app/azureml-models/helix-gpt-v7-25-3/1/helix-gpt-v7-25-3"
      DEPLOYMENT_SPEC: "agaqnayhu2tstm7s3z5xmnmdugrzccsa-helix-gpt-v7_2"
      AIP_STORAGE_URI: "/var/azureml-app/azureml-models/helix-gpt-v7-25-3/1/helix-gpt-v7-25-3"
    environment:
      image: helixgptreg.azurecr.io/vllm-vertex:dfe4802-43
      inference_config:
        liveness_route:
          port: 8080
          path: /ping
        readiness_route:
          port: 8080
          path: /ping
        scoring_route:
          port: 8080
          path: /score
    request_settings:
      request_timeout_ms: 180000
    instance_type: Standard_NC24ads_A100_v4
    instance_count: 1
  12. (_Optional_) Get container logs:
    az ml online-deployment get-logs \
    --endpoint-name <name of the endpoint> \
    --name <name of the deployment>
  13. Set traffic:
    When you set traffic to 100%, all requests sent to the endpoint are routed to that single deployment.
    az ml online-endpoint update \
    --name <name of the endpoint> \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --traffic "<deployment-name>=100"
  14. Get the scoring URI:
    The scoring URI is the REST API endpoint that you use to send data to, and get predictions from, your deployed model.
    az ml online-endpoint show \
    --name <endpoint-name> \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --query "scoring_uri" \
    --output tsv

    Summary:
    | Component | Value |
    | Workspace | <workspace-name> |
    | Endpoint Name | <endpoint-name> |
    | Deployment Name | <deployment-name> |
    | Region | <region> |
    | Scoring URL | https://<endpoint-name>.westus.inference.ml.azure.com/score |
  15. Test the endpoint:
    az ml online-endpoint show --name helix-gpt-v7-25-3-endpoint
    az ml online-endpoint get-credentials --name helix-gpt-v7-25-3-endpoint
    Then test with curl (an alternative that uses the Azure ML CLI is sketched after these steps):
    curl -X POST <scoring-uri> \
      -H "Authorization: Bearer <key>" \
      -H "Content-Type: application/json" \
      -d '{"input": "your input here"}'
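As an alternative to the curl test, you can invoke the endpoint through the Azure ML CLI, which handles authentication for you. This is a minimal sketch; sample-request.json is a placeholder file whose payload must match what the deployed container expects:

  az ml online-endpoint invoke \
    --name helix-gpt-v7-25-3-endpoint \
    --request-file sample-request.json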

For more information about deploying models on Microsoft Azure AI, see the Microsoft Azure Machine Learning documentation.


Task 3: To update model settings in BMC Helix Agent Studio

After deploying the model, copy the endpoint ID and the API key and perform the following steps:

  1. Log in to BMC Helix Innovation Studio.
  2. Select Workspace > HelixGPT Agent Studio.
    Helix Agent Studio in IS Workspace
  3. Select the Model record definition, and click Edit data.
    Model page in BMC Helix Agent Studio
  4. On the Data editor (Model) page, search for the HelixGPT model name.
    The model ID AGGFV7W2IC923AST09XYST09XYXXJM is a unique identifier for the BMC Helix AIOps fine-tuned model. This exact ID must be set for all agents, as you verify in Task 4.
    Update model settings in Agent Studio
  5. Select the model and click Edit.
  6. On the Edit record pane, turn off the Seed Data option, and provide the following information about the model that you deployed in your cloud environment:
    • Auth Type: A unique authorization key used to ensure secure communication between BMC HelixGPT and the model.
      Value: API Key (for both Google Cloud Platform Vertex AI and Microsoft Azure AI).
    • Status: Indicates the current operational state of the model. The available options are New, Assigned, Fixed, Rejected, and Close.
      Value: Select New.
    • Vendor: The name of the organization or provider offering the model.
      Value: Google Cloud Platform Vertex AI or Microsoft Azure AI.
    • API Endpoint Url: The specific URL to access the model.
      Value: For GCP Vertex AI, the endpoint ID; for Microsoft Azure AI, the Azure ML online endpoint.
    • API Key: A unique key provided by the model vendor to authenticate API requests.
      Value: For GCP Vertex AI, the service account API key generated in Task 2; the service account must have the roles/aiplatform.user role (for more information, see Vertex AI roles and permissions). For Microsoft Azure AI, the service account API key, encoded in Base64 format (a sketch for producing this encoding follows these steps).
    • Default Config: The predefined settings or parameters applied to the model. An administrator can modify the default configuration.
      GCP Vertex AI format:
      {
        "apiType": "vertexaimodelgarden",
        "deployedModelType": "HelixGPT-v6",
        "deploymentName": "<Google Cloud Project Name>",
        "location": "<Name of the Google Cloud region; example: us-central1>"
      }
      Microsoft Azure AI format:
      {"apiType":"azure_ml","deploymentName":"<Name of the deployment>"}
  7. Save changes.
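For the Microsoft Azure AI API Key value, which must be Base64-encoded, a minimal sketch for producing the encoding on a local host (the key value is a placeholder):

  # -n prevents a trailing newline from being encoded along with the key
  echo -n '<azure-endpoint-key>' | base64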


Task 4: To verify the model ID in AI agents in BMC Helix Agent Studio

After configuring the model settings, you can view the AI agents available for BMC Helix AIOps. By default, the model ID of the fine-tuned model is provided for the agents. AI agents can operate autonomously without human intervention, providing seamless interaction between AI agents and humans. For more information about AI agents, see AI agents in BMC HelixGPT.

To verify whether the AI agents for BMC Helix AIOps have the required model ID, perform the following steps: 

  1. Log in to BMC Helix Innovation Studio.
  2. Select Workspace > HelixGPT Agent Studio.
    Helix Agent Studio in IS Workspace
  3. Select the Agent record definition, and click Edit data.
    The following agents are available for BMC Helix AIOps:
    • AIOps Change Agent
    • AIOps Ask HelixGPT Agent
    • AIOps BAP Agent
    • AIOps Situation Summary Agent
    • AIOps Ranker Agent
    • AIOps Chat Agent
    • AIOps Inference Agent
    • Vulnerability Classification Agent
    • Best Action Recommendation
    • Change Risk Advisor
    • Log Insights
      Model page in BMC Helix Agent Studio
  4. In the Model ID field, verify that the unique model ID, AGGFV7W2IC923AST09XYST09XYXXJM, is displayed for all the agents.

Task 5: To configure pass-through agents

Pass-through agents in BMC HelixGPT are intelligent, generative AI entities that can automate tasks, resolve queries, and streamline workflows. These agents are available in BMC HelixGPT and are required for generating best action recommendations, getting insights into a problem from associated logs, and getting change risk assessments (only for controlled availability customers), all of which help resolve situations. For the Best Action Recommendation and Log Insights agents, you can also configure supported data sources. For more information, see Adding agents for BMC Helix AIOps.

  1. Log in to BMC Helix Innovation Studio.
  2. Click the Application launcher and select HelixGPT Agent Studio.
  3. In BMC HelixGPT Agent Studio, click Settings.
  4. Select HelixGPT > Agents > Pass-through Agents.
  5. Click Add Agent.
  6. From the list of agents, select one or more of the following options and click Add:
    • Best Action Recommendation
    • Change Risk Advisor
    • Log Insights
  7. Configure the connection for the pass-through agents:
    1. On the Edit Pass-through Agent panel, select the connection name.
      BMC HelixGPT Manager pass-through agents
    2. Click Edit configuration.
    3. Specify the configuration details based on the agent that you are editing.
      For BMC Helix ITSM, no configurations are required.
    4. Click Save.

For more information about configuring pass-through agents for Best Action Recommendation, Change Risk Advisor, and Log Insights, see Adding agents for BMC Helix AIOps.


(Optional) Task 6: To view information from BMC Helix AIOps by using the BMC Helix Ops Swarmer agent

Operators or SREs can use BMC Helix Ops Swarmer with Microsoft Teams to investigate and resolve situations faster without leaving the Teams interface.

To connect the BMC Helix Ops Swarmer agent to BMC Helix AIOps, perform the following steps:

  1. Log in to BMC Helix Innovation Studio.
  2. Select Workspace > HelixGPT Agent Studio.
  3. Click Visit deployed application.
    The Skills page in HelixGPT Agent Studio is displayed.  
  4. From the Application menu, select BMC Helix Ops Swarmer and then select Collaboration Agent or Collaboration Agent ServiceNow, depending on your chosen IT service management system. 
  5. Click Agents and then click the Collaboration Supervisor agent.
    Collaboration Supervisor Agent
     
  6. On the Edit agent page, navigate to the Sub-agents panel in the Properties section, and select AIOps Chat Agent.
  7. Save changes. 
    Operators or SREs can use the BMC Helix Ops Swarmer agent with Microsoft Teams to investigate and resolve situations. For more information, see Using the BMC Helix Ops Swarmer agent with Microsoft Teams to help resolve situations.

Process overview for only BMC Helix AIOps subscribers

The following graphic provides an overview of the steps required to configure AI-driven capabilities if you have BMC Helix AIOps only:

Process overview for setting up agentic AI capabilities for BMC Helix AIOps only

  1. To obtain a model from BMC Helix and enable BMC Helix ITSM CORE
  2. To deploy the model in your cloud
  3. To load existing BMC Helix AIOps user data in BMC Helix ITSM CORE
  4. To update model settings in BMC Helix Agent Studio
  5. To verify the model ID in AI agents in BMC Helix Agent Studio
  6. To configure pass-through agents
  7. (Optional) To view information from BMC Helix AIOps by using the BMC Helix Ops Swarmer agent

Before you begin

Perform the following steps before deploying the fine-tuned model in your cloud.

Important

These are the minimum requirements to deploy the fine-tuned model in your cloud environment. For details, see the documentation for your selected cloud.

The process to deploy the model remains the same irrespective of your license entitlements. 

Google Cloud Platform requirements:

  • You have an active Google Cloud Platform subscription and a GCP project in a Vertex AI-supported region.
    All resources and artifacts must be kept in the same region. 
  • You have the Identity and Access Management (IAM) permissions to perform the following tasks:
    • Write to the target Google Cloud Storage bucket (roles/storage.admin).

    • Register and deploy models: Vertex AI User (roles/aiplatform.user) or an equivalent role.

    • Access the Artifact Registry or Container Registry (if using custom containers stored in GCP).

Local host requirements:

  • Google Cloud SDK is installed, and the project where you want to deploy the model is set.
  • The gsutil tool is available. You need this tool to upload the model artifacts to the Google Cloud Storage bucket.
  • Docker Engine is installed and running.
    A quick way to verify these requirements is sketched after this list.
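A minimal sketch for verifying these local host requirements, assuming a placeholder project ID:

  gcloud config set project <your-gcp-project>   # set the target GCP project for the deployment
  gsutil version                                 # confirm that the gsutil tool is available
  docker info                                    # confirm that Docker Engine is installed and running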

Microsoft Azure AI requirements:

  • You have an active Microsoft Azure subscription. 
  • You must be assigned the Contributor role, or another role with equivalent permissions, to upload resources by using the Microsoft Azure CLI.
  • You have the Standard_NC24ads_A100_v4 quota assigned to your region:
    1. In the Azure Machine Learning studio, click Quota.
    2. Click the subscription name for the subscription where you want to host the model.
    3. Select the region.
    4. Search and select Standard NCADSA100v4 Family Cluster Dedicated vCPUs.
    5. Click Request quota.
       If the unused quota is less than 24 cores, click Request quota and set the New cores limit to the current Usage plus 24. For example, if Usage is 0, set the new limit to 24. This quota is enough for a machine type with one A100 accelerator.
    6. Click Submit.
      After your quota limit is approved, you must assign it to the workspace later. 

Local host requirements:

  • The Microsoft Azure CLI is installed with the Azure Machine Learning (ml) extension.
  • Docker Engine is installed and running.

Task 1: To obtain a model from BMC Helix and enable BMC Helix ITSM CORE

Information

Why do I need BMC Helix ITSM CORE?

If you do not have a BMC Helix ITSM subscription, you still need specific components, such as BMC Helix Agent Studio, to configure the model and agents after deploying the model in your cloud. The BMC Helix ITSM CORE variant, included in your BMC Helix AIOps subscription at no additional cost, provides access to all necessary components.

Perform the following steps to obtain the fine-tuned model and get BMC Helix ITSM CORE enabled for your tenant: 

  1. To obtain the latest fine-tuned model for BMC Helix AIOps, contact BMC Helix Support.
    BMC Helix provides a Docker image tarball file with the model artifacts. You can select from the following fine-tuned foundation models:
    • QWEN
    • LLaMA
      During the term of your license, you can request a change of the fine-tuned foundation model by contacting BMC Helix Support.
  2. Ask BMC Helix Support to enable BMC Helix ITSM CORE.
    You receive a request (sub-processor notification) to approve the enablement. After you approve the request, BMC Helix ITSM CORE is deployed within 7 days, and you receive an email with the activation user details.

Task 2: To deploy the model in your cloud

Depending on the cloud environment, GCP Vertex AI or Microsoft Azure AI, perform the steps to deploy the fine-tuned model.  

Important

These are the minimum steps required to deploy the fine-tuned model in your cloud environment. For details, see the documentation for your selected cloud. 

On Google Cloud Platform Vertex AI, you import the model into the Vertex AI Model Registry and associate it with a container. From the Model Registry, you can deploy the imported model to an endpoint.
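If you prefer to script the import and deployment rather than use the console steps that follow, the gcloud commands below sketch the same flow. This is a minimal sketch, not the documented console path; the display names are placeholders, and the routes, port, machine type, and accelerator match the values used in the steps below.

  # Import the model into the Vertex AI Model Registry with its custom container
  gcloud ai models upload \
    --region=us-central1 \
    --display-name=helix-gpt-inference \
    --container-image-uri=gcr.io/my-gcp-project/helix-gpt-inference:<version_number> \
    --artifact-uri=gs://<your-bucket>/model/<helix_gpt_model_version> \
    --container-predict-route=/predictions \
    --container-health-route=/ping \
    --container-ports=8080 \
    --container-env-vars=DEPLOYMENT_SPEC=<deployment-spec-file-name>

  # Create an endpoint in the same region and deploy the model to it
  gcloud ai endpoints create --region=us-central1 --display-name=helix-gpt-endpoint
  gcloud ai endpoints deploy-model <endpoint-id> \
    --region=us-central1 \
    --model=<model-id> \
    --display-name=helix-gpt-deployment \
    --machine-type=a2-highgpu-1g \
    --accelerator=type=nvidia-tesla-a100,count=1 \
    --traffic-split=0=100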

To import the model

  1. On a local host, extract the model artifacts provided by BMC Helix: 
    tar -xzvf <helix_gpt_model_version>.tar.gz
  2. Upload the model to the Google Cloud Storage bucket:
    gsutil cp -r <helix_gpt_model_version> gs://<your-bucket>/model/
  3. Prepare the Custom Inference Docker image by loading it on your host:
    docker load -i /path/to/model_container.tar
  4. Push the Docker image to the Google Cloud container registry:
    docker tag <bmc-helix-image> <Google Cloud Container Registry tag>

    Example: docker tag helix-gpt-inference:<version_number> gcr.io/my-gcp-project/helix-gpt-inference:<version_number>

    docker push <Google Cloud Container Registry path>

    Example: docker push gcr.io/my-gcp-project/helix-gpt-inference:<version_number>

    Now, the model and its artifacts are available in the Google Cloud Model Store and the Google Cloud Container Registry.
  5. Navigate to Model Registry from the Vertex AI navigation menu.
  6. Click Import and then click Import as new model.
  7. On the Import Model page, provide the name of the model, select the region, and click Continue.
    Select the region that matches both your bucket's region and the Vertex AI regional endpoint that you are using. 
  8. Navigate to the Model settings page and select Import an existing container.
  9. In the Custom container settings section, click Browse in the Container image field and then click the Container Registry tab to select the container image.
  10. Click Browse in the Model artifact location field and select the Cloud Storage path to the directory that contains your model artifacts.
  11. In the Arguments section, specify the following parameters and click Continue:
    | Field | Description | Example value |
    | Environment variables | The file name of the deployment spec (without the file extension) included in the model artifacts. | DEPLOYMENT_SPEC=zhp52uqvaxvacmt4u2tbezojfucjkf4f-helix-gpt-v6-instruct |
    | Prediction route | The HTTP path to send prediction requests to. | /predictions |
    | Health route | The HTTP path to send health checks to. | /ping |
    | Port | The port number to expose from the container. | 8080 |
  12. On the Explainability options page, retain the default No explainability option, and click Import.
    After a few minutes, the model is displayed on the Models page.
    For more information about importing models in GCP Vertex AI, see the online documentation https://cloud.google.com/vertex-ai/docs/model-registry/import-model#custom-container.  

To deploy the model and create an endpoint

  1. Select the model and then click Deploy and test.
  2. Click Deploy to endpoint and then click Create new endpoint.
  3. Type the name of the endpoint and make sure that the region is the same as that of the model.
  4. Retain the default access setting, Standard, and click Continue.
  5. On the Model Setting Page, specify the values for the following fields and continue with default values for other fields:
    • Machine Type: a2-highgpu-1g, 12 vCPUs, 85 GiB Memory
    • Accelerator Type: NVIDIA Tesla A100
    • Accelerator Count: 1
  6. Click Continue and then click Deploy.
    After the model is deployed, note the following information. These parameters are required when you configure the model in BMC Helix Agent Studio in a later task. One way to look up these values from the command line is sketched after this table.
    | Field | Description |
    | ID | Contains the endpoint ID. |
    | Region | The region where the model is deployed. For example, us-central1. |
    | Project ID | Contains the project ID. For example, sso-gcp-dsom-sm-pub-cc39921. |
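If you need to look up these values later, the following sketch shows one way to retrieve them with the gcloud CLI (the region is a placeholder):

  # List Vertex AI endpoints in the region; the output includes the endpoint ID
  gcloud ai endpoints list --region=us-central1

  # Show the project ID that the CLI is currently configured to use
  gcloud config get-value project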

To obtain the API key for Google

For Google, only the API Key method for authentication is supported. You need the service account API key and other details to configure the model in BMC Helix Agent Studio in the next task.

Perform the following steps to obtain the API key for Google:
  1. Log on to the Google Cloud Console with the same credentials you used while deploying the model.
  2. From the main menu, select IAM & Admin > Service Accounts, and select the service account that you used to deploy the model.
  3. On the service account page, select the Keys tab, click Add key, and then select Create new key.
  4. Select the key type as JSON.
    The API key is downloaded.
    {
      "type": "service_account",
      "project_id": "redacted",
      "private_key_id": "redacted",
      "private_key": "redacted",
      "client_email": "redacted",
      "client_id": "redacted",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "redacted"
    }
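As a quick sanity check, you can exchange the downloaded service account key for an access token and call the endpoint's predict route. This is a minimal sketch; the key file name, project ID, region, endpoint ID, and request payload are placeholders, and the payload shape must match what the deployed container expects:

  # Authenticate as the service account by using the downloaded JSON key
  gcloud auth activate-service-account --key-file=key.json

  # Call the Vertex AI predict route of the deployed endpoint
  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://us-central1-aiplatform.googleapis.com/v1/projects/<project-id>/locations/us-central1/endpoints/<endpoint-id>:predict" \
    -d '{"instances": [{"input": "your input here"}]}'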

 

To deploy the model on Microsoft Azure AI

You can deploy the model by using Microsoft Azure Machine Learning studio; however, the following steps use the Azure command-line interface (CLI), which gives you more control over the deployment.

Perform the following steps:

  1. On a local host, extract the model artifacts provided by BMC Helix:
    tar -xzvf <helix_gpt_model_version>.tar.gz
  2. Log in to the Microsoft Azure CLI: 
    az login
  3. Create a resource group:
    az group create --name <resource-group-name> --location <azure-region>

  4. Create an Azure Container Registry:
    az acr create \
      --resource-group <resource-group-name> \
      --name <registry-name> \
      --sku Basic \
      --admin-enabled true
    | Parameter | Description |
    | --resource-group | The name of the resource group created in the previous step. |
    | --name | The name of the Azure Container Registry. |
    | --sku | The pricing tier: Basic, Standard, or Premium. Most users start with Basic. |
    | --admin-enabled | Set to true so that you can upload resources to the registry. |
  5. Tag the Docker image (the examples use helixgptreg as the registry name):
    docker tag <local-image>:<tag> helixgptreg.azurecr.io/vllm-vertex:<tag>
  6. Push the Docker image to the Azure Container Registry:
    docker push helixgptreg.azurecr.io/vllm-vertex:<tag>
  7. Create an Azure ML workspace:
    az ml workspace create \
      --name <workspace-name> \
      --resource-group <resource-group-name> \
      --location <region>
  8. Set Azure CLI default values:
    az configure --defaults workspace=<workspace-name> group=<resource-group-name> location=<region>
  9. Link the Azure Container Registry to the workspace:
    az ml workspace update \
      --name <workspace-name> \
      --resource-group <resource-group-name> \
      --container-registry /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/helixgptreg \
      --update-dependent-resources
  10. Create an online endpoint:
    az ml online-endpoint create --file endpoint.yml
    Sample endpoint.yml:
    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
    name: helix-gpt-v7-25-3-endpoint
    auth_mode: key
  11. Create an online deployment:
    az ml online-deployment create --file azure.yml --all-traffic
    Sample azure.yml:
    name: helix-gpt-v7-25-3-deploy
    endpoint_name: helix-gpt-v7-25-3-endpoint
    model:
      name: helix-gpt-v7-25-3
      path: ./helix-gpt-v7-25-3
      version: 1
    environment_variables:
      AIP_HEALTH_ROUTE: "/ping"
      AIP_PREDICT_ROUTE: "/score"
      MODEL_BASE_PATH: "/var/azureml-app/azureml-models/helix-gpt-v7-25-3/1/helix-gpt-v7-25-3"
      DEPLOYMENT_SPEC: "agaqnayhu2tstm7s3z5xmnmdugrzccsa-helix-gpt-v7_2"
      AIP_STORAGE_URI: "/var/azureml-app/azureml-models/helix-gpt-v7-25-3/1/helix-gpt-v7-25-3"
    environment:
      image: helixgptreg.azurecr.io/vllm-vertex:dfe4802-43
      inference_config:
        liveness_route:
          port: 8080
          path: /ping
        readiness_route:
          port: 8080
          path: /ping
        scoring_route:
          port: 8080
          path: /score
    request_settings:
      request_timeout_ms: 180000
    instance_type: Standard_NC24ads_A100_v4
    instance_count: 1
  12. (_Optional_) Get container logs:
    az ml online-deployment get-logs \
    --endpoint-name <name of the endpoint> \
    --name <name of the deployment>
  13. Set traffic:
    When you set traffic to 100%, all requests sent to the endpoint are routed to that single deployment.
    az ml online-endpoint update \
    --name <name of the endpoint> \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --traffic "<deployment-name>=100"
  14. Get the scoring URI:
    The scoring URI is the REST API endpoint that you use to send data to, and get predictions from, your deployed model.
    az ml online-endpoint show \
    --name <endpoint-name> \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --query "scoring_uri" \
    --output tsv

    Summary:
    | Component | Value |
    | Workspace | <workspace-name> |
    | Endpoint Name | <endpoint-name> |
    | Deployment Name | <deployment-name> |
    | Region | <region> |
    | Scoring URL | https://<endpoint-name>.westus.inference.ml.azure.com/score |
  15. Test the endpoint:
    az ml online-endpoint show --name helix-gpt-v7-25-3-endpoint
    az ml online-endpoint get-credentials --name helix-gpt-v7-25-3-endpoint
    Then test with curl (an alternative that uses the Azure ML CLI is sketched after these steps):
    curl -X POST <scoring-uri> \
      -H "Authorization: Bearer <key>" \
      -H "Content-Type: application/json" \
      -d '{"input": "your input here"}'
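As an alternative to the curl test, you can invoke the endpoint through the Azure ML CLI, which handles authentication for you. This is a minimal sketch; sample-request.json is a placeholder file whose payload must match what the deployed container expects:

  az ml online-endpoint invoke \
    --name helix-gpt-v7-25-3-endpoint \
    --request-file sample-request.json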

For more information about deploying models on Microsoft Azure AI, see the Microsoft Azure Machine Learning documentation.


Task 3: To load existing BMC Helix AIOps user data in BMC Helix ITSM CORE

After the BMC Helix SaaS operations team enables BMC Helix ITSM CORE, you get an activation email with details of a BMC Helix ITSM CORE activation user. You must use the BMC Helix ITSM CORE activation user to load the BMC Helix AIOps user data in BMC Helix ITSM. 

To load existing user data in BMC Helix ITSM, use the Data Management Tool in Mid Tier. For more information, see Loading Foundation data by using Data Management.

Important

While loading user data, make sure that the first name, last name, email address, and login ID of the users in BMC Helix AIOps and BMC Helix ITSM are the same. 

After the foundation data is imported into BMC Helix ITSM, it is automatically synchronized with BMC Helix Portal. 

Task 4: To update model settings in BMC Helix Agent Studio

After deploying the model, copy the endpoint ID and the API key and perform the following steps:

  1. Log in to BMC Helix Innovation Studio as a BMC Helix ITSM CORE activation user.
  2. Select Workspace > HelixGPT Agent Studio.
    Helix Agent Studio in IS Workspace
  3. Select the Model record definition, and click Edit data.
    Model page in BMC Helix Agent Studio
  4. On the Data editor (Model) page, search for the HelixGPT model name.
    The model ID AGGFV7W2IC923AST09XYST09XYXXJM is a unique identifier for the BMC Helix AIOps fine-tuned model.
    Update model settings in Agent Studio
  5. Select the model and click Edit.
  6. On the Edit record pane, turn off the Seed Data option, and provide the following information about the model that you deployed in your cloud environment:
    • Auth Type: A unique authorization key used to ensure secure communication between BMC HelixGPT and the model.
      Value: API Key (for both Google Cloud Platform Vertex AI and Microsoft Azure AI).
    • Status: Indicates the current operational state of the model. The available options are New, Assigned, Fixed, Rejected, and Close.
      Value: Select New.
    • Vendor: The name of the organization or provider offering the model.
      Value: Google Cloud Platform Vertex AI or Microsoft Azure AI.
    • API Endpoint Url: The specific URL to access the model.
      Value: For GCP Vertex AI, the endpoint ID; for Microsoft Azure AI, the Azure ML online endpoint.
    • API Key: A unique key provided by the model vendor to authenticate API requests.
      Value: For GCP Vertex AI, the service account API key generated in Task 2; the service account must have the roles/aiplatform.user role (for more information, see Vertex AI roles and permissions). For Microsoft Azure AI, the service account API key, encoded in Base64 format (a sketch for producing this encoding follows these steps).
    • Default Config: The predefined settings or parameters applied to the model. An administrator can modify the default configuration.
      GCP Vertex AI format:
      {
        "apiType": "vertexaimodelgarden",
        "deployedModelType": "HelixGPT-v6",
        "deploymentName": "<Google Cloud Project Name>",
        "location": "<Name of the Google Cloud region; example: us-central1>"
      }
      Microsoft Azure AI format:
      {"apiType":"azure_ml","deploymentName":"<Name of the deployment>"}
  7. Save changes.
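For the Microsoft Azure AI API Key value, which must be Base64-encoded, a minimal sketch for producing the encoding on a local host (the key value is a placeholder):

  # -n prevents a trailing newline from being encoded along with the key
  echo -n '<azure-endpoint-key>' | base64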


Task 5: To verify the model ID in AI agents in BMC Helix Agent Studio

After configuring the model settings, you can view the AI agents available for BMC Helix AIOps. By default, the model ID of the fine-tuned model is provided for the agents. AI agents can operate autonomously without human intervention, providing seamless interaction between AI agents and humans. For more information about AI agents, see AI agents in BMC HelixGPT.

To verify whether the AI agents for BMC Helix AIOps have the required model ID, perform the following steps: 

  1. Log in to BMC Helix Innovation Studio.
  2. Select Workspace > HelixGPT Agent Studio.
    Helix Agent Studio in IS Workspace
  3. Select the Agent record definition, and click Edit data.
    The following agents are available for BMC Helix AIOps:
    • AIOps Change Agent
    • AIOps Ask HelixGPT Agent
    • AIOps BAP Agent
    • AIOps Situation Summary Agent
    • AIOps Ranker Agent
    • AIOps Chat Agent
    • AIOps Inference Agent
    • Vulnerability Classification Agent
    • Best Action Recommendation
    • Change Risk Advisor
    • Log Insights
      Model page in BMC Helix Agent Studio
  4. In the Model ID field, verify that the unique model ID, AGGFV7W2IC923AST09XYST09XYXXJM, is displayed for all the agents.

Task 6: To configure pass-through agents

Pass-through agents in BMC HelixGPT are intelligent, generative AI entities that can automate tasks, resolve queries, and streamline workflows. These agents are available in BMC HelixGPT and are required for generating best action recommendations, getting insights into a problem from associated logs, and getting change risk assessments (only for controlled availability customers), all of which help resolve situations. For the Best Action Recommendation and Log Insights agents, you can also configure supported data sources. For more information, see Adding agents for BMC Helix AIOps.

  1. Log in to BMC Helix Innovation Studio.
  2. Click the Application launcher and select HelixGPT Agent Studio.
  3. In BMC HelixGPT Agent Studio, click Settings.
  4. Select HelixGPT > Agents > Pass-through Agents.
  5. Click Add Agent.
  6. From the list of agents, select one or more of the following options and click Add:
    • Best Action Recommendation
    • Change Risk Advisor
    • Log Insights
  7. Configure the connection for the pass-through agents:
    1. On the Edit Pass-through Agent panel, select the connection name.
      BMC HelixGPT Manager pass-through agents
    2. Click Edit configuration.
    3. Specify the configuration details based on the agent that you are editing.
      For BMC Helix ITSM, no configurations are required.
    4. Click Save.

For more information about configuring pass-through agents for Best Action Recommendation, Change Risk Advisor, and Log Insights, see Adding agents for BMC Helix AIOps.


(Optional) Task 7: To view information from BMC Helix AIOps by using the BMC Helix Ops Swarmer agent

Operators or SREs can use BMC Helix Ops Swarmer with Microsoft Teams to investigate and resolve situations faster without leaving the Teams interface.

To connect the BMC Helix Ops Swarmer agent to BMC Helix AIOps, perform the following steps:

  1. Log in to BMC Helix Innovation Studio.
  2. Select Workspace > HelixGPT Agent Studio.
  3. Click Visit deployed application.
    The Skills page in HelixGPT Agent Studio is displayed.  
  4. From the Application menu, select BMC Helix Ops Swarmer and then select Collaboration Agent or Collaboration Agent ServiceNow, depending on your chosen IT service management system. 
  5. Click Agents and then click the Collaboration Supervisor agent.
    Collaboration Supervisor Agent
     
  6. On the Edit agent page, navigate to the Sub-agents panel in the Properties section, and select AIOps Chat Agent.
  7. Save changes. 
    Operators or SREs can use the BMC Helix Ops Swarmer agent with Microsoft Teams to investigate and resolve situations. For more information, see Using the BMC Helix Ops Swarmer agent with Microsoft Teams to help resolve situations.

FAQ

Can I use my own large language model for the agentic AI capabilities in BMC Helix AIOps?

No. Currently, BMC Helix provides a fine-tuned model, which you can deploy on either of the supported platforms. 

Does BMC Helix provide its own cloud platform for the model?

No. You can deploy the fine-tuned model on Google Cloud Platform (GCP) Vertex AI or Microsoft Azure AI platforms.

Does BMC Helix release updated versions of the fine-tuned model?

Yes. To stay up to date on the latest releases of the fine-tuned AI model for BMC Helix AIOps, review the release notes for each version. When a new version is available, you can deploy it in your environment. For more information, see Upgrading to the latest AI model.

Where to go from here

Upgrading to the latest AI model 

Agentic AI capabilities in BMC Helix AIOps

 
