Troubleshooting LLM connectivity issues


Connectivity issues between BMC HelixGPT and LLM service providers (such as Microsoft Azure OpenAI) can occur because of network configuration errors, authentication failures, or unsupported streaming protocols. These issues can arise during initial setup, after infrastructure changes, or when service provider configurations are updated.

As a BMC HelixGPT administrator, perform the following tasks to troubleshoot both direct and proxy-based network paths:

  1. Validate network access to the LLM endpoint.
  2. Check streaming support compatibility.
  3. Confirm authentication and token configurations.
  4. Analyze logs for error patterns and connectivity failures.

Best practices for troubleshooting LLM connectivity issues

We recommend that you perform the following actions:

  • Verify the connectivity between BMC HelixGPT and the LLM, and perform IP whitelisting.
  • Collect and validate the authentication parameters (OAuth or API key).
  • Make an end-to-end API call by using Postman or a CLI, and use the parameters defined in these clients as a reference to configure the LLM connection in BMC HelixGPT.
  • Configure the AI service provider in BMC HelixGPT by using the collected details, and then troubleshoot any issues that you observe.

Issue symptoms

End users cannot use features that require a connection between BMC HelixGPT and the LLM service provider. You might observe the following issues:

  • Chat is not working or is failing to respond.

  • Authentication errors

  • Service unavailable or timeout messages

  • Inconsistent or broken chat behavior

Issue scope

This issue can occur in the following scenarios:

  • BMC HelixGPT is configured to connect directly or via a proxy or gateway to the LLM provider.
  • Network connectivity, firewall rules, or proxy restrictions block access to LLM endpoints.
  • Streaming is not enabled on the LLM provider side.
  • Incorrect or incomplete authentication details (such as an OAuth or API key) are configured.
  • API endpoints are not configured correctly or not validated.

Resolution

Before you begin

Perform the following steps:

  • Determine if the customer uses a proxy or gateway between BMC HelixGPT and the LLM provider or if they connect BMC HelixGPT directly to the LLM provider.

  • Make sure that streaming is enabled.
    If streaming is not enabled on the customer side, the Employee navigator does not work.

Task 1: To validate the network connectivity

  1. Verify that BMC HelixGPT connects to the LLM service through the correct protocols, such as HTTPS.

  2. If the LLM service is not public, obtain the BMC HelixGPT IP addresses and make sure that your environment firewall allows them.
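To check the network path from a script rather than from Postman, you can test DNS resolution, the TCP connection, and the TLS handshake in one step. The following Python sketch is illustrative only; the endpoint URL is a placeholder for your LLM provider's URL, not a BMC-provided value:

```python
import socket
import ssl
from urllib.parse import urlparse

def check_https_reachability(endpoint_url: str, timeout: float = 5.0) -> bool:
    """Resolve the host, open a TCP connection, and complete a TLS handshake.

    Returns True if the endpoint accepts HTTPS connections, False otherwise.
    """
    parsed = urlparse(endpoint_url)
    host = parsed.hostname
    port = parsed.port or 443
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                # A completed handshake confirms the network path and TLS setup.
                return tls.version() is not None
    except (OSError, ssl.SSLError):
        # DNS failure, firewall block, refused connection, or TLS error.
        return False

# Placeholder endpoint; replace with your LLM provider's URL:
# check_https_reachability("https://your-resource.openai.azure.com")
```

A `False` result from this check typically points to a firewall or proxy restriction rather than a BMC HelixGPT configuration issue.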

Tip

If you see a connection failure message even when the configuration is correct, check the logs to verify connectivity. You can also test connectivity by creating a new skill that references the model you configured.

Task 2: To analyze the proxy or gateway behavior

If you are using a proxy or gateway, perform the following checks and make sure that there are no restrictions that interfere with BMC HelixGPT’s API calls.

  1. Identify and remove blocked URI paths or unexpected parameters.
  2. Disable request body validations or filters that reject BMC HelixGPT’s payloads.
  3. Resolve any interference that alters or restricts API traffic so that operations run smoothly.
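For reference, the request body that BMC HelixGPT sends follows the chat-completions format of the configured provider. The following Python sketch builds an illustrative OpenAI-style payload; the field names and values are assumptions for illustration, not the exact body BMC HelixGPT emits. A proxy that rejects unknown fields or rewrites this JSON breaks the call:

```python
import json

# Illustrative OpenAI-style chat payload (assumed shape, not the exact
# BMC HelixGPT body). Note the "stream" flag, which is always true.
payload = {
    "model": "gpt-4o",  # assumption: the deployment/model name varies
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Reset my VPN password."},
    ],
    "temperature": 0,
    "stream": True,
}

body = json.dumps(payload)
# A gateway that strips or rejects any of these fields causes chat failures.
```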

Task 3: To verify whether streaming is enabled

Important

Streaming is required for real-time chat and conversational features.

BMC HelixGPT sets the streaming flag to true by default in every API call. To avoid failures, make sure that streaming is enabled on your end.

Task 4: To collect, configure, and validate API connection details

  1. Collect all required API connection parameters, depending on whether you use OAuth or a direct API key or token.
  2. Collect the following details for the OAuth authentication type, and validate the connection by using a client such as Postman:
    • API Endpoint URL: URL for token generation
    • Auth Endpoint URL: OAuth token URL; for example, /oauth2/token
    • Client ID: Provided by the LLM service provider
    • Client Secret: Secret key paired with the Client ID
    • Deployment Name: Name of the deployment; for example, HelixBMC
    • Grant Type: Usually client_credentials
    • Scopes: Required access scopes
    • Vendor: Name of the LLM vendor; for example, Azure OpenAI
    • AUTH Type: OAuth2
    • Version: API version; for example, 2024-06-01 or 2024-06-01-preview
    • Extra Headers: Any custom headers required during API calls; for example:

      {
        "deploymentName": "HelixBMC",
        "apiType": "azure",
        "supportsJsonResponse": true,
        "temperature": 0,
        "top_p": 0.1,
        "read_timeout_in_seconds": 300,
        "maxNumOfImages": 3,
        "imageUploadSizeInMB": 2,
        "captureImageDetail": "low",
        "includeImgDataInPersistHistory": true,
        "customHeaders": [
          { "name": "YourHeaderName", "value": "YourHeaderValue" }
        ]
      }

      You can add multiple headers; for example:

      [
        { "name": "YourHeaderName", "value": "YourHeaderValue" },
        { "name": "Your2ndHeaderName", "value": "Your2ndHeaderValue" }
      ]

  3. Make sure that the endpoints respond correctly and that authentication works as expected.
  4. Confirm that the network path is open and properly configured.
    Postman results depend on connectivity between your source environment and the destination environment.
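The Postman validation can also be scripted. The following Python sketch sends a client_credentials token request by using only the standard library; the endpoint, tenant, scope, and credential values are placeholders that you must replace with the parameters you collected above:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Placeholder values for illustration; substitute your collected parameters.
auth_endpoint = "https://login.microsoftonline.com/<tenant-id>/oauth2/token"
form = {
    "grant_type": "client_credentials",
    "client_id": "<your-client-id>",
    "client_secret": "<your-client-secret>",
    "scope": "<your-required-scope>",
}
body = urlencode(form).encode("utf-8")
request = Request(
    auth_endpoint,
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# Uncomment to send; a successful response contains a JSON access_token.
# with urlopen(request, timeout=30) as response:
#     print(response.read().decode("utf-8"))
```

If the token request succeeds from this script or from Postman but fails from BMC HelixGPT, compare the parameters character by character with the AI service provider configuration.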

Task 5: To view and download the BMC HelixGPT chat logs

  1. Configure the AI service provider in BMC HelixGPT by using the collected details and then troubleshoot any API issues.
  2. Make sure logging is set to DEBUG mode for detailed insights. For instructions, see Enabling and downloading logs from BMC Helix Innovation Studio .
    You can access logs by using the following URLs:
    • View in a browser: https://<customer-innovationsuite-URL>/api/rx/application/chat/helixgpt/log 
    • Download as a file: https://<customer-innovationsuite-URL>/api/rx/application/chat/helixgpt/logs/file 
  3. After downloading the logs, open them in Notepad++.
    To view them in a readable format, install the JSON Viewer plug-in. After installing it, open the log file and use the plug-in to format the JSON properly. 

  4. Investigate the logs based on the input query that you entered in the chat.
  5. (Optional but recommended) Revert the log level to ERROR. 
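If you prefer to script the log download instead of using a browser, the following standard-library sketch shows one approach. The Authorization header shown is an assumption for illustration; supply whatever authentication your BMC Helix Innovation Studio instance requires:

```python
from urllib.request import Request, urlopen

# Placeholder host; use your Innovation Suite URL.
base_url = "https://<customer-innovationsuite-URL>"
log_file_url = base_url + "/api/rx/application/chat/helixgpt/logs/file"

# Assumption: the endpoint requires an authenticated session; the header
# name and token format depend on your environment.
request = Request(log_file_url, headers={"Authorization": "<your-token>"})
# Uncomment to download the log file:
# with urlopen(request, timeout=60) as response:
#     with open("helixgpt.log", "wb") as out:
#         out.write(response.read())
```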


BMC HelixGPT 25.4