Troubleshooting LLM connectivity issues
Connectivity issues between BMC HelixGPT and LLM service providers (such as Microsoft Azure OpenAI) can occur because of network configuration errors, authentication failures, or unsupported streaming protocols. These issues can arise during initial setup, after infrastructure changes, or when service provider configurations are updated.
As a BMC HelixGPT administrator, perform the following tasks to troubleshoot both direct and proxy-based network paths:
- Validate network access to the LLM endpoint.
- Check streaming support compatibility.
- Confirm authentication and token configurations.
- Analyze logs for error patterns and connectivity failures.
Best practices for troubleshooting LLM connectivity issues
We recommend that you:
- Verify connectivity between BMC HelixGPT and the LLM, and whitelist the required IP addresses.
- Collect and validate the authentication parameters (OAuth or API key).
- Make an end-to-end API call by using Postman or a CLI, and use the parameters defined in these clients as a reference when you configure the LLM connection in BMC HelixGPT.
- Configure the AI service provider in BMC HelixGPT by using the collected details, and then troubleshoot any issues that you observe.
Issue symptoms
End users are unable to establish a connection between BMC HelixGPT and the LLM service provider. You might observe the following issues:
- Chat is not working or is failing to respond; for example:
  - Authentication errors
  - Service unavailable or timeout messages
  - Inconsistent or broken chat behavior
Issue scope
This issue can occur in the following scenarios:
- BMC HelixGPT is configured to connect to the LLM provider directly or through a proxy or gateway.
- Network connectivity, firewall rules, or proxy restrictions block access to LLM endpoints.
- Streaming is not enabled on the LLM provider side.
- Incorrect or incomplete authentication details (such as OAuth credentials or an API key) are configured.
- API endpoints are not configured correctly or not validated.
Resolution
Before you begin
Perform the following steps:
- Determine whether the customer uses a proxy or gateway between BMC HelixGPT and the LLM provider, or connects BMC HelixGPT directly to the LLM provider.
- Make sure that streaming is enabled. If streaming is not enabled on the customer side, Employee Navigator does not work.
Task 1: To validate network connectivity
Verify that BMC HelixGPT connects to the LLM service through the correct protocols, such as HTTPS.
If the LLM service is not public, obtain the BMC HelixGPT IP addresses and make sure that your environment's firewall allows them.
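In addition to Postman, you can script a quick reachability check. The following is a minimal sketch that verifies DNS resolution and the TLS handshake for an LLM endpoint; the hostname is a placeholder for your provider's endpoint:

```python
# Quick reachability check: DNS resolution plus a TLS handshake on port 443.
# The hostname is a placeholder for your LLM provider's endpoint.
import socket
import ssl

host = "your-resource.openai.azure.com"  # placeholder

# DNS resolution fails fast if the name cannot be resolved from this network.
addr = socket.gethostbyname(host)
print(f"{host} resolves to {addr}")

# A TLS handshake on 443 fails if a firewall or proxy blocks HTTPS to the endpoint.
context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("TLS established:", tls.version())
```

A DNS failure points to name resolution or whitelisting issues; a handshake failure usually points to a firewall, proxy, or TLS inspection device in the path.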
Task 2: To analyze the proxy or gateway behavior
If you are using a proxy or gateway, perform the following checks to make sure that no restrictions interfere with BMC HelixGPT's API calls:
- Identify blocked URI paths or unexpected parameters, and remove the blocks.
- Disable request body validations or filters that reject BMC HelixGPT's payloads. For an example of the body that BMC HelixGPT sends to an LLM, see the sketch after this list.
- Resolve anything that alters or restricts API traffic to ensure smooth operation.
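The following is a representative sketch of the kind of chat-completions body that BMC HelixGPT sends. The exact fields vary by provider and release, so treat the names and values here as illustrative; the temperature and top_p values mirror the provider configuration shown in Task 4, and stream is true by default, as described in Task 3:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "How do I reset my password?" }
  ],
  "temperature": 0,
  "top_p": 0.1,
  "stream": true
}
```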
Task 3: To verify whether streaming is enabled
BMC HelixGPT sets the streaming flag to true by default in every API call. To avoid failures, make sure that streaming is enabled on your end.
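A proxy that buffers or blocks server-sent events can break chat even when a one-off, non-streaming test call succeeds. The following is a minimal sketch of how to confirm streaming behavior outside BMC HelixGPT; the URL, token, and payload shape are placeholder assumptions to adjust for your provider:

```python
# Minimal sketch: confirm that the endpoint streams chunks when "stream" is true.
# The URL, token, and payload shape are placeholders; adjust them for your provider.
import requests

resp = requests.post(
    "https://your-resource.openai.azure.com/openai/deployments/HelixBMC/chat/completions?api-version=2024-06-01",
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
    json={"messages": [{"role": "user", "content": "ping"}], "stream": True},
    stream=True,  # keep the HTTP connection open and read the response incrementally
    timeout=60,
)
resp.raise_for_status()
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))  # streaming providers typically emit "data: {...}" chunks
```

If the response arrives only as a single final payload, or the call hangs and times out, a proxy or the provider configuration is likely interfering with streaming.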
Task 4: To collect, configure, and validate API connection details
- Collect all required API connection parameters, depending on whether you use OAuth or a direct API key or token.
- Collect the following details for the OAuth authentication type, and validate the connection by using a client such as Postman:
  - API Endpoint URL: URL for token generation
  - Auth Endpoint URL: OAuth token URL; for example, /oauth2/token
  - Client ID: Provided by the LLM service provider
  - Client Secret: Secret key paired with the Client ID
  - Deployment Name: Name of the deployment; for example, HelixBMC
  - Grant Type: Usually client_credentials
  - Scopes: Required access scopes
  - Vendor: Name of the LLM vendor; for example, Azure OpenAI
  - AUTH Type: OAuth2
  - Version: API version; for example, 2024-06-01 or 2024-06-01-preview
  - Extra Headers: Any custom headers required during API calls; for example:

```json
{
  "deploymentName": "HelixBMC",
  "apiType": "azure",
  "supportsJsonResponse": true,
  "temperature": 0,
  "top_p": 0.1,
  "read_timeout_in_seconds": 300,
  "maxNumOfImages": 3,
  "imageUploadSizeInMB": 2,
  "captureImageDetail": "low",
  "includeImgDataInPersistHistory": true,
  "customHeaders": [
    {
      "name": "YourHeaderName",
      "value": "YourHeaderValue"
    }
  ]
}
```

You can add multiple headers; for example:

```json
[
  { "name": "YourHeaderName", "value": "YourHeaderValue" },
  { "name": "Your2ndHeaderName", "value": "Your2ndHeaderValue" }
]
```
- Make sure that the endpoints respond correctly and that authentication works as expected.
- Confirm that the network path is open and properly configured.
Postman results depend on connectivity between your source environment and the destination environment.
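If you prefer a CLI to Postman, the following Python sketch exercises the collected parameters end to end: it requests a token from the Auth Endpoint URL by using the client_credentials grant and then calls the deployment. Every URL, name, and credential below is a placeholder to replace with your collected values, and the Azure OpenAI URL shape is only one common pattern; your vendor's may differ:

```python
# End-to-end validation sketch: fetch an OAuth token, then call the deployment.
# All values are placeholders; replace them with your collected parameters.
import requests

AUTH_URL = "https://login.example.com/oauth2/token"  # Auth Endpoint URL (placeholder)
API_URL = (
    "https://your-resource.openai.azure.com/openai/deployments/"
    "HelixBMC/chat/completions?api-version=2024-06-01"  # API endpoint (placeholder)
)

# Step 1: Request a bearer token by using the client_credentials grant.
token_resp = requests.post(
    AUTH_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "scope": "YOUR_SCOPES",
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: Call the deployment with a trivial prompt to confirm the full path.
chat_resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    json={"messages": [{"role": "user", "content": "ping"}], "stream": False},
    timeout=60,
)
print(chat_resp.status_code, chat_resp.text[:200])
```

A failure in step 1 indicates an authentication problem (Client ID, Client Secret, scopes, or the token URL); a failure in step 2 indicates an endpoint, deployment name, version, or network-path problem.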
Task 5: To view and download the BMC HelixGPT chat logs
- Configure the AI service provider in BMC HelixGPT by using the collected details and then troubleshoot any API issues.
- Make sure that logging is set to DEBUG mode for detailed insights. For instructions, see Enabling and downloading logs from BMC Helix Innovation Studio.
You can access logs by using the following URLs:
- View in a browser: https://<customer-innovationsuite-URL>/api/rx/application/chat/helixgpt/log
- Download as a file: https://<customer-innovationsuite-URL>/api/rx/application/chat/helixgpt/logs/file
- After downloading the logs, open them in Notepad++. To view them in a readable format, install the JSON Viewer plug-in, and then open the log file and use the plug-in to format the JSON.
- Investigate the logs based on the input query that you entered in the chat.
- (Optional but recommended) Revert the log level to ERROR.
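As an alternative to a browser and Notepad++, a small script can download the log file and pretty-print its JSON entries in one step. This is a sketch only; the URL placeholder and the Authorization header are assumptions, so supply whatever authentication your Innovation Suite environment actually requires:

```python
# Sketch: download the BMC HelixGPT log file and pretty-print any JSON lines.
# The URL placeholder and Authorization header are assumptions; supply the
# authentication that your Innovation Suite environment actually requires.
import json
import requests

url = "https://<customer-innovationsuite-URL>/api/rx/application/chat/helixgpt/logs/file"
resp = requests.get(url, headers={"Authorization": "Bearer YOUR_TOKEN"}, timeout=60)
resp.raise_for_status()

for line in resp.text.splitlines():
    line = line.strip()
    if not line:
        continue
    try:
        print(json.dumps(json.loads(line), indent=2))  # format JSON log entries
    except ValueError:
        print(line)  # keep non-JSON lines as-is
```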
Related topics