Best practices for designing prompts in BMC HelixGPT


Prompting is the method by which administrators communicate goals, instructions, and context to AI agents. For more information about creating and managing prompts, see Creating and managing prompts.

BMC HelixGPT supports two prompting paradigms—chained and agentic:

  • Chained prompting uses a predefined, linear sequence of prompts. Each step provides specific instructions and expected outcomes, making this approach predictable and well-suited for scripted workflows or decision trees.

  • Agentic prompting follows the Reasoning and Action (ReAct) paradigm. A single prompt defines the goal and offers high-level guidance. The agent then autonomously determines how to achieve the goal—choosing tools, retrieving data, and asking clarifying questions as needed. This paradigm enables dynamic, context-aware interactions that adapt to the user's needs and environments.

We recommend that you review the following best practices for designing prompts in BMC HelixGPT, with a primary focus on the agentic approach based on the ReAct paradigm. 

Fundamental best practices

The following foundational best practices focus on building clear, reusable, and goal-driven prompts that align with enterprise tools and standards.


Principle: Understand skills, agents, tools, and versioning.

Why it matters: BMC HelixGPT architecture is modular and versioned, offering precise control and traceability across applications.

Best practice: Understand the key principles of working with BMC HelixGPT.

A quick overview of the key principles:
  • Applications such as Employee Navigator, Service Collaborator, and Insight Finder connect to their agents via BMC HelixGPT skills. 
  • A skill is a configuration layer that links an application to a specific agent. Two types of skills are supported:
    • Agentic skills - Designed for autonomous reasoning and action. 
    • Chained skills - Based on predefined prompt sequences. 
  • Skills are specific to each consuming application and reflect the large language model (LLM) vendor and model they are designed for (for example, OpenAI GPT-4, Azure OpenAI, Anthropic Claude). 
  • Skills link to agents, and agents link to tools, forming a clear execution path. 
  • BMC HelixGPT ships a set of out-of-the-box (OOTB) skills, agents, and tools tested across multiple LLMs and vendors. 
  • Skill names reflect their intended LLM and vendor, helping administrators choose the right configuration for their environment. 
  • BMC HelixGPT Agent Studio includes versioning capabilities, allowing prompt designers to track changes, test updates, and roll back if needed. 

Important: While BMC HelixGPT agents leverage generic agentic AI libraries that support reasoning and graph-based execution, the graph execution logic is developed and maintained by BMC Helix and is not accessible or editable by administrators. Prompt designers can define goals and instructions for the Supervisor agent and its sub-agents, but they cannot create or modify execution graphs.
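
To make the execution path concrete, the following sketch models the skill → agent → tool relationship as plain Python data structures. This is an illustration only, not a BMC HelixGPT API; all class and field names are invented for the example.

```python
from dataclasses import dataclass, field

# Illustrative model of the skill -> agent -> tool execution path.
# These classes are NOT a BMC HelixGPT API; they only mirror the concepts.

@dataclass
class Tool:
    name: str            # for example, "create_incident"
    description: str     # when and why the agent should use it

@dataclass
class Agent:
    name: str
    goal: str            # the mission the prompt designer defines
    tools: list[Tool] = field(default_factory=list)

@dataclass
class Skill:
    name: str            # encodes application, LLM vendor, and model
    application: str     # the consuming application
    paradigm: str        # "agentic" or "chained"
    agent: Agent         # the agent this skill links to
    version: str         # Agent Studio tracks versions for rollback

# A hypothetical configuration, echoing the naming convention
# described above (skill names reflect the intended LLM and vendor):
skill = Skill(
    name="EmployeeNavigator_OpenAI_GPT4",
    application="Employee Navigator",
    paradigm="agentic",
    agent=Agent(
        name="IT Support Supervisor",
        goal="Guide employees through resolving IT issues.",
        tools=[Tool("search_knowledge", "Look up relevant articles first")],
    ),
    version="1.0.0",
)
```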

Principle: Design prompts around agent goals, not just tasks.

Why it matters: BMC HelixGPT agents are autonomous. They need a clear mission, not just a list of actions.

Best practice:

  • Define the agent’s purpose in business terms. 

  • Avoid procedural instructions unless necessary. 

  • Use ReAct-style prompting to encourage reasoning and planning. 

For example: 

Guide employees through resolving IT issues by using available knowledge, automation, and escalation tools.

Principle: Use modular prompt templates for reusability.

Why it matters: Reusable prompt templates reduce duplication and improve consistency across agents.

Best practice (see the sketch after this list):

  • Create prompt modules for standard functions (for example, approvals, request tracking, and catalog execution). 

  • Use placeholders for role, department, or tool names. 

  • Store templates in a version-controlled repository. 
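
For illustration, the sketch below renders a reusable prompt module with Python's standard string.Template. The module text and placeholder names are examples, not shipped BMC HelixGPT content.

```python
from string import Template

# A reusable prompt module with placeholders for role, department, and
# tool name. The text is illustrative, not shipped BMC HelixGPT content.
APPROVAL_MODULE = Template("""\
# Approvals
You assist $role employees in the $department department.
When a request needs sign-off, use the $approval_tool tool and report
the approval status back to the user in plain language.
""")

# Substitute environment-specific values when composing the full prompt.
prompt_section = APPROVAL_MODULE.substitute(
    role="manager",
    department="IT",
    approval_tool="get_approval_status",
)
print(prompt_section)
```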

Principle: Use markdown to structure prompts.

Why it matters: The prompt is not just a command; it’s a mission briefing. A well-structured prompt helps the LLM:

  • Understand the goal clearly. 

  • Reason through available tools. 

  • Interpret examples to align behavior. 

  • Adapt to enterprise-specific language and context. 

Markdown formatting improves LLM comprehension and parsing, especially when the LLM must autonomously plan actions by using the ReAct framework.

Best practice (see the sketch after this list):

  • Use markdown headers (#) to separate sections clearly. 

  • Use bullet points for instructions and tool usage. 

  • Include examples and terminology definitions. 
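
The following sketch shows what a markdown-structured agentic prompt might look like, held here in a Python string. The section names, tools, examples, and glossary entries are hypothetical; adapt them to your own agents and terminology.

```python
# A hypothetical markdown-structured prompt. Sections, tool names, and
# examples are illustrative; adapt them to your environment.
AGENT_PROMPT = """\
# Goal
Guide employees through resolving IT issues by using available knowledge,
automation, and escalation tools.

# Tools
- search_knowledge: look up knowledge articles; use this first.
- run_automation: execute approved remediation jobs for known issues.
- create_incident: escalate when knowledge and automation do not resolve.
If a tool fails, explain the failure and offer to escalate.

# Examples
- User: "My VPN keeps disconnecting."
  Agent: call search_knowledge("VPN disconnects") and summarize the top fix.

# Glossary
- GSD: Global Service Desk, the escalation team for unresolved issues.
"""
```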

 

Principle: Align prompts with available tools and capabilities.

Why it matters: Agents must know what tools they can use and how to use them.

Best practice:

  • Explicitly list tools in the prompt. 

  • Describe when and why each tool should be used. 

  • Include fallback logic for tool failures. 

Principle: Embed examples to guide agent behavior.

Why it matters: Examples help the LLM interpret ambiguous inputs and align with enterprise expectations.

Best practice:

  • Include two or three examples per prompt. 

  • Use realistic user inputs and expected agent actions. 

  • Update examples as workflows evolve. 

Principle: Teach the agent enterprise-specific language.

Why it matters: LLMs might not understand internal acronyms, jargon, or naming conventions.

Best practice:

  • Include a glossary section in the prompt. 

  • Define abbreviations, product names, and internal terms. 

  • Update terminology regularly. 

Principle: Encourage clarification and interaction.

Why it matters: Not all user inputs will be clear. Agents must ask questions.

Best practice:

  • Instruct agents to ask for clarification when needed. 

  • Allow agents to confirm actions before they are executed. 

Principle: Use prompt chaining only when necessary.

Why it matters: BMC HelixGPT supports both chained and agentic prompting. Chaining is useful for linear workflows but limits autonomy.

Best practice (see the sketch after this list):

  • Prefer agentic prompting for dynamic, goal-driven tasks. 

  • Use chaining for multi-step processes with strict order (for example, onboarding). 

  • Document the logic behind each chained step. 
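
To contrast the two paradigms, here is a minimal chained-flow sketch: an ordered list of step prompts executed in a fixed sequence, with the rationale for each step documented alongside it. The steps and the call_llm stub are hypothetical.

```python
# A minimal chained-prompting sketch: steps run in a strict order, and the
# rationale behind each step is documented next to it. Hypothetical content.
ONBOARDING_CHAIN = [
    # (step prompt, why this step exists)
    ("Collect the new hire's name, role, and start date.",
     "Later steps need identity details, so this must run first."),
    ("Request hardware from the catalog for the stated role.",
     "Hardware lead time is longest, so order it early."),
    ("Grant standard application access for the role.",
     "Access depends on the role collected in step 1."),
]

def call_llm(prompt: str, context: dict) -> str:
    # Stand-in for the model call; replace with your actual integration.
    return f"[model output for: {prompt}]"

context: dict = {}
for step, rationale in ONBOARDING_CHAIN:
    # Each step sees the outputs of the previous steps, in a fixed order.
    context[step] = call_llm(step, context)
```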

Principle: Validate prompt behavior by using built-in tools before you deploy.

Why it matters: Testing ensures reliability and prevents unintended behavior in production. Even small prompt changes can lead to unexpected behavior.

Best practice:

  • Use BMC HelixGPT Agent Studio’s built-in test tool to validate prompt behavior in a controlled environment. 

    This tool keeps all test data within BMC HelixGPT and is safe for enterprise use. For detailed instructions, see the To test a skill procedure in Creating and managing skills.

Principle: Validate prompt behavior across LLMs.

Why it matters: BMC HelixGPT supports multiple LLMs, and prompt behavior might vary between them.

Best practice (see the sketch after this list):

  • Test prompts across supported models (for example, OpenAI, Azure, Anthropic). 

  • Use sandbox environments for validation. 

  • Monitor performance and adjust prompts as needed. 

  • Monitor agent logs for unexpected actions. 
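
A small harness like the sketch below can help compare one prompt across models in a sandbox. The invoke function is a placeholder for however your environment calls each LLM, and the model names are examples.

```python
# Hypothetical cross-model smoke test: run one prompt against each
# configured model and record the answers for side-by-side review.
MODELS = ["openai-gpt-4", "azure-openai-gpt-4", "anthropic-claude"]

def invoke(model: str, prompt: str) -> str:
    # Placeholder: substitute your environment's actual model call.
    return f"[{model} response]"

PROMPT = "A user reports that their VPN keeps disconnecting. What do you do?"

results = {model: invoke(model, PROMPT) for model in MODELS}
for model, answer in results.items():
    print(f"{model}: {answer[:80]}")  # review consistency before deploying
```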

Principle: Maintain prompt governance and versioning.

Why it matters: Prompts evolve with business needs and environment updates. Prompt drift can lead to inconsistent agent behavior.

Best practice:

  • Use version control for all prompt updates. 

  • Document changes and rationale. 

  • Archive deprecated prompts for reference. 

Principle: Align prompts with business KPIs.

Why it matters: Prompts must drive measurable outcomes.

Best practice:

  • Tie agent goals to metrics like resolution time, ticket deflection, or user satisfaction. 

  • Collaborate with business stakeholders during prompt design. 

Principle: Customize agents for personas and use cases.

Why it matters: Different roles require different agent behaviors.

Best practice:

  • Create specialized agents for various use cases, including HR, IT, and Facilities.

  • Use metadata (such as role, location, and line of business) to tailor responses. 

  • Share prompt templates across teams for consistency. 

Advanced prompting techniques

Learn advanced prompting strategies and architectural considerations for BMC HelixGPT. These techniques are designed to help maximize value, scalability, and alignment with enterprise goals.


Principle: Build a library of well-working prompt examples.

Why it matters: A curated set of proven prompt examples helps teams accelerate deployment, maintain consistency, and reduce trial and error during agent design.

Best practice:

  • Maintain a shared repository of well-performing prompts across use cases (for example, approvals, troubleshooting, onboarding). 

  • Treat these examples as starting points, like templates, that you can adapt to different roles, departments, or workflows. 

  • Include annotations, metadata, and version history to track changes and effectiveness. 

Principle: Leverage metadata for contextual prompting.

Why it matters: Agents can use metadata (for example, user role, location, department) to tailor responses and improve relevance.

Best practice (see the sketch after this list):

  • Pass metadata into the agent context to influence tool selection and response tone. 

  • Use metadata to filter knowledge sources or catalog items. 

  • Adjust prompts dynamically based on user attributes. 
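
As an illustration, user metadata can be rendered into a context block that is prepended to the agent's prompt. The field names below are examples only.

```python
# Illustrative only: merge user metadata into the agent's prompt context
# so responses and tool choices can be tailored to the person asking.
def build_context_block(user: dict) -> str:
    lines = ["# User context"]
    for key in ("role", "location", "department"):
        if key in user:  # include only the attributes that are known
            lines.append(f"- {key}: {user[key]}")
    return "\n".join(lines)

print(build_context_block(
    {"role": "HR manager", "location": "Amsterdam", "department": "HR"}
))
```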

Principle: Enable dynamic prompt composition.

Why it matters: Dynamic prompts adapt to user input, system state, and available tools.

Best practice (see the sketch after this list):

  • Use conditional logic to adjust prompt structure based on context. 

  • Include fallback instructions if data or tools are unavailable. 

  • Allow agents to replan if initial actions fail. 
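
The sketch below illustrates conditional composition: sections are included only when the relevant tools are available, a fallback instruction is added otherwise, and a replanning instruction is always present. All names and wording are hypothetical.

```python
# Hypothetical dynamic composition: adjust the prompt to system state.
def compose_prompt(goal: str, available_tools: list[str]) -> str:
    sections = [f"# Goal\n{goal}"]
    if available_tools:
        tool_lines = "\n".join(f"- {t}" for t in available_tools)
        sections.append(f"# Tools\n{tool_lines}")
    else:
        # Fallback instruction when no tools are reachable.
        sections.append("# Fallback\nNo tools are available; answer from "
                        "knowledge only and offer to create a ticket later.")
    # Always allow the agent to replan after a failed action.
    sections.append("# Replanning\nIf an action fails, explain why and "
                    "try an alternative approach before giving up.")
    return "\n\n".join(sections)

print(compose_prompt("Resolve IT issues.", ["search_knowledge"]))
```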

Principle: (Optional) Integrate with LangSmith for external testing.

Why it matters: LangSmith provides advanced observability and debugging for LLM-based agents.

Best practice:

  • BMC HelixGPT supports integration with LangSmith for testing how agents interact with the LLM. 
    For the setup instructions, see Tracing LLM calls with LangSmith.

  • This tool is helpful for advanced debugging, trace analysis, and performance tuning.  

Warning: When you use LangSmith, test data is sent externally and might leave the BMC HelixGPT environment. Use it only with non-sensitive data and in accordance with your organization’s data policies.
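
For orientation, LangSmith tracing is conventionally enabled through environment variables such as the ones below; verify the exact procedure in Tracing LLM calls with LangSmith before enabling it.

```python
import os

# Commonly used LangSmith tracing variables. Confirm the supported setup in
# the "Tracing LLM calls with LangSmith" instructions for your release.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"
os.environ["LANGCHAIN_PROJECT"] = "helixgpt-prompt-tests"  # non-sensitive sandbox project
```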

Governance and maintenance techniques

As BMC HelixGPT agents become embedded in enterprise workflows, maintaining prompt quality, agent behavior, and skill configurations becomes critical. Learn the following best practices for governing the lifecycle of agentic AI components in a secure, scalable, and auditable way. 


Principle: Establish prompt lifecycle management.

Why it matters: Prompts evolve as business needs, tools, and LLM capabilities change.

Best practice (see the sketch after this list):

  • Define a lifecycle for each prompt: draft → test → deploy → monitor → retire. 

  • Use versioning in BMC HelixGPT Agent Studio to track changes and roll back if needed. 

  • Document the rationale for each change, including business drivers and expected outcomes. 
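
One lightweight way to make the lifecycle explicit is to encode its stages and allowed transitions, as in this illustrative sketch:

```python
from enum import Enum

# Illustrative prompt lifecycle: draft -> test -> deploy -> monitor -> retire.
class Stage(Enum):
    DRAFT = "draft"
    TEST = "test"
    DEPLOY = "deploy"
    MONITOR = "monitor"
    RETIRE = "retire"

ALLOWED = {
    Stage.DRAFT: {Stage.TEST},
    Stage.TEST: {Stage.DEPLOY, Stage.DRAFT},     # failed tests go back to draft
    Stage.DEPLOY: {Stage.MONITOR},
    Stage.MONITOR: {Stage.DRAFT, Stage.RETIRE},  # drift triggers a new draft
    Stage.RETIRE: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    # Reject transitions that skip or reverse the defined lifecycle.
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target
```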

Principle: Monitor agent behavior and feedback loops.

Why it matters: Continuous feedback ensures agents remain effective and aligned with user expectations.

Best practice:

  • Monitor agent logs and user feedback to identify patterns of drift or failure. 

  • Use built-in analytics (where available) to track prompt performance. 

  • Establish a feedback loop with business stakeholders to validate agent behavior. 

Principle: Maintain a centralized prompt and skill registry.

Why it matters: A shared registry prevents duplication, promotes reuse, and supports compliance.

Best practice (see the example after this list):

  • Maintain a centralized repository of all active prompts, agents, and skills. 

  • Include metadata such as owner, version, use case, and LLM compatibility. 

  • Tag prompts by domain (for example, HR, IT, Facilities) and application (for example, Employee Navigator, Service Collaborator). 
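
A registry entry might capture metadata such as the following; the field names mirror the recommendations above, and the values are invented examples.

```python
# Example registry entry; field names mirror the recommendations above.
registry_entry = {
    "name": "EmployeeNavigator_OpenAI_GPT4",
    "owner": "it-ai-team@example.com",
    "version": "2.1.0",
    "use_case": "IT self-service troubleshooting",
    "llm_compatibility": ["OpenAI GPT-4", "Azure OpenAI"],
    "domain": "IT",                       # for example, HR, IT, Facilities
    "application": "Employee Navigator",  # the consuming application
    "status": "deployed",
}
```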

Principle: Govern skills, agents, and tool linkages.

Why it matters: Skills are the entry point for applications and must be tightly controlled.

Best practice:

  • Understand the relationship: skills → agents → tools. 

  • Skills are application-specific and can be agentic or chained. 

  • BMC provides out-of-the-box skills tested across multiple LLMs and vendors. 

  • Skill names reflect the intended LLM and vendor (for example, EmployeeNavigator_OpenAI_GPT4). 

  • We recommend that only authorized users modify skill-to-agent mappings or tool configurations. 

Principle: Enforce access control and change management.

Why it matters: Prompt and agent changes can affect thousands of users.

Best practice:

  • Use role-based access control to restrict who can edit prompts, agents, and skills. 

  • Require peer review or approval workflows for production changes. 

  • Log all changes for auditability and compliance. 

Principle: Align governance with enterprise standards.

Why it matters: AI governance must align with broader IT and compliance frameworks.

Best practice:

  • Integrate prompt and agent governance into your organization’s existing change management and audit processes. 

  • Ensure that all AI components comply with internal standards for security, privacy, and data handling. 

  • Periodically review governance policies to reflect evolving AI capabilities and regulations. 

Principle: Manage agent upgrades across releases.

Why it matters: BMC Helix regularly ships updated agents and skills with each BMC HelixGPT version. Customers who customized agents in previous versions must ensure that those changes are preserved and compatible after an upgrade.

Best practice (see the sketch after this list):

  • Merging new agents with existing customizations is a manual process. 

  • Administrators must compare the newly shipped prompts with their existing customized prompts and decide what to retain, modify, or discard. 

  • Managing upgrades requires careful review to ensure consistency, avoid regressions, and preserve business-specific logic. 
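
Because the merge is manual, a plain-text diff is often the quickest way to see what changed between a newly shipped prompt and your customized copy. The sketch below uses difflib from the Python standard library; the file paths are examples.

```python
import difflib
from pathlib import Path

# Compare the newly shipped prompt with your customized copy so you can
# decide, line by line, what to retain, modify, or discard after upgrade.
shipped = Path("shipped_prompt.md").read_text().splitlines()        # example path
customized = Path("customized_prompt.md").read_text().splitlines()  # example path

for line in difflib.unified_diff(
    shipped, customized,
    fromfile="shipped (new release)",
    tofile="customized (current)",
    lineterm="",
):
    print(line)
```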

Troubleshooting

Consult the following list to quickly diagnose common issues with agent behavior and apply targeted corrective actions:


Issue: The agent gives vague or irrelevant answers.
Action: Review prompt clarity and goal definition. Add examples and terminology.

Issue: The agent fails to use available tools.
Action: Check the tool list in the prompt. Make sure tool names match the system configuration.

Issue: Prompt changes cause regressions.
Action: Use BMC HelixGPT Agent Studio versioning to compare and roll back.

Issue: The agent behaves unexpectedly after an upgrade.
Action: Manually compare the new out-of-the-box agents with your customized versions, and document the differences.

Issue: Functionality is missing after an upgrade.
Action: Manually verify that the new versions of the prompts are in use, and review the documentation for configuring and enabling new functionality. Most new functionality is opt-in and must be enabled by administrators.

Issue: LangSmith integration exposes sensitive data.
Action: Use LangSmith only with non-sensitive test cases. See Tracing LLM calls with LangSmith for setup guidance.

Prompt review and deployment checklist

Use this checklist to validate prompts before deploying them into production environments or assigning them to BMC HelixGPT skills. 

Prompt design
  • Is the goal clearly defined and aligned with business outcomes? 

  • Are instructions concise, actionable, and formatted for clarity? 

  • Are tools explicitly listed with usage guidance? 

  • Are examples included to guide agent behavior? 

  • Is enterprise-specific terminology defined (for example, acronyms, product names)? 

  • Is the prompt structured using markdown for better LLM parsing? 

Agent behavior
  • Does the agent ask for clarification when needed? 

  • Does the agent confirm actions before executing them? 

  • Are fallback paths defined if tools or data are unavailable? 

  • Is the prompt designed for autonomous reasoning (ReAct style)? 

Testing and validation
  • Has the prompt been tested using the BMC HelixGPT Agent Studio test tool? 

  • If using Langsmith, was only non-sensitive data used? 

  • Were multiple LLMs tested for compatibility and consistency? 

  • Were edge cases and ambiguous inputs evaluated? 

Versioning and governance
  • Is the prompt versioned in BMC HelixGPT Agent Studio? 

  • Are changes documented with rationale and expected impact? 

  • Is the prompt stored in a centralized registry or repository? 

  • Has the prompt been reviewed by peers or stakeholders? 

Skill and agent mapping
  • Is the prompt assigned to the correct agent? 

  • Is the agent linked to the correct skill for the consuming application? 

  • Does the skill name reflect the intended LLM vendor and model? 

  • Are tool linkages validated and up to date? 

Compliance and access control
  • Are only authorized users allowed to modify the prompt? 

  • Are changes logged for auditability? 

  • Does the prompt comply with internal data handling and security policies? 
