Updating the configuration parameters of a skill


After importing a skill, you can configure its parameters to align with your specific needs. If you don't update a parameter, its default value is used.

 To update the configuration parameters of a skill

  1. Log in to BMC Helix Innovation Studio.
  2. Select Workspace > HelixGPT Agent Studio.
  3. Select the Skill record definition, and click Edit data.
  4. On the Data editor (Skill) page, select a skill, and click Edit.
  5. In the Edit record pane, in the Configuration field, view and update the values for the following parameters: 

    Each parameter is listed below with its description and default value.

    conversationBufferMemory

    Maintains a record of recent conversations, which is used whenever the assistant makes an LLM call.

    Default value: 30
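As an illustration only (not BMC HelixGPT's implementation), a 30-turn conversation buffer can be sketched as a bounded queue whose oldest entries fall off:

```python
from collections import deque

class ConversationBufferMemory:
    """Keeps only the most recent conversation turns.

    Illustrative sketch of the kind of buffer the conversationBufferMemory
    parameter configures; the class and method names are assumptions.
    """

    def __init__(self, max_turns=30):
        self.turns = deque(maxlen=max_turns)  # oldest turns drop off automatically

    def add(self, role, text):
        self.turns.append((role, text))

    def as_prompt_context(self):
        # Joined transcript passed along with every LLM call.
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = ConversationBufferMemory(max_turns=30)
for i in range(35):
    memory.add("user", f"message {i}")
# Only the 30 most recent messages survive.
```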

    DataConnection_StripUrlsFromSource

    Specifies whether URLs are removed from the source data before it is processed or displayed.

    Valid values:

    • True: All URLs detected in the source content are stripped out to prevent unnecessary link exposure, improve readability, or ensure data privacy.
    • False: URLs remain intact in the source data.
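The effect of this parameter can be sketched with a simple URL filter; the regular expression and function below are illustrative assumptions, not the product's code:

```python
import re

# Hypothetical illustration of DataConnection_StripUrlsFromSource:
# when stripping is enabled, URLs are removed from the source text.
URL_PATTERN = re.compile(r"https?://\S+")

def strip_urls(text, strip=True):
    if not strip:                 # False: URLs remain intact
        return text
    return URL_PATTERN.sub("", text).strip()

source = "See https://example.com/kb/123 for details"
cleaned = strip_urls(source, strip=True)
```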

    degreeOfDiversity

    Defines the level of variation applied when generating or selecting results.

    A higher value increases diversity, producing a wider range of outputs or responses, while a lower value yields more focused and consistent results.

    This parameter helps strike a balance between creativity and precision in data processing or response generation.

     

    Default value: 0.5

    disableDigression

    Ensures that the generated response stays focused on the given topic.

    When this parameter is set to true, the digression text is excluded from the service prompt. The digression text is also excluded when the skill does not use a knowledge prompt.

    Digression text in the service prompt

    Digressions occur only when a user asks a question. For all other cases, digressions are not true.
      Only if the user's response is a question that starts with "How" or "Why", you must respond with the following JSON, replacing the "inputUserText" attribute.
      Don't change other attributes.


        {{
          "digression" : "TRUE",
          "nextPromptType" : "Knowledge",
          "inputUserText": "user entered text here"
        }}

    Default value: False
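A client that consumes the digression JSON shown above might detect it as follows; only the JSON shape comes from the prompt text, while the handler and routing logic are hypothetical:

```python
import json

def handle_model_reply(reply):
    """Route a model reply: either a plain answer or the digression JSON."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return ("answer", reply)  # not JSON: treat as a plain answer
    if isinstance(data, dict) and data.get("digression") == "TRUE":
        # Hand the original user text to the next prompt type (Knowledge).
        return (data["nextPromptType"], data["inputUserText"])
    return ("answer", reply)

route, text = handle_model_reply(
    '{"digression": "TRUE", "nextPromptType": "Knowledge", '
    '"inputUserText": "How do I reset my password?"}'
)
```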

    enableRephrasePrompt

    Enables or disables the option for the chatbot to automatically rephrase prompts.

    When set to true, the chatbot can rephrase user inputs to improve clarity.

    Important

    For Gemini and Llama models, the default value for the enableRephrasePrompt parameter is set to true. However, we recommend setting the value to false if you want to initiate a new topic.

    Default value: True

    excludeDescriptionQuestions

    Use this parameter only when you configure a skill for BMC Helix Digital Workplace.

    When set to true, the prompt does not generate any description questions.

    Default value: False

    feedbackFilteringRelevancyThreshold

    Defines the lowest relevance score a Retrieval Augmented Generation (RAG) document must have to be used by the model.

    The value must be between 0 and 1.

    Default value: 0.8

    feedbackNegativeWeight

    Defines the configurable weight to be applied to negative feedback counts when calculating relevancy.

    The value must be between 1 and 5.

    Default value: 1

    feedbackPositiveWeight

    Defines the configurable weight to be applied to positive feedback counts when calculating relevancy.

    The value must be between 1 and 5.

    Default value: 2

    feedbackQuestionsSimilarityThreshold

    Defines how similar your question must be to the past ones for the system to use them. Only the feedback from similar past questions is used to improve the RAG document results.

    The value must be between 0 and 1.

    Default value: 0.75

    filterContextBasedOnFeedback

    Lets you filter retrieved RAG documents based on past user feedback to improve relevance.

    Default value: False
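The exact relevancy formula is not documented here; the sketch below only illustrates how the feedback parameters could interact. The numeric constants mirror the documented defaults, but the adjustment factor (0.02 per weighted vote) is a made-up assumption:

```python
POSITIVE_WEIGHT = 2          # feedbackPositiveWeight
NEGATIVE_WEIGHT = 1          # feedbackNegativeWeight
RELEVANCY_THRESHOLD = 0.8    # feedbackFilteringRelevancyThreshold
SIMILARITY_THRESHOLD = 0.75  # feedbackQuestionsSimilarityThreshold

def adjusted_relevancy(base_score, positives, negatives, question_similarity):
    # Feedback only counts if the past question is similar enough.
    if question_similarity < SIMILARITY_THRESHOLD:
        return base_score
    signal = POSITIVE_WEIGHT * positives - NEGATIVE_WEIGHT * negatives
    # Hypothetical adjustment: 0.02 per weighted vote, clamped to [0, 1].
    return max(0.0, min(1.0, base_score + 0.02 * signal))

docs = [
    {"id": "KB1", "score": 0.78, "up": 3, "down": 0, "sim": 0.9},
    {"id": "KB2", "score": 0.85, "up": 0, "down": 4, "sim": 0.9},
]
# Keep only documents whose adjusted score clears the relevancy threshold.
kept = [d["id"] for d in docs
        if adjusted_relevancy(d["score"], d["up"], d["down"], d["sim"])
        >= RELEVANCY_THRESHOLD]
```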

    ignoreUserInputForKbidSearch

    Determines if the system should use the user's input when searching for a Knowledge Base (KB) article by its ID.

    Valid values:

    • True: The system ignores the user's input and strictly performs the search based on predefined configurations or parameters for the KB article ID.
    • False: The system includes the user's input in the search criteria. This allows for more dynamic and contextual searches based on what the user types.

    Default value: True

    initial_chunks_to_accumulate

    You can use this parameter only when the following conditions are true:

    • The streaming mode is on.
    • The supportCitations parameter is set to true.

    This parameter helps you determine the text size that you can send in response to a skill. By default, this parameter is not included in a skill configuration. You must add it manually in the skill configuration.

    If this parameter is not specified in the skill configuration, the default value 5 is applied.

    Best practices:

    • With the Google Gemini model, we recommend setting the value of this parameter to 2.
    • For OCI Llama 3.1, we recommend setting the value of this parameter to 14.
    • For other models, we recommend a maximum value of 5.

     

    Default value: 5
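The accumulation behavior can be sketched as a generator that buffers the first N streamed chunks before emitting any text; the function name and chunk protocol are assumptions for illustration:

```python
def stream_with_accumulation(chunks, initial_chunks_to_accumulate=5):
    """Buffer the first N chunks, then pass the rest through unchanged."""
    buffer = []
    started = False
    for chunk in chunks:
        if not started:
            buffer.append(chunk)
            if len(buffer) >= initial_chunks_to_accumulate:
                yield "".join(buffer)  # emit one larger initial piece
                started = True
        else:
            yield chunk
    if not started and buffer:
        yield "".join(buffer)  # stream ended before the buffer filled

pieces = list(stream_with_accumulation(["a", "b", "c", "d", "e", "f", "g"]))
```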

    knowledgeSearchScope

    Defines the scope of the search to answer a user's question in a knowledge Q&A use case.

    Valid values:

    • ENTERPRISE: The search is performed on the enterprise knowledge base.
    • WORLD: The search is performed on the enterprise knowledge base and external articles online.

    Important:

    When you use the Gemini or Llama 3.1 model with the ENTERPRISE option, the search scope is limited to internal documents.

    Default value: ENTERPRISE

    langSmithProjectName

    Defines the project in LangSmith where you can find traces.

     

    lowWaterMarkThreshold

    Specifies the minimum relevance score used by the similarity_score_threshold search type.

    The value must be between 0 and 1.

     

    maxFeedbackQuestionsToReturn

    Defines the maximum number of feedback-based questions to be displayed.

    Default value: 8

    maxTokensLimit

    Defines the maximum number of tokens that can be included in the input and output of an AI model call.

    It directly impacts the amount of content (chunks) that can be passed to the model.

    Default value: 16384
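A token budget like this one caps how many content chunks fit in a single model call. The sketch below uses a rough 4-characters-per-token estimate, which is a common heuristic and not the product's tokenizer; the headroom reserved for output is also an assumption:

```python
MAX_TOKENS_LIMIT = 16384     # maxTokensLimit
RESERVED_FOR_OUTPUT = 1024   # hypothetical headroom for the model's answer

def estimate_tokens(text):
    # Rough heuristic: about 4 characters per token.
    return max(1, len(text) // 4)

def pack_chunks(chunks):
    """Add chunks in order until the remaining token budget is exhausted."""
    budget = MAX_TOKENS_LIMIT - RESERVED_FOR_OUTPUT
    packed, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > budget:
            break  # the remaining chunks no longer fit in the context window
        packed.append(chunk)
        used += cost
    return packed

# Each 40,000-character chunk costs roughly 10,000 tokens, so only one fits.
kept_chunks = pack_chunks(["x" * 40000, "y" * 40000, "z" * 40000])
```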

    numberOfDocumentsToFetchForMMR

    Specifies the maximum number of similar and diverse documents to return based on the maximal marginal relevance (MMR) algorithm.

     

    Default value: 20

    numberOfDocumentsToReturn

    Specifies the maximum number of documents to retrieve in response to a query.

    This parameter helps control the result set size, ensuring that only the most relevant documents are returned based on the query's ranking criteria.

     

     

    numberOfNearestNeighborsToSearch

    Defines the number of nearest neighbors to evaluate during a query in a vector search operation.

    This parameter determines how many closely related items will be considered for similarity ranking, influencing the accuracy and speed of the results.

    Default value: twice the value of the numberOfDocumentsToReturn parameter
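One common MMR formulation illustrates how the candidate pool, degreeOfDiversity, and result count interact: score each remaining candidate by relevance minus its similarity to what is already selected. This is a standard textbook form, not necessarily the product's exact computation:

```python
def mmr_select(candidates, similarity, k, degree_of_diversity=0.5):
    """candidates: list of (doc_id, relevance); similarity(a, b) in [0, 1]."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(item):
            doc, relevance = item
            # Redundancy: closest similarity to anything already selected.
            redundancy = max((similarity(doc, s) for s, _ in selected),
                             default=0.0)
            # A higher degree_of_diversity penalizes redundancy more heavily.
            return ((1 - degree_of_diversity) * relevance
                    - degree_of_diversity * redundancy)
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return [doc for doc, _ in selected]

# Toy data: "A" and "A2" are near-duplicates, "B" is different.
candidates = [("A", 0.9), ("A2", 0.89), ("B", 0.7)]
def toy_sim(a, b):
    return 0.95 if {a, b} == {"A", "A2"} else 0.1

picked = mmr_select(candidates, toy_sim, k=2)  # prefers "B" over the near-duplicate
```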

    processInitialUserText

    Processes the initial user input so that the user doesn't need to answer the same question again, and prevents the language model from asking a question that has already been answered.

     

    Default value: True

    promptGenerationMaxQuestions

    Defines the maximum number of questions that a user can ask to resolve a query.

    Consider the following points when using the promptGenerationMaxQuestions parameter:

    • If the number of questions in the prompt is equal to or less than this value, the system will generate a prompt to guide the user through the questions in Helix Virtual Agent during runtime.
    • If the number of questions exceeds this value, the system will generate a prompt displaying a link in the Helix Virtual Agent at runtime. This link will direct the user to the Digital Workplace application, where they can submit their request.
    • For large services, you can change the value of this parameter. However, the token-based model has a limit for the number of prompts.

    Default value: 15

    promptGenerationWithMultiSelectQuestion

    Specifies whether to generate a prompt with a question that has multi-select options.

     

    Default value: False

    retrieveUserContextProcessName

    Specifies the BMC Helix Innovation Studio process that is used to retrieve the user context variables.

     

    searchType

    Defines the type of search the Retriever should perform.

    Valid values:

    • similarity: Returns similar documents.
    • similarity_score_threshold: Returns similar documents only if they meet the minimum threshold specified by lowWaterMarkThreshold.
    • mmr: Returns similar and diverse documents based on maximal marginal relevance.

    Default value: similarity
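The difference between the similarity and similarity_score_threshold search types can be sketched as post-processing over ranked results. This is assumed behavior based on the descriptions above; the mmr type is handled by a separate MMR selection step and is out of scope here:

```python
def retrieve(scored_docs, search_type="similarity", k=8,
             low_water_mark_threshold=0.2):
    """scored_docs: list of (doc_id, similarity_score)."""
    ranked = sorted(scored_docs, key=lambda d: d[1], reverse=True)
    if search_type == "similarity":
        return [doc for doc, _ in ranked[:k]]
    if search_type == "similarity_score_threshold":
        # Drop documents below the minimum relevance threshold first.
        return [doc for doc, score in ranked
                if score >= low_water_mark_threshold][:k]
    raise ValueError("unsupported search type in this sketch")

docs = [("KB1", 0.91), ("KB2", 0.41), ("KB3", 0.12)]
top = retrieve(docs, "similarity", k=3)                       # keeps all three
filtered = retrieve(docs, "similarity_score_threshold", k=3)  # drops KB3
```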

    showLearningFormOnThumbsUp

    Displays a form to collect more details when users give a thumbs-up.

    Default value: True

    supportCitations

    Valid values:

    • True: BMC HelixGPT extracts the most relevant resources and filters out less relevant resources, so that only the most relevant resources are displayed as references in the response.
    • False: All resources extracted by BMC HelixGPT are displayed in the response.

    Default value: True

    traceOn

    Generates traces for the skill.

     

Example of configuring parameters for a skill

The following code is an example of configuring parameters for a skill:

{
    "knowledgeSearchScope": "ENTERPRISE",
    "lowWaterMarkThreshold": 0.2,
    "searchType": "similarity_score_threshold",
    "numberOfDocumentsToReturn": 8,
    "numberOfDocumentsToFetchForMMR": 20,
    "degreeOfDiversity": 0.5,
    "promptGenerationMaxQuestions": 15,
    "promptGenerationWithMultiSelectQuestion": false
}
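As a hypothetical sanity check, the documented 0-to-1 ranges and valid searchType values can be verified before saving a configuration; the validate function below is illustrative, not part of the product:

```python
import json

# The example configuration from above.
config = json.loads("""{
    "knowledgeSearchScope": "ENTERPRISE",
    "lowWaterMarkThreshold": 0.2,
    "searchType": "similarity_score_threshold",
    "numberOfDocumentsToReturn": 8,
    "numberOfDocumentsToFetchForMMR": 20,
    "degreeOfDiversity": 0.5,
    "promptGenerationMaxQuestions": 15,
    "promptGenerationWithMultiSelectQuestion": false
}""")

# Parameters documented as ranging from 0 to 1.
RANGED = ("lowWaterMarkThreshold", "degreeOfDiversity",
          "feedbackFilteringRelevancyThreshold",
          "feedbackQuestionsSimilarityThreshold")

def validate(cfg):
    problems = []
    for key in RANGED:
        if key in cfg and not 0 <= cfg[key] <= 1:
            problems.append(f"{key} must be between 0 and 1")
    if "searchType" in cfg and cfg["searchType"] not in (
            "similarity", "similarity_score_threshold", "mmr"):
        problems.append("searchType has an invalid value")
    return problems

errors = validate(config)  # an empty list means the example is valid
```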

Related topics

Skills

Creating prompts and skills

Updating prompts and skills

 


BMC HelixGPT 25.4