Integrating BMC Helix Discovery with BMC Helix CMDB


Integrating BMC Helix Discovery and BMC Helix CMDB provides a unified, continuously updated view of the IT environment, supporting efficient operations and reliable service delivery. This topic describes a sample use case that explains how to set up the integration.

BMC Helix Discovery and BMC Helix CMDB integration workflow

The following diagram shows how BMC Helix Discovery collects data from the IT environment and sends it to BMC Helix CMDB.

Data flow diagram of information from Discovery to CMDB, and then to other ITSM applications

BMC Helix Discovery identifies assets and dependencies, then synchronizes this information with BMC Helix CMDB. BMC Helix CMDB validates the data through normalization and reconciliation to create a clean, trusted dataset, which is then used by ITSM applications such as Asset, Incident, Change, and Problem Management.

Key benefits of the integration:

  • Provides real‑time visibility into infrastructure and service dependencies.
  • Reduces manual effort by automatically populating and updating CIs.
  • Improves incident and problem resolution through clear impact paths.
  • Strengthens change management with dependable service models and risk insights.
  • Increases data quality for compliance, reporting, and audits.
  • Supports better operational decision‑making through consistent, trustworthy data.

Infrastructure discovery scope

The infrastructure discovery scope defines which locations, networks, and cloud platforms BMC Helix Discovery must scan. When new regions, platforms, or environments are added, the scope is updated so that BMC Helix Discovery continues to scan everything in the environment. This ensures that all in-scope systems are discovered and that BMC Helix CMDB is populated with accurate, up-to-date configuration data.

The following table shows an example list of colocation sites and cloud platforms included in the defined infrastructure discovery scope:

Colocation sites    Cloud platforms
San Jose            OCI
Zurich              GCP
Mumbai              AWS
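Conceptually, a discovery scope pairs named sites with the networks to scan. The following Python sketch is purely illustrative (the site names, CIDR ranges, and function are hypothetical, not BMC Helix Discovery configuration syntax); it shows how a scope definition can be modeled as data and used to decide whether an endpoint is in scope:

```python
import ipaddress

# Hypothetical, simplified scope definition: site name -> scanned subnets.
DISCOVERY_SCOPE = {
    "San Jose": ["10.10.0.0/16"],
    "Zurich":   ["10.20.0.0/16"],
    "Mumbai":   ["10.30.0.0/16"],
}

def in_scope(ip: str) -> bool:
    """Return True if the IP address falls inside any scoped subnet."""
    addr = ipaddress.ip_address(ip)
    return any(
        addr in ipaddress.ip_network(cidr)
        for subnets in DISCOVERY_SCOPE.values()
        for cidr in subnets
    )

print(in_scope("10.10.3.7"))    # True: inside the San Jose 10.10.0.0/16 range
print(in_scope("203.0.113.9"))  # False: not in any scoped subnet
```

When a new region or platform is onboarded, only the scope data changes; the in-scope check itself stays the same, which mirrors how updating the discovery scope keeps scanning complete.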

Service blueprints for multi‑instance modeling

Service blueprints use BMC Helix Discovery rules and predefined templates to build accurate, multi-instance service‑dependency models across Kubernetes environments. They identify service components, assemble service topologies, and synchronize these models with BMC Helix CMDB.

Service blueprint capabilities

  • Identifies ITSM, HSSO, and ITOM service components.
  • Analyzes namespaces, workloads, and pod relationships.
  • Correlates pods with Deployments and StatefulSets.
  • Identifies dependencies such as load balancers, messaging layers, storage systems, and caches.
  • Supports consistent service modeling across Kubernetes clusters.

For more information about service blueprints, see Creating blueprint definitions.

Before you start creating a blueprint for multi-instance modeling, make sure that the environment meets the following conditions:

  • BMC Helix Discovery is deployed and configured to scan the Kubernetes environment.
  • Kubernetes clusters are onboarded with valid credentials.
  • Naming conventions follow blueprint requirements.
  • External components such as load balancers, databases, messaging services, and identity providers are included in the Discovery scope.
  • Service blueprint packages are imported and activated.
  • BMC Helix CMDB synchronization is enabled.

Once these requirements are met, you can start creating the service blueprints required for multi‑instance modeling. The examples below provide reference models for different services and illustrate how these blueprints can be structured.

ITSM Business Services model

The following diagram illustrates the reference Information Technology Service Management (ITSM) Business Services model running in a Kubernetes environment:

[Diagram: ITSM Business Services reference model]

This model generates an accurate topology of ITSM service components and their dependencies within the Kubernetes cluster, supporting clear service understanding and effective operational management.

HSSO Business Services model

The following diagram illustrates the reference Helix Single Sign-On (HSSO) Business Services model running in a Kubernetes environment.

[Diagram: HSSO Business Services reference model]

This model generates a unified topology of HSSO service components and their cross‑cluster dependencies, providing an accurate representation of how HSSO services operate within highly available, distributed Kubernetes environments.

ITOM Business Services model

The following diagram illustrates the reference Information Technology Operations Management (ITOM) Business Services model running in a Kubernetes environment:

[Diagram: ITOM Business Services reference model]

This model generates a detailed topology of ITOM service components and their dependencies, providing an accurate view of how ITOM workloads operate within a microservices‑driven Kubernetes environment.

How BMC Helix Discovery determines and enriches location information

BMC Helix Discovery assigns and enriches location information by using metadata from components. It evaluates attributes in a prioritized sequence and applies fallback logic when primary attributes are unavailable. This approach ensures consistent location coverage across on‑premises, cloud, and Kubernetes environments.

Custom TPL patterns analyze each component, select the most reliable source of location data, and create or update the corresponding Location node. This approach provides reliable visibility into where your infrastructure runs, enabling accurate service modeling in BMC Helix CMDB.
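The prioritized evaluation described above can be sketched generically: try each location-determination method in order and use the first one that returns a result. The following Python sketch is an illustration of that control flow only; the resolver names and the dictionary-based component are hypothetical stand-ins for the TPL pattern logic shown later:

```python
# Illustrative sketch of prioritized fallback; not actual TPL pattern code.
def resolve_location(component, methods):
    """Try each location-determination method in priority order and
    return the first non-empty result, or None if all methods fail."""
    for method in methods:
        location = method(component)
        if location:
            return location
    return None

# Hypothetical stand-in resolvers: each returns a location string or None.
def by_device_subnet(c):    return c.get("subnet_location")
def by_scanned_endpoint(c): return c.get("endpoint_location")
def by_cloud_region(c):     return c.get("cloud_region")

host = {"endpoint_location": "DC-frankfurt"}
print(resolve_location(host, [by_device_subnet, by_scanned_endpoint, by_cloud_region]))
# DC-frankfurt: the first method returns None, so the next fallback applies
```

The ordering of the list encodes the priority; adding a new fallback method means appending another resolver, not changing the existing ones.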

Location discovery patterns 

Location discovery patterns run whenever a component is created or confirmed. They evaluate the available metadata, determine the correct location, and maintain the relationship as the environment changes. This real‑time processing keeps location information accurate throughout the component lifecycle and helps ensure that service models remain reliable and current.

The following table lists the patterns used to assign and maintain location information for different component types:

Component type               Pattern name
Hosts                        Host_Location
Network Devices              ND_Location
Printers                     Printer_Location
SNMP-Managed Devices         SNMPmanageddevice_Location
Storage Devices              StorageDevice_Location
Storage Systems              StorageSystem_Location
IP Subnets                   Subnet_Location
Kubernetes/Cloud Clusters    Cluster_Location

Location discovery methods with prioritized fallback

Location discovery patterns use a structured, priority approach to determine the most accurate location for Kubernetes resources. Each method is evaluated in sequence, starting with the most reliable metadata. If a method cannot produce a valid result, BMC Helix Discovery moves to the next method. 

This approach helps you maintain accurate location information across different Kubernetes deployments. You can continue using your existing tagging standards, annotations, or infrastructure metadata while BMC Helix Discovery assigns each component a location by using the best information available.

Before implementing any location‑determination method, complete the following tasks:

  • Add the subnet‑to‑location mappings that BMC Helix Discovery evaluates during subnet‑based location mapping.
  • Verify that each CIDR range is paired with the correct location name to support accurate assignment.
  • Keep subnet definitions current so that BMC Helix Discovery can reliably match IP addresses to the correct location.

The following methods are used in this prioritized fallback process. Method 1 is the most commonly used and recommended approach; however, you can use other methods if you do not receive accurate location information.

Method 1: Subnet-based location mapping (primary method)

This method assigns a location by matching the component’s IP address to predefined subnet ranges. BMC Helix Discovery extracts the subnet, checks it against configured CIDR blocks (for example, /16 or /24), and returns the mapped location when a match is found. This method is the most accurate because it uses stable, infrastructure‑defined network boundaries.

Example table for subnet‑to‑location mapping
table Subnet2Location 1.0

    // Asia Pacific
    "192.168.10"      -> "DC-singapore";
    "10.50"           -> "DC-hongkong";
    "172.20.45"       -> "DC-tokyo";

    // Europe
    "10.100"          -> "DC-dublin";
    "192.168.25"      -> "DC-frankfurt";
    "10.120"          -> "DC-paris";

    // Americas
    "10.200"          -> "DC-virginia";
    "172.16.30"       -> "DC-saopaulo";
    "10.250"          -> "DC-toronto";

    // Middle East
    "10.75"           -> "DC-dubai";
    "10.76"           -> "DC-bahrain";

    // Default fallback
    default -> none;

end table;

 

Example of pattern logic for multi‑pass subnet extraction

This pattern uses a two‑pass subnet extraction process to support both /16 and /24 subnet masks. The pattern evaluates each IP address assigned to the host and attempts to match it against the subnet‑to‑location lookup table. When a match is found, the corresponding location is returned, and processing stops.

pattern Host_Location 1.2

    triggers
        on host := Host created, confirmed;
    end triggers;

    body

        // Extract IP addresses from the discovered host
        IPAddress_list := search(in host traverse DeviceWithAddress:DeviceAddress:IPv4Address:IPAddress);
        location_name := '';

        for IPAddress_node in IPAddress_list do

            // First pass: try /16 subnet mask (e.g., "10.175")
            subnet_id := regex.extract(IPAddress_node.ip_addr, regex '(\d+\.\d+)', raw '\1');
            log.debug("Location-Info: Extracted Device subnet_id is %subnet_id%");
            location_name := Subnet2Location[subnet_id];

            if location_name then
                log.debug("Location-Info: Location found in Subnet Mask: 16");
            else
                // Second pass: try /24 subnet mask (e.g., "10.77.150")
                subnet_id := regex.extract(IPAddress_node.ip_addr, regex '(\d+\.\d+\.\d+)', raw '\1');
                log.debug("Location-Info: Extracted Device subnet_id is %subnet_id%");
                location_name := Subnet2Location[subnet_id];
            end if;

            if location_name then
                log.debug("Location-Info: Location found via Device IP Address");
                break;
            end if;

        end for;

    end body;

end pattern;

Method 2: Scanned endpoint location (Fallback #1)

If the device IP address does not resolve to a location, the pattern evaluates the DiscoveryAccess endpoint by using the same subnet‑based logic as Method 1. BMC Helix Discovery attempts a /16 match first, then a /24 match.

Example of scanned endpoint fallback logic
// Fallback: Evaluate scanned endpoint when device IP lookup fails

if not location_name then

    IPAddress_list := search(in host traverse InferredElement:Inference:Associate:DiscoveryAccess);

    for IPAddress_node in IPAddress_list do

        log.debug("Location-Info: Current scanned IPAddress_node is %IPAddress_node.endpoint%");

        // Attempt /16 subnet match
        subnet_id := regex.extract(IPAddress_node.endpoint, regex '(\d+\.\d+)', raw '\1');
        location_name := Subnet2Location[subnet_id];

        if not location_name then
            // Attempt /24 subnet match
            subnet_id := regex.extract(IPAddress_node.endpoint, regex '(\d+\.\d+\.\d+)', raw '\1');
            location_name := Subnet2Location[subnet_id];
        end if;

        if location_name then
            log.debug("Location-Info: Location found via scanned IP address");
            break;
        end if;

    end for;

end if;

Method 3: Cloud region extraction from virtual machines (Fallback #2)

If subnet‑based methods do not return a location, the pattern derives the location from the VM’s cloud region. It traverses the cloud service hierarchy to identify the cloud provider and region, then combines them into a standardized location identifier.

Example of VM cloud‑region fallback logic
// Fallback: Extract location from cloud provider region (for VMs)
if not location_name then
    log.debug("Location-Info: Extracting Location from Cloud Region");
   
    // Traverse from Host -> VirtualMachine -> CloudService -> CloudRegion
    region_list := search(in host
        traverse ContainedHost:HostContainment:HostContainer:VirtualMachine
        traverse RunningSoftware:HostedSoftware:Host:CloudService
        traverse Service:CloudService:ServiceProvider:CloudRegion);
   
    // Get the cloud provider (AWS, Azure, GCP, OCI, etc.)
    provider_list := search(in host
        traverse ContainedHost:HostContainment:HostContainer:VirtualMachine
        traverse RunningSoftware:HostedSoftware:Host:CloudService
        traverse Service:CloudService:ServiceProvider:CloudRegion
        traverse Region:CloudService:ServiceProvider:CloudProvider);
   
    // Combine provider code and region code (e.g., "AWS-us-east-4")
    for region_node in region_list do
        for provider_node in provider_list do
            provider_code := text.upper(provider_node.code);
            location_name := "%provider_code%-%region_node.code%";
            log.debug("Location-Info: Extracted Location is %location_name% from VM");
        end for;
    end for;
end if;

Method 4: Cloud region extraction from clusters (Fallback #3)

If VM‑based region extraction is not applicable, the pattern derives the location from the cluster’s cloud region by using the same provider‑region hierarchy.

Example of cluster cloud‑region fallback logic
// Fallback: Extract location from cluster cloud region

if not location_name then

    log.debug("Location-Info: Extracting Location from Cluster");

    // Traverse from Host → Cluster → CloudService → CloudRegion
    region_list := search(in host
        traverse ContainedHost:HostContainment:HostContainer:Cluster
        traverse Service:SoftwareService:ServiceProvider:CloudService
        traverse Service:CloudService:ServiceProvider:CloudRegion);

    // Retrieve cloud provider (AWS, Azure, GCP, OCI, etc.)
    provider_list := search(in host
        traverse ContainedHost:HostContainment:HostContainer:Cluster
        traverse Service:SoftwareService:ServiceProvider:CloudService
        traverse Service:CloudService:ServiceProvider:CloudRegion
        traverse Region:CloudService:ServiceProvider:CloudProvider);

    // Construct provider-region identifier (for example, "AWS-us-east-1")
    for region_node in region_list do
        for provider_node in provider_list do
            provider_code := text.upper(provider_node.code);
            location_name := "%provider_code%-%region_node.code%";
            log.debug("Location-Info: Extracted Location is %location_name% from Cluster");
        end for;
    end for;

end if;

Cluster-specific location discovery

Cluster‑specific location discovery provides a structured approach for determining and assigning locations to Kubernetes and cloud‑hosted cluster resources. This logic is used when cloud provider metadata is missing, incomplete, or inconsistent, and when additional fallback mechanisms are required.

The discovery patterns evaluate multiple metadata sources associated with the cluster, its nodes, and its workloads, applying them in a prioritized sequence to derive the most reliable location value. This approach ensures consistent mapping across on‑premises, hybrid, and cloud‑managed Kubernetes environments, including those that rely on custom labels or naming conventions.

The following methods are used to determine and assign locations during this cluster‑specific fallback process:

Method 1: Cluster URL parsing

If cloud provider metadata is unavailable, the pattern attempts to extract location information from the cluster API URL. It evaluates multiple subnet lengths (for example, /16 and /24) to identify a matching entry in the subnet‑to‑location lookup table.

Example of cluster URL‑parsing fallback logic
pattern Cluster_Location 1.2

    triggers
        on cluster := Cluster created, confirmed;
    end triggers;

    body

        location_name := '';

        // Cloud region extraction is attempted earlier...

        // Fallback: Parse cluster URL for subnet information
        if not location_name then

            log.debug("Cluster_Location-Info: Extracting Location from Cluster URL");

            // Attempt /16 subnet match
            subnet_id := regex.extract(cluster.cluster_url, regex '(\d+\.\d+)', raw '\1');
            location_name := Subnet2Location[subnet_id];

            if not location_name then
                // Attempt /24 subnet match
                subnet_id := regex.extract(cluster.cluster_url, regex '(\d+\.\d+\.\d+)', raw '\1');
                location_name := Subnet2Location[subnet_id];
            end if;

        end if;

    end body;

end pattern;

Method 2: Name-based location extraction

As a final fallback, the pattern can infer a location from cluster naming conventions.

Example of name-based fallback logic
// Final fallback: Extract from cluster name

if not location_name then

    log.debug("Cluster_Location-Info: Extracting Location from Name");

    // Extract first 3 characters from cluster name (e.g., "sg1-k8s-prod" -> "sg1")
    name_id := regex.extract(cluster.name, regex '(^.{3})', raw '\1');
    location_name := name_region[name_id];

    if location_name then
        log.debug("Location-Info: Location found via Location Name");
    end if;

end if;

Location data priority rules

The following table outlines the prioritized methods used to determine location data. Each method is evaluated in sequence, and the first method to provide a valid location is used:

Priority   Method                            Applicable to                                                               Source
1          Device IP Subnet Mapping          Hosts, Network Devices, Printers, SNMP Devices, Storage Systems, Subnets   Device's assigned IP addresses
2          Scanned Endpoint Subnet Mapping   Hosts, Network Devices, SNMP Devices, Storage Devices                       DiscoveryAccess endpoint
3          Cloud Region (VM)                 Hosts (Virtual Machines)                                                    CloudProvider → CloudRegion relationships
4          Cloud Region (Cluster)            Hosts (in Clusters), Clusters                                               Cluster → CloudService → CloudRegion
5          Cluster URL Parsing               Clusters                                                                    Cluster API endpoint URL
6          Name-Based Extraction             Clusters                                                                    Cluster naming conventions

The following examples show how the priority rules are applied during different discovery scenarios:

Host discovery initiated
[Method 1] Extract device IP: 192.168.25.100
    ├─→ Try /16 subnet: "192.168" → Lookup in table
    ├─→ Not found
    ├─→ Try /24 subnet: "192.168.25" → "DC-frankfurt" ✓
    └─→ Location Found → Create/Update Location Node
            ↓
Create Relationship: Host ← ElementInLocation → Location("DC-frankfurt")
            ↓
Location Assignment Complete

Cluster discovery initiated
[Method 1] Cloud Region Extraction
    ├─→ Traverse to CloudProvider: "AWS"
    ├─→ Traverse to CloudRegion: "eu-west-1"
    └─→ Construct location: "AWS-eu-west-1" ✓
            ↓
Create Location Node("AWS-eu-west-1")
            ↓
Create Relationship: Cluster ← ElementInLocation → Location("AWS-eu-west-1")
            ↓
Location Assignment Complete

Location assignment and linkage

After the pattern determines the most accurate physical or logical location for the Kubernetes resource, it creates or retrieves the corresponding Location node and establishes the appropriate relationships. The pattern normalizes the location value, ensures that only one authoritative Location node exists for that identifier, and updates any previous relationships. As a result, BMC Helix CMDB always reflects the resource’s current placement.

Example of location‑assignment logic
if location_name then

    // Search for existing Location node
    location := search(Location where name = '%location_name%');

    if location then
        log.debug("Location-Info: Location found in Discovery Location");
    else
        // Create new Location node if it doesn't exist
        location_node := model.Location(
            key  := text.hash("%location_name%"),
            name := location_name
        );
        location := search(Location where name = '%location_name%');
        log.debug("Location-Info: Location created");
    end if;

    if location then
        // Create unique relationship (removes old location if changed)
        log.info("Location-Info: model.uniquerel.Location %host% %location%");
        model.uniquerel.Location(
            ElementInLocation := host,
            Location := location
        );
    else
        log.debug("Location-Info: No location information");
    end if;

else
    log.debug("Location-Info: Unknown location");
end if;

    end body;

end pattern;

Key technical features supporting the location discovery methods

The following table lists the technical capabilities that support the location discovery methods and ensure that each method and its fallback logic operate reliably and consistently across diverse environments:

Feature Description
Dynamic location node management
  • Creates location nodes on demand when none exist
  • Uses hashed keys for consistent identification
  • Prevents duplicate location nodes
Unique relationship management
  • Ensures each component maintains a single location relationship
  • Automatically removes outdated relationships when components move
  • Preserves data integrity during infrastructure changes
Comprehensive logging
  • Debug‑level logging at each decision point for troubleshooting
  • Info‑level logging for successful assignments
  • Provides an audit trail and supports pattern optimization
Multi‑device type support
  • Supports multiple device types, including Host, NetworkDevice, Printer, SNMPManagedDevice, StorageDevice, StorageSystem, Subnet, and Cluster
  • Uses dedicated patterns with type‑specific traversal logic
  • Applies shared location‑determination methods across device types

Benefits of the prioritized fallback method

The following table lists the benefits of the prioritized fallback approach and how it improves the accuracy and reliability of location resolution across diverse environments:

Benefit            Description
High coverage      Multi‑method fallback maximizes successful location resolution across all components
Cloud-native       Supports AWS, Azure, GCP, OCI, and hybrid deployments without additional configuration
Self-healing       Automatically updates location relationships when infrastructure or metadata changes
Flexible mapping   Supports /16 and /24 subnet masks to accommodate varied network architectures
Audit ready        Comprehensive logging enables troubleshooting, validation, and compliance reporting
Scalable           Pattern‑based approach scales to millions of discovered components across large estates
Consistent         Shared location nodes ensure uniform and predictable mapping across all device types

Host and cluster location enrichment in BMC Helix CMDB

BMC Helix Discovery uses TPL syncmapping patterns to enrich Configuration Items (CIs) with structured, multi‑layered location metadata. These patterns transform raw location inputs, such as tags, codes, or cloud metadata, into meaningful attributes.

This enrichment makes sure that BMC Helix CMDB accurately reflects where hosts, clusters, and related infrastructure components reside and how they align with your organizational structure.

Four-phase location enrichment model

The enrichment process follows a structured four‑phase model that transforms cryptic location identifiers into normalized, relational data within BMC Helix CMDB.

Phase 1: Data population and attribute enrichment

In the initial phase, BMC Helix Discovery takes the raw location identifiers discovered on hosts or clusters, such as site codes, region tags, or cloud metadata, and enriches them with meaningful attributes.

Each discovered location key is mapped to a set of standardized attributes, including:

  • Region (for example, Americas West, APAC, EMEA East)
  • Site (Location identifier/key)
  • Site Group (Cloud provider or facility type)
  • City (Physical city name)

This allows BMC Helix CMDB to receive a complete and standardized set of attributes, even if a host reports only a minimal or cryptic location tag.

Location mapping table

TPL patterns use the following lookup table to translate location keys into enriched attributes:

Example of location‑mapping table
table Location_Mapping 1.0

    // Key                    -> Region,          Site Group,                             City,           Name
    "AZURE-us-west3"          -> "Americas West", "AZURE - Microsoft Azure",              "Phoenix",      "U.S. - Azure West";
    "GCP-asia-south1"         -> "APAC",          "GCP - Google Cloud Platform",          "Mumbai",       "India - GCP";
    "AWS-eu-north-1"          -> "EMEA West",     "AWS - Amazon Web Services",            "Stockholm",    "Sweden - AWS";
    "ALIBABA-cn-beijing"      -> "APAC",          "ALIBABA - Alibaba Cloud",              "Beijing",      "China - Alibaba";
    "OCI-sa-santiago-1"       -> "Americas East", "OCI - Oracle Cloud Infrastructure",    "Santiago",     "Chile - OCI";
    "COLO-tokyo"              -> "APAC",          "BMC COLO - BMC Cloud Location",        "Tokyo",        "Japan - BMC Cloud";
    "IBM-jp-tok"              -> "APAC",          "IBM - IBM Cloud",                      "Tokyo",        "Japan - IBM";
    "GCP-europe-west4"        -> "EMEA West",     "GCP - Google Cloud Platform",          "Eemshaven",    "Netherlands - GCP";
    "AWS-ap-south-1"          -> "APAC",          "AWS - Amazon Web Services",            "Mumbai",       "India - AWS";
    "AZURE-southafricanorth"  -> "EMEA East",     "AZURE - Microsoft Azure",              "Johannesburg", "South Africa - Azure";

    default -> "UNKNOWN", "UNKNOWN", "UNKNOWN", "UNKNOWN";

end table;
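The effect of this lookup can be illustrated outside TPL as well. The following Python sketch (hypothetical helper names; only two rows reproduced from the table above) shows how a single location key expands into the four standardized attributes, with UNKNOWN as the default fallback:

```python
# Illustrative Python equivalent of the TPL lookup table: each location key
# maps to (Region, Site Group, City, Name); unmapped keys fall back to UNKNOWN.
LOCATION_MAPPING = {
    "AWS-eu-north-1":  ("EMEA West", "AWS - Amazon Web Services", "Stockholm", "Sweden - AWS"),
    "GCP-asia-south1": ("APAC", "GCP - Google Cloud Platform", "Mumbai", "India - GCP"),
}
UNKNOWN = ("UNKNOWN",) * 4

def enrich(location_key):
    """Expand a raw location key into standardized CMDB attributes."""
    region, site_group, city, name = LOCATION_MAPPING.get(location_key, UNKNOWN)
    return {"Region": region, "SiteGroup": site_group, "CityName": city, "Name": name}

print(enrich("AWS-eu-north-1")["CityName"])  # Stockholm
print(enrich("COLO-mars")["Region"])         # UNKNOWN (default fallback)
```

The default row matters: downstream sync logic can skip CI creation whenever the region resolves to UNKNOWN, which is exactly the guard the syncmapping examples apply.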

Phase 2: Physical Location CI creation

In this phase, for each discovered host or cluster with valid location data, a corresponding BMC_PhysicalLocation CI is created or updated with enriched attributes. The following syncmapping example shows how enriched location attributes are applied to hosts during CI creation.

Example of Host Location Syncmapping
syncmapping Host_Location 1.0

    """
    Populate BMC_PhysicalLocation and relate to BMC_ComputerSystem.
    """

    mapping from Host_ComputerSystem.host as host
        traverse ElementInLocation:Location:Location:Location as location_list
            phys_loc -> BMC_PhysicalLocation;
        end traverse;
    end mapping;

    body

        computersystem := Host_ComputerSystem.computersystem;
        location_size := size(location_list);

        if location_size > 0 then
            for location_node in location_list do

                // Use the key attribute if present, otherwise use name
                if location_node.key then
                    key := location_node.key;
                else
                    key := location_node.name;
                end if;

                location := location_node.name;

                if location then
                    location_values := Location_Mapping[location];
                    location_region := location_values[0];
                    location_sitegroup := location_values[1];
                    location_city := location_values[2];
                    location_dc := location_values[3];

                    // Create Physical Location CI
                    if not location_region = "UNKNOWN" then
                        phys_loc := sync.shared.BMC_PhysicalLocation(
                            key              := key,
                            Name             := location_dc,
                            ShortDescription := location_dc,
                            Company          := "BMC OnDemand",
                            Region           := location_region,
                            Site             := location,
                            SiteGroup        := location_sitegroup,
                            CityName         := location_city,
                            Category         := "Infrastructure",
                            Type             := "Facility",
                            Item             := "Cloud Data Center"
                        );

                        // Relate Host to Physical Location
                        sync.rel.BMC_ElementLocation(
                            Source      := computersystem,
                            Destination := phys_loc,
                            Name        := "ELEMENTLOCATION"
                        );
                    end if;

                end if;

            end for;
        end if;

    end body;

end syncmapping;

Phase 3: Configuration Item (CI) attribute augmentation

In this phase, discovered CIs are enriched with location attributes directly, enabling filtering, reporting, and analysis without requiring traversal to Physical Location CIs. The following example shows how location metadata is applied to CIs during attribute augmentation.

Example of computer system augmentation
syncmapping ComputerSystem_Augment 1.0

    """
    Augment BMC_ComputerSystem CI with location attributes.
    """

    mapping from Host_ComputerSystem.host as host
        traverse ElementInLocation:Location:Location:Location as location_list
        end traverse;
    end mapping;

    body

        computersystem := Host_ComputerSystem.computersystem;
        location_size := size(location_list);

        if location_size > 0 then
            for location_node in location_list do
                location := location_node.name;
            end for;

            if location then
                location_values := Location_Mapping[location];
                location_region := location_values[0];
                location_sitegroup := location_values[1];
                location_city := location_values[2];

                // Enrich ComputerSystem CI with location attributes
                if not location_region = "UNKNOWN" then
                    computersystem.Company := "BMC OnDemand";
                    computersystem.Region := location_region;
                    computersystem.Site := location;
                    computersystem.SiteGroup := location_sitegroup;
                    computersystem.CityName := location_city;
                end if;
            end if;
        end if;

    end body;

end syncmapping;

Phase 4: Cluster location enrichment

This phase aligns cluster‑level CIs with the same regional, site, and organizational attributes used for hosts, enabling unified reporting, dependency mapping, and operational analysis. The following example shows how location attributes are applied to cluster CIs during enrichment.

Example of cloud cluster location mapping
syncmapping CloudCluster_Location 1.0

    """
    Populate BMC_PhysicalLocation and relate to BMC_Cluster.
    """

    mapping from CloudCluster.cluster
        from Cluster.cluster
        as cluster
        traverse ElementInLocation:Location:Location:Location as location_list
            phys_loc -> BMC_PhysicalLocation;
        end traverse;
    end mapping;

    body

        cluster_ci := CloudCluster.cluster_ci or Cluster.cluster_ci;
        location_size := size(location_list);

        if location_size > 0 then
            for location_node in location_list do

                if location_node.key then
                    key := location_node.key;
                else
                    key := location_node.name;
                end if;

                location := location_node.name;

                if location then
                    location_values := Location_Mapping[location];
                    location_region := location_values[0];
                    location_sitegroup := location_values[1];
                    location_city := location_values[2];
                    location_dc := location_values[3];

                    if not location_region = "UNKNOWN" then
                        phys_loc := sync.shared.BMC_PhysicalLocation(
                            key              := key,
                            Name             := location_dc,
                            ShortDescription := location_dc,
                            Company          := "BMC OnDemand",
                            Region           := location_region,
                            Site             := location,
                            SiteGroup        := location_sitegroup,
                            CityName         := location_city,
                            Category         := "Infrastructure",
                            Type             := "Facility",
                            Item             := "Cloud Data Center"
                        );

                        // Relate Cluster to Physical Location
                        sync.rel.BMC_ElementLocation(
                            Source      := cluster_ci,
                            Destination := phys_loc,
                            Name        := "ELEMENTLOCATION"
                        );
                    end if;

                end if;

            end for;
        end if;

    end body;

end syncmapping;

Key technical features in the four‑phase model

The following table lists the technical features that support the four‑phase enrichment model by standardizing how location data is retrieved, normalized, and mapped to BMC Helix CMDB CIs:

Feature | Description
Traversal pattern | Sync mappings use the standardized traversal path traverse ElementInLocation:Location:Location:Location as location_list, which retrieves all Location nodes associated with the discovered infrastructure component
Shared Physical Location CIs | The sync.shared.BMC_PhysicalLocation() function ensures that hosts or clusters in the same physical location reference a single shared Physical Location CI, preventing duplication and maintaining data consistency
Relationship creation | The sync.rel.BMC_ElementLocation() function creates relationships between infrastructure components and their associated locations. These relationships follow the BMC class model and support impact analysis, service modeling, and dependency mapping
Attribute inheritance | Both direct attribute augmentation and relational mapping are used to enrich CIs with location-related attributes, providing flexibility for reporting, analytics, and downstream integrations
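The shared-CI behavior described above can be pictured as a keyed cache: every caller that asks for the same location key receives the same record. The following Python sketch illustrates the idea only; it is not the sync.shared API, and the class and attribute names are assumptions:

```python
class SharedLocationStore:
    """Return one shared Physical Location record per key, mimicking
    the deduplicating behavior of sync.shared.BMC_PhysicalLocation()."""

    def __init__(self):
        self._cache = {}

    def physical_location(self, key, **attrs):
        # The first caller creates the record; later callers reuse it,
        # so hosts and clusters in one site share a single location CI.
        if key not in self._cache:
            self._cache[key] = {"key": key, **attrs}
        return self._cache[key]
```

Because later callers receive the already-created record, attribute values from the first synchronization win, which keeps the shared CI stable across repeated runs.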

Data flow summary

The following example shows how location information moves through the enrichment process, from initial discovery to BMC Helix CMDB updates.

Example of data flow summary
Discovery → Location Node → Syncmapping Pattern → Location Table Lookup
        ↓
   Enriched Attributes (Region, Site, SiteGroup, City)
        ↓
        ├─→ Create/Update BMC_PhysicalLocation CI
        ├─→ Augment BMC_ComputerSystem / BMC_Cluster CI
        └─→ Create BMC_ElementLocation Relationship

Benefits of the four-phase location enrichment model

This model ensures that all discovered infrastructure assets are accurately mapped to their physical and organizational locations.

The table below summarizes the key benefits of the four‑phase enrichment approach and how it improves the accuracy, consistency, and usability of location data across BMC Helix Discovery and BMC Helix CMDB:

Benefit | Description
Unified location model | Provides consistent location metadata across hosts, clusters, and other infrastructure components
Multi-cloud support | Provides vendor-agnostic mapping for AWS, Azure, GCP, OCI, Alibaba Cloud, IBM Cloud, and private data centers
BMC Helix CMDB integration | Aligns with the BMC Helix CMDB class model and relationship structure for seamless ingestion
Reporting ready | Enables location-based filtering, dashboards, and geographic reporting through enriched attributes
Impact analysis | Supports business service impact assessment and disaster recovery planning through physical location relationships

Configuring the BMC Helix Discovery dataset in BMC Helix CMDB

Configuring the BMC.HELIX.DISCOVERY dataset ensures that BMC Helix Discovery standardizes and protects incoming data, which in turn strengthens BMC Helix CMDB accuracy, consistency, and overall data quality. 

This configuration ensures that incoming CIs align with the BMC Helix CMDB data model and reconcile cleanly with other data sources. It also preserves the data quality required to support dependable service models, accurate reporting, and sound operational decision‑making.

To configure the discovery dataset, perform the following steps:

  1. Select ITSM Mid‑Tier Applications → Atrium Core Configuration Manager Dashboard → Configurations → Manage Normalization Rules → Data Set Configurations.
  2. Locate BMC.HELIX.DISCOVERY.
  3. Select Edit to open the dataset configuration panel.
  4. Enable the following options by selecting their corresponding checkboxes:
    • Name and CTI Lookup
    • Enable Normalization
    • Preserve Manual Edits
    • Suite Rollup
  5. Click Save.

Product catalog integration

BMC Helix Discovery continuously identifies new products and versions. To make sure this information is classified correctly, the Product Catalog is kept up to date. A current catalog allows CIs sourced from BMC Helix Discovery to map correctly to the appropriate manufacturer, product family, and version, improving reporting accuracy, compliance tracking, and service modeling.

Key activities for maintaining an effective Product Catalog

Maintaining the Product Catalog requires ongoing activity to ensure that BMC Helix Discovery data remains normalized, classified, and aligned with organizational standards. The following activities keep the catalog accurate and reliable:

  • Add new Products and Product Versions detected in the BMC Helix Discovery dataset.
  • Verify that new catalog entries are associated with the correct Company to maintain alignment with organizational structures.
  • Review Product Manufacturers after each normalization run to ensure correct mapping and classification.

If the environment does not have an existing Product Catalog, enable Allow New Product Catalog Entry during the initial normalization run. This allows the system to automatically create baseline catalog entries based on BMC Helix Discovery data. Once the baseline is created, disable this option to ensure catalog entries are managed through controlled processes that follow best practices and governance standards.

The following table lists common manufacturer aliases used to improve normalization accuracy:

Alias name | Actual name
Apache | Apache Foundation
Cisco | Cisco Systems Inc
Cisco Systems | Cisco Systems Inc
Dell | Dell Inc.
Eclipse | Eclipse Foundation
Elastic | Elastic NV
Elasticsearch | Elastic NV
Microsoft | Microsoft Corporation
Oracle | Oracle Corporation
VMware | VMware, Inc.
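In code terms, alias normalization is a canonicalization lookup with a pass-through fallback. The sketch below mirrors the alias table above; the function name and case-insensitive matching are illustrative assumptions, not the Normalization Engine implementation:

```python
# Alias -> canonical manufacturer name, keyed in lowercase so that
# case variations in discovered data still match.
MANUFACTURER_ALIASES = {
    "apache": "Apache Foundation",
    "cisco": "Cisco Systems Inc",
    "cisco systems": "Cisco Systems Inc",
    "dell": "Dell Inc.",
    "eclipse": "Eclipse Foundation",
    "elastic": "Elastic NV",
    "elasticsearch": "Elastic NV",
    "microsoft": "Microsoft Corporation",
    "oracle": "Oracle Corporation",
    "vmware": "VMware, Inc.",
}

def canonical_manufacturer(name):
    """Map a discovered manufacturer string to its catalog name,
    falling back to the input when no alias is defined."""
    return MANUFACTURER_ALIASES.get(name.strip().lower(), name)
```

Unknown manufacturers pass through unchanged, so they surface during the manufacturer review step rather than being silently dropped.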

Location mapping

BMC Helix Discovery sends Region, Site Group, and Site values as part of CI data. These locations must already exist in the ITSM Foundation data before they can synchronize with BMC Helix CMDB. If a CI references a location that is not defined in ITSM, the synchronization fails with the following error:

“The Location Information is not valid. Please use the menus provided on the ‘Region’, ‘Site Group’, and ‘Site’ fields or the type‑ahead return information (ARERR 44897).”

To prevent these failures, ensure that all locations referenced by BMC Helix Discovery exist in the ITSM Foundation data.

The following configuration steps outline how to create or bulk‑load the required Region, Site Group, and Site records so that location validation succeeds during BMC Helix CMDB updates:

  1. Select ITSM Mid‑Tier Applications → Administrator Console → Application Administration Console → Location → Create to manually add individual location records.
  2. For environments with a large number of locations, use the Data Management spreadsheet load process to bulk‑load entries.
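Before bulk-loading, the locations referenced by discovered CIs can be checked against the Foundation data with a simple set difference, so that missing Region, Site Group, and Site records are created up front instead of surfacing later as ARERR 44897. This is a hedged sketch; the tuple shape and data sources are illustrative:

```python
def missing_locations(discovered, foundation):
    """Return (Region, SiteGroup, Site) tuples referenced by discovered
    CIs but absent from ITSM Foundation data -- these would fail
    synchronization with ARERR 44897."""
    return sorted(set(discovered) - set(foundation))
```

Feeding the result into the Data Management spreadsheet load process closes the gap in one pass.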

Normalization

Normalization standardizes CI data from BMC Helix Discovery by cleaning attribute values, validating them against Foundation and Product Catalog data, and enriching fields to ensure consistency across the BMC Helix CMDB.

The following table summarizes the normalization run performance across multiple executions:

Run type | Duration (hours) | CIs processed
Initial NE Run | 10.5 | 2,954,198
Subsequent Run | 3.75 | 890,861
Subsequent Run | 5.0 | 827,515
Subsequent Run | 1.8 | 303,497
Subsequent Run | 1.6 | 295,199
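For capacity planning, the run figures translate into rough throughput. Using the two rows above whose values are unambiguous, a quick calculation (approximate; derived only from this document's run history):

```python
# Approximate CIs-per-hour throughput from the normalization run table.
runs = [
    ("Initial NE Run", 10.5, 2_954_198),
    ("Subsequent Run", 3.75, 890_861),
]
throughput = {name: cis / hours for name, hours, cis in runs}
for name, rate in throughput.items():
    print(f"{name}: ~{rate:,.0f} CIs/hour")
```

The initial run works out to roughly 281,000 CIs per hour and the subsequent run to roughly 238,000, which is useful when estimating windows for future normalization jobs.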

Reconciliation 

BMC Helix CMDB identification rules ensure each CI is uniquely identified across datasets. Rules exist for multiple CI classes, such as:

  • BMC_ADMINDOMAIN
  • BMC_APPLICATIONSERVICE
  • BMC_APPLICATIONSYSTEM
  • BMC_BUSINESSSERVICE
  • BMC_CLUSTER
  • BMC_COMPUTERSYSTEM
  • BMC.CORE:BMC_CLOUDINSTANCE
  • BMC.CORE:BMC_CONCRETECOLLECTION
  • BMC.CORE:BMC_RESOURCEPOOL
  • BMC_DATABASE
  • BMC_DOCUMENT
  • BMC_HARDWAREPACKAGE
  • BMC_HARDWARESYSTEMCOMPONENT
  • BMC_IPCONNECTIVITYSUBNET
  • BMC_IPENDPOINT
  • BMC_LANENDPOINT
  • BMC_LOGICALSYSTEMCOMPONENT
  • BMC_NETWORKPORT
  • BMC_OPERATINGSYSTEM
  • BMC_PROCESSOR
  • BMC_PRODUCT
  • BMC_SOFTWARESERVER
  • BMC_SYSTEMRESOURCE
  • BMC_TAG
  • BMC_VIRTUALSYSTEMENABLER

Reconciliation job runtime

The following table lists the reconciliation job runs and their durations:

Reconciliation job run | Duration
Run 1 | 3.8 hours
Run 2 | 1 hour
Run 3 | 5 hours
Run 4 | 4 hours

Business service source control and qualification rules

Business Service data is primarily sourced through an external integration. BMC Helix CMDB applies precedence rules that retain all BMC_BusinessService attributes from the BMC.ASSET dataset, with the exception of the ADDMIntegrationId attribute, which BMC Helix Discovery is allowed to populate. This approach preserves the integrity of externally managed Business Service records and prevents unintended updates from BMC Helix Discovery.

Business Service CIs maintained by the external integration are not updated by BMC Helix Discovery, as these records directly support ITSM processes, reporting, and downstream integrations.
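The precedence rule can be pictured as a merge that keeps the BMC.ASSET value for every BMC_BusinessService attribute except those the discovery dataset is explicitly allowed to write. The following is a simplified sketch, not Reconciliation Engine code; the dictionary-based CI representation is an assumption:

```python
# Attributes BMC Helix Discovery is permitted to populate on
# BMC_BusinessService CIs; everything else is owned by BMC.ASSET.
DISCOVERY_WRITABLE = {"ADDMIntegrationId"}

def merge_business_service(asset_ci, discovery_ci):
    """Apply dataset precedence: BMC.ASSET wins on every attribute
    except those BMC Helix Discovery is explicitly allowed to write."""
    merged = dict(asset_ci)
    for attr in DISCOVERY_WRITABLE:
        if attr in discovery_ci:
            merged[attr] = discovery_ci[attr]
    return merged
```

Extending `DISCOVERY_WRITABLE` is the single point of change if additional attributes are ever delegated to discovery.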

As BMC Helix CMDB and BMC Helix Discovery processes matured, BMC Helix Discovery began identifying additional Business Service CIs that were not part of the external integration but were still required for operational visibility and change management. To support these cases, BMC Helix Discovery was configured to populate a custom BMC Helix CMDB attribute that identifies BMC Helix Discovery‑managed business services.

Enhanced qualification rules now allow BMC Helix CMDB to differentiate between:

  • Business Services managed by the external integration
  • Business Services discovered independently

This approach enables the selective creation of new Business Service CIs in the BMC.ASSET dataset when appropriate, without affecting externally managed records. The result is clearer ownership of Business Service data, more predictable reconciliation behavior, and improved accuracy across ITSM processes that rely on consistent and trustworthy CI information.

Updated reconciliation job design

The reconciliation job that processes data from BMC Helix Discovery was expanded from 2 to 4 activities to provide more granular control over how BMC_BusinessService CIs are evaluated in BMC Helix CMDB.

The original job used a single Identify activity followed by a Merge activity, which applied the same logic to all Business Services regardless of their source.

The updated design introduces three Identification activities and one Merge activity. This structure allows BMC Helix CMDB to distinguish between externally managed and BMC Helix Discovery managed business services, ensuring that each type is reconciled according to its ownership and qualification rules.

Reconciliation activity summary table:
Activity | Condition | Outcome / Behavior
Activity 1 – Identification (Non‑Business Services) | All CI classes except BMC_BusinessService | Compares BMC Helix Discovery with BMC.ASSET and identifies CIs for reconciliation
Activity 2 – Identification (Externally Managed Business Services) | BMC_BusinessService where the custom attribute is blank | Matches existing CIs only; no new Reconciliation IDs generated
Activity 3 – Identification (BMC Helix Discovery Managed Business Services) | BMC_BusinessService where the custom attribute has a discovery value | Matches existing CIs or creates new ones with a new Reconciliation ID
Activity 4 – Merge | All CIs identified in Activities 1–3 | Merges BMC Helix Discovery data into BMC.ASSET using qualification rules
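The routing conditions above amount to a small classifier that assigns each CI to one of the three Identification activities. This Python sketch is illustrative only; the function name and the way the custom attribute is passed in are assumptions:

```python
def identification_activity(ci_class, discovery_flag):
    """Route a CI to one of the three Identification activities,
    mirroring the conditions in the reconciliation summary table."""
    if ci_class != "BMC_BusinessService":
        return 1  # Activity 1: all non-Business Service classes
    if not discovery_flag:
        return 2  # Activity 2: externally managed, match only
    return 3      # Activity 3: discovery managed, may create new CIs
```

All three paths feed the single Merge activity, so the split affects only how CIs are identified, not how they are merged.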


Innovation Studio - Asset Automation Application

The Asset Automation Application in Innovation Studio automates the creation and management of asset relationships.

During the initial BMC Helix CMDB implementation phase, this application was not incorporated. However, it will be adopted in future phases to automatically create asset–people relationships for CIs discovered by BMC Helix Discovery.

Based on data supplied in the asset record, the application can create and remove the following relationships:

  • Contract
  • Location
  • People

BMC Helix CMDB reporting plan

The following are example report concepts that end users can build to support ongoing BMC Helix CMDB governance, data quality monitoring, reconciliation oversight, operational visibility, and product catalog accuracy. These reports are grouped by functional category for ease of reference.

1. Data quality and completeness report
Report name | Description | Purpose
CI completeness report / dataset and class count | Identifies CIs missing mandatory attributes (for example, name, owner, location, status) | Maintain accurate and usable data
Data quality scorecard | Measures CI data quality based on completeness, uniqueness, and timeliness | Provide an overall BMC Helix CMDB health indicator
Stale CI report | Identifies CIs not updated in the last 30/60/90 days | Highlight outdated records for cleanup

2. Reconciliation and integration report
Report name | Description | Purpose
Reconciliation exceptions report | Lists CIs that failed reconciliation rules | Resolve merge/reject conflicts and improve reconciliation accuracy

3. Relationship and dependency reports
Report name | Description | Purpose
Missing relationship report | Identifies CIs without any relationships | Ensure CI connectivity and traceability
Orphan CI report | Identifies CIs not linked to any service or component | Improve service mapping accuracy
Business service dependency map report | Maps CI dependencies for Business Services | Maintain reliable service dependency data

4. BMC Helix CMDB usage and audit reports
Report name | Description | Purpose
BMC Helix CMDB access audit report | Tracks user interactions with BMC Helix CMDB records | Support governance, accountability, and compliance
Change audit trail | Shows updates made to key CI attributes | Identify unauthorized or unplanned changes

5. BMC Helix CMDB operational reports
Report name | Description | Purpose
CI lifecycle status distribution | Provides an overview of CI lifecycle states (for example, in use, retired, decommissioned) | Track asset utilization and lifecycle trends

6. Product catalog reports
Report name | Description | Purpose
Configured products | Lists all products configured in the Product Catalog | Track product inventory and catalog completeness
BMC Helix CMDB product catalog consistency report | Identifies CIs missing corresponding Product Catalog entries | Detect missing catalog mappings and maintain alignment

7. Location reports
Report name | Description | Purpose
Configured locations report | Lists all configured locations and associated CIs | Validate location hierarchy and ensure accurate CI‑to‑location mapping

BMC Helix CMDB Health dashboard

Key indicators used to assess BMC Helix CMDB data quality and operational health include:

  • Percentage of CIs with valid owners
  • Percentage of CIs with complete mandatory fields
  • Percentage of CIs updated within the last 30 days
  • Weekly count of reconciliation failures
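Each of these indicators reduces to a ratio computed over the CI set. The sketch below shows the pattern with two hypothetical CI records; the field names (`owner`, `updated`) are illustrative assumptions, not BMC Helix CMDB attribute names:

```python
from datetime import datetime, timedelta

def pct(cis, predicate):
    """Percentage of CIs satisfying a predicate, for a dashboard tile."""
    if not cis:
        return 0.0
    return 100.0 * sum(1 for ci in cis if predicate(ci)) / len(cis)

# Two hypothetical CI records: one owned and recently updated, one neither.
now = datetime.now()
cis = [
    {"owner": "ops", "updated": now - timedelta(days=5)},
    {"owner": None,  "updated": now - timedelta(days=45)},
]
owned_pct = pct(cis, lambda ci: bool(ci["owner"]))                        # 50.0
recent_pct = pct(cis, lambda ci: ci["updated"] >= now - timedelta(days=30))  # 50.0
```

The same `pct` helper serves every tile; only the predicate changes per indicator.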


BMC Helix CMDB 26.1