Backward compatibility risk areas in the API
BMC Cloud Lifecycle Management version 3.0 contains several major changes to the API. BMC has a policy of maintaining backward compatibility with previous versions of the API. However, this policy does not guarantee that existing customizations that use the API will not need changes to take advantage of new features. This topic provides an overview of the changes and highlights the risk areas.
Cloud API changes
In version 3.0, parts of the core model, and therefore some API assumptions, have changed. As a result of these changes, old parts of the model have been deprecated. Deprecated APIs work with old objects, but might not work with new objects. This is a core risk for field use of the API. Because core assumptions have changed, code written against the old assumptions might need to be adapted to the new assumptions. Preexisting objects satisfy the old assumptions, so the APIs are guaranteed to work against those objects.
To better understand the deprecation issues, consider the following example: suppose you have a customization that includes API calls to find the names of all networks in a network container. To support network containers that do not have zones, a new capability in version 3.0, this customization must be modified. However, this customization continues to function correctly when used with network containers that were created in BMC Cloud Lifecycle Management 2.1 and upgraded to version 3.0.
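The example above can be sketched as follows. This is an illustrative sketch only: the dictionary shapes, key names (`zones`, `networks`, `name`), and the helper functions are assumptions for the example, not the actual CSM payloads.

```python
# Hypothetical sketch of the customization described above: listing the
# names of all networks in a network container. All dict shapes and key
# names are illustrative assumptions, not the real CSM response format.

def network_names_v21(container):
    # 2.1-era assumption: every network lives inside a zone.
    return [net["name"]
            for zone in container["zones"]
            for net in zone["networks"]]

def network_names_v30(container):
    # 3.0: zones are optional, so read networks from the container
    # directly, and fall back to zones for upgraded 2.1 containers.
    names = [net["name"] for net in container.get("networks", [])]
    for zone in container.get("zones", []):
        names.extend(net["name"] for net in zone["networks"])
    return names

# An upgraded 2.1 container still satisfies the old zone assumption:
upgraded = {"zones": [{"networks": [{"name": "web"}, {"name": "db"}]}]}
# A zone-less 3.0 container does not:
zoneless = {"networks": [{"name": "flat-net"}]}

print(network_names_v21(upgraded))  # old code still works on old objects
print(network_names_v30(zoneless))  # new code handles zone-less containers
```

The 2.1-style function raises a `KeyError` on the zone-less container, which mirrors how a customization written against the old assumptions can fail against new objects while continuing to work against upgraded ones.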
Version 3.0 maintains the old relationships and attributes for each API and returns them when an object is retrieved by search or by GET requests. However, because the API implementation ignores old relationships and attributes during PUT and POST requests, any customizations that use these features must be modified to call the equivalent new APIs (see Model changes).
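Because GET and search responses still include the old relationships and attributes, but PUT and POST requests ignore them, a retrieve-modify-update customization can silently lose data unless it is migrated. The following sketch shows the general pattern; the attribute names in `DEPRECATED_ATTRS` are hypothetical examples, not a definitive list.

```python
# Illustrative sketch: deprecated attributes returned by GET are ignored
# on PUT/POST in version 3.0, so an updated customization should strip
# them from the payload and manage that data through the equivalent new
# APIs instead. The attribute names below are hypothetical.

DEPRECATED_ATTRS = {"zone", "sessionGuid"}  # example names only

def prepare_update_payload(retrieved):
    """Drop deprecated attributes before sending a PUT request."""
    return {k: v for k, v in retrieved.items()
            if k not in DEPRECATED_ATTRS}

retrieved_object = {"name": "fw-1", "zone": "zone-a",
                    "sessionGuid": "abc-123"}
print(prepare_update_payload(retrieved_object))  # {'name': 'fw-1'}
```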
Firewall and load balancer management changes
Existing customizations that involve firewall and load balancer management might need to be modified. In version 2.1, FirewallRule, LoadBalancerPool, and LBPoolEntry class objects are not persisted in the CloudDB and can be read only from the object provider. In version 3.0, these objects are persistent first-class objects. Because of this change, search requests involving these objects might need to be modified.
The sequence of calls that you must use to support concurrency during firewall modifications has changed. The following table shows the version 2.1 and version 3.0 flows for firewall modifications. Because the sessionGuid attribute of the Firewall class is deprecated, you must now acquire the sessionGuid attribute from the NetworkContainer object. The POST /csm/Firewall/<guid>/replaceRulesForIP request is deprecated, but still functions for version 2.1 network containers that have been upgraded to version 3.0. However, to work with newly created containers, you must use the new APIs (see Model changes).
Version 2.1 versus version 3.0 firewall modification flow
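The sessionGuid change in the flow above can be sketched as follows. The object shapes are illustrative only; the point is that the attribute now lives on the NetworkContainer object rather than on the Firewall object.

```python
# Sketch of the concurrency change: in 2.1 the session GUID came from
# the Firewall object; in 3.0 it must be acquired from the owning
# NetworkContainer. Dict shapes are illustrative assumptions.

def session_guid_v21(firewall):
    return firewall["sessionGuid"]            # deprecated in 3.0

def session_guid_v30(network_container):
    return network_container["sessionGuid"]   # 3.0: read from the container

firewall_v21 = {"guid": "fw-1", "sessionGuid": "sess-21"}
container_v30 = {"guid": "nc-1", "sessionGuid": "sess-30",
                 "firewalls": [{"guid": "fw-1"}]}  # no sessionGuid on Firewall

print(session_guid_v30(container_v30))
```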
Model changes
The following table summarizes the changes to the model, and lists the deprecated APIs and corresponding new APIs.
Summary of model changes
Provider API changes
The approach to backward compatibility for the Provider API is different from the approach for the Cloud API because the Provider API is a southbound API. Version 3.0 contains many enhanced features, which cause the resource providers to have new behaviors. Most of the changes belong to the following categories: resource onboarding, container provisioning, and service provisioning. The primary feature responsible for these changes is the networking feature, in which the introduction of dynamic containers has changed the features that are expected from network and compute providers. The following sections describe changes in those categories, and changes to some algorithms that call the Provider API.
Resource onboarding changes
The version 3.0 resource onboarding flow is similar to the version 2.1 flow, but includes some new return values. The following table shows the differences (new features are listed in italics):
Version 2.1 versus version 3.0 resource onboarding flow
The key changes in resource onboarding are:
- A pod must have at least one access switch.
- Each compute resource (cluster or physical server) must be connected to the same switches used in the pod, and the switch names must match the names used in the pod.
- A container blueprint must have a template network container.
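The first two constraints above lend themselves to a pre-onboarding check. The following sketch validates them; the object shapes (`accessSwitches`, `switches`, `name`) are assumptions for illustration, not the real onboarding payloads.

```python
# Illustrative pre-onboarding check for the version 3.0 constraints:
# a pod needs at least one access switch, and each compute resource
# must reference switch names that match the pod's. Shapes are
# hypothetical, not the real payload format.

def validate_onboarding(pod, compute_resources):
    errors = []
    pod_switches = set(pod.get("accessSwitches", []))
    if not pod_switches:
        errors.append("pod must have at least one access switch")
    for res in compute_resources:
        res_switches = set(res.get("switches", []))
        if not res_switches or res_switches - pod_switches:
            errors.append(
                f"{res['name']}: switch names must match the pod's switches")
    return errors

pod = {"accessSwitches": ["sw-1", "sw-2"]}
ok = {"name": "cluster-a", "switches": ["sw-1"]}
bad = {"name": "esx-7", "switches": ["sw-9"]}  # unknown switch name
print(validate_onboarding(pod, [ok, bad]))
```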
The following table lists the changes to the associated Provider APIs:
Changes in the Provider API for resource onboarding
Network container provisioning changes
The changes in network container provisioning are due to the introduction of dynamic network containers and changes in the object model. Dynamic network containers affect the API input parameters because the software uses cloud administrator input for IP address information, enabled and disabled networks, and so on. Changes were made to the object model to reduce the importance of zones. For example, firewalls are no longer in a single zone and no longer represent a single real interface. As such, both the input and the output of the Provider API calls have changed.
Changes in the Provider API for network container provisioning
SOI provisioning changes
Changes to service offering instance (SOI) provisioning are due to the introduction of dynamic containers, zone relaxation, and multi-ACL firewalls. The following table shows the differences between the version 2.1 and version 3.0 provisioning flows (new actions are listed in italics).
Version 2.1 versus version 3.0 SOI provisioning flow
Changes in the Provider API for SOI provisioning
Callout API changes
The core piece of the Callout API, registration of a new callout, is unchanged in version 3.0. However, because callouts receive data from either the Cloud API or the Provider API, depending on the API call to which they are attached, some backward compatibility risk exists for callouts. This section summarizes the risks and highlights the risk mitigation factors.
High-risk areas for callouts attached to Cloud APIs
Most callouts attached to Cloud APIs remain unaffected; the exceptions are callouts whose implementation depends on the network container topology. For example, a callout attached to POST /csm/ServiceOfferingInstance/bulkcreate that looks up all network containers for the tenant continues to work. However, if the callout then looks inside those network containers to find all load balancers, it might fail whenever a version 3.0 network container exists for the tenant. Similarly, for callouts attached to the network container creation APIs, all input arguments are exactly the same, but a callout that introspects a network container through the Cloud API might not function correctly. Whether a callout succeeds depends on which data it reads.
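One mitigation for this risk is for the callout to separate upgraded 2.1 containers from new 3.0 containers before introspecting them. The sketch below uses the presence of zones as the distinguishing marker; that detection key, like the rest of the shapes, is an illustrative assumption.

```python
# Hedged sketch of a mitigation: before drilling into a tenant's
# containers, partition them so that 2.1-era traversal logic is only
# applied to containers that still satisfy the old zone assumption.
# Using a "zones" key for detection is an illustrative assumption.

def partition_containers(containers):
    legacy, v30_style = [], []
    for container in containers:
        if container.get("zones"):
            legacy.append(container)      # upgraded 2.1 container
        else:
            v30_style.append(container)   # zone-less 3.0 container
    return legacy, v30_style

tenant_containers = [
    {"name": "nc-old", "zones": [{"loadBalancers": ["lb-1"]}]},
    {"name": "nc-new"},  # no zones: created in version 3.0
]
legacy, v30_style = partition_containers(tenant_containers)
print([c["name"] for c in legacy], [c["name"] for c in v30_style])
```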
Low-risk areas for callouts attached to Cloud APIs
Callouts that look only at the compute part of the system, and not at the network container part, carry lower risk. For example, a callout attached to the POST /csm/ComputeContainer/start API that looks up the IP address of the second NIC in the compute container functions correctly in version 3.0. Similarly, a callout attached to POST /csm/ServiceOfferingInstance/bulkcreate that looks up the IP addresses of all the compute containers continues to function.
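The low-risk pattern, reading NIC data from the compute container only, can be sketched as follows. The compute-container shape (`nics`, `ipAddress`) is an assumption for the sketch.

```python
# Illustrative sketch of the low-risk callout pattern: reading the IP
# address of the second NIC from a compute container. Because this
# touches only the compute part of the system, it is unaffected by the
# 3.0 network container changes. Shapes are hypothetical.

def second_nic_ip(compute_container):
    nics = compute_container["nics"]
    if len(nics) < 2:
        raise ValueError("compute container has fewer than two NICs")
    return nics[1]["ipAddress"]

cc = {"nics": [{"ipAddress": "10.0.0.5"},
               {"ipAddress": "192.168.1.9"}]}
print(second_nic_ip(cc))  # 192.168.1.9
```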
High-risk areas for callouts attached to Provider APIs
The changes to the Provider APIs are dramatic in version 3.0, so most callouts attached to Provider API calls are at risk. Some of the risk is syntactic and some is semantic. The biggest risk areas involve networking; however, risk is present in all areas of the Provider API.
For an example of a callout that might seem, on the surface, to be syntactically unaffected, consider a callout attached to the POST /csm/VirtualGuest API. The callout introspects all the NICs to see all the IP addresses that the VM has acquired, either through IPAM or DHCP (it can do this if it is a post-callout to the API). In version 2.1, when the virtual guest create API call finishes, all IP addresses are reserved. In version 3.0, however, not all IP addresses are reserved when the call finishes, because a new Provider API call that handles NAT is invoked after the virtual guest create API call completes. Thus, a callout that depends on having a complete picture of the reserved IP addresses does not function correctly in version 3.0, even though it can still see all the IP addresses on the VM.
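The timing hazard described above can be sketched as follows. The `reserved` flag and the dict shapes are hypothetical; the point is that in version 3.0 a post-callout sees every NIC IP address but cannot assume each one is already reserved.

```python
# Sketch of the version 3.0 timing hazard: a NAT-handling Provider API
# call runs after virtual-guest creation, so a post-callout sees all
# NIC IPs but must not assume that visible == reserved, as a 2.1-era
# callout could. The "reserved" flag and shapes are hypothetical.

def reserved_ips(vm):
    # Safe 3.0 pattern: filter on an explicit reservation marker
    # instead of treating every visible IP as reserved.
    return [nic["ip"] for nic in vm["nics"] if nic.get("reserved")]

vm_after_create = {"nics": [
    {"ip": "10.0.0.5", "reserved": True},
    {"ip": "172.16.0.8", "reserved": False},  # NAT address, reserved later
]}
print([nic["ip"] for nic in vm_after_create["nics"]])  # all visible IPs
print(reserved_ips(vm_after_create))                   # only reserved IPs
```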
Low-risk areas for callouts attached to Provider APIs
At the Provider API level, low-risk areas are few and far between. However, callouts that only read information about VMs or physical servers (for example, the host name and IP addresses), and that do not rely on any NAT features, continue to function correctly. For example, a callout that registers a VM in Active Directory can find the VM's IP addresses in version 3.0 and does not require modification. As long as the VM IP address does not use NAT, this callout continues to function as expected.