Backward compatibility risk areas in the API
BMC Cloud Lifecycle Management version 3.0 contains several major changes to the API. BMC has a policy of maintaining backward compatibility with previous versions of the API. However, this policy does not guarantee that existing customizations that use the API will not need changes to take advantage of new features. This topic provides an overview of the changes and highlights the risk areas.
Cloud API changes
In version 3.0, some core model assumptions, and therefore some API assumptions, have changed. As a result, old parts of the model have been deprecated. Deprecated APIs work with old objects, but might not work with new objects. This is a core risk for field use of the API. Because core assumptions have changed, code written against the old assumptions might need to be adapted to the new assumptions. Preexisting objects satisfy the old assumptions, so the APIs are guaranteed to work against those objects.
To better understand the deprecation issues, consider the following example: suppose you have a customization that includes API calls to find the names of all networks in a network container. To find network containers that do not have zones — a new capability in version 3.0 — this customization must be modified. However, this customization continues to function correctly when used with network containers that were created in BMC Cloud Lifecycle Management 2.1 and upgraded to version 3.0.
Version 3.0 maintains the old relationships and attributes for each API and returns them when an object is retrieved by search or by GET requests. However, because the API implementation ignores old relationships and attributes during PUT and POST requests, any customizations that use these features must be modified to call the equivalent new APIs (see Model changes).
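Because version 3.0 still returns deprecated relationships and attributes on GET but ignores them on PUT and POST, a customization can defensively strip those fields before writing an object back. The sketch below illustrates the pattern; the field names ("zone", "totalVlans") are hypothetical examples, not confirmed deprecated attributes:

```python
# Sketch: remove deprecated attributes from a fetched object before a PUT,
# since 3.0 returns them on GET but ignores them on PUT/POST anyway.
# The key names below are hypothetical illustrations.

def strip_deprecated(payload: dict, deprecated_keys=("zone", "totalVlans")) -> dict:
    """Return a copy of the payload without deprecated keys, ready for PUT."""
    return {k: v for k, v in payload.items() if k not in deprecated_keys}

fetched = {"name": "net-1", "zone": "zone-A", "cidr": "10.0.0.0/24"}
to_put = strip_deprecated(fetched)  # {"name": "net-1", "cidr": "10.0.0.0/24"}
```

Stripping the fields makes the ignored data explicit in the customization rather than silently dropped by the server.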
Firewall and load balancer management changes
Existing customizations that involve firewall and load balancer management might need to be modified. In version 2.1, FirewallRule, LoadBalancerPool, and LBPoolEntry class objects are not persisted in the CloudDB and can be read only from the object provider. In version 3.0, these objects are persistent first-class objects. Because of this change, search requests involving these objects might need to be modified.
The sequence of calls that you must use to support concurrency during firewall modifications has changed. The following table shows the version 2.1 and version 3.0 flows for firewall modifications. Because the sessionGuid attribute of the Firewall class is deprecated, you must now acquire the sessionGuid attribute from the NetworkContainer object. The POST /csm/Firewall/<guid>/replaceRulesForIP request is deprecated, but functions for version 2.1 network containers that have been upgraded. However, to use any newly created containers, you must use the new APIs (see Model changes).
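The relocation of sessionGuid can be captured in a small helper sketch. Only the POST /csm/Firewall/<guid>/replaceRulesForIP path comes from this document; the base URL and the object shapes are illustrative assumptions:

```python
# Hypothetical base endpoint for illustration only.
BASE = "https://cloud.example.com/csm"

def replace_rules_url(firewall_guid: str) -> str:
    """Deprecated 2.1-era path; it still works for upgraded 2.1 containers,
    but newly created 3.0 containers require the new APIs instead."""
    return f"{BASE}/Firewall/{firewall_guid}/replaceRulesForIP"

def session_guid(network_container: dict) -> str:
    """In 3.0, read sessionGuid from the NetworkContainer object rather than
    from the deprecated Firewall.sessionGuid attribute."""
    return network_container["sessionGuid"]
```

A customization that previously read sessionGuid from the Firewall object only needs to redirect that read to the container, but any use of the deprecated URL should be planned for replacement.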
Version 2.1 versus version 3.0 firewall modification flow
Step | Version 2.1 firewall modification flow | Version 3.0 firewall modification flow |
---|---|---|
1 | Get the list of firewall rules on a firewall and retrieve the | Call |
2 | Not applicable | Get the list of firewall rules on a firewall network interface. |
3 | Call | Call |
4 | Not applicable | Call |
Model changes
The following table summarizes the changes to the model, and lists the deprecated APIs and corresponding new APIs.
Summary of model changes
Model change | Deprecated APIs | New APIs |
---|---|---|
The | The relationship between | A new relationship between |
Firewalls no longer belong to a single zone. Instead, they have interfaces on networks and belong to network containers. | | |
Load balancers no longer belong to a single zone. Instead, they have interfaces on networks and belong to network containers. These interfaces are either client or server interfaces, representing the client side or server side of the load balancer. | | |
Firewalls are no longer assumed to have a single inbound access control list (ACL), but instead have inbound and outbound ACLs on each interface. | | |
Load balancer pools are now persisted in the CloudDB and point to the actual networks via their parent | | You can retrieve the client network (single) that the load balancer pool is on by using: You can now use relationship traversal to retrieve |
Zones are now optional. Network containers have parent-level networks. If zones exist, they can own their own networks. | No API is deprecated per se. However, | You can now get the container level networks by requesting |
The software no longer assumes that network labels are unique within a container. Network labels must be unique among all container-level networks and must be unique within each zone. This implies that in a network container with N zones, up to N+1 networks might have the same network label. | None | None |
Version 3.0 does not model the total VLANs for a network | The | None |
Logical communication paths is a new concept for end users. In the UI, this concept is called a network path. This concept replaces | No APIs are deprecated, but firewall rule APIs do not function for end users. | To create a logical communication path, use |
A new object called a logical hosting environment, which has logical networks and other objects parallel to a network container, replaces network containers for the purpose of tagging and tenant mapping. A logical hosting environment maps to either a network container or a logical data center, which represents a VMware Organizational VDC. | | |
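The relaxed label-uniqueness rule in the table above can be made concrete with a small validator: a label must be unique among container-level networks and unique within each zone, so a container with N zones can legally carry the same label up to N+1 times. A sketch, with hypothetical data shapes:

```python
# Sketch of the 3.0 label rule: labels are unique among container-level
# networks and unique within each zone, but may repeat across those scopes.

def label_placement_valid(container_labels, zone_labels) -> bool:
    """container_labels: labels on container-level networks.
    zone_labels: dict mapping zone name -> labels of that zone's networks."""
    if len(container_labels) != len(set(container_labels)):
        return False  # duplicate at the container level
    return all(len(ls) == len(set(ls)) for ls in zone_labels.values())

# "web" appears 3 times in a 2-zone container (N + 1 = 3) and is still valid:
ok = label_placement_valid(["web"], {"z1": ["web"], "z2": ["web"]})
```

Customizations that previously assumed a label identified exactly one network in a container must now also qualify the lookup by scope (container level or a specific zone).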
Provider API changes
The approach to backward compatibility for the Provider API is different from the approach for the Cloud API because the Provider API is a southbound API. Version 3.0 contains many enhanced features, which cause the resource providers to have new behaviors. Most of the changes belong to the following categories: resource onboarding, container provisioning, and service provisioning. The primary feature responsible for these changes is the networking feature, in which the introduction of dynamic containers has changed the features that are expected from network and compute providers. The following sections describe changes in those categories, and changes to some algorithms that call the Provider API.
Resource onboarding changes
The version 3.0 resource onboarding flow is similar to the version 2.1 flow, but includes some new return values. The following table shows the differences (new features are listed in italics):
Version 2.1 versus version 3.0 resource onboarding flow
Step | Version 2.1 flow | Version 3.0 flow |
---|---|---|
1 | Onboard a pod. | Onboard a pod. |
2 | Onboard a container blueprint. | Onboard a container blueprint. |
3 | Onboard a virtual cluster or physical server into a pod. | Onboard a virtual cluster or physical server into a pod. |
The key changes in resource onboarding are:
- A pod must have at least one access switch.
- Each compute resource (cluster or physical server) must be connected to the same switches used in the pod, and the switch names must match the names used in the pod.
- A container blueprint must have a template network container.
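The switch-matching constraint in the list above can be checked mechanically: every switch a compute resource connects to must appear, by name, among the pod's switches. A minimal sketch with hypothetical switch names:

```python
# Sketch of the 3.0 onboarding constraint that a compute resource (cluster or
# physical server) may only connect to switches already named in the pod.

def compute_switches_match(pod_switches, resource_switches) -> bool:
    """True when every switch name the resource uses exists in the pod."""
    return set(resource_switches) <= set(pod_switches)

ok = compute_switches_match(["sw-access-1", "sw-access-2"], ["sw-access-1"])
bad = compute_switches_match(["sw-access-1"], ["sw-unknown"])
```

Validating names this way before onboarding avoids failures later, since mismatched switch names cause container creation to fail.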
The following table lists the changes to the associated Provider APIs:
Changes in the Provider API for resource onboarding
Provider API | Description of change |
---|---|
| The HTTP response JSON contains a list of one or more |
| The HTTP response includes a nested |
| The HTTP response includes a list of |
| The HTTP response includes a |
Network container provisioning changes
The changes in network container provisioning are due to the introduction of dynamic network containers and changes in the object model. Dynamic network containers affect the API input parameters because the software uses cloud administrator input for IP address information, enabled and disabled networks, and so on. Changes were made to the object model to reduce the importance of zones. For example, firewalls are no longer in a single zone and no longer represent a single real interface. As such, both the input and the output of the Provider API calls have changed.
Changes in the Provider API for network container provisioning
Provider API | Description of change |
---|---|
| The input is now a much richer model that includes objects such as container-level (zone-free) networks, IP ranges set per network, firewalls with interfaces attached to networks and outside of zones, and load balancers with interfaces attached to networks. The response refers by name to switches that are in the pod and to a switch port for each network. Platform Manager will fail to create the container if the switches do not match. |
SOI provisioning changes
Changes to service offering instance (SOI) provisioning are due to the introduction of dynamic containers, zone relaxation, and multi-ACL firewalls. The following table shows the differences between the version 2.1 and version 3.0 provisioning flows (new actions are listed in italics).
Version 2.1 versus version 3.0 SOI provisioning flow
Step | Version 2.1 flow | Version 3.0 flow |
---|---|---|
1 | Call | Call |
2 | Call | Call |
3 | Not applicable | Call |
4 | Call | Call |
5 | Call | Call |
6 | Call | Call |
7 | Call | Call |
Changes in the Provider API for SOI provisioning
Provider API | Description of change |
---|---|
| The API now takes a list of |
| If your provider supports containers with NAT, you must implement this new call. |
| This new call replaces the call to |
Callout API changes
The core piece of the Callout API, registration of a new callout, does not have any changes in version 3.0. However, because callouts receive data from either the Cloud API or the Provider API, depending on the API call to which they are attached, some risk exists in backward compatibility for callouts. This section summarizes the risks and highlights the risk mitigation factors.
High-risk areas for callouts attached to Cloud APIs
Most callouts attached to Cloud APIs remain unaffected, except for callouts whose implementation depends on the network container topology. For example, a callout attached to POST /csm/ServiceOfferingInstance/bulkCreate that looks up all network containers for the tenant will work. However, if the callout then looks inside those network containers to find all load balancers, it might not function correctly when a version 3.0 network container exists for the tenant. Similarly, for callouts attached to the network container creation APIs, all input arguments are exactly the same; however, if the callout needs to introspect a network container in the system via the Cloud API, it might not function correctly. Success or failure of the callout depends on what it looks for.
Low-risk areas for callouts attached to Cloud APIs
Callouts that do not look into the network container part of the system and look only at the compute part of the system have lower risk. For example, a callout attached to the POST /csm/ComputeContainer/start API that looks up the IP address of the second NIC in the compute container functions correctly in version 3.0. Similarly, a callout attached to POST /csm/ServiceOfferingInstance/bulkCreate that looks up all the IP addresses of all the compute containers continues to function.
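A low-risk callout of this kind can be sketched as a function that touches only the compute side of the payload. The payload shape below is a hypothetical illustration, not the documented bulkCreate schema:

```python
# Sketch of a low-risk callout pattern: read only compute containers and
# their NICs, and avoid network-container topology entirely.
# The keys "computeContainers", "nics", and "ipAddress" are assumptions.

def compute_ips(payload: dict):
    """Collect every NIC IP address from the compute containers."""
    return [nic["ipAddress"]
            for cc in payload.get("computeContainers", [])
            for nic in cc.get("nics", [])]

ips = compute_ips({"computeContainers": [
    {"nics": [{"ipAddress": "10.0.0.5"}, {"ipAddress": "10.0.1.5"}]}]})
```

Because nothing here traverses zones, firewalls, or load balancers, the same logic works against both 2.1-upgraded and newly created 3.0 objects.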
High-risk areas for callouts attached to Provider APIs
The changes to the Provider APIs are dramatic in version 3.0, so most of the callouts attached to Provider API calls are at risk. Some of the risk is syntactic and some is semantic. The biggest risk areas are around networking; however, the risk is present in all areas of the Provider API.
For an example of a callout that might seem syntactically unaffected on the surface, consider a callout that is attached to the POST /csm/VirtualGuest API. The callout introspects all the NICs to see all the IP addresses that the VM has acquired, either through IPAM or DHCP (it can do this if it is a post-callout to the API). In version 2.1, when the virtual guest create API call finishes, all IP addresses are reserved. However, in version 3.0, not all IP addresses are reserved when the call finishes, because a new Provider API call that handles NAT is invoked after the virtual guest create API call finishes. Thus, a callout that depends on having a complete picture of the reserved IP addresses does not function correctly in version 3.0, even though it can still see all the IP addresses on the VM.
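A defensive pattern for such a callout is to act only on addresses explicitly marked as reserved, rather than assuming every visible address is final. The field names below are hypothetical illustrations:

```python
# Sketch of guarding against the 3.0 timing hazard: NAT-related reservations
# happen in a later Provider API call, so a post-callout on virtual guest
# creation should not treat every visible IP as reserved.
# The "ips", "address", and "reserved" keys are assumptions.

def reserved_ips(vm: dict):
    """Return only the IP addresses explicitly flagged as reserved."""
    return [ip["address"] for ip in vm.get("ips", []) if ip.get("reserved")]

vm = {"ips": [{"address": "10.0.0.5", "reserved": True},
              {"address": "198.51.100.7", "reserved": False}]}  # NAT pending
safe = reserved_ips(vm)
```

Filtering this way keeps the callout correct in both versions: in 2.1 every address is flagged reserved by the time the callout runs, while in 3.0 the NAT-pending addresses are simply skipped.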
Low-risk areas for callouts attached to Provider APIs
At the Provider API level, low-risk areas are few and far between. However, in some cases, such as finding information about VMs or physical servers (for example, the host name and IP addresses), and in the absence of any use of NAT features, these callouts function correctly. For example, a callout that registers a VM in Active Directory can find the VM IP addresses in version 3.0 and does not require modification. As long as the VM IP does not use NAT, this callout continues to function as expected.