Configuring Cisco ASA 1000V firewalls

The following topics provide information about Pod and Container Management (PCM) changes and requirements that support the management of Cisco ASA 1000V Cloud firewalls by using BMC Network Automation as part of a BMC Cloud Lifecycle Management implementation:

Cisco ASA 1000V firewall overview

The Cisco ASA 1000V Cloud firewall is a virtual appliance developed using the Cisco ASA infrastructure to secure the tenant edge in multitenant environments with Nexus 1000V deployments. It provides the following benefits:

  • Supports edge features and functionality, including site-to-site virtual private network (VPN), network address translation (NAT), and Dynamic Host Configuration Protocol (DHCP)

  • Acts as a default gateway

  • Secures VMs within the tenant against any network-based attacks

The Cisco ASA 1000V firewall provides tenant edge security and is used with Cisco Virtual Security Gateway (VSG), which provides more fine-grained security within the same IP space or VLAN.

The following figure depicts a Cisco ASA 1000V firewall in a multitenant environment:


Management modes

ASA 1000V has the following management modes:

  • Adaptive Security Device Manager (ASDM): The traditional mode that supports the ASA 1000V CLI. The ASA 1000V command line is the same as that of a physical Cisco ASA device.

  • Virtual Network Management Center (VNMC): Acts as a central management system for the VSG and ASA 1000V device

The management modes are mutually exclusive; you cannot use both in the same deployment. After deploying ASA 1000V, you cannot change the management mode without redeploying ASA 1000V by using the VMware vSphere client. Also, in an ASA 1000V deployment that consists of a failover pair, the primary and secondary ASA 1000V must use the same management mode.

Note

In BMC Cloud Lifecycle Management, ASA 1000V supports only the VNMC mode.

ASA 1000V device

ASA 1000V takes the form of a VM, which can be deployed on an ESX hypervisor by using a VMware vCenter Server.

Each ASA 1000V provides four Ethernet interfaces: one for management traffic, two for "through" (data) traffic, and one for a failover link, as follows:

  • Management 0/0: For management-only traffic, named management, with IP address parameters that you specified when you deployed the ASA 1000V

  • GigabitEthernet 0/0: For "inside" data (higher security level)

  • GigabitEthernet 0/1: For "outside" data (lower security level)

  • GigabitEthernet 0/2: For failover or high-availability (HA) traffic, with IP address parameters that you specified when you deployed the ASA 1000V

ASA 1000V supports only one "inside" network (a single VLAN); trunking is not supported.


Deploying ASA 1000V VM

BMC Network Automation manages deployment and undeployment of an ASA 1000V VM on vCenter by using an external script action, which calls a vSphere API to perform the deploy or undeploy operation. BMC Network Automation uses a predefined Open Virtualization Archive/Open Virtualization Format (OVA/OVF) file for VM deployment. When deploying the primary ASA 1000V VM, you must pass all the required management IP parameters: management IP address, standby IP address, HA Active/Standby IP address, and VNMC IP address. When deploying the secondary ASA 1000V VM, you must pass only the HA Active/Standby IPv4 address and HA network mask parameters, and specify 0.0.0.0 as the value for the other parameters. As soon as the secondary ASA 1000V VM starts, it searches for the primary ASA 1000V; after failover formation completes, the secondary fetches the entire configuration from the primary and completes the synchronization process.

Note

The Active/Active mode is not supported.
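The primary/secondary parameter rules described above can be sketched as follows. The helper `build_deploy_props` is hypothetical (not part of BMC Network Automation); the property keys mirror the runtime property names used in the container blueprint later in this topic.

```python
# Sketch of the parameter rules described above (hypothetical helper, not
# part of BMC Network Automation): the primary VM receives all management
# and HA addresses, while the secondary receives only the HA Active/Standby
# address and HA network mask, with every other parameter set to 0.0.0.0.
PLACEHOLDER = "0.0.0.0"

def build_deploy_props(role, mgmt_ip, mgmt_standby_ip, ha_ip, ha_mask, vnmc_ip):
    """Return the OVF property map to pass to the deploy operation."""
    if role == "primary":
        return {
            "ASA1000VManagementIP": mgmt_ip,
            "ASA1000VManagementStandbyIP": mgmt_standby_ip,
            "haActiveIPv4": ha_ip,
            "haSubnetIPv4": ha_mask,
            "vnmcIP": vnmc_ip,
        }
    if role == "secondary":
        # Only the HA address and mask are real for the secondary VM.
        return {
            "ASA1000VManagementIP": PLACEHOLDER,
            "ASA1000VManagementStandbyIP": PLACEHOLDER,
            "haActiveIPv4": ha_ip,
            "haSubnetIPv4": ha_mask,
            "vnmcIP": PLACEHOLDER,
        }
    raise ValueError("role must be 'primary' or 'secondary'")
```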

OVA/OVF requirements

BMC Network Automation must be able to access ASA 1000V as soon as it is deployed. You cannot use a fresh OVA/OVF file because it lacks the initial configuration required to enable SSH/Telnet access, the user name, the password, and so on. The administrator must prepare the required OVA/OVF files for the primary, secondary, and stand-alone ASA 1000V.


To export the OVA/OVF file

The administrator must deploy ASA 1000V on ESX through VMware vCenter by using an OVA/OVF file downloaded from the Cisco website.

  1. To prepare the OVA/OVF file for the primary, secondary, or stand-alone ASA 1000V, select one of the following configurations in the Deploy OVF Template window:
    • Deploy ASA as Primary
    • Deploy ASA as Secondary
    • Deploy ASA as Standalone
  2. On the Network Mapping screen, select four different network profiles for the four NICs because the exported OVA/OVF file renames the Source Networks names to the selected Destination Networks names.

  3. On the Properties page, provide values for the following parameters and set the rest of the address fields to 0.0.0.0:
    • Management IP Address

    • Management IP Subnet Mask

    • Management IP Gateway

    • Device Manager Mode (VNMC)


To configure the ASA 1000V device

  1. After the VM is deployed and powered on, access the VM from the Console and configure the following parameters:
    1. username userName password password privilege 15
    2. enable password enablePassword
    3. (Optional) aaa authentication ssh console LOCAL
    4. ssh networkAddress subnetMask management
    5. telnet networkAddress subnetMask management
    6. For stand-alone mode, skip to step 2.
    7. (HA mode: Primary ASA 1000V): Run the following commands:
      1. failover   
      2. failover lan unit primary         
      3. failover lan interface fover GigabitEthernet0/2 
      4. failover link fover GigabitEthernet0/2
    8. (HA mode: Secondary ASA 1000V): Run the following commands:
      1. failover   
      2. failover lan unit secondary              
      3. failover lan interface fover GigabitEthernet0/2 
      4. failover link fover GigabitEthernet0/2
  2. Run the write memory command.
  3. Power off the VM.
  4. Export the OVF template by clicking the deployed ASA1000V VM and selecting File > Export > Export OVF Template.

To update the external script

Because the exported OVF/OVA renames the source network values in the network mapping, you must update those names in the external script's script arguments.

  1. On the BMC Network Automation UI, navigate to Admin > Device Adapters > External Script > ASA 1000V > ASA1000V Deploy.
  2. Click Export.
  3. Open the external script adapter XML file.

    <scriptArg>dvportgroup-603=${runtime.insidePortProfileName}</scriptArg>
    <scriptArg>-net</scriptArg>
    <scriptArg>dvportgroup-604=${runtime.outsidePortProfileName}</scriptArg>
    <scriptArg>-net</scriptArg>
    <scriptArg>dvportgroup-324=${runtime.haPortProfileName}</scriptArg>
    <scriptArg>-net</scriptArg>
    <scriptArg>dvportgroup-304=${runtime.managementPortProfileName}</scriptArg>

     

  4. Make the following replacements for the dvportgroup values, shown in the preceding code block, and save the file:
    • dvportgroup: 603 to 2095
    • dvportgroup: 604 to 2096
    • dvportgroup: 324 to 300
    • dvportgroup: 304 to 201
  5. Import the file by navigating to Admin > Device Adapter > Import.
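Steps 3 and 4 can be scripted. The sketch below is a hypothetical helper (not part of the product) that rewrites the dvportgroup IDs in the exported adapter XML, using the example ID mapping from step 4; adjust the mapping for your environment.

```python
import xml.etree.ElementTree as ET

# Rewrite the dvportgroup IDs inside the exported external-script adapter
# XML. The ID mapping below mirrors the example replacements in step 4 and
# is environment-specific.
DVPORTGROUP_MAP = {"603": "2095", "604": "2096", "324": "300", "304": "201"}

def rewrite_dvportgroups(xml_text, mapping=DVPORTGROUP_MAP):
    root = ET.fromstring(xml_text)
    for arg in root.iter("scriptArg"):
        text = arg.text or ""
        if text.startswith("dvportgroup-"):
            # Extract the numeric ID between "dvportgroup-" and "=".
            old_id = text.split("=", 1)[0].split("-", 1)[1]
            if old_id in mapping:
                arg.text = text.replace(
                    f"dvportgroup-{old_id}", f"dvportgroup-{mapping[old_id]}", 1
                )
    return ET.tostring(root, encoding="unicode")
```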


Known issue in OVF deployment and its workaround

ASA 1000V VM deployment might fail due to the following error:

Failed to deploy OVF package: File <file path> was not found

Use the following procedure to fix the problem:

  1. Extract the archive file by running tar -xf ovaFile.                                                                                                                               

  2. List all the files by running the ls command.
      .mf, .ovf, and .vmdk files should be listed.

  3. Change vmware.cdrom.iso to vmware.cdrom.remotepassthrough in the .ovf file.  
  4. Save the file.

  5. Calculate the SHA-1 hash of the .ovf file by running sha1sum ovfFile.

  6. Replace the existing hash value for the SHA1(ASA 1000V Sec.ovf) parameter in the .mf file with the value calculated in the earlier step.

  7. Create the archive file in verbose mode by running tar -cvf mostFinalMasterModified.ova mostFinalMaster.ovf.

  8. Update the archive file in verbose mode by running tar -uvf mostFinalMasterModified.ova mostFinalMaster.mf mostFinalMaster*.vmdk.
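Assuming a standard OVA layout (a tar archive containing one .ovf file and one .mf manifest), the workaround above can be automated with a sketch like the following; `fix_ova` is a hypothetical helper, and the file names are taken from whatever archive you pass in.

```python
import hashlib
import io
import tarfile

# Sketch of the workaround steps: patch the CD-ROM backing type in the
# .ovf, recompute its SHA-1, update the .mf manifest, and repack the
# archive with the .ovf first (as the tar -cvf / tar -uvf steps do).
def fix_ova(ova_path, out_path):
    members = {}
    with tarfile.open(ova_path) as tar:
        for m in tar.getmembers():
            members[m.name] = tar.extractfile(m).read()
    ovf_name = next(n for n in members if n.endswith(".ovf"))
    mf_name = next(n for n in members if n.endswith(".mf"))
    # Step 3: switch the CD-ROM backing type.
    ovf = members[ovf_name].replace(
        b"vmware.cdrom.iso", b"vmware.cdrom.remotepassthrough"
    )
    members[ovf_name] = ovf
    # Steps 5-6: recompute the SHA-1 of the patched .ovf and update the manifest.
    digest = hashlib.sha1(ovf).hexdigest()
    lines = []
    for line in members[mf_name].decode().splitlines():
        if line.startswith(f"SHA1({ovf_name})"):
            line = f"SHA1({ovf_name})= {digest}"
        lines.append(line)
    members[mf_name] = ("\n".join(lines) + "\n").encode()
    # Steps 7-8: repack with the .ovf as the first archive member.
    order = [ovf_name] + [n for n in members if n != ovf_name]
    with tarfile.open(out_path, "w") as tar:
        for name in order:
            info = tarfile.TarInfo(name)
            info.size = len(members[name])
            tar.addfile(info, io.BytesIO(members[name]))
```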


Error handling

BMC Network Automation configures Cisco ASA 1000V through Cisco Virtual Network Management Center (VNMC). The Cisco VNMC configuration is pushed to the ASA 1000V VM. Config State, a managed-resource parameter of the edge firewall in VNMC, shows the status of the configuration that is pushed from VNMC to ASA 1000V.

  • applied confirms successful configuration

  • failed-to-apply indicates failed configuration

BMC Network Automation checks whether Config State is set to applied after every configuration attempt. When Config State is set to failed-to-apply, the Cisco VNMC configuration is not pushed to the ASA 1000V VM and the administrator must click the View Configuration Faults link to view the reason for the failure.

The administrator must acknowledge the faults and revert the faulty configurations. ASA 1000V does not accept any further configurations until Config State is set to applied.
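The check described above can be sketched as a polling loop. Here `fetch_config_state` and `wait_for_applied` are hypothetical names: the callable stands in for whatever VNMC query mechanism returns the current Config State string for the edge firewall.

```python
import time

# Sketch of the Config State check: poll until the configuration is
# "applied", stop early on "failed-to-apply" (faults must then be reviewed
# and acknowledged in VNMC before further configuration is accepted).
def wait_for_applied(fetch_config_state, attempts=10, interval_seconds=30):
    for _ in range(attempts):
        state = fetch_config_state()
        if state == "applied":
            return True
        if state == "failed-to-apply":
            # Configuration was not pushed; see View Configuration Faults.
            return False
        time.sleep(interval_seconds)
    return False
```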

Guidelines for creating the pod blueprint and pod

The Pod Firewall Node is the VNMC device. For the management and HA networks, specify values for the management and HA port profiles. For HA, specify the file paths for the primary and secondary OVA/OVF files. VNMC Shared Secret Password specifies the password that is required to register ASA 1000V with VNMC. You specify this password during pod creation.

Note

For HA, ensure that both OVA/OVF files are of the same OS version. Otherwise, failover inspection fails.

The following figure shows a pod node with sample values for the required parameters:


Guidelines for creating the container blueprint and container

The container blueprint must have the following nodes:

  • nodeBlueprint
  • virtualGuestBlueprint
  • natTypeBlueprint
  • managedInterfaceBlueprint

Warning

Container creation might fail if you do not execute the actions in the sequence in which they are defined in the SampleContainerBlueprintASA1000v-HA.xml file.

nodeBlueprint

The container blueprint must have one node of type containerFirewallHostBlueprint for ASA 1000V.

Create Tenant is the configuration action required in the containerFirewallHostBlueprint tag. This configuration action runs the following command to create a tenant in VNMC:

add-tenant -name "${container.name}"

The following code block defines the configureActionInfoBlueprint tag:

<configureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
  <name>Create Tenant</name>
  <condition>-EXISTS- container.nodes['VFW']</condition>
  <templateGroups>
    <item>CreateTenant</item>
  </templateGroups>
</configureActionInfoBlueprint>

The Initialize ASA1000v action adds one Edge-Security Profile for the Inside Network and one for the Outside Network. Each of these Edge-Security Profiles has one Policy-Set for ingress traffic and one Policy-Set for egress traffic, and each Policy-Set contains one policy. A Policy-Set is treated as an Access Control List (ACL): the Policy-Set for ingress traffic is equivalent to the inbound ACL, and the Policy-Set for egress traffic is equivalent to the outbound ACL. This action also adds a Policy-Set and a policy for static NAT in the Edge-Security Profile of the Inside Network.

The following code sample shows the pseudo commands in the Initialize ASA 1000V template:

add-policy -name "${container.nicSegments[Customer Network 1].ingressPolicy}" -tenant "${container.name}"
add-policy -name "${container.nicSegments[Customer Network 1].egressPolicy}" -tenant "${container.name}"
add-policy -name "${container.nicSegments[Outside Network].ingressPolicy}" -tenant "${container.name}"
add-policy -name "${container.nicSegments[Outside Network].egressPolicy}" -tenant "${container.name}"
add-policy-set -name "${container.nicSegments[Customer Network 1].ingressPolicySet}" -tenant "${container.name}"
add-policy-set -name "${container.nicSegments[Customer Network 1].egressPolicySet}" -tenant "${container.name}"
add-policy-set -name "${container.nicSegments[Outside Network].ingressPolicySet}" -tenant "${container.name}"
add-policy-set -name "${container.nicSegments[Outside Network].egressPolicySet}" -tenant "${container.name}"
add-policy-to-policy-set -name "${container.nicSegments[Customer Network 1].ingressPolicySet}" \
  -tenant "${container.name}" -policy "${container.nicSegments[Customer Network 1].ingressPolicy}"
add-policy-to-policy-set -name "${container.nicSegments[Customer Network 1].egressPolicySet}" \
  -tenant "${container.name}" -policy "${container.nicSegments[Customer Network 1].egressPolicy}"
add-policy-to-policy-set -name "${container.nicSegments[Outside Network].ingressPolicySet}" \
  -tenant "${container.name}" -policy "${container.nicSegments[Outside Network].ingressPolicy}"
add-policy-to-policy-set -name "${container.nicSegments[Outside Network].egressPolicySet}" \
  -tenant "${container.name}" -policy "${container.nicSegments[Outside Network].egressPolicy}"
add-nat-policy-set -tenant ${container.name} -natPolicySetName natPolicySet-${container.id}
add-nat-policy -tenant "${container.name}" -natPolicyName "natPolicy-${container.id}"
add-nat-policy-to-policy-set -tenant "${container.name}" -natPolicyName "natPolicy-${container.id}" \
  -natPolicySetName natPolicySet-${container.id}
add-edge-security-profile -name ${container.nicSegments[Customer Network 1].edgeSecurityProfile} \
  -tenant "${container.name}" \
  -egressPolicySet ${container.nicSegments[Customer Network 1].egressPolicySet} \
  -ingressPolicySet ${container.nicSegments[Customer Network 1].ingressPolicySet} \
  -natPsetRef natPolicySet-${container.id}
add-edge-security-profile -name ${container.nicSegments[Outside Network].edgeSecurityProfile} \
  -tenant "${container.name}" \
  -egressPolicySet ${container.nicSegments[Outside Network].egressPolicySet} \
  -ingressPolicySet ${container.nicSegments[Outside Network].ingressPolicySet}
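The ${...} tokens in these templates are substitution variables that BMC Network Automation resolves at provisioning time. The following minimal Python sketch illustrates the substitution idea only; the product's real resolver evaluates full container and pod object paths, whereas this hypothetical stand-in looks tokens up in a flat dictionary.

```python
import re

# Minimal sketch of ${...} placeholder expansion as used in the templates
# above. This stand-in resolves each token from a flat dict and raises on
# any token it cannot resolve.
def expand(template, values):
    def repl(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"unresolved template variable: {key}")
        return str(values[key])
    return re.sub(r"\$\{([^}]+)\}", repl, template)
```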


The Create ASA1000v action calls Deploy ASA 1000v, an external script action, which in turn calls a vSphere API to deploy the VM on ESX.

The following code shows the sample configureActionInfoBlueprint with all the required runtime property mapping:

<configureActionInfoBlueprint xsi:type="externalScriptActionInfoBlueprint">
  <guid>5F09E1A8-8679-4D7F-B775-137B347EB898</guid>
  <name>Create ASA1000V</name>
  <condition>-EXISTS- container.nodes['VFW']</condition>
  <runtimeProps>
    <item>                       
      <key>vCentreURL</key>                        
      <value>${pod.node.params[vCenter Address]}</value>
    </item>
    <item>                   
      <key>vCentreUser</key>                          
      <value>${pod.node.params[vCenter Admin Username]}</value>
    </item>
    <item>                      
      <key>vCentreUserPassword</key>                          
      <value>${pod.node.params[vCenter Admin Password]}</value>
    </item>
    <item>                         
      <key>datacenter</key>                          
      <value>${pod.node.params[ESX Data Center]}</value>
    </item>
    <item>                    
      <key>esxCluster</key>                           
      <value>${pod.node.params[ESX Cluster]}</value>
    </item>
    <item>
      <key>vmName</key>                
      <value>${container.nodes[VFW].device.name}</value>
    </item>
    <item>                     
      <key>insidePortProfileName</key>
      <value>${container.nodes[Access].portTypes[Customer Port Type 1].name}</value>
    </item>
    <item>                          
      <key>outsidePortProfileName</key>                          
      <value>${container.nodes[Access].portTypes[Outside Port Type 1].name}</value>
    </item>
    <item>                         
      <key>haPortProfileName</key>                           
      <value>${pod.node.params[HA Port Profile]}</value>
    </item>
    <item>                           
      <key>managementPortProfileName</key>                           
      <value>${pod.node.params[Management Port Profile]}</value>
    </item>
    <item>                           
      <key>haActiveIPv4</key>                           
      <value>${container.nodes[VFW].addresses[HA Active]}</value>
    </item>
    <item>
      <key>haSubnetIPv4</key>                      
      <value>${container.nodes[VFW].addresses[HA Active].subnetMask}</value>
    </item>
    <item>                         
      <key>haStandbyIPv4</key>
      <value>${container.nodes[VFW].addresses[HA Standby]}</value>
    </item>
    <item>
      <key>ASA1000VManagementIP</key>
      <value>${container.nodes[VFW].addresses[Management]}</value>
    </item>
    <item>
      <key>ASA1000VManagementStandbyIP</key>
      <value>${container.nodes[VFW].addresses[Management-Standby]}</value>
    </item>
    <item>
      <key>ASA1000VManagementSubnetMask</key>                       
      <value>${container.nodes[VFW].addresses[Management].subnetMask}</value>
    </item>
    <item>
      <key>ASA1000VManagementGatewayIP</key>                           
      <value>${pod.addressPools[Management].gatewayAddress}</value>
    </item>
    <item>                          
      <key>vnmcIP</key>                           
      <value>${container.node.device.address}</value>
    </item>
    <item>
      <key>asdmIP</key>                         
      <value>0.0.0.0</value>
    </item>
    <item>                     
      <key>haFlag</key>                           
      <value>true</value>
    </item>
    <item>                         
      <key>ovfPrimaryFileName</key>                          
      <value>${runtime.ovfPrimaryFileName}</value>
    </item>
    <item>                         
      <key>ovfSecondaryFileName</key>                          
      <value>${runtime.ovfSecondaryFileName}</value>
    </item>
    <!-- The preceding two parameters (ovfPrimaryFileName and ovfSecondaryFileName) can be hardcoded here, taken as pod parameters, or defined as runtime parameters whose values are passed while creating the container. -->
  </runtimeProps>             
</configureActionInfoBlueprint>

Note

In a multitenant environment, because BMC Network Automation recommends using VNMC as the management mode, you must set the value of the asdmIP runtime property to 0.0.0.0.

For standalone mode, you must change the following tags in the blueprint:

  • haFlag must be set to false.

  • haActiveIPv4, haStandbyIPv4, and haSubnetIPv4 must be set to "0.0.0.0".

  • Delete the ovfPrimaryFileName and ovfSecondaryFileName sections.

  • Define ovfFileName.

  • In Network > Template, make the following changes in the AssignASA template:

    • Set -haMode to standalone

    • Delete -insideInterfaceIpAddressSecondary ${container.nodes[VFW].addresses[Inside Secondary]}
    • Delete -outsideInterfaceIpAddressSecondary ${container.nodes[VFW].addresses[Outside Secondary]}
  • In Admin > Device Adapter > External Script Action > ASA1000v > ASA1000v Deploy, delete the ovfPrimaryFileName and ovfSecondaryFileName arguments and define ovfFileName.


virtualGuestBlueprint

The virtualGuestBlueprint definition requires the following configuration actions:

Inspect ASA 1000v Failover Status: This custom action is required only for HA. It configures the failover interface IP and then verifies the failover status of the active and standby ASA 1000V VMs. It ensures that the failover interface is functioning, the secondary device is ready, and the interfaces are in the monitored state before ASA 1000V is registered in the VNMC.

Warning

If HA is not formed and ASA 1000V is registered in the VNMC, HA might not function properly. To avoid this situation, ensure that the custom action executes successfully.

The following code snippet shows a sample virtualGuestBlueprint with the Inspect ASA1000v Failover Status custom action:

 <configureActionInfoBlueprint xsi:type="customActionInfoBlueprint">
  <description>Inspect ASA1000V Failover Status</description>
  <name>Inspect ASA1000V Failover Status</name>                     
  <condition>-EXISTS- container.nodes['VFW']</condition>
  <guid>58A6E8FC-BB30-4F7A-A3CD-528A9A72B825</guid>
  <runtimeProps>
    <item>
      <key>inspectFailoverLoopCount</key>
      <value>50</value>
    </item>
    <item>                     
      <key>haActiveIPv4</key>                            
      <value>${container.nodes[VFW].addresses[HA Active]}</value>
    </item>
    <item>                              
      <key>haStandbyIPv4</key>                             
      <value>${container.nodes[VFW].addresses[HA Standby]}</value>
    </item>
    <item>
      <key>haSubnetMaskIPv4</key>                             
      <value>${container.nodes[VFW].addresses[HA Active].subnetMask}</value>
    </item>
  </runtimeProps>
</configureActionInfoBlueprint>

The inspectFailoverLoopCount runtime property is an integer that defines the number of times BMC Network Automation executes the show failover command to determine the HA status. The appropriate loop count depends on network latency and other factors. In the sample container blueprint, the loop count is 50; you can modify this value.

Occasionally, after the failover interface IP address is configured, BMC Network Automation loses network connectivity for a few seconds, which can terminate the SSH session and cause container provisioning to fail.

Note

ASA 1000V takes some time to boot and power up. Immediately after the Create ASA 1000v job completes, BMC Network Automation executes the Inspect ASA 1000v Failover Status action. If the ASA 1000V VM is not reachable, the action times out. To avoid this situation, on the BMC Network Automation GUI, navigate to Admin > System Parameters and set Timeout for Establishing Connections to a value between 600 and 1500 seconds.

The following code sample shows the expected output of the show failover command:

show failover
Failover On
Failover unit Primary
Failover LAN Interface: fover GigabitEthernet0/2 (up)
Unit Poll frequency 1 seconds, holdtime 15 seconds
Interface Poll frequency 5 seconds, holdtime 25 seconds
Interface Policy 1
Monitored Interfaces 1 of 265 maximum
Version: Ours 8.7(1)4, Mate 8.7(1)4
Last Failover at: 11:33:31 UTC Sep 6 2013
    This host: Primary - Active 
        Active time: 54 (sec)
        slot 0: empty
            Interface management (10.1.17.0): Normal (Monitored)
    Other host: Secondary - Standby (Ready)
        Active time: 0 (sec)
        slot 0: empty
            Interface management (10.1.17.1): Normal (Monitored)
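The conditions that the Inspect action looks for (failover link up, secondary device ready, interfaces monitored) can be approximated by parsing this output. The helper below is an illustrative sketch, not the product's actual check.

```python
# Sketch of the readiness conditions checked against `show failover`
# output like the sample above: the failover LAN interface is up, the
# other host is Standby (Ready), and at least one interface is monitored.
def failover_ready(show_failover_output):
    lines = show_failover_output.splitlines()
    link_up = any("Failover LAN Interface" in l and "(up)" in l for l in lines)
    standby_ready = any("Standby (Ready)" in l for l in lines)
    monitored = any("Normal (Monitored)" in l for l in lines)
    return link_up and standby_ready and monitored
```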


Register ASA1000v to VNMC: This custom action runs vnmc policy-agent commands on the ASA 1000V device to register ASA 1000V with VNMC. The following code snippet shows a sample virtualGuestBlueprint with the Register ASA1000v to VNMC custom action:

  <configureActionInfoBlueprint xsi:type="customActionInfoBlueprint">
   <description>Register ASA1000V to VNMC</description>
   <name>Register ASA1000V</name>                     
   <condition>-EXISTS- container.nodes['VFW']</condition>
   <guid>58A6E8FC-BB30-4F7A-A3CD-528A9A725F18</guid>
   <runtimeProps>
     <item>
       <key>VNMCIP</key>
       <value>${container.nodes[FWH].device.address}</value>
     </item>
     <item>                     
       <key>sharedSecret</key>                            
       <value>${pod.nodes[FWH].params[VNMC Shared Secret]}</value>
     </item>
   </runtimeProps>
  </configureActionInfoBlueprint>
  <configureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
    <name>Assign ASA to Edge Firewall</name> 
    <condition>-EXISTS- container.nodes['VFW']</condition> 
    <templateGroups>
      <item>AssignASA</item> 
    </templateGroups>
  </configureActionInfoBlueprint>
</configureActionInfoBlueprints>
<unconfigureActionInfoBlueprints>
  <unconfigureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
    <name>Inspect ASA1000v Failover Status</name> 
    <templateGroups>
      <item>dummy</item> 
    </templateGroups>
  </unconfigureActionInfoBlueprint>
  <unconfigureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
    <name>Register ASA1000v</name> 
    <templateGroups>
      <item>dummy</item> 
    </templateGroups>
  </unconfigureActionInfoBlueprint>


Assign ASA to Edge Firewall: This action is executed on the VNMC by using pseudo commands to add an edge firewall and then assign the registered ASA 1000V to it. The edge firewall is added with inside and outside interfaces and their respective IP addresses, and the Edge-Security Profile created for the Outside Network is associated with the outside interface.

The following pseudo commands are used in the AssignASA template:

 add-edge-firewall \
  -name ${container.name}-VFW \
  -tenant ${container.name} \
  -haMode activestandby \
  -hostname "ASA1000v" \
  -insideInterfaceName inside \
  -insideInterfaceIpAddressPrimary ${container.nodes[VFW].addresses[Inside Primary]} \
  -insideInterfaceIpAddressSecondary ${container.nodes[VFW].addresses[Inside Secondary]} \
  -insideInterfaceIpSubnetMask ${container.nodes[VFW].addresses[Inside Primary].subnetMask} \
  -outsideInterfaceName outside \
  -outsideInterfaceIpAddressPrimary ${container.nodes[VFW].addresses[Outside Primary]} \
  -outsideInterfaceIpAddressSecondary ${container.nodes[VFW].addresses[Outside Secondary]} \
  -outsideInterfaceIpSubnetMask ${container.nodes[VFW].addresses[Outside Primary].subnetMask} \
  -edgeSecurityProfileName ${container.nicSegments[Outside Network].edgeSecurityProfile}
verify-client-registered \
  -address ${container.nodes[VFW].addresses[Management]} \
  -pollMax 20 \
  -pollIntervalSeconds 30
assign-asa-to-edge-firewall \
  -edgeFirewall ${container.name}-VFW \
  -tenant "${container.name}" \
  -address ${container.nodes[VFW].addresses[Management]}


natTypeBlueprint

You can configure static NAT by using natTypeBlueprint. The following code snippet shows a sample natTypeBlueprint:

<natTypeBlueprint>
  <natRuleBlueprints/>    
  <addressTranslatorBlueprints>
    <addressTranslatorBlueprint>
      <addressPoolNames>
        <addressPoolName>Customer Network 1</addressPoolName>
      </addressPoolNames>
      <insideInterfaceName>${container.id}-Inside</insideInterfaceName>
      <outsideInterfaceName>${container.id}-Outside</outsideInterfaceName>
    </addressTranslatorBlueprint>
  </addressTranslatorBlueprints>
  <createNatActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
    <requiresTunneling>true</requiresTunneling>
    <templateGroups>                         
      <item>Configure Static Nat - ASA1000V</item>
    </templateGroups>                 
  </createNatActionInfoBlueprint>
  <removeNatActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
    <requiresTunneling>true</requiresTunneling>
    <templateGroups>                         
      <item>Unconfigure Static Nat - ASA1000V</item>
    </templateGroups>               
  </removeNatActionInfoBlueprint>
</natTypeBlueprint>

The ConfigureStaticNat template first adds a NAT pool with one IP address acquired from the public IP pool. It then adds a rule (Source=%private address of VM%, Destination=ANY, Bidirectional=true) to the NAT policy. This occurs in the Edge-Security Profile created for the Inside Network. The Unconfigure Static Nat template removes the previously created NAT rule.

The following pseudo commands are used in the ConfigureStaticNat template:

add-nat-ip-pool -name natPool-${runtime.publicAddress} -tenant "${container.name}" -ip ${runtime.publicAddress}
add-nat-rule-in-nat-policy \
  -natActionType static \
  -natRuleName natRule-${runtime.privateAddress} \
  -tenant "${container.name}" \
  -sourceTranslatedIpPoolName natPool-${runtime.publicAddress} \
  -natPolicyName "natPolicy-${container.id}" \
  -sourceConditionHostAddress ${runtime.privateAddress} \
  -order ${runtime.position}
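Assembling these pseudo commands from the runtime properties can be sketched as follows; `static_nat_commands` is a hypothetical helper that mirrors the template above, not part of the product.

```python
# Sketch of how the ConfigureStaticNat pseudo commands could be generated
# from the runtime properties (publicAddress, privateAddress, position)
# and the container name/id.
def static_nat_commands(container_name, container_id, public_ip, private_ip, position):
    pool = f"natPool-{public_ip}"
    return [
        f'add-nat-ip-pool -name {pool} -tenant "{container_name}" -ip {public_ip}',
        f"add-nat-rule-in-nat-policy -natActionType static"
        f' -natRuleName natRule-{private_ip} -tenant "{container_name}"'
        f" -sourceTranslatedIpPoolName {pool}"
        f' -natPolicyName "natPolicy-{container_id}"'
        f" -sourceConditionHostAddress {private_ip} -order {position}",
    ]
```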


managedInterfaceBlueprint

The firewall has one ACL for inbound traffic (ingress) and one for outbound traffic (egress). While adding the edge firewall, add the following unique interfaces and specify their respective IP addresses:

  • ${container.id}-Inside

  • ${container.id}-Outside

 <managedInterfaceBlueprints>
  <managedInterfaceBlueprint>                       
    <inboundAclBlueprint>                           
      <enablePathUpdates>true</enablePathUpdates>                           
      <name>Inbound1</name>                           
      <ruleBlueprints/>                       
    </inboundAclBlueprint>                     
    <outboundAclBlueprint>                        
      <enablePathUpdates>true</enablePathUpdates>                         
      <name>Outbound1</name>                       
      <ruleBlueprints/>                       
    </outboundAclBlueprint>                      
    <name>${container.id}-Inside</name>                      
    <servicedSegmentNames>                           
      <servicedSegmentName>Customer Network 1</servicedSegmentName>                      
    </servicedSegmentNames>                   
  </managedInterfaceBlueprint>                  
  <managedInterfaceBlueprint>                      
    <inboundAclBlueprint>                          
      <enablePathUpdates>true</enablePathUpdates>                          
      <name>Inbound2</name>                        
      <ruleBlueprints/>                     
    </inboundAclBlueprint>                      
    <outboundAclBlueprint>                          
      <enablePathUpdates>true</enablePathUpdates>                          
      <name>Outbound2</name>                          
      <ruleBlueprints/>                        
    </outboundAclBlueprint>                  
    <name>${container.id}-Outside</name>                      
    <servicedSegmentNames>                        
      <servicedSegmentName>Outside Network</servicedSegmentName>                     
    </servicedSegmentNames>                  
  </managedInterfaceBlueprint>
</managedInterfaceBlueprints>


Cisco Nexus 1000V access switch configuration

In the SampleConfigureAccess template, the port profile created for the Inside Network must be configured with a vService node and organization (org) details. The vService node is first defined with an inside interface IP address. The port profile created for the Inside Network then attaches this vService node by referencing the respective Edge-Security Profile name.

vlan ${container.vlans[Customer Network 1]}
  exit
vservice node ASA1000V-${container.id} type ASA1000V
  ip address ${container.nodes[VFW].addresses[Inside]}
  adjacency l2 vlan ${container.vlans[Customer Network 1]}
  exit
port-profile Customer-${container.node.portTypes[Customer Port Type 1].vlan}
  switchport mode access
  switchport access vlan ${container.node.portTypes[Customer Port Type 1].vlan}
  vmware port-group Customer-${container.node.portTypes[Customer Port Type 1].vlan}
  vservice node ASA1000V-${container.id} profile ${container.nicSegments[Customer Network 1].edgeSecurityProfile} org root/${container.name}
  no shutdown
  state enabled
  exit
port-profile Outside-${container.node.portTypes[Outside Port Type 1].vlan}
  switchport mode access
  switchport access vlan ${container.node.portTypes[Outside Port Type 1].vlan}
  vmware port-group Outside-${container.node.portTypes[Outside Port Type 1].vlan}
  no shutdown
  state enabled
  exit

Note

If ASA 1000V and VSG are both used in a network container, you must configure a "vservice path," which defines the order of the firewall devices on which a packet is filtered.

Sample pod and container blueprints

You can find sample pod and container blueprints and related templates in the BCAN_HOME\public\bmc\bca-networks\csm\samples\sampleWithASA1000V directory on the BMC Network Automation application server. See Pod model and Container model for additional information about the sample pod and container blueprints for use with Cisco ASA 1000V Cloud firewalls.

