Mukesh Chanderia

ACI Multi-Site

Updated: Jul 20

Multi-Site connectivity between fabrics is established through an Inter-Site Network (ISN).


The ISN between sites must support these specific functionalities:


MTU in the ISN and for MP-BGP control-plane traffic


System > System Settings > Control Plane MTU (9000 bytes by default)


  • Increased maximum transmission unit (MTU) support to handle the VXLAN-encapsulated traffic. The ISN must support an increased MTU on its links to carry site-to-site VXLAN traffic; the general recommendation is to add at least 100 bytes on top of the largest frame generated by the endpoints or the spine nodes. For example, if the endpoints support jumbo frames (9000 bytes), or if you keep the default settings on the spine nodes that generate 9000-byte packets for exchanging endpoint routing information, you should configure the ISN to support an MTU size of at least 9100 bytes. The default control-plane MTU value can be tuned by modifying the corresponding system settings in each APIC domain, under System > System Settings > Control Plane MTU.

  • Open Shortest Path First (OSPF) support between the spine switches and the ISN routers. Even though OSPF peering is required between the spine switches and the ISN devices, you are not limited to using OSPF across the entire ISN infrastructure; the ISN core can be, for example, an MPLS network or the Internet.

  • QoS considerations: for consistent QoS policy deployment across sites, you should configure a CoS-to-DSCP marking policy on the spine nodes in each site to help ensure proper QoS treatment in the ISN.


The CoS-to-DSCP mappings are configured using the Cisco APIC in each site through Tenant > infra > Policies > Protocol > DSCP class-cos translation policy for L3 traffic.


The spine interfaces are connected to the ISN devices through point-to-point routed subinterfaces tagged with the fixed VLAN 4.
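For reference, the matching configuration on the ISN router side might look like the following minimal sketch (NX-OS-style syntax); the interface, IP addressing, OSPF process tag, and area are hypothetical placeholders, and the 9216-byte MTU simply satisfies the at-least-9100-byte requirement discussed above:

feature ospf
router ospf ISN
!
! Parent interface towards the ACI spine: jumbo MTU for VXLAN traffic
interface Ethernet1/5
  mtu 9216
  no shutdown
!
! Point-to-point routed subinterface on the fixed VLAN 4
interface Ethernet1/5.4
  encapsulation dot1q 4
  ip address 10.10.35.2/30
  ip router ospf ISN area 0.0.0.0
  no shutdown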


In ACI Multi-Site deployments, the ISN does not need to support multicast to facilitate BUM traffic flows across sites. Instead, ACI uses headend replication for the flooded traffic: the source site sends a unicast copy to each remote site, and the copy is then flooded within the destination site. This is much simpler than the inter-pod network (IPN) in Cisco ACI Multi-Pod deployments, which requires Bidirectional Protocol Independent Multicast (PIM Bidir) support.


The spine switches require specific hardware (Cisco Nexus EX platform or newer), since they must perform the namespace translation function at line rate to avoid performance issues during intersite communication.


In addition, starting from Cisco ACI Release 3.2(1), a back-to-back topology between spine switches in two sites is supported.


ISN Control Plane


The OSPF control plane is used to exchange, between sites, reachability information for specific IP addresses defined on the spine switches. The following components are used:

  • BGP-EVPN Router-ID (EVPN-RID): A unique IP address that is defined on each spine node belonging to a fabric, which is used to establish MP-BGP EVPN adjacencies with the spine nodes in remote sites. If Multi-Pod is already configured in the same fabric, the same BGP Router-ID is used for both Multi-Pod and Multi-Site.

  • Overlay Unicast TEP (O-UTEP): A common anycast address that is shared by all the spine nodes in a pod at the same site. It is used to source and receive unicast VXLAN data-plane traffic.

  • Overlay Multicast TEP (O-MTEP): A common anycast address that is shared by all the spine nodes in the same site and is used to perform headend replication for BUM traffic (sourced from the O-UTEP address that is defined on the local spine nodes and destined for the O-MTEP of remote sites to which the given bridge domain is being stretched). This IP address is assigned per site.


These three types of IP addresses must be globally routable addresses.
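As a quick check, these addresses appear as additional loopback interfaces in VRF overlay-1 on the spine nodes once the infra configuration is deployed. The command below is standard, although which loopback carries which role varies per deployment:

pod35-spine1# show ip interface brief vrf overlay-1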



EVPN-RID, O-UTEP, and O-MTEP IP addresses are the only prefixes that must be exchanged across sites to enable the intersite MP-BGP EVPN control plane and the VXLAN data plane.


Therefore, the TEP pool prefixes used within each site, such as TEP pool 1 for Site 1 and TEP pool 2 for Site 2, do not need to be exchanged across sites to allow intersite communication.


So, there are no technical restrictions regarding how those pools should be assigned. However, the strong recommendation is not to assign overlapping TEP pools across separate sites so that your system is prepared for future functions that may require the exchange of TEP pool summary prefixes.


In addition, as best practice you should ensure that those TEP pool prefixes are filtered on the first ISN device so that they are not injected into the ISN network (as they may overlap with the address space already deployed in the backbone of the network).
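One possible way to implement this filtering, assuming the first ISN device redistributes the OSPF routes learned from the spines into a BGP-based ISN core (the TEP pool prefix 10.0.0.0/16, the route-map name, the OSPF tag, and the AS number below are hypothetical):

ip prefix-list ACI-TEP-POOLS seq 5 permit 10.0.0.0/16 le 32
!
! Drop the internal TEP pools, let the EVPN-RID/O-UTEP/O-MTEP host routes through
route-map SPINE-TO-ISN deny 10
  match ip address prefix-list ACI-TEP-POOLS
route-map SPINE-TO-ISN permit 20
!
router bgp 65100
  address-family ipv4 unicast
    redistribute ospf ISN route-map SPINE-TO-ISN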



As previously mentioned, MP-BGP EVPN adjacencies are established between spine nodes belonging to different fabrics by using the EVPN-RID addresses.


Both MP Interior BGP (MP-IBGP) and MP External BGP (MP-EBGP) sessions are supported, depending on the specific BGP autonomous system to which each site belongs.


When deploying EBGP sessions across sites, you can create only a full mesh of adjacencies, where each site’s spine switch connected to the ISN establishes EVPN peerings with all the remote spine switches.


When IBGP is deployed across sites, you can instead decide whether to use a full mesh or to introduce route-reflector nodes, which should be placed in separate sites to help ensure resiliency. The route-reflector nodes peer with each other and with all the remote spine nodes.


Starting from Cisco APIC Release 4.2(1), external routes learned from L3Outs can be exchanged between sites in addition to endpoint information. This feature is called intersite L3Out.


You can configure the infra for intersite connectivity using the Cisco Nexus Dashboard (ND) and Nexus Dashboard Orchestrator (NDO) user interfaces, following these summary steps:


  1. Add sites in Cisco Nexus Dashboard first. From the left navigation menu, select Sites. In the top right of the main page, select Actions > Add Site. For Site Type, select ACI or Cloud ACI depending on the type of ACI fabric you are adding. Specify the Cisco APIC's IP address, user credentials, and a unique site ID.

  2. Spine switch data from the registered site is automatically pulled and the BGP AS number is populated with the ACI MP-BGP AS number.

  3. From the Nexus Dashboard's Services page, open the Nexus Dashboard Orchestrator service. You will be automatically logged in using the Nexus Dashboard user's credentials.

  4. In the Nexus Dashboard Orchestrator GUI, manage the sites. From the left navigation menu, select Infrastructure > Sites. In the main pane, change the State from Unmanaged to Managed for each fabric that you want the NDO to manage.

  5. In the NDO, select Infrastructure > Infra Configuration and click Configure Infra to begin the configuration of the fabric connectivity infra for the sites.

  6. In the main list of the Fabric Connectivity Infra page, click General Settings to specify the Control Plane BGP parameters, such as BGP Peering Type (full-mesh or route-reflector), Keepalive Interval (Seconds), Hold Interval (Seconds), Stale Interval (Seconds), and so on.

  7. In the main list of the Fabric Connectivity Infra page, click a specific site to access three configuration levels: Site, Pod, and Spine.

  8. At Site level, perform the following actions:

  • Turn on the ACI Multi-Site knob to enable the site for MultiSite connectivity.

  • In the Overlay Multicast TEP field, enter the O-MTEP IP address.

  • In the BGP Autonomous System Number field, enter the BGP autonomous system number, if you want to modify this setting.

  • In the External Router Domain field, choose an external router domain that you have created in the APIC user interface in each site, just as for a standard L3Out.

  • In the Underlay Configuration tab, configure the OSPF Area ID, OSPF Area Type, and OSPF policies.

  9. At Pod level, perform the following actions:

  • In the Overlay Unicast TEP field, enter the O-UTEP IP address.

  10. At Spine level, perform the following actions:

  • Use Add Port to configure the ports that provide the OSPF connection towards the ISN.

  • (Optional) Turn on the BGP peering knob to enable MP-BGP EVPN, and in the BGP-EVPN ROUTER-ID field, enter the EVPN-RID IP address for that spine switch. The spine switch will automatically try to peer with all the other spine switches that have BGP peering turned on.

  • (Optional) Turn on the Spine is route reflector knob if the spine switch acts as a BGP route reflector.

  11. Click Deploy to apply the configuration (a quick verification sketch follows this list).
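Once deployed, the orchestrator pushes an infra L3Out (typically named intersite) into the infra tenant of each APIC. A hedged way to spot-check it from the APIC CLI, assuming that default name:

apic1# moquery -c l3extOut -f 'l3ext.Out.name=="intersite"' | egrep "^dn|^name"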


MultiSite Overlay Control Plane



The overlay control plane events for the exchange of host information across sites follow this sequence:

  1. Endpoints EP1 and EP2 connect to separate Sites 1 and 2.

  2. The endpoints are locally learned by the leaf nodes in their sites, and the leaf nodes originate COOP control-plane messages to report the endpoint information to the spine nodes.

  3. Spine nodes in both sites learn about the locally connected endpoints at the leaf nodes. Still, this information is not yet exchanged across sites for EP1 and EP2 EPGs because there is no policy in place that allows communication between them.

  4. An intersite policy is defined in the Cisco MultiSite Orchestrator, which is pushed and rendered in the two sites.

  5. The intersite policy triggers Type-2 EVPN updates across sites to exchange EP1 and EP2 host route information, which is always associated with the O-UTEP address that identifies their site. Thus, when you move an endpoint in a site between leaf nodes, the spine nodes will not generate additional EVPN updates, until the endpoint is migrated to a different site.

  6. The received MP-BGP EVPN information is synced (via COOP) with the other local spine nodes that are not BGP intersite peers.


MultiSite Overlay Data Plane


ACI Multi-Site uses MP-BGP EVPN as the control plane between spine switches to exchange host information for discovered endpoints that are part of separate fabrics, enabling east-west communication.


After endpoint information is exchanged across sites, the VXLAN data plane is used to allow intersite Layer 2 and Layer 3 communication.


The endpoint information is shared across sites only when the NDO configuration indicates that the endpoints need to communicate with the other site.


The two conditions for EP communication across sites are:


  1. The endpoint belongs to the EPG that is stretched across sites.

  2. The endpoint belongs to a non-stretched EPG that has a contract allowing communication with an EPG in another site.


In such scenarios, the endpoints with IP addresses are shared across sites via MP-BGP EVPN.


However, endpoints without IP addresses (such as a Layer 2 endpoint that has only a MAC address) are not shared unless Layer 2 Stretch is enabled on the NDO.



The following shows the detailed steps of how MP-BGP EVPN is used to share such endpoint information across sites:



  1. Endpoints EP1 and EP2 connect to separate Sites 1 and 2.

  2. The endpoints are locally learned by the leaf nodes, which send COOP control-plane messages with the endpoint information to the spine nodes.

  3. Spine nodes in both sites learn about the locally connected endpoints at the leaf nodes. Still, this information is not yet exchanged across sites for EP1 and EP2 EPGs because there is no policy in place that allows communication between them.

  4. An intersite policy is defined in the Cisco MultiSite Orchestrator, which is pushed to the two sites.

  5. The intersite policy triggers Type-2 EVPN updates across sites to exchange EP1 and EP2 host route information, which is always associated with the O-UTEP address that identifies their site.

  6. Thus, when you move an endpoint in a site between leaf nodes, the spine nodes will not generate additional EVPN updates, until the endpoint is migrated to a different site.




BUM Traffic Between Sites


The deployment of VXLAN tunnels between sites creates a logical abstraction on top of the ISN, which can have multiple Layer 3 hops, so endpoints in different sites can communicate as if they were part of the same logical Layer 2 domain.


Thus, those endpoints can send Layer 2 BUM frames between sites to other endpoints connected to the same Layer 2 segment, regardless of their actual physical location.


Cisco ACI MultiSite uses ingress replication and headend replication for BUM frames, which is the reason why the ISN does not need to support multicast routing.


In ingress replication, the spine node in the source site creates copies of the BUM frame, one for each destination site on which the Layer 2 domain is extended.


Those copies are sent towards O-MTEP of each site. These packets are handled as unicast traffic in the ISN because the O-MTEP in the outer encapsulation is a unicast IP address.


Once the BUM frame encapsulated with a unicast O-MTEP reaches the destination site, the destination site replicates the BUM frame and floods it within the site, which is the headend replication.


The BUM traffic is forwarded across sites in the way mentioned above only when Intersite BUM Traffic Allow is enabled on the bridge domain.



If the stretched bridge domain has ARP Flooding enabled or Layer 2 Unknown Unicast set to flood, Intersite BUM Traffic Allow must also be enabled so that the flooded traffic can reach the other sites.



There are three types of Layer 2 BUM traffic that are forwarded across sites when Intersite BUM Traffic Allow is enabled on the bridge domain:


  • Layer 2 Broadcast frames (B): The frames are always forwarded across sites when Intersite BUM Traffic Allow is enabled. The ARP request is an exception because the ARP request is flooded only when ARP Flooding is enabled in the bridge domain regardless of MultiSite.

  • Layer 2 Unknown Unicast frames (U): The frames are flooded only when Layer 2 Unknown Unicast is set to flood in the bridge domain regardless of MultiSite. When Intersite BUM Traffic Allow is enabled, Layer 2 Unknown Unicast flooding/proxy mode can be modified via the NDO.

  • Layer 2 Multicast frames (M): The same forwarding behavior (ingress and headend replication) applies to intra-bridge-domain Layer 3 multicast frames (that is, the source and receivers are in the same or different IP subnets but part of the same bridge domain) and to "true" Layer 2 multicast frames (that is, the destination MAC address is multicast and there is no IP header in the packet). In both cases, the traffic is forwarded to the sites where the bridge domain is stretched with Intersite BUM Traffic Allow enabled.


The following figure shows Layer 2 BUM traffic flow across sites, for a specific bridge domain that is stretched with Intersite BUM Traffic Allow enabled.




The Layer 2 BUM frame across sites follows this sequence:


  1. EP1, belonging to a specific bridge domain, generates a Layer 2 BUM frame.

  2. Depending on the BUM type of frame and the corresponding bridge domain settings, the leaf node may need to flood the traffic in the bridge domain. As a result, the frame is VXLAN-encapsulated and sent to the specific multicast group (Group IP address outer [GIPo]) associated with the bridge domain within the fabric along one of the specific multidestination trees associated to that GIPo, so it can reach all the other leaf and spine nodes in the same site.

  3. One of the spine nodes connected to the ISN is elected as the designated forwarder for that specific bridge domain (this election is held between the spine nodes using IS-IS protocol exchanges). The designated forwarder is responsible for replicating each BUM frame for that bridge domain to all the remote sites with the same stretched bridge domain (a verification sketch follows this list).

  4. The designated forwarder makes a copy of the BUM frame and sends it to the remote sites. The destination IP address used when the packet is encapsulated with VXLAN is the special IP address (O-MTEP) identifying each remote site and is used specifically for the transmission of BUM traffic across sites. The source IP address for the VXLAN-encapsulated packet is instead the anycast O-UTEP address deployed on all the local spine nodes connected to the ISN.

  5. One of the remote spine nodes receives the packet, translates the VNID value contained in the header to the locally significant VNID value associated with the same bridge domain, and sends the traffic within the site along one of the local multidestination trees for the bridge domain.

  6. The traffic is forwarded within the site and reaches all the spine and leaf nodes with endpoints that are actively connected to the specific bridge domain.

  7. The receiving leaf nodes use the information that is contained in the VXLAN header to learn the site location for endpoint EP1 that sourced the BUM frame. They also send the BUM frame to all (or some of) the local interfaces that are associated with the bridge domain, so that endpoint EP2 (in this example) can receive it.
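The multidestination trees and designated-forwarder behavior referenced above can be spot-checked on a leaf or spine node; the following command is a hedged example, and its exact output format varies with the software release:

pod35-spine1# show isis internal mcast routes gipo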


Depending on the number of configured bridge domains, the same GIPo address may be associated with different bridge domains in the same site.


Thus, when flooding for one of those bridge domains is enabled across sites with Intersite BUM Traffic Allow, BUM traffic for the other bridge domains using the same GIPo address is also sent across the sites and is then dropped on the receiving spine nodes.


This behavior can increase the bandwidth utilization in the ISN.

Because of this behavior, when a bridge domain is configured as stretched with Intersite BUM Traffic Allow enabled from the Cisco NDO user interface, by default a GIPo address is assigned from a separate range of multicast addresses.


It is reflected in the user interface by the Optimize WAN Bandwidth flag, which is enabled by default for the bridge domain created by the NDO.
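To check which GIPo address a particular bridge domain was assigned (and therefore whether it came from the dedicated range used when Optimize WAN Bandwidth is enabled), you can query the bridge domain object on the APIC. BD1 is a hypothetical bridge domain name; bcastP is the attribute that holds the GIPo:

apic1# moquery -c fvBD -f 'fv.BD.name=="BD1"' | egrep "^dn|^name|^bcastP"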


Note: If a bridge domain configuration is imported from an APIC domain, the flag is disabled by default, so you will need to enable it manually to change the GIPo address already associated with the bridge domain. Doing so causes a few seconds of outage for intra-fabric BUM traffic in that bridge domain while the GIPo address is updated on all the leaf nodes on which that specific bridge domain is deployed.


So far, you have seen three bridge domain configurations for the Cisco ACI MultiSite:

  • Layer 2 Stretch: Share Layer 2 endpoint information across sites, in addition to Layer 3 endpoint information.

  • Intersite BUM Traffic Allow: Forward BUM traffic across sites.

  • Optimize WAN Bandwidth: Allocate a GIPo to the bridge domain from a reserved range to avoid unnecessary intersite traffic due to multiple bridge domains sharing the same GIPo.

Although these Multi-Site-specific configurations could be changed from the APIC in each site, it is recommended to maintain them from the NDO, like any other configuration that the NDO has visibility of.



In the APIC, the configurations are reflected (and could be modified) in the Advanced/Troubleshooting tab of the bridge domain.



Unicast Communication Between Sites


The first requirement before intra-subnet IP communication across sites can be achieved is to complete the ARP exchange between source and destination endpoints.


In ACI, Address Resolution Protocol (ARP) request handling (whether ACI floods it or not) depends on the ARP Flooding setting on the bridge domain.


There are two different scenarios to consider:

  • ARP flooding is enabled in the bridge domain: When ARP flooding is enabled, Intersite BUM Traffic Allow in the same bridge domain needs to be enabled as well, because the ARP request is handled as normal broadcast traffic and is flooded. This is the default configuration for stretched bridge domains created by the NDO with Intersite BUM Traffic Allow enabled.

  • ARP flooding is disabled in the bridge domain: When ARP flooding is disabled, the ARP request is handled as a routed unicast packet, so the BUM traffic handling across sites (Intersite BUM Traffic Allow) does not matter. This is the default configuration for stretched bridge domains created by the NDO with Intersite BUM Traffic Allow disabled. (A sketch showing how to check these bridge domain settings follows this list.)
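Both settings live on the bridge domain object, so a quick hedged check from the APIC CLI looks like the following; BD1 is a hypothetical bridge domain name, and arpFlood and unkMacUcastAct are the attributes that correspond to ARP Flooding and Layer 2 Unknown Unicast:

apic1# moquery -c fvBD -f 'fv.BD.name=="BD1"' | egrep "^name|^arpFlood|^unkMacUcastAct"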


The following example illustrates an intra-subnet IP communication across sites, where ARP flooding is disabled on the bridge domain. This figure shows the ARP request flow from Site 1 to Site 2.



The ARP request from EP1 in Site 1 to EP2 in Site 2 with ARP flooding that is disabled in the bridge domain follows this sequence:


  1. EP1 generates an ARP request for the EP2 IP address.

  2. Since ARP Flooding is disabled, the local leaf node inspects the ARP payload and checks the target IP address, which is EP2's. Assuming that EP2's IP information is initially unknown on the local leaf, the ARP request is encapsulated and sent toward the Proxy A anycast TEP address defined on all the local spine nodes (based on the pervasive EP1/EP2 IP subnet information installed in the local routing table) to perform a lookup in the COOP database.

  3. One of the local spine nodes receives the ARP request from the local leaf node.

  4. The capability of forwarding ARP requests across sites in "unicast mode" is mainly dependent on the knowledge in the COOP database of the IP address of the remote endpoint (information that is received via the MP-BGP EVPN control plane with the remote spine nodes). If the IP address of the remote endpoint is known (that is, EP2 is not a “silent host”), the local spine nodes know the remote O-UTEP B address identifying the site to which EP2 is connected and can encapsulate the packet and send it across the ISN toward the remote site. The spine node also rewrites the source IP address of the VXLAN-encapsulated packet, replacing the TEP address of the leaf node with the local O-UTEP A address identifying the local site.


Note: If the IP address of the remote endpoint is not known in the COOP database in Site 1, starting from Cisco ACI Release 3.2(1) a new "intersite ARP glean" function ensures that the remote "silent host" can receive the ARP request, so it can reply and be discovered in the remote site.


  5. The VXLAN frame is received by one of the remote spine nodes, which translates the original VNID and class ID values to locally significant ones, re-encapsulates the ARP request, and then sends it toward the local leaf node to which EP2 is connected.


  6. The leaf node receives the frame, de-encapsulates it, and learns the class ID and site location information for remote endpoint EP1.


  7. The frame is then forwarded to the local interface to which EP2 is connected, assuming that ARP flooding is disabled on the bridge domain in Site 2 as well.


At this point, EP2 can reply with a unicast ARP response that is delivered to EP1, as shown in this figure:



The ARP reply from EP2 in Site 2 to EP1 in Site 1 follows this sequence:

  1. EP2 sends an ARP reply to EP1.

  2. The local leaf node encapsulates the traffic toward the remote O-UTEP A address.

  3. The spine nodes also rewrite the source IP address of the VXLAN-encapsulated packet with the local O-UTEP B address identifying Site 2.

  4. The VXLAN frame is received by the spine node, which translates the original VNID and class ID values of Site 2 to locally significant ones (Site 1) and sends it toward the local leaf node to which EP1 is connected.

  5. The leaf node receives the frame, de-encapsulates it, and learns the class ID and site location information for remote endpoint EP2.

  6. The frame is then sent to the local interface on the leaf node and reaches EP1.



At the completion of the ARP exchange process described above, each leaf node has full knowledge of the class ID and location of the remote endpoints that are trying to communicate.


Thus, from this point on, traffic flows directly in both directions: a leaf node at a given site always encapsulates the traffic and sends it toward the O-UTEP address that identifies the site to which the destination endpoint is connected.
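A hedged way to verify this end state: on a spine, confirm that the remote endpoint is present in the COOP database, and on a leaf, confirm that the remote endpoint is learned with a tunnel pointing toward the remote site's O-UTEP. The VRF VNID (2392068), the endpoint IP (10.10.36.20), and the leaf name pod35-leaf1 are hypothetical values:

pod35-spine1# show coop internal info ip-db key 2392068 10.10.36.20
pod35-leaf1# show endpoint ip 10.10.36.20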


MultiSite Stretched Components


Cisco ACI Multisite architecture can be used in various scenarios to meet different business requirements. Each specific use case essentially uses different bridge-domain configuration settings, to facilitate different connectivity options.


Layer 2 Connectivity Across Sites Without Flooding


In this Cisco ACI Multi-Site use case, the tenant, VRF, bridge domains, and EPGs are stretched across sites, while the Layer 2 BUM flooding is localized at each site and it is not forwarded between sites. Similarly, their provider and consumer contracts are also stretched between sites.



The chosen bridge domain in the Cisco NDO user interface should be configured as depicted in this figure:



This use case enables you to implement IP mobility across sites without BUM flooding. Even when an endpoint relocates to a different site, it can still communicate with endpoints that are part of the same (or a different) IP subnet that may still be connected to the original site.


If a new endpoint in the original site wants to talk to the migrated endpoint, the ARP requests from the new endpoint for the migrated endpoint can still be delivered even without BUM flooding, assuming the ARP flooding is disabled on the bridge domain in each site and the migrated endpoint is already learned in the new site.


In case the migrated endpoint has not yet been learned, the intersite ARP glean process kicks in across sites.


An issue in a given bridge domain in a site (as, for example, a broadcast storm) can in fact be confined in that specific site without affecting other connected fabrics.


The IP mobility support across sites (when Layer 2 flooding is not required) is needed in these two main scenarios:

  • For disaster-recovery scenarios that use cold migrations, in which you can move an application, initially running inside Fabric 1, to a different site in Fabric 2. While this can be achieved by changing the IP address used to access the application and applying a DNS-based mechanism to point clients to the new IP address, often the desire is to maintain the same IP address the application had while running in the original site.

  • For business continuity scenarios that use hot migrations, in which you can temporarily relocate workloads across sites without interruption in the provided service that is being migrated. A typical example of functionality that can be used for this purpose is VMware vSphere vMotion for live VM migration.


Layer 2 Connectivity Across Sites with Flooding


This Cisco ACI MultiSite design provides traditional Layer 2 stretching of bridge domains across sites, including the capability to flood Layer 2 BUM frames.


In this use case, the tenant and the VRF are stretched across sites, including bridge domains, EPGs, and their provider and consumer contracts. BUM traffic is forwarded across sites, using the headend replication capabilities of the spine nodes that replicate and send the frames to each remote fabric where the Layer 2 bridge domain has been stretched.



To stretch a bridge domain in the Cisco NDO user interface with flooding enabled, use the following option:



The need to flood BUM traffic across sites is driven by specific requirements, such as:

  • Deployment of the same application hierarchy on all sites, which enables you to spread workloads belonging to the various EPGs across different fabrics and use common, consistent policies.

  • Active/Active high availability implementations between the sites.

  • Layer 2 clustering, which requires BUM communication between cluster nodes.

  • Live VM migration.


Layer-3-Only Connectivity Across Sites


In many deployment scenarios, the fundamental requirement is to help ensure that only routed communication can be established across sites. In this use case, no Layer 2 extension or flooding is allowed across sites, while different bridge domains and IP subnets are defined in separate sites.


As always in Cisco ACI, communication between EPGs can be established only after applying a proper security policy using a contract between the EPGs. There are two main types of Layer 3 connectivity across sites:


  • Intra-VRF communication

  • Inter-VRF communication

For both options, the chosen bridge domain in the Cisco NDO user interface should be configured as depicted in this figure:



Layer 3 Intra-VRF Communication Across Sites


This scenario provides intersite communication between endpoints that are connected to different bridge domains that are part of the same stretched VRF instance (the same tenant).


Therefore, you can manage non-stretched EPGs across sites and the contracts between them, while the MP-BGP EVPN allows the exchange of host routing information, enabling inter-site communication. This use case is depicted in the following figure:



When deploying Cisco ACI MultiSite for Layer-3-only connectivity across sites, this use case provides multiple benefits compared to simply interconnecting separate Cisco ACI fabrics via L3Out logical connections established from the border leaf nodes, such as:


  • There is no need to re-classify the packet when it enters the destination fabric, because the EPG classification (class ID) is carried in the VXLAN header through the ISN. When connecting two ACI fabrics with L3Outs (that is, without ACI Multi-Site), the VXLAN header is de-encapsulated when the packet leaves the source fabric, so the destination fabric needs extra configuration to re-classify the packet when it enters via the L3Out.

  • Contract configuration becomes simpler and easier to maintain. When connecting two ACI fabrics with L3Outs (that is, without ACI Multi-Site), separate contracts need to be created and applied between the source EPG and the L3Out in the source fabric, and between the destination EPG and the L3Out in the destination fabric. With ACI Multi-Site, contracts are stretched, which allows you to configure the contract directly between the source and destination EPGs as if they were entities in the same fabric.

  • The same IP subnet can easily be used in two different fabrics. Connecting two ACI fabrics without ACI Multi-Site requires the deployment of a separate Layer 2 Data Center Interconnect (DCI) technology in the external network, while ACI Multi-Site only requires simple IP connectivity to a few IP addresses, such as the O-UTEP and O-MTEP.

  • Extending multiple VRFs across fabrics is simpler and easier to maintain. When connecting two ACI fabrics with L3Outs (that is, without ACI Multi-Site), the external network between the L3Outs in each site also needs to support VRF separation (for example, L3VPN) to carry the traffic and routing information of each VRF, while ACI Multi-Site only requires simple IP connectivity to a few IP addresses, such as the O-UTEP and O-MTEP.

Layer 3 Inter-VRF Communication Across Sites


This use case scenario enables you to create provider EPGs in one group of sites that offer shared services, which can be consumed by EPGs in another group of sites.


The source and destination bridge domains belong to different VRF instances (part of the same or different tenants), which requires the route-leaking function to allow this communication; the route leaking is driven by the creation of a contract between the source and destination EPGs.



The provider contract C2 for the shared service must be set to global scope, because it needs to be used between EPGs deployed across tenants, such as Web and App EPGs in Tenant 1 and BigData EPG as shared service in Tenant BigData. Communication across the VRFs is established through VRF route leaking.

This design provides communication across VRFs and tenants while preserving the isolation and security policies of the tenants. Still, the shared service is supported only with nonoverlapping and nonduplicate subnets.


Verify that the spine node role for BGP is msite-speaker, both in the object model and in BGP:


pod35-spine1# moquery -d sys/bgp/inst | egrep "dn|.*Role"

dn : sys/bgp/inst

spineRole : msite-speaker


pod35-spine1# show bgp internal node-role

Node role : : MSITE_SPEAKER


Verify that BGP L2VPN EVPN and OSPF are up on each spine:


pod35-spine1# show ip ospf neighbors vrf overlay-1

OSPF Process ID default VRF overlay-1

Total number of neighbors: 2

Neighbor ID Pri State Up Time Address Interface

10.10.35.100 1 FULL/ - 02:06:51 10.10.35.2 Eth2/5.37

10.10.35.100 1 FULL/ - 02:06:27 10.10.35.6 Eth2/6.38

MP-BGP EVPN is used to communicate endpoint (EP) information across sites. MP-IBGP or MP-EBGP peering is supported across sites.

Remote host route entries (EVPN Type-2) are associated with the remote site's anycast O-UTEP (DP-ETEP) address.


pod35-spine1# show bgp l2vpn evpn summary vrf overlay-1

BGP summary information for VRF overlay-1, address family L2VPN EVPN

BGP router identifier 10.10.35.111, local AS number 135

BGP table version is 26, L2VPN EVPN config peers 1, capable peers 1

13 network entries and 9 paths using 1864 bytes of memory

BGP attribute entries [4/576], BGP AS path entries [1/6]

BGP community entries [0/0], BGP clusterlist entries [0/0]


Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

10.10.35.112 4 136 126 124 26 0 0 01:54:15 3


Note : Host routes are exchanged only if there is a cross-site contract requiring communication between endpoints.
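To go one level deeper than the summary above, you can look at an individual Type-2 route and confirm that its next hop is the remote site's anycast O-UTEP address; the endpoint IP below is a hypothetical example:

pod35-spine1# show bgp l2vpn evpn 10.10.36.20 vrf overlay-1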


Cisco Nexus Dashboard


Cisco Nexus Dashboard is a single launch point to monitor and scale across different sites, whether it is Cisco Application Centric Infrastructure™ (Cisco ACI®) fabric controllers, the Cisco® Application Policy Infrastructure Controller (APIC), Cisco Nexus Dashboard Fabric Controller (NDFC), or a Cloud APIC running in a public cloud provider environment.


Using Cisco Nexus Dashboard, DevOps teams can improve the application deployment experience for multicloud applications through Infrastructure-as-Code (IaC) integrations.




MSO TSHOOT


Node Unknown/Down status


[root@node1 ~]# docker node ls



Inspect a node using "docker node inspect <node-name> --pretty"


[root@node1 ~]# docker node inspect node2 --pretty



Check the Docker engine on the failed node
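A minimal sketch of that check, run as root on the failed node:

[root@node2 ~]# systemctl status docker.service
[root@node2 ~]# docker ps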



Docker service restart

As the root user, run 'systemctl stop docker.service' and then 'systemctl start docker.service'.


Remove a failed node from swarm cluster


Check STATUS of one or more nodes through ‘docker node ls’


[root@node1 ~]# docker node ls



Demote the failed node from manager to worker


[root@node1 ~]# docker node demote node2

Manager node2 demoted in the swarm.


Remove the failing node from swarm cluster


[root@node1 ~]# docker node rm node2

node2


Join a new node to the swarm cluster


[root@node2 ~]# docker swarm join-token manager
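Note that docker swarm join-token must be run on a node that is still a swarm manager; it prints the exact join command (including the token) to execute on the node being added. The token, node name, and manager IP below are placeholders:

[root@node3 ~]# docker swarm join --token SWMTKN-1-<token> 10.0.0.11:2377
This node joined a swarm as a manager.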



Docker Swarm Initialization



[root@node2 ~]# cd /opt/cisco/msc/builds/msc_2.2.b/prodha

[root@node2 ~]# ./msc_cfg_init.py


