
Multi-site Traffic Flow

  • Writer: Mukesh Chanderia
  • Feb 9
  • 7 min read

This article explains how traffic flows between Endpoint Groups (EPGs) across multiple sites in Cisco ACI using Nexus Dashboard Orchestrator (NDO). We will walk through three common design scenarios and explain both the configuration steps and the underlying traffic behavior.


1. Stretched Bridge Domain with Site-Local EPGs (Layer 2 Inter-Site Traffic)

Scenario Overview

In this scenario:

  • EPG1 is located in Site 1

  • EPG2 is located in Site 2

  • Both EPGs belong to the same Bridge Domain (BD)

This design represents a Stretched Bridge Domain with site-local EPGs. Because both EPGs share the same BD and subnet, traffic between them is bridged (Layer 2) across the Inter-Site Network (ISN).

Even though the traffic is bridged, ACI’s Zero Trust policy model still requires a contract because the endpoints belong to different EPGs.

Configuration Using Nexus Dashboard Orchestrator (NDO)

Step 1: Create the Stretched Bridge Domain

Since the BD must exist in both sites, it is created in a Stretched Template.

  • Navigate to a Stretched Template associated with both Site 1 and Site 2

  • Create a Bridge Domain (e.g., BD)

  • Enable L2 Stretch

  • Configure a subnet (e.g., 192.168.1.254/24)

    • This subnet is deployed as an anycast gateway on leaf switches in both sites

  • Associate the BD with a Stretched VRF
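NDO ultimately pushes this definition through its REST API. As a rough illustration of what Step 1 captures, the sketch below models the stretched BD as a plain dictionary; the field names are assumptions loosely modeled on NDO's schema/template structure, not verbatim API attributes.

```python
# Sketch of a stretched Bridge Domain definition as a plain dict.
# Field names are illustrative assumptions, not exact NDO API attributes.

def build_stretched_bd(name, vrf, gateway_subnet):
    """Return a dict describing a stretched Bridge Domain."""
    return {
        "name": name,
        "l2Stretch": True,             # extend the BD across both sites
        "vrfRef": vrf,                 # must reference a stretched VRF
        "subnets": [
            {
                "ip": gateway_subnet,  # deployed as an anycast gateway in each site
                "scope": "private",
            }
        ],
    }

bd = build_stretched_bd("BD", "VRF-Stretched", "192.168.1.254/24")
```

The key point the sketch encodes: the BD is defined once (with L2 Stretch enabled and a single gateway subnet), and the same definition lands in both sites.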

Step 2: Configure Site-Local EPGs

EPGs are created in site-specific templates while referencing the shared Bridge Domain.

Site 1 Template

  • Create EPG1

  • Map it to the stretched BD

    • Use Inter-Template Reference if required

  • Assign static ports or VMM domains for Site 1 endpoints

Site 2 Template

  • Create EPG2

  • Map it to the same BD

  • Assign static ports or VMM domains for Site 2 endpoints

Step 3: Create and Apply the Contract

Even within the same BD, communication between EPGs requires a contract.

  • In the Stretched Template, create a contract (e.g., EPG1-to-EPG2)

  • Add an appropriate filter (e.g., Permit-Any)

  • Apply the contract:

    • EPG1 as Consumer (Site 1)

    • EPG2 as Provider (Site 2)

Step 4: Deploy

Deploy the following templates:

  • Stretched Template

  • Site 1 Template

  • Site 2 Template

How Traffic Flows

Data Plane

  • The stretched BD forms a single broadcast domain across sites

  • Traffic is bridged using VXLAN tunnels across the ISN

Policy Plane

  • NDO automatically creates Shadow EPGs

  • These shadow objects ensure correct class-ID translation and policy enforcement across sites
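Conceptually, the spines hold translation tables that map a remote site's class-IDs to local ones, which is what makes shadow-EPG policy enforcement work. The toy model below illustrates the lookup; the numeric IDs are made up for illustration.

```python
# Toy model of the spine translation table that shadow EPGs rely on.
# Entries map (remote site, remote class-ID) -> local class-ID.
# The numbers below are invented for illustration only.

TRANSLATION_TABLE = {
    ("site2", 49153): 32770,   # shadow of EPG2 as seen in Site 1
}

def translate_class_id(remote_site, remote_class_id):
    """Rewrite a remote class-ID to its local equivalent, as the
    receiving spine does before local policy enforcement."""
    try:
        return TRANSLATION_TABLE[(remote_site, remote_class_id)]
    except KeyError:
        raise ValueError("no translation entry: traffic cannot be classified")
```

If no translation entry exists (for example, because the templates were never deployed to one site), inter-site traffic cannot be classified against the contract.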


2. Inter-Site, Intra-VRF Traffic (Layer 3 Routing Across Sites)

Scenario Overview

In this design:

  • EPG1 is in BD1 (Site 1)

  • EPG2 is in BD2 (Site 2)

  • Both Bridge Domains belong to the same stretched VRF

Because the BDs are different, traffic is routed (Layer 3) across the ISN rather than bridged.

Configuration Using Nexus Dashboard Orchestrator (NDO)

Step 1: Configure the Stretched VRF

  • Open a Stretched Template

  • Create a VRF (e.g., VRF-Stretched)

  • Deploy the template so the VRF exists in both sites

Step 2: Configure Site-Local Bridge Domains

Each BD is local to its site and not stretched.

Site 1 Template

  • Create BD1

  • Associate it with VRF-Stretched

  • Add subnet (e.g., 10.1.1.1/24)

  • Do not enable L2 Stretch or Intersite BUM traffic

Site 2 Template

  • Create BD2

  • Associate it with VRF-Stretched

  • Add subnet (e.g., 10.2.2.1/24)

Step 3: Configure EPGs

Site 1

  • Create EPG1

  • Map to BD1

  • Assign ports or VMM domains

Site 2

  • Create EPG2

  • Map to BD2

  • Assign ports or VMM domains

Step 4: Create and Apply the Contract

  • In the Stretched Template, create a contract (e.g., Contract-1)

  • Add required filters (e.g., Permit IP)

  • Set scope to Tenant

  • Apply:

    • EPG1 as Consumer

    • EPG2 as Provider

Step 5: Deploy

Deploy all related templates.

How Traffic Works

  • Shadow Objects: NDO creates shadow EPGs and BDs in remote sites

  • Route Exchange: MP-BGP EVPN distributes endpoint routes within the stretched VRF

  • Traffic Flow:

    • Traffic from EPG1 is routed by the local leaf

    • VXLAN encapsulation occurs at the spine

    • Traffic is forwarded across the ISN to Site 2 and delivered to EPG2
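The leaf's bridge-versus-route decision in scenarios 1 and 2 comes down to whether the destination falls inside the source BD's subnet. A minimal sketch of that decision, using the article's example subnets:

```python
import ipaddress

def forwarding_mode(src_bd_subnet, dst_ip):
    """Return 'bridged' if the destination is inside the source BD
    subnet (scenario 1), otherwise 'routed' (scenario 2)."""
    net = ipaddress.ip_network(src_bd_subnet, strict=False)
    return "bridged" if ipaddress.ip_address(dst_ip) in net else "routed"

# EPG1 (BD1, 10.1.1.0/24) talking to EPG2 (BD2, 10.2.2.0/24) is routed:
print(forwarding_mode("10.1.1.1/24", "10.2.2.10"))      # routed
# Within the stretched BD of scenario 1, traffic stays at Layer 2:
print(forwarding_mode("192.168.1.254/24", "192.168.1.10"))  # bridged
```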


3. Inter-Tenant Traffic Between Sites (Inter-VRF Communication)

Scenario Overview

In this scenario:

  • EPGs belong to different tenants or to a tenant that is not stretched

  • Each tenant uses a separate VRF

By default, traffic between separate VRFs is blocked, and a standard contract alone is not enough to permit it. To enable communication, Route Leaking and a Global-Scope Contract are both required.

Configuration Steps

Step 1: Configure Route Leaking

Provider EPG (Site 1)

  • Add the EPG subnet

  • Set scope to Shared between VRFs

  • Enable No Default SVI Gateway

This advertises the subnet without creating a conflicting gateway.

Consumer Bridge Domain (Site 2)

  • Ensure the BD subnet is marked Shared between VRFs
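Both sides of Step 1 have to be set correctly or leaking silently fails. The checker below sketches the three settings to verify; the dictionary keys are illustrative stand-ins, not APIC attribute names.

```python
# Sketch: sanity-check the route-leaking settings from Step 1.
# Keys are illustrative stand-ins, not APIC attribute names.

def leaking_problems(provider_epg_subnet, consumer_bd_subnet):
    """Return a list of misconfigurations; empty means Step 1 is complete."""
    problems = []
    if not provider_epg_subnet.get("shared_between_vrfs"):
        problems.append("provider subnet will not leak into the consumer VRF")
    if not provider_epg_subnet.get("no_default_svi_gateway"):
        problems.append("provider side would deploy a conflicting SVI gateway")
    if not consumer_bd_subnet.get("shared_between_vrfs"):
        problems.append("consumer BD subnet will not leak toward the provider VRF")
    return problems
```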

Step 2: Create a Global Contract

  • Create the contract in the Provider Tenant

  • Set scope to Global

  • Add required filters (e.g., Permit IP/TCP)

Step 3: Apply the Contract

  • Provider Tenant (Site 1): Apply contract to EPG as Provider

  • Consumer Tenant (Site 2): Apply contract to EPG as Consumer

Because the scope is Global, the contract is visible across tenants.

Step 4: Deploy Templates

Deploy configurations to both sites using NDO.

What Happens Behind the Scenes

  • Shadow Objects: NDO creates shadow tenants, VRFs, and EPGs in remote sites

  • Translation Tables: VNIDs and class IDs are mapped for correct decapsulation and policy enforcement

  • Route Leaking: MP-BGP EVPN exchanges leaked routes between VRFs


How NDO Simplifies Inter-Tenant Contracts

NDO automatically handles the contract export/import mechanism:

  • Detects inter-tenant relationships

  • Creates a Contract Interface in the consumer tenant

  • Eliminates manual configuration on the APIC


Verification

After deployment:

  • The local APIC will show a Contract Interface under the consumer tenant

  • This confirms the contract has been successfully imported and enforced

Key Takeaway

  • Same BD across sites → Layer 2 bridging

  • Different BDs, same VRF → Layer 3 routing

  • Different tenants/VRFs → Route leaking + Global contracts

Nexus Dashboard Orchestrator abstracts much of the complexity by automatically creating shadow objects, translation tables, and contract interfaces, making multi-site ACI deployments significantly easier to manage.
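The three takeaways collapse into a simple decision function, sketched here for reference:

```python
# Sketch: map the three scenarios in this article to their key requirement.

def intersite_requirement(same_bd, same_vrf):
    """Given whether the EPGs share a BD and/or a VRF, return the
    design requirement from the key takeaways above."""
    if same_bd:
        return "Layer 2 bridging: L2 Stretch + contract"
    if same_vrf:
        return "Layer 3 routing: stretched VRF + contract"
    return "Route leaking + global-scope contract"
```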



-------------------------------------------------------------------------------------------------------------------------------


Lab Guide


Lab Objective

In this lab, you will configure and verify traffic flow between Endpoint Groups (EPGs) across multiple sites using Cisco Nexus Dashboard Orchestrator (NDO). You will implement and validate three common multi-site ACI designs:

  1. Layer 2 Inter-Site traffic using a Stretched Bridge Domain

  2. Layer 3 Inter-Site traffic within a stretched VRF

  3. Inter-Tenant (Inter-VRF) traffic across sites

Lab Prerequisites

  • Cisco ACI Multi-Site fabric deployed

  • Nexus Dashboard with Nexus Dashboard Orchestrator (NDO) enabled

  • Two ACI sites:

    • Site 1

    • Site 2

  • Inter-Site Network (ISN) connectivity established

  • Administrative access to NDO and APICs

  • Basic understanding of:

    • Bridge Domains (BD)

    • Endpoint Groups (EPG)

    • VRFs

    • Contracts

Lab 1: Layer 2 Inter-Site Traffic Using a Stretched Bridge Domain

Lab Topology

  • EPG1 → Site 1

  • EPG2 → Site 2

  • Shared Bridge Domain and subnet

Task 1: Create a Stretched Bridge Domain

  1. Log in to Nexus Dashboard

  2. Open Nexus Dashboard Orchestrator

  3. Navigate to Application Management → Templates

  4. Open an existing Stretched Template (associated with Site 1 and Site 2)

  5. Create a Bridge Domain:

    • Name: BD-Stretched

  6. Enable L2 Stretch

  7. Add subnet:

    • 192.168.1.254/24

  8. Associate the BD with a Stretched VRF

  9. Save the configuration

Expected Result: The Bridge Domain is defined once and will be deployed to both sites with an anycast gateway.

Task 2: Configure Site-Local EPGs

Site 1 – EPG1

  1. Open the Site 1 Template

  2. Create an EPG:

    • Name: EPG1

  3. Map the EPG to BD-Stretched

    • Use Inter-Template Reference if prompted

  4. Assign static ports or VMM domain for Site 1 endpoints

  5. Save

Site 2 – EPG2

  1. Open the Site 2 Template

  2. Create an EPG:

    • Name: EPG2

  3. Map the EPG to BD-Stretched

  4. Assign static ports or VMM domain for Site 2 endpoints

  5. Save

Task 3: Create and Apply a Contract

  1. Open the Stretched Template

  2. Create a contract:

    • Name: EPG1-to-EPG2

    • Filter: Permit-Any

  3. Save the contract

Apply the contract:

  • Site 1 Template:

    • Assign EPG1-to-EPG2 to EPG1 as Consumer

  • Site 2 Template:

    • Assign EPG1-to-EPG2 to EPG2 as Provider

Task 4: Deploy and Verify

  1. Deploy:

    • Stretched Template

    • Site 1 Template

    • Site 2 Template

  2. Generate traffic between endpoints in EPG1 and EPG2

Verification

  • Endpoints communicate at Layer 2

  • Traffic traverses the ISN using VXLAN

  • Shadow EPGs appear in the remote site
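One way to confirm the shadow EPG is to query the remote APIC's REST API for `fvAEPg` objects. The sketch below only builds the query URL; the class name and `query-target-filter` syntax are standard APIC REST, while the hostname is a placeholder.

```python
# Sketch: build an APIC REST query to look for the shadow copy of EPG1
# on the Site 2 APIC. fvAEPg and query-target-filter are standard APIC
# REST; "site2-apic.example.com" is a placeholder hostname.

def shadow_epg_query(apic_host, epg_name):
    """Return the URL that lists EPGs whose name matches epg_name."""
    return (
        f"https://{apic_host}/api/class/fvAEPg.json"
        f'?query-target-filter=wcard(fvAEPg.name,"{epg_name}")'
    )

url = shadow_epg_query("site2-apic.example.com", "EPG1")
```

Issuing this query (after authenticating to the APIC) should return the shadow EPG object if the templates deployed correctly.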

Lab 2: Inter-Site Layer 3 Traffic Within a Stretched VRF

Lab Topology

  • EPG1 → BD1 → Site 1

  • EPG2 → BD2 → Site 2

  • Shared VRF

Task 1: Create a Stretched VRF

  1. Open the Stretched Template

  2. Create a VRF:

    • Name: VRF-Stretched

  3. Save and deploy the template

Task 2: Configure Site-Local Bridge Domains

Site 1 – BD1

  1. Open Site 1 Template

  2. Create Bridge Domain:

    • Name: BD1

  3. Associate with VRF-Stretched

  4. Add subnet:

    • 10.1.1.1/24

  5. Ensure L2 Stretch is disabled

  6. Save

Site 2 – BD2

  1. Open Site 2 Template

  2. Create Bridge Domain:

    • Name: BD2

  3. Associate with VRF-Stretched

  4. Add subnet:

    • 10.2.2.1/24

  5. Save

Task 3: Configure EPGs

  • Site 1:

    • Create EPG1

    • Map to BD1

    • Assign ports or VMM domains

  • Site 2:

    • Create EPG2

    • Map to BD2

    • Assign ports or VMM domains

Task 4: Create and Apply a Contract

  1. Open the Stretched Template

  2. Create contract:

    • Name: Contract-L3

    • Filter: Permit IP

    • Scope: Tenant

  3. Apply:

    • EPG1 as Consumer (Site 1)

    • EPG2 as Provider (Site 2)

Task 5: Deploy and Verify

  1. Deploy all templates

  2. Test connectivity between endpoints

Verification

  • Traffic is routed (Layer 3)

  • MP-BGP EVPN exchanges endpoint routes

  • VXLAN encapsulation occurs at the spine

Lab 3: Inter-Tenant (Inter-VRF) Traffic Across Sites

Lab Topology

  • Tenant A / VRF A → Site 1

  • Tenant B / VRF B → Site 2

Task 1: Configure Route Leaking

Provider EPG (Tenant A – Site 1)

  1. Open the EPG configuration

  2. Add endpoint subnet

  3. Set scope to Shared between VRFs

  4. Enable No Default SVI Gateway

  5. Save

Consumer BD (Tenant B – Site 2)

  1. Open the Bridge Domain

  2. Select the subnet

  3. Set scope to Shared between VRFs

  4. Save

Task 2: Create a Global Contract

  1. Open the Provider Tenant Template

  2. Create a contract:

    • Name: Inter-Tenant-Contract

    • Scope: Global

    • Filter: Permit required traffic

  3. Save

Task 3: Apply the Contract

  • Provider Tenant:

    • Apply contract to EPG as Provider

  • Consumer Tenant:

    • Apply the same contract to EPG as Consumer

Task 4: Deploy and Verify

  1. Deploy templates to both sites

  2. Test traffic between endpoints

Verification

  • Shadow Tenants and VRFs are created

  • Route leaking occurs via MP-BGP EVPN

  • Traffic is routed and policy-enforced

Lab Summary

  Scenario                 Traffic Type   Key Requirement
  Same BD                  Layer 2        L2 Stretch + Contract
  Different BD, Same VRF   Layer 3        Stretched VRF + Contract
  Different Tenants        Layer 3        Route Leaking + Global Contract





