Multicast in ACI

  • Writer: Mukesh Chanderia
  • Apr 23
  • 14 min read

Updated: Apr 28



Understanding Multicast in Cisco ACI


1. Multicast Traffic Flow in ACI

In ACI, multicast traffic is primarily managed within Bridge Domains (BDs). When a multicast packet is received, the following sequence occurs:

  • IGMP Snooping: ACI uses IGMP (Internet Group Management Protocol) snooping to listen to multicast group memberships, ensuring that multicast traffic is only forwarded to interested hosts.

  • GIPo Assignment: Each BD is assigned a GIPo (Group IP Outer) address, which is used as the destination IP in the VXLAN header for multicast traffic.

  • FTAG Trees: ACI employs FTAG (Forwarding Tag) trees to determine the optimal path for multicast traffic, ensuring efficient delivery across the fabric.

  • Spine Replication: Spine switches replicate multicast packets to the appropriate leaf switches based on the FTAG tree, ensuring that only interested endpoints receive the traffic.
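
If you want to see which GIPo the fabric assigned, it is stored on the BD object itself. A minimal read-only sketch from the APIC CLI, assuming a bridge domain named "BD1" (substitute your own BD name):

# On the APIC – read the GIPo (bcastP attribute) assigned to bridge domain "BD1"
moquery -c fvBD -f 'fv.BD.name=="BD1"' | grep -E '^name |^bcastP'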

2. Role of IGMP Querier

The IGMP Querier plays a pivotal role in managing multicast group memberships:

  • Function: It periodically sends IGMP queries to solicit group membership information from hosts.

  • Configuration: In ACI, the IGMP Querier can be enabled at the BD level. It's crucial to ensure that only one IGMP Querier exists per BD to prevent conflicts.

  • Importance: Without an active IGMP Querier, multicast group memberships may time out, leading to unintended traffic flooding.
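
A quick way to confirm a querier is actually active is to look at the IGMP snooping state for the BD's internal VLAN on the leaf. A minimal sketch, assuming the BD maps to VLAN 10 on that leaf (the "IGMP querier present" line in the output is what you want to see; sample output appears later in this post):

# On the leaf – confirm a querier is seen for the BD's VLAN (VLAN 10 is a placeholder)
show ip snooping vlan 10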

3. COOP and Endpoint Learning

ACI's control plane uses the Council of Oracles Protocol (COOP) for endpoint learning:

  • Endpoint Registration: When a new endpoint joins the network, the local leaf switch registers its information with a spine switch using COOP.

  • IS-IS Protocol: ACI runs IS-IS (Intermediate System to Intermediate System) as the fabric's underlay routing protocol; it provides TEP reachability and builds the FTAG multicast trees, while COOP keeps the spines' endpoint database current so every switch has up-to-date forwarding information.

  • Multicast Relevance: Accurate endpoint information is vital for efficient multicast delivery, ensuring traffic reaches only the intended recipients.
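
To see what a spine has actually learned, the COOP endpoint repository can be queried directly. A minimal sketch, assuming spine CLI access; the BD VNID and endpoint MAC below are placeholders:

# On a spine – look up one endpoint in the COOP repo (VNID and MAC are placeholders)
show coop internal info repo ep key <bd-vnid> <endpoint-mac>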

4. Flooding Behavior and BD Configuration

Flooding behavior in ACI is influenced by BD settings:

  • Default Behavior: By default, ACI minimizes flooding. However, certain scenarios necessitate enabling flooding for specific traffic types.

  • ARP Flooding: If ARP flooding is enabled in a BD, ARP requests are flooded to all endpoints within the BD; inside the fabric this flooding rides the BD's GIPo multicast tree.

  • Unknown Unicast Flooding: Enabling this setting causes unknown unicast traffic to be flooded within the BD over the same multi-destination tree, which adds to the load the fabric must replicate alongside multicast.

  • Best Practices: Carefully configure BD settings to balance efficient multicast delivery with network performance considerations.
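
These knobs are all attributes of the fvBD object, so they can be checked without clicking through the GUI. A minimal read-only sketch from the APIC CLI, again assuming a BD named "BD1":

# On the APIC – inspect the flood-related settings of bridge domain "BD1"
moquery -c fvBD -f 'fv.BD.name=="BD1"' | grep -E '^arpFlood|^unkMacUcastAct|^unkMcastAct|^unicastRoute'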

5. Multicast in Multi-Site Deployments

In ACI Multi-Site architectures, multicast handling is adapted to suit inter-site communication:

  • Headend Replication: Instead of relying on traditional multicast routing across sites, ACI uses headend replication, where the source spine switch replicates multicast packets and sends them to remote sites as unicast traffic.

  • Intersite BUM Traffic Allow: To enable multicast (and other BUM—Broadcast, Unknown unicast, and Multicast) traffic across sites, this setting must be enabled in the BD configuration.

  • Optimize WAN Bandwidth: This feature assigns unique GIPo addresses per BD, preventing unnecessary multicast traffic from traversing the inter-site network, thus conserving bandwidth.

  • Configuration Tools: Utilize the Nexus Dashboard Orchestrator (NDO) for centralized management of these settings across multiple sites.





Let's understand how multicast actually works inside a Cisco ACI fabric. It starts with a quick primer on general multicast terms, then shows how those pieces are mapped to ACI objects and—step by step—how a single data packet travels from source to receiver.


1 Multicast fundamentals

Element – what it does:

  • Multicast group (G), e.g. 239.1.1.1 – a logical “radio station” that many hosts can tune in to.

  • IGMP v2/3 – host-to-first-hop signalling: receivers send Join/Leave reports; a querier asks “who’s still listening?”.

  • PIM-SM / SSM – routing protocol that stitches trees between first-hop and last-hop routers, using an RP (shared tree, (*,G)) or going straight to the source (S,G).

  • Replicating the packet – classic L3 devices replicate hop-by-hop; VXLAN fabrics like ACI replicate once in the spine ASIC and spray finished copies to the interested leaves.


2 Key ACI objects & terms you’ll see

Classic term → ACI term / object:

  • Multicast group (G) → exactly the same inside the VXLAN header

  • Subnet / VLAN → Bridge Domain (BD), which owns the IGMP snooping policy

  • Router interface → VRF, which owns the PIM/TRM policy

  • MRIB entry → EVPN route-types 6 & 7 that a leaf advertises to the spines

  • Data-plane tree → GIPo (Group-IP-outer) address carried in the VXLAN outer header; spines replicate on this tree

GIPo

  • Every BD (and every multicast-enabled VRF) gets a /28 multicast block, e.g. 225.0.18.0/28.

  • When a leaf encapsulates a multi-destination frame it picks one of the 16 addresses in that block; this spreads traffic across the fabric's FTAG forwarding trees.
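
The FTAG trees themselves are built by IS-IS and can be listed on any spine; a minimal sketch (the same command appears in the CLI examples at the end of this post):

# On a spine – list the FTAG trees that the GIPo addresses hash into
show isis internal mcast routes ftag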


3 Layer-2 multicast (same BD, “bridged” multicast)


Receiver → IGMP Join → Leaf → COOP → Spines

Source   → Packet     → Leaf → VXLAN (dest = BD-GIPo) → Spines replicate → Interested Leaves → Port(s)


  1. Receiver Join

    Host sends IGMP. Leaf snoops it, installs a hardware entry, and informs the spines via a COOP message that “Leaf 101 is interested in G 239.1.1.1”


  2. Data forward

    Source leaf encapsulates the frame in VXLAN; outer IP = BD-GIPo, VNID = BD-VNID. Spines replicate the packet only to leaves that previously registered interest (efficient “Optimized Flood” mode).


  3. Last-hop delivery

    Each egress leaf decapsulates and, using its local IGMP table, forwards the frame out the exact access ports that sent joins.


No PIM is involved; everything is L2 within the BD.
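
A hedged sketch of how each step can be verified, reusing the VRF name PROD and group 239.1.1.1 from the worked example later in this post:

# 1. Receiver leaf – did the IGMP join arrive?
show ip igmp snooping groups vrf PROD | inc 239.1.1.1

# 2. Spine – is the group programmed for replication?
acidiag mcast if | inc 239.1.1.1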


4 Layer-3 multicast in ACI – Tenant Routed Multicast (TRM)


Enabling PIM on the BD flips multicast to an overlay-routed model that looks like this:


Receiver Leaf

 └─ IGMP Join → EVPN RT-7 → Border Leaf(s)

                                             ┌─ (S,G) Join toward source/RP via PIM

Source Leaf ─► VXLAN (dest = VRF-GIPo) ─► Spines ─┤

                                             └─ Spines replicate to Receiver Leaf(s)

What changes compared with pure L2?

  • Outer multicast tree – L2 only: BD-GIPo; with TRM/PIM: VRF-GIPo (so traffic can leave the BD).

  • Control-plane advertisement – L2 only: none (spine IGMP DB only); with TRM/PIM: EVPN RT-7 (receiver) and RT-6 (source).

  • PIM speakers – L2 only: none; with TRM/PIM: border leaves run full PIM, non-border leaves run passive PIM.

  • External connectivity – L2 only: not possible; with TRM/PIM: the border leaf joins the RP or source in the outside network.

Join signalling step-by-step

  1. IGMP Join on receiver leaf → the leaf originates an EVPN route-type 7 message that tells all border leaves “this VRF has receivers for G”.

  2. Border leaf (Designated Forwarder) converts that knowledge into a PIM (*,G) or (S,G) join out of the VRF’s L3Out.

  3. Source traffic enters the fabric, is VXLAN-encapped toward VRF-GIPo. Spines replicate copies only to leaves whose VRF has an interested receiver.

Because the packet is VXLAN-encapped once, the underlay never has to run PIM—only the border leaves talk PIM to outside devices. That’s why TRM scales far better than a legacy “multicast in the underlay” design.
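
To watch this signalling from the CLI, a minimal sketch (the VRF name PROD and group 239.1.1.1 are placeholders; the commands mirror the troubleshooting section below):

# Border leaf – did a receiver leaf advertise interest for the group (EVPN RT-7)?
show bgp l2vpn evpn route-type 7 ip 239.1.1.1

# Same VRF – the resulting multicast routing entry
show ip mroute vrf PROD 239.1.1.1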


5 Putting it all together – end-to-end flow recap


IGMP (receiver)        EVPN RT-7            PIM (*,G)               Data plane

┌─────────┐          ┌────────┐          ┌─────────┐          ┌──────────────────┐

│Receiver │─IGMP→│Rx Leaf │─RT7→│Border LF│─PIM→  RP / Src │  │VXLAN dest = GIPo │

└─────────┘          └────────┘          └─────────┘          └──────────────────┘

                                                                     ▲

                                   EVPN RT-6 (source)                │

                                   ┌────────┐                        │

                                   │Src Leaf│─RT6────────────────────┘

                                   └────────┘

  • G state is synchronised fabric-wide through EVPN;

  • replication happens once in the spines using the right GIPo tree;

  • only the leaves that truly need the stream receive it.


6 Where to look when something breaks

Plane – CLI to start with – what success looks like:

  • IGMP snooping – show ip igmp snooping groups vrf <vrf> – the group and its ingress port are listed.

  • EVPN – show bgp l2vpn evpn route-type 6|7 ip <G> – RT-6/RT-7 entries are present for the group.

  • PIM / TRM – show ip pim vrf <vrf> route <G> – correct RPF and OIF list; the fabric tunnel appears.

  • Spine replication – acidiag mcast if | inc <G> – the group is programmed on the spine's replication interfaces.

7 Design & operating tips


  • Always enable an IGMP querier when no external router does that job.

  • Prefer Optimized Flood mode on the BD unless you run first-generation leaf hardware with strict limits.

  • For TRM, give every border leaf a unique loopback and keep RP placement close to the fabric to minimise join latency.

  • Upgrade to a modern 5.x/6.x ACI release if you need IPv6 multicast or Source-Specific Multicast (SSM) in the overlay.


Key Points


L2 multicast in ACI is just IGMP + spine replication on the BD-GIPo tree.

L3 multicast (TRM) adds EVPN signalling and PIM only at the border, using VRF-GIPo trees.

Once you map those two modes to the control-plane messages (IGMP → COOP → EVPN → PIM) the entire troubleshooting workflow becomes predictable and fast.



TROUBLESHOOTING MULTICAST ISSUES


Multicast Troubleshooting Steps


A step-by-step guide to troubleshooting multicast issues in Cisco ACI, focusing on the common problem areas: missing multicast group mappings, phantom-RP/RPF problems, underlay PIM configuration, and multicast policies.


Key checks include:

  • Verifying IGMP

  • Confirming BD in multicast flood mode

  • Reviewing PIM settings in the underlay

  • Checking MP-BGP route types 6/7

  • Verifying leaf and spine outputs for multicast traffic


Step-by-step multicast troubleshooting


Keep the two ACI multicast modes in mind (L2 within a BD, L3 via TRM), then work through the steps below:

  • Step 0: Identify the scenario, including where the source and receiver are, and whether the traffic is L2 or L3.

  • Step 1: Confirm BD and VRF multicast configuration, and check IGMP snooping policy settings.

  • Step 2: Verify underlay multicast settings, especially head-end replication and the difference between 'Encap' and 'Optimized' flood modes.


Flow: define the problem → verify L2 multicast → verify L3 multicast (TRM/PIM) → look at the underlay → common fixes.


0 Define the exact scenario first

What to record, and why it matters:

  • Source (IP/MAC, leaf, interface, EPG/BD/VRF) – needed for RPF checks.

  • Receiver(s) (IP, leaf, interface, EPG/BD/VRF) – needed to see where the join should appear.

  • Group address (G), e.g. 239.1.1.1 – drives all subsequent look-ups.

  • L2 or L3 delivery? (same BD vs routed between BDs/VRFs) – decides whether IGMP-only or PIM/TRM is required.

  • External receivers? – tells you whether you need L3Out multicast.


1 Verify the Bridge Domain & IGMP snooping (Layer-2 multicast)

  1. Open the BD in Tenant ▶ Networking ▶ Bridge Domains

    • Flood mode should be “Optimized Flood” or “Encap Flood” for multicast.

    • If the BD will never leave the fabric, you can stay with L2 flooding.

    • If L3 forwarding is expected, be sure “Enable Unicast Routing” is on.

  2. IGMP Snooping policy

    • Tenant ▶ Policies ▶ Protocols ▶ IGMP Snooping

    • Typical fixes:

      • Enable Querier if there is no external querier in that subnet.

      • Raise robustness-variable if you expect lossy links.

    • Commit the policy and re-map it to the BD/EPG if necessary.


CLI quick-check


# On the source Leaf

show ip igmp snooping groups vrf PROD | inc 239.1.1.1

show endpoint ip 239.1.1.1 detail

If the group is absent, the Leaf never saw an IGMP join; check the receiver port or querier.


2 Check EVPN multicast routes & COOP (fabric control-plane)


ACI distributes multicast state with EVPN route-types 6/7 plus the GIPo. The COOP database ties endpoint locations to the spine loopback that replicates the traffic.


CLI quick-check


# On any Leaf

show bgp l2vpn evpn route-type 6 ip 239.1.1.1

show system internal epm multicast detail | inc 239.1.1.1

Healthy: you see <Leaf ID, BD VNID, GIPo, Replication-Spines>. If nothing appears, the leaf never exported state; IGMP snooping or BD settings are still wrong.


3 If the traffic must be routed (different BDs or external networks)

Enable Tenant Routed Multicast (TRM) or classic PIM

  1. Create or choose a Multicast Domain object (Tenant ▶ Networking ▶ Multicast).

  2. Under the VRF, set PIM – Enabled and choose an RP policy (Anycast-RP is fine) (Tenant ▶ Networking ▶ VRF).

  3. Under each BD that must route multicast, tick “Enable PIM” (Tenant ▶ Networking ▶ BD).

  4. If you need external receivers, extend the L3Out and enable PIM on the interface (Tenant ▶ Networking ▶ External Routed Networks).

Cisco calls this whole workflow TRM. ACI injects (S,G) or (*,G) into EVPN and handles replication in the spines.


L3 CLI health-check


show ip pim vrf PROD route 239.1.1.1

show ip pim vrf PROD rp mapping

The RP address must appear on every Leaf that holds the VRF; RPF interface should be “fabric” or the expected routed link.


4 Check the underlay replication (Spines)

Even if the control plane looks fine, congestion or hardware issues on replication spines can drop multicast.


CLI quick-check


# Any Spine

acidiag mcast if | inc 239.1.1.1

show system internal mcglobal info

A missing interface indicates that the spine never programmed the group; this is usually caused by COOP errors or a spine-leaf overlay split-brain (clearing the COOP state can help).


5 Common break-fix patterns

Symptom – likely cause – quick fix:

  • Group never shows on the leaf – missing IGMP querier – enable the querier in the BD or on the L3 gateway.

  • Group shows, but no traffic – underlay head-end replication bug in code older than 5.2(4) – upgrade, or move the BD to Encap Flood as a workaround.

  • Traffic works inside ACI but not outside – external PIM domain has the wrong RP / RPF failure – check show ip pim vrf OUT route, fix the RP or add an MSDP peer.

  • Packet bursts then stop – IGMP version mismatch (v2 vs v3), or IGMP throttling on the leaf – force the correct version in the IGMP policy.

  • Receivers across BDs drop frames – BD missing “Enable PIM” – tick the box and commit; verify EVPN RT-6 export.

6 End-to-end worked example


Scenario:

Source: 10.1.1.10 (Video-Srv) on Leaf101, EPG Video-Src, BD Video-BD, VRF PROD
Receiver: 10.1.1.50 (Video-Cli) on Leaf102, EPG Video-Cli, same BD/VRF
Group: 239.1.1.1


  1. Receiver join

    Leaf102# show ip igmp snooping groups vrf PROD | inc 239.1.1.1
    Vlan382    239.1.1.1    0100.5e01.0101    port1/3

  2. Source registration

    Leaf101# show endpoint ip 239.1.1.1 detail # shows (MAC, VNID, GIPo=225.0.0.37)

  3. EVPN distribution

    Leaf102# show bgp l2vpn evpn route-type 6 ip 239.1.1.1
    * i [2]:[0]:[VNID]:[225.0.0.37]:[239.1.1.1]:[10.1.1.10]

  4. Replication spine check

    Spine201# acidiag mcast if | inc 239.1.1.1
    239.1.1.1    VNID 382    L101,VPC102

  5. Packet capture confirms traffic enters Leaf102 on P-port1/3 → success.


7 Handy command cheat-sheet


# Layer-2 / IGMP

show ip igmp snooping groups vrf <VRF>

show system internal epm multicast detail


# EVPN / COOP

show coop internal info | inc 239.

show bgp l2vpn evpn route-type 6 ip <G>


# Layer-3 / PIM

show ip pim vrf <VRF> route <G>

show ip pim vrf <VRF> rp mapping

show ip mroute vrf <VRF>


# Spines / Replication

acidiag mcast if

show system internal mcglobal info


Common Issues


  • You see COOP timeouts, repeated duplicate GIPo entries, or spines crash the multicast process.

  • Multicast drops only at fabric load > 70 Gbps (possible ASIC bug—TAC has diagnostics).


------------------------------------------------------------------------------------------------------------


Multicast in Cisco ACI – Simple Explanation


1 . Why Multicast Matters in ACI

  • Multicast = one-to-many delivery (video, market feeds, updates) without wasting bandwidth.

  • Cisco ACI is an SDN fabric; the APIC turns policy into switch config.

  • Multicast rides inside VXLAN tunnels; ACI must cope with it at L2 (inside one Bridge Domain) and L3 (between BDs / to outside).


2 . How It Works

2 . 1 Layer-2 multicast (inside one BD)
  • Bridge Domain (BD) ≈ VLAN.

  • IGMP-snooping on every leaf learns which ports really need the stream.

  • IGMP Querier (an elected leaf) sends periodic queries so the table stays fresh.

  • Key BD knobs (APIC GUI → BD → Policies):

    • L3 Unknown Multicast Flooding:

      • Flood – send everywhere.

      • Optimized – send only where IGMP table or mrouter port says.

    • Multicast Destination Flooding: force flood even if snooping is on (use sparingly).


Must-know L2 commands


# Map BD encapsulation to internal VLAN

show system internal epm vlan <encap_id>


# IGMP snooping overview for a VLAN

show ip snooping vlan <vlan_id>


# Groups learned on that VLAN

show ip snooping groups vlan <vlan_id> <multicast_group_address>


2 . 2 Layer-3 multicast (between BDs or outside)
  • Enable multicast first in the VRF, then per-BD.

  • Uses PIM:

    • ASM → needs a Rendezvous Point (RP) (static / Auto-RP / BSR). In 2.0(1) the RP must be outside the fabric; best-practice = Anycast-RP + MSDP.

    • SSM → no RP; receivers join (S,G) via IGMP v3.

  • ACI EX-series leaves can run native PIM; non-EX needs an external router.


Must-know L3 commands


# PIM neighbours

show ip pim neighbor vrf <vrf_name>


# Multicast routing table

show ip mroute vrf <vrf_name> [group]


# Reverse-Path check for a source

show ip rpf <source_ip> vrf <vrf_name>


# Interface-level PIM status

show ip pim interface vrf <vrf_name> interface <intf>

2 . 3 Underlay magic (what you don’t normally see)
  • APIC allocates a GIPo (/28 from default 225.0.0.0/15) per BD & PIM-enabled VRF.

  • That address chooses an FTAG tree (built with IS-IS) for fabric-wide forwarding.

  • COOP messages from leaves tell spines which groups have receivers.

  • show isis internal mcast routes ftag (spine) shows trees.

2 . 4 Talking to the outside (L3Out)
  • Non-EX fabric → use external PIM router.

  • EX-only fabric → border leaves can be the routers.

  • Border leaf tasks:

    • Form PIM neighbours on L3Out.

    • Optional ACLs to block unwanted PIM Join floods.

    • Optional MSDP to discover external sources.

    • Use loopback for internal PIM peerings (stripe-winner logic).
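
A hedged sketch of the border-leaf checks implied by the list above (the VRF name PROD is a placeholder; the MSDP check only applies if you use MSDP):

# Border leaf – PIM adjacency toward the external router on the L3Out
show ip pim neighbor vrf PROD

# Border leaf – MSDP peering state, if MSDP is configured
show ip msdp summary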

3 . Typical Troubles we See

Symptom – what to look at:

  • Multicast tick-box missing – VRF not enabled for multicast before the BD.

  • IGMP snooping weirdness – wrong IGMP policy, querier not elected, or BD mistakenly treated as PIM-enabled.

  • No PIM neighbour – bad PIM policy, trying to peer over the vPC peer-link, or IP reachability problems.

  • RPF-fail drops – unicast path mismatch, RP unreachable, or “ip data-plane-learning” disabled.

  • Excess flooding – IGMP snooping disabled or mis-set, or TTL decrements in an L2-only BD.

  • Multi-Pod issues – IPN PIM/IGMP problems, a spine not joining, or HA pairs across pods dropping keepalives.

  • External sources not seen – ACL blocks the join, MSDP not forming, or border-leaf list wrong.

4 . Step-by-Step Troubleshooting Flow

  1. Basic reachability

    • ping source ↔ receiver.

  2. Check config in APIC

    • VRF → Enable Multicast.

    • BD → Enable Multicast + IGMP policy.

    • VRF → PIM Settings (ASM/SSM, RP).

  3. Inside one BD (L2)

    • Map BD to VLAN → show system internal epm vlan <encap_id>

    • IGMP status → show ip snooping vlan <vlan>

    • Group learners → show ip snooping groups vlan <vlan> <group>

  4. Across BDs / to outside (L3)

    • PIM neighbours → show ip pim neighbor vrf <vrf>

    • mroute entry → show ip mroute vrf <vrf> <group>

    • RPF check → show ip rpf <source> vrf <vrf>

    • Interface PIM mode → show ip pim interface vrf <vrf> interface <intf>

  5. Border / External

    • L3Out interface up & PIM sparse-mode.

    • ACLs: show running-configuration pim (look for “ip pim join-filter”).

    • MSDP peers if used: show ip msdp summary.

  6. Multi-Pod / IPN

    • IPN devices: show ip igmp groups, PIM config.

    • Confirm GIPo range identical in every pod.
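
On the IPN switches themselves (standalone NX-OS, outside APIC control), a hedged sketch of what to verify, assuming the usual Multi-Pod design where the GIPo range (default 225.0.0.0/15) is carried by PIM Bidir; the group address below is a placeholder:

# On an IPN Nexus switch – PIM adjacencies toward the spines
show ip pim neighbor

# RP and group-range mapping covering the GIPo range
show ip pim rp

# Multicast state for one BD GIPo (placeholder address)
show ip mroute 225.0.18.32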

5 . Handy moquery Snippets


# VRF-level PIM config (JSON)

moquery -c pimCtxP -o json


# Check if multicast enabled on BD "BD1"

moquery -c fvBD -f 'fv.BD.name=="BD1"' | grep mcastAllow


# Grab IGMP snoop policy details

moquery -c igmpSnoopPol -f 'igmp.SnoopPol.name=="<policy>"' -o json
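
If you prefer the raw REST objects over moquery, the same queries can be run locally with icurl from the APIC shell; a minimal sketch, again using the placeholder BD name "BD1":

# Same data as the fvBD query above, returned as JSON via the local API
icurl 'http://localhost:7777/api/node/class/fvBD.json?query-target-filter=eq(fvBD.name,"BD1")'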

6 . Best-Practice Checklist

  • Turn on multicast in VRF before BD.

  • Leave PIM off in BDs that only need L2 flooding (avoids TTL headaches).

  • Redundant RP (Anycast RP + MSDP) for ASM; IGMPv3 for SSM.

  • On mixed-hardware fabrics, let an external router handle PIM.

  • Border-leaf ACLs to drop joins for unwanted groups (DoS prevention).

  • Monitor: APIC → Fabric → Inventory → Protocols; syslog/SNMP.


CLI Examples


Leaf-302# show ip snooping vlan 10

IGMP Snooping information for VLAN 10

IGMP snooping enabled

Lookup mode: IP

Optimised Multicast Flood (OMF) enabled

IGMP querier present, address: 10.0.10.254, version: 2

Switch-querier disabled

IGMPv3 Explicit tracking: Disabled

Leaf-302# show ip snooping groups vlan 10 239.1.1.1

VLAN Group Address Type Port List

---- --------------- ---------- ------------------------------------

10 239.1.1.1 dynamic Eth1/1

Leaf-302# show system internal epm vlan 86

VLAN Encap BD Name Flags SVI IP Address

---- ------------- --------------- ----------- ------------------

86 vxlan-16850433 BD1 0x0 10.0.10.1

Leaf-303# show ip pim neighbor vrf default

PIM Neighbor information for VRF default

Neighbor Interface Uptime Expires DRPriority Bidir Flags

10.20.20.2 Ethernet1/20 00:05:32 00:01:30 1 No

Leaf-303# show ip mroute vrf default 239.1.1.1

(*, 239.1.1.1), 00:04:58/00:02:05, RP 10.10.10.1, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Tunnel16, Forward/Sparse, 00:04:58/00:01:59


(192.168.1.10, 239.1.1.1), 00:03:20/00:02:27, flags: FT

Incoming interface: Ethernet1/10, RPF nbr 192.168.1.10

Outgoing interface list:

Tunnel16, Forward/Sparse, 00:03:20/00:01:59

Spine-301# show isis internal mcast routes ftag | grep FTAG

FTAG Routes

FTAG ID: 0[Enabled] Cost:( 0/ 0/ 0)

FTAG ID: 1 [Enabled] Cost:( 2/ 8/ 0)

FTAG ID: 2 [Enabled] Cost:( 2/ 9/ 0)

...

FTAG ID: 5[Enabled] Cost:( 0/ 0/ 0)

Leaf-301# show running-configuration pim

Building configuration...


ip pim vrf default

rendezvous-point 10.10.10.1 group-list 224.0.0.0/4


interface Ethernet1/10

ip pim sparse-mode

7 . Quick Test Scenario

Receiver on Leaf-302 VLAN 10 can’t see group 239.1.1.1 (source is external)

  1. Leaf-302 → IGMP snooping/group checks (see above).

  2. Border leaf → show ip pim neighbor, ensure peering with external router.

  3. Border leaf → show ip mroute vrf <vrf> 239.1.1.1 – is there an entry?

  4. If missing, run show ip rpf <source> vrf <vrf> – fix RPF or add ip mroute.

  5. Confirm border ACLs aren’t blocking Joins.

  6. Re-verify VRF+BD multicast boxes ticked in APIC.

