
Multicast in ACI

  • Writer: Mukesh Chanderia
  • Apr 23
  • 23 min read

Updated: Sep 27



Understanding Multicast in Cisco ACI


1. Multicast Traffic Flow in ACI

In ACI, multicast traffic is primarily managed within Bridge Domains (BDs). When a multicast packet is received, the following sequence occurs:

  • IGMP Snooping: ACI uses IGMP (Internet Group Management Protocol) snooping to listen to multicast group memberships, ensuring that multicast traffic is only forwarded to interested hosts.

  • GIPo Assignment: Each BD is assigned a GIPo (Group IP Outer) address, which is used as the destination IP in the VXLAN header for multicast traffic.

  • FTAG Trees: ACI employs FTAG (Forwarding Tag) trees to determine the optimal path for multicast traffic, ensuring efficient delivery across the fabric.

  • Spine Replication: Spine switches replicate multicast packets to the appropriate leaf switches based on the FTAG tree, ensuring that only interested endpoints receive the traffic.
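
To see these pieces on a live fabric, two quick checks are enough (a sketch reusing commands that appear later in this post; the fvBD attribute name is an assumption worth verifying on your release):

# On the APIC: the GIPo assigned to a BD is stored in the fvBD object's bcastP attribute
moquery -c fvBD -f 'fv.BD.name=="BD1"' | grep bcastP

# On a spine: the FTAG trees that IS-IS has built for multicast replication
show isis internal mcast routes ftag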

2. Role of IGMP Querier

The IGMP Querier plays a pivotal role in managing multicast group memberships:

  • Function: It periodically sends IGMP queries to solicit group membership information from hosts.

  • Configuration: In ACI, the IGMP Querier can be enabled at the BD level. It's crucial to ensure that only one IGMP Querier exists per BD to prevent conflicts.

  • Importance: Without an active IGMP Querier, multicast group memberships may time out, leading to unintended traffic flooding.
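
A quick way to confirm a querier is actually active for a BD's VLAN (the same leaf commands are used in the troubleshooting sections below):

# On the receiver leaf: is a querier seen for this VLAN?
show ip igmp snooping querier vlan <vlan_id>

# And are group memberships being refreshed?
show ip igmp snooping groups vlan <vlan_id>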

3. COOP and Endpoint Learning

ACI's control plane uses the Council of Oracle Protocol (COOP) for endpoint learning:

  • Endpoint Registration: When a new endpoint joins the network, the local leaf switch registers its information with a spine switch using COOP.

  • IS-IS Protocol: ACI runs IS-IS (Intermediate System to Intermediate System) in the fabric underlay to distribute TEP reachability and build the FTAG trees that multicast replication rides on; endpoint reachability itself is held in the spines' COOP database.

  • Multicast Relevance: Accurate endpoint information is vital for efficient multicast delivery, ensuring traffic reaches only the intended recipients.
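
On the spines, the COOP repositories can be inspected directly (the multicast repos are used heavily in the troubleshooting sections later in this post; the endpoint repo command is an assumption to verify on your release):

# On a spine: endpoint entries registered via COOP
show coop internal info repo ep | grep -A 10 <endpoint-MAC>

# On a spine: multicast group and mrouter state per BD VNID
show coop internal info repo mgroup | grep -A 20 <BD-VNID>
show coop internal info repo mrouter | grep -A 20 <BD-VNID>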

4. Flooding Behavior and BD Configuration

Flooding behavior in ACI is influenced by BD settings:

  • Default Behavior: By default, ACI minimizes flooding. However, certain scenarios necessitate enabling flooding for specific traffic types.

  • ARP Flooding: If ARP flooding is enabled in a BD, ARP requests are flooded to all endpoints within the BD; inside the fabric, this flooding rides the same BD multicast (GIPo) delivery tree.

  • Unknown Unicast Flooding: Enabling this setting causes unknown unicast traffic to be flooded within the BD, potentially impacting multicast behavior.

  • Best Practices: Carefully configure BD settings to balance efficient multicast delivery with network performance considerations.
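
The relevant BD knobs can also be read back from the object model (a sketch in the moquery style used later in this post; the attribute names come from the fvBD class and are worth double-checking on your release):

# Flood-related settings on a BD: ARP flooding, unknown unicast and unknown multicast handling
moquery -c fvBD -f 'fv.BD.name=="BD1"' | egrep 'name|arpFlood|unkMacUcastAct|unkMcastAct|multiDstPktAct'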

5. Multicast in Multi-Site Deployments

In ACI Multi-Site architectures, multicast handling is adapted to suit inter-site communication:

  • Headend Replication: Instead of relying on traditional multicast routing across sites, ACI uses headend replication: the spines in the source site replicate multicast packets and send them to remote sites as unicast traffic.

  • Intersite BUM Traffic Allow: To enable multicast (and other BUM—Broadcast, Unknown unicast, and Multicast) traffic across sites, this setting must be enabled in the BD configuration.

  • Optimize WAN Bandwidth: This feature assigns unique GIPo addresses per BD, preventing unnecessary multicast traffic from traversing the inter-site network, thus conserving bandwidth.

  • Configuration Tools: Utilize the Nexus Dashboard Orchestrator (NDO) for centralized management of these settings across multiple sites.





Let's understand how multicast actually works inside a Cisco ACI fabric. It starts with a quick primer on general multicast terms, then shows how those pieces are mapped to ACI objects and—step by step—how a single data packet travels from source to receiver.


1 Multicast fundamentals

Element                             | What it does
------------------------------------|------------------------------------------------------------------
Multicast group (G), e.g. 239.1.1.1 | A logical “radio station” that many hosts can tune in to.
IGMP v2/v3                          | Host-to-first-hop signalling: receivers send Join/Leave reports; a querier asks “who’s still listening?”.
PIM-SM / SSM                        | Routing protocol that stitches trees between first-hop and last-hop routers, using an RP (shared tree, (*,G)) or going straight to the source (S,G).
Replicating the packet              | Classic L3 devices replicate hop-by-hop. VXLAN fabrics like ACI replicate once in the spine ASIC and spray finished copies to the interested leaves.


2 Key ACI objects & terms you’ll see

Classic term        | ACI term / object
--------------------|--------------------------------------------------------------------------------
Multicast group (G) | Exactly the same inside the VXLAN header
Subnet / VLAN       | Bridge Domain (BD) – owns the IGMP snooping policy
Router interface    | VRF – owns the PIM/TRM policy
MRIB entry          | EVPN route-types 6 & 7 that a leaf advertises to the spines
Data-plane tree     | GIPo address (Group IP outer) carried in the VXLAN outer header; spines replicate on this tree

GIPo

  • Every BD (and every multicast-enabled VRF) gets a /28 multicast block, e.g. 225.0.18.0/28.

  • When a leaf encapsulates a multi-destination frame it picks one of the 16 addresses in that block; this balances traffic across 16 FTAG fabric trees.
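
You can see which GIPo a BD is actually using on a leaf (same command as in the GIPo-versus-user-groups section near the end of this post):

# On a leaf: BD VNID to GIPo mapping used for fabric replication
show ip igmp snooping internal fabric-groups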


3 Layer-2 multicast (same BD, “bridged” multicast)


Receiver → IGMP Join → Leaf → COOP → Spines

Source   → Packet     → Leaf → VXLAN (dest = BD-GIPo) → Spines replicate → Interested Leaves → Port(s)


  1. Receiver Join

    Host sends IGMP. Leaf snoops it, installs a hardware entry, and informs the spines via a COOP message that “Leaf 101 is interested in G 239.1.1.1”


  2. Data forward

    Source leaf encapsulates the frame in VXLAN; outer IP = BD-GIPo, VNID = BD-VNID. Spines replicate the packet only to leaves that previously registered interest (efficient “Optimized Flood” mode).


  3. Last-hop delivery

    Each egress leaf decapsulates and, using its local IGMP table, forwards the frame out the exact access ports that sent joins.


No PIM is involved; everything is L2 within the BD.
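
A minimal CLI check for steps 1 and 3 (the spine-side step 2 is the COOP mgroup repo shown earlier; the commands match the cheat-sheet further down):

# Step 1 – did the receiver leaf snoop the join?
show ip igmp snooping groups vrf <vrf> | inc <G>

# Step 3 – which access ports will the egress leaf forward out of?
show ip igmp snooping groups <G> vlan <vlan_id>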


4 Layer-3 multicast in ACI – Tenant Routed Multicast (TRM)


Enabling PIM on the BD flips multicast to an overlay-routed model that looks like this:


Receiver Leaf

 └─ IGMP Join → EVPN RT-7 → Border Leaf(s)

                                             ┌─ (S,G) Join toward source/RP via PIM

Source Leaf ─► VXLAN (dest = VRF-GIPo) ─► Spines ─┤

                                             └─ Spines replicate to Receiver Leaf(s)

What changes compared with pure L2?

Component             | L2 only                   | TRM / PIM enabled
----------------------|---------------------------|------------------------------------------------------------
Outer multicast tree  | BD-GIPo                   | VRF-GIPo (so traffic can leave the BD)
Control-plane advert  | None (spine IGMP DB only) | EVPN RT-7 (receiver) & RT-6 (source)
PIM speakers          | None                      | Border leaves run full PIM; non-border leaves run passive PIM
External connectivity | Not possible              | Border leaf joins the RP or source in the outside network

Join signalling step-by-step

  1. IGMP Join on receiver leaf → leaf creates an EVPN route-type 7 (RT-7) message that tells all border leaves “this VRF has receivers for G”.

  2. Border leaf (Designated Forwarder) converts that knowledge into a PIM (*,G) or (S,G) join out of the VRF’s L3Out.

  3. Source traffic enters the fabric, is VXLAN-encapped toward VRF-GIPo. Spines replicate copies only to leaves whose VRF has an interested receiver.

Because the packet is VXLAN-encapped once, the underlay never has to run PIM—only the border leaves talk PIM to outside devices. That’s why TRM scales far better than a legacy “multicast in the underlay” design.
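
To confirm each signalling step from the CLI (a sketch; the route-type 7 lookup is an assumption based on the route-type 6 form used in the troubleshooting half of this post, where the other commands also appear):

# Step 1 – receiver interest advertised into EVPN
show bgp l2vpn evpn route-type 7 ip <G>

# Step 2 – border leaf built PIM state toward the RP/source
show ip pim vrf <vrf> route <G>
show ip pim vrf <vrf> rp mapping

# Step 3 – source traffic has an mroute with the fabric tunnel in the OIF list
show ip mroute vrf <vrf> <G>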


5 Putting it all together – end-to-end flow recap


IGMP (receiver)        EVPN RT-7            PIM (*,G)               Data plane

┌─────────┐          ┌────────┐          ┌─────────┐          ┌──────────────────┐

│Receiver │─IGMP→│Rx Leaf │─RT7→│Border LF│─PIM→  RP / Src │  │VXLAN dest = GIPo │

└─────────┘          └────────┘          └─────────┘          └──────────────────┘

                                                                     ▲

                                   EVPN RT-6 (source)                │

                                   ┌────────┐                        │

                                   │Src Leaf│─RT6────────────────────┘

                                   └────────┘

  • G state is synchronised fabric-wide through EVPN;

  • replication happens once in the spines using the right GIPo tree;

  • only the leaves that truly need the stream receive it.


6 Where to look when something breaks

  • IGMP snooping: show ip igmp snooping groups vrf <vrf> → group + ingress port listed.

  • EVPN: show bgp l2vpn evpn route-type 6 ip <G> (and route-type 7) → a route for the group is present (leaf, BD VNID, GIPo).

  • PIM / TRM: show ip pim vrf <vrf> route <G> → correct RPF & OIF list; fabric tunnel appears.

  • Spine replication: acidiag mcast if | inc <G> → the group is programmed on the replication interfaces.

7 Design & operating tips


  • Always enable an IGMP querier when no external router does that job.

  • Prefer Optimized Flood mode on the BD unless you run first-generation leaf hardware with strict limits.

  • For TRM, give every border leaf a unique loopback and keep RP placement close to the fabric to minimise join latency.

  • Upgrade to a modern 5.x/6.x ACI release if you need IPv6 multicast or Source-Specific Multicast (SSM) in the overlay.


Key Points


L2 multicast in ACI is just IGMP + spine replication on the BD-GIPo tree.

L3 multicast (TRM) adds EVPN signalling and PIM only at the border, using VRF-GIPo trees.

Once you map those two modes to the control-plane messages (IGMP → COOP → EVPN → PIM) the entire troubleshooting workflow becomes predictable and fast.



TROUBLESHOOTING MULTICAST ISSUES


Multicast Troubleshooting Steps


A step-by-step guide to troubleshooting multicast issues in Cisco ACI, focusing on common problems: missing multicast group mapping, phantom RPF, underlay PIM configuration, and multicast policies.


Key checks include:

  • Verifying IGMP

  • Confirming BD in multicast flood mode

  • Reviewing PIM settings in the underlay

  • Checking MP-BGP route types 6/7

  • Verifying leaf and spine outputs for multicast traffic


Step-by-step multicast troubleshooting


First, identify which multicast mode in ACI is in play; then break the problem down by steps:

  • Step 0: Identify the scenario, including where the source and receiver are, and whether the traffic is L2 or L3.

  • Step 1: Confirm BD and VRF multicast configuration, and check IGMP snooping policy settings.

  • Step 2: Verify underlay multicast settings, especially head-end replication and the difference between 'Encap' and 'Optimized' flood modes.


Flow: define the problem → verify L2 multicast → verify L3 multicast (TRM/PIM) → look at the underlay → common fixes.


0 Define the exact scenario first

What to record                                          | Why it matters
--------------------------------------------------------|--------------------------------------------------
Source (IP/MAC, Leaf, interface, EPG/BD/VRF)            | Needed for RPF checks
Receiver(s) (IP, Leaf, interface, EPG/BD/VRF)           | Needed to see where the join should appear
Group address (G) (e.g. 239.1.1.1)                      | Drives all subsequent look-ups
L2 or L3 delivery? (same BD vs routed between BDs/VRFs) | Decides whether IGMP-only or PIM/TRM is required
External receivers?                                     | Tells you if you need L3Out multicast


1 Verify the Bridge Domain & IGMP snooping (Layer-2 multicast)

  1. Open the BD in Tenant ▶ Networking ▶ Bridge Domains

    • Flood mode should be “Optimized Flood” or “Encap Flood” for multicast.

    • If the BD will never leave the fabric, you can stay with L2 flooding.

    • If L3 forwarding is expected, be sure “Enable Unicast Routing” is on.

  2. IGMP Snooping policy

    • Tenant ▶ Policies ▶ Protocols ▶ IGMP Snooping

    • Typical fixes:

      • Enable Querier if there is no external querier in that subnet.

      • Raise robustness-variable if you expect lossy links.

    • Commit the policy and re-map it to the BD/EPG if necessary.


CLI quick-check


# On the source Leaf

show ip igmp snooping groups vrf PROD | inc 239.1.1.1

show endpoint ip 239.1.1.1 detail

If the group is absent, the Leaf never saw an IGMP join; check the receiver port or querier.


2 Check EVPN multicast routes & COOP (fabric control-plane)


ACI distributes multicast state with EVPN route-types 6/7 plus the GIPo. The COOP database ties endpoint locations to the spine loopback that replicates the traffic.


CLI quick-check


# On any Leaf

show bgp l2vpn evpn route-type 6 ip 239.1.1.1

show system internal epm multicast detail | inc 239.1.1.1

Healthy: you see <Leaf ID, BD VNID, GIPo, Replication-Spines>. If nothing appears, the Leaf never exported state; IGMP snooping or BD settings are still wrong.


3 If the traffic must be routed (different BDs or external networks)

Enable Tenant Routed Multicast (TRM) or classic PIM

Step | What to do                                                                       | Where
-----|----------------------------------------------------------------------------------|-----------------------------------------------
1    | Create / choose a Multicast Domain object                                        | Tenant ▶ Networking ▶ Multicast
2    | Under the VRF, add PIM – Enabled → choose RP-policy (Anycast-RP is fine)         | Tenant ▶ Networking ▶ VRF
3    | Under each BD that must route multicast, tick “Enable PIM”                       | Tenant ▶ Networking ▶ BD
4    | If you need external receivers, extend the L3Out and enable PIM on the interface | Tenant ▶ Networking ▶ External Routed Networks

Cisco calls this whole workflow TRM. ACI injects (S,G) or (*,G) into EVPN and handles replication in the spines.


L3 CLI health-check


show ip pim vrf PROD route 239.1.1.1

show ip pim vrf PROD rp mapping

The RP address must appear on every Leaf that holds the VRF; RPF interface should be “fabric” or the expected routed link.


4 Check the underlay replication (Spines)

Even if the control plane looks fine, congestion or hardware issues on replication spines can drop multicast.


CLI quick-check


# Any Spine

acidiag mcast if | inc 239.1.1.1

show system internal mcglobal info

A missing interface indicates that spine never programmed the group; usually caused by COOP errors or a spine-leaf overlay split-brain (clear-clock + COOP clear can help).


5 Common break-fix patterns

Symptom                                  | Likely cause                                                 | Quick fix
-----------------------------------------|--------------------------------------------------------------|----------------------------------------------------------
Group never shows on Leaf                | Missing IGMP Querier                                         | Enable querier in BD or L3 gateway
Group shows, but no traffic              | Underlay head-end replication bug in older code < 5.2(4)     | Upgrade, or move BD to Encap Flood as a workaround
Traffic works inside ACI but not outside | External PIM domain has wrong RP / RPF failure               | Check show ip pim vrf OUT route, fix RP or add MSDP peer
Packet bursts then stop                  | IGMP version mismatch (v2 vs v3), or IGMP throttling on Leaf | Force correct version in IGMP policy
Receivers across BDs drop frames         | BD missing “Enable PIM”                                      | Tick the box & commit; verify EVPN RT-6 export

6 End-to-end worked example


Scenario:

Source: 10.1.1.10 (Video-Srv) on Leaf101, EPG Video-Src, BD Video-BD, VRF PROD
Receiver: 10.1.1.50 (Video-Cli) on Leaf102, EPG Video-Cli, same BD/VRF
Group: 239.1.1.1


  1. Receiver join

    Leaf102# show ip igmp snooping groups vrf PROD | inc 239.1.1.1
    Vlan382  239.1.1.1  0100.5e01.0101  port1/3

  2. Source registration

    Leaf101# show endpoint ip 239.1.1.1 detail # shows (MAC, VNID, GIPo=225.0.0.37)

  3. EVPN distribution

    Leaf102# show bgp l2vpn evpn route-type 6 ip 239.1.1.1
    * i [2]:[0]:[VNID]:[225.0.0.37]:[239.1.1.1]:[10.1.1.10]

  4. Replication spine check

    Spine201# acidiag mcast if | inc 239.1.1.1
    239.1.1.1  VNID 382  L101,VPC102

  5. Packet capture confirms traffic enters Leaf102 on P-port1/3 → success.


7 Handy command cheat-sheet


# Layer-2 / IGMP

show ip igmp snooping groups vrf <VRF>

show system internal epm multicast detail


# EVPN / COOP

show coop internal info | inc 239.

show bgp l2vpn evpn route-type 6 ip <G>


# Layer-3 / PIM

show ip pim vrf <VRF> route <G>

show ip pim vrf <VRF> rp mapping

show ip mroute vrf <VRF>


# Spines / Replication

acidiag mcast if

show system internal mcglobal info


Common Issues


  • You see COOP timeouts, repeated duplicate GIPo entries, or spines crash the multicast process.

  • Multicast drops only at fabric load > 70 Gbps (possible ASIC bug—TAC has diagnostics).


------------------------------------------------------------------------------------------------------------


Multicast in Cisco ACI – Simple Explanation


1. Why Multicast Matters in ACI

  • Multicast = one-to-many delivery (video, market feeds, updates) without wasting bandwidth.

  • Cisco ACI is an SDN fabric; the APIC turns policy into switch config.

  • Multicast rides inside VXLAN tunnels; ACI must cope with it at L2 (inside one Bridge Domain) and L3 (between BDs / to outside).


2. How It Works

2.1 Layer-2 multicast (inside one BD)
  • Bridge Domain (BD) ≈ VLAN.

  • IGMP-snooping on every leaf learns which ports really need the stream.

  • IGMP Querier (an elected leaf) sends periodic queries so the table stays fresh.

  • Key BD knobs (APIC GUI → BD → Policies):

    • L3 Unknown Multicast Flooding:

      • Flood – send everywhere.

      • Optimized – send only where IGMP table or mrouter port says.

    • Multicast Destination Flooding: force flood even if snooping is on (use sparingly).


Must-know L2 commands


# Map BD encapsulation to internal VLAN

show system internal epm vlan <encap_id>


# IGMP snooping overview for a VLAN

show ip igmp snooping vlan <vlan_id>


# Groups learned on that VLAN

show ip igmp snooping groups <multicast_group_address> vlan <vlan_id>


2.2 Layer-3 multicast (between BDs or outside)
  • Enable multicast first in the VRF, then per-BD.

  • Uses PIM:

    • ASM → needs a Rendezvous Point (RP) (static / Auto-RP / BSR). In ACI release 2.0(1) the RP must be outside the fabric; best practice = Anycast-RP + MSDP.

    • SSM → no RP; receivers join (S,G) via IGMP v3.

  • ACI -EX (and later) leaf switches can run native PIM; first-generation (non-EX) leaves need an external router.


Must-know L3 commands


# PIM neighbours

show ip pim neighbor vrf <vrf_name>


# Multicast routing table

show ip mroute vrf <vrf_name> [group]


# Reverse-Path check for a source

show ip rpf <source_ip> vrf <vrf_name>


# Interface-level PIM status

show ip pim interface vrf <vrf_name> interface <intf>

2.3 Underlay magic (what you don’t normally see)
  • APIC allocates a GIPo (/28 from default 225.0.0.0/15) per BD & PIM-enabled VRF.

  • That address chooses an FTAG tree (built with IS-IS) for fabric-wide forwarding.

  • COOP messages from leaves tell spines which groups have receivers.

  • show isis internal mcast routes ftag (spine) shows trees.

2.4 Talking to the outside (L3Out)
  • Non-EX fabric → use external PIM router.

  • EX-only fabric → border leaves can be the routers.

  • Border leaf tasks:

    • Form PIM neighbours on L3Out.

    • Optional ACLs to block unwanted PIM Join floods.

    • Optional MSDP to discover external sources.

    • Use loopback for internal PIM peerings (stripe-winner logic).
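
Border-leaf checks for the tasks above (same commands the troubleshooting flow uses later in this post):

# PIM adjacency on the L3Out interface
show ip pim neighbor vrf <vrf_name>

# PIM config applied on the border leaf (RP, join-filter)
show running-config pim

# MSDP peering toward the external RP, if used
show ip msdp summary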

3. Typical Troubles we See

Symptom                    | What to look at
---------------------------|----------------------------------------------------------------------------------
Multicast tick-box missing | VRF not enabled before BD.
IGMP Snooping weirdness    | Wrong IGMP policy / Querier not elected / BD mistakenly treated as PIM-enabled.
No PIM neighbour           | Bad PIM policy, trying to peer over vPC-peer-link, IP reachability.
RPF-fail drops             | Unicast path mismatch, RP unreachable, “ip data-plane-learning” disabled.
Excess flooding            | IGMP snooping disabled or mis-set, TTL decrements in L2-only BD.
Multi-Pod issues           | IPN PIM/IGMP, spine not joining, HA pairs across pods drop keepalives.
External sources not seen  | ACL blocks Join, MSDP not forming, border-leaf list wrong.

4. Step-by-Step Troubleshooting Flow

  1. Basic reachability

    • ping source ↔ receiver.

  2. Check config in APIC

    • VRF → Enable Multicast.

    • BD → Enable Multicast + IGMP policy.

    • VRF → PIM Settings (ASM/SSM, RP).

  3. Inside one BD (L2)

    • Map BD to VLAN → show system internal epm vlan <encap_id>

    • IGMP status → show ip igmp snooping vlan <vlan>

    • Group learners → show ip igmp snooping groups <group> vlan <vlan>

  4. Across BDs / to outside (L3)

    • PIM neighbours → show ip pim neighbor vrf <vrf>

    • mroute entry → show ip mroute vrf <vrf> <group>

    • RPF check → show ip rpf <source> vrf <vrf>

    • Interface PIM mode → show ip pim interface vrf <vrf> interface <intf>

  5. Border / External

    • L3Out interface up & PIM sparse-mode.

    • ACLs: show running-config pim (look for “ip pim join-filter”).

    • MSDP peers if used: show ip msdp summary.

  6. Multi-Pod / IPN

    • IPN devices: show ip igmp groups, PIM config.

    • Confirm GIPo range identical in every pod.

5. Handy moquery Snippets


# VRF-level PIM config (JSON)

moquery -c pimCtxP -o json


# Check if multicast enabled on BD "BD1"

moquery -c fvBD -f 'fv.BD.name=="BD1"' | grep mcastAllow


# Grab IGMP snoop policy details

moquery -c igmpSnoopPol -f 'igmp.SnoopPol.name=="<policy>"' -o json

6. Best-Practice Checklist

  • Turn on multicast in VRF before BD.

  • Leave PIM off in BDs that only need L2 flooding (avoids TTL headaches).

  • Redundant RP (Anycast RP + MSDP) for ASM; IGMPv3 for SSM.

  • On mixed-hardware fabrics, let an external router handle PIM.

  • Border-leaf ACLs to drop joins for unwanted groups (DoS prevention).

  • Monitor: APIC → Fabric → Inventory → Protocols; syslog/SNMP.


CLI Examples


Leaf-302# show ip igmp snooping vlan 10

IGMP Snooping information for VLAN 10

IGMP snooping enabled

Lookup mode: IP

Optimised Multicast Flood (OMF) enabled

IGMP querier present, address: 10.0.10.254, version: 2

Switch-querier disabled

IGMPv3 Explicit tracking: Disabled

Leaf-302# show ip igmp snooping groups 239.1.1.1 vlan 10

VLAN Group Address Type Port List

---- --------------- ---------- ------------------------------------

10 239.1.1.1 dynamic Eth1/1

Leaf-302# show system internal epm vlan 86

VLAN Encap BD Name Flags SVI IP Address

---- ------------- --------------- ----------- ------------------

86 vxlan-16850433 BD1 0x0 10.0.10.1

Leaf-303# show ip pim neighbor vrf default

PIM Neighbor information for VRF default

Neighbor Interface Uptime Expires DRPriority Bidir Flags

10.20.20.2 Ethernet1/20 00:05:32 00:01:30 1 No

Leaf-303# show ip mroute vrf default 239.1.1.1

(*, 239.1.1.1), 00:04:58/00:02:05, RP 10.10.10.1, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Tunnel16, Forward/Sparse, 00:04:58/00:01:59


(192.168.1.10, 239.1.1.1), 00:03:20/00:02:27, flags: FT

Incoming interface: Ethernet1/10, RPF nbr 192.168.1.10

Outgoing interface list:

Tunnel16, Forward/Sparse, 00:03:20/00:01:59

Spine-301# show isis internal mcast routes ftag | grep FTAG

FTAG Routes

FTAG ID: 0[Enabled] Cost:( 0/ 0/ 0)

FTAG ID: 1 [Enabled] Cost:( 2/ 8/ 0)

FTAG ID: 2 [Enabled] Cost:( 2/ 9/ 0)

...

FTAG ID: 5[Enabled] Cost:( 0/ 0/ 0)

Leaf-301# show running-config pim

Building configuration...


ip pim vrf default

rendezvous-point 10.10.10.1 group-list 224.0.0.0/4


interface Ethernet1/10

ip pim sparse-mode

7. Quick Test Scenario

Receiver on Leaf-302 VLAN 10 can’t see group 239.1.1.1 (source is external)

  1. Leaf-302 → IGMP snooping/group checks (see above).

  2. Border leaf → show ip pim neighbor, ensure peering with external router.

  3. Border leaf → show ip mroute vrf <vrf> 239.1.1.1 – is there an entry?

  4. If missing, run show ip rpf <source> vrf <vrf> – fix RPF or add ip mroute.

  5. Confirm border ACLs aren’t blocking Joins.

  6. Re-verify VRF+BD multicast boxes ticked in APIC.


Now let’s go through how to create and apply a multicast contract between two EPGs in different tenants (inter-tenant), specifically allowing:

  • IP (for data traffic)

  • IGMP (protocol 2)

  • PIM (protocol 103)

This ensures multicast group joins and routing messages flow properly between the source and receiver sides.


Lab Example: Inter-Tenant Multicast Contract

🔹 Scenario Recap:

  • Source EPG: Stream-TX in Tenant-A

  • Receiver EPG: Stream-RX in Tenant-B

  • Multicast Group: 239.2.2.2

  • Contract Direction: Provided by Source (Tenant-A), Consumed by Receiver (Tenant-B)


Step-by-Step: Contract Creation (on APIC GUI)


✅ In Tenant-A (Provider)

1. Create Filter

Path: Tenant-A → Policies → Protocol → Filters → Create Filter

Filter Name: Multicast-Allow

Add 3 Entries:

EtherType | Protocol  | Dest Port
----------|-----------|----------
IPv4      | 2 (IGMP)  | *
IPv4      | 103 (PIM) | *
IPv4      | *         | *

2. Create Contract

Path: Tenant-A → Policies → Contracts → Create Contract

  • Name: Allow-Multicast

  • Add Subject: Multicast-Subject

  • Attach Filter: Multicast-Allow

  • Scope: Global (important for inter-tenant usage)

3. Apply Contract to EPG

Path: Tenant-A → Application Profiles → AppProfile → EPGs → Stream-TX

  • Provide Contract: Allow-Multicast
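
The same provider-side objects can also be pushed through the APIC REST API (a minimal sketch, not the only way to do it; the tenant, filter and contract names are the ones from this lab, and the login/POST flow assumes a reachable APIC with admin credentials):

# 1. Authenticate and store the session cookie
curl -sk https://<apic>/api/aaaLogin.xml -d '<aaaUser name="admin" pwd="<password>"/>' -c cookie.txt

# 2. Create the Multicast-Allow filter (IGMP, PIM, IP) and the global-scope contract in Tenant-A
curl -sk -b cookie.txt -X POST https://<apic>/api/mo/uni.xml -d '
<fvTenant name="Tenant-A">
  <vzFilter name="Multicast-Allow">
    <vzEntry name="allow-igmp" etherT="ip" prot="igmp"/>
    <vzEntry name="allow-pim"  etherT="ip" prot="pim"/>
    <vzEntry name="allow-ip"   etherT="ip"/>
  </vzFilter>
  <vzBrCP name="Allow-Multicast" scope="global">
    <vzSubj name="Multicast-Subject">
      <vzRsSubjFiltAtt tnVzFilterName="Multicast-Allow"/>
    </vzSubj>
  </vzBrCP>
</fvTenant>'

The provide/consume relationships can then be added in the GUI exactly as in the remaining steps.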


✅ In Tenant-B (Consumer)

4. Import Contract from Tenant-A

Path: Tenant-B → Contracts → External Contracts → Import

  • Name: Allow-Multicast

  • Source Tenant: Tenant-A

5. Apply Contract to Receiver EPG

Path: Tenant-B → Application Profiles → AppProfile → EPGs → Stream-RX

  • Consume Contract: Allow-Multicast


CLI Verification (Optional)

On APIC:

# Verify contracts
moquery -c vzBrCP | grep -i multicast
moquery -c vzEntry | grep -E 'prot|etherT'

On Leaf:

# Show zoning rules
show zoning-rule | grep Stream

✅ Once this is in place:

  • IGMP joins from Tenant-B are permitted to reach Tenant-A

  • PIM messages flow between the VRFs (via L3Out if needed)

  • IP multicast data is allowed through fabric contracts


IGMP Interface Policy vs IGMP Snooping Policy

Feature            | IGMP Interface Policy                                                       | IGMP Snooping Policy
-------------------|-----------------------------------------------------------------------------|--------------------------------------------------------------------------
Scope              | Applied to L3Out interfaces or routed domains                               | Applied to Bridge Domains (L2 fabric)
Purpose            | Controls multicast group management for routed (Layer 3) interfaces        | Controls multicast behavior within L2 domains via IGMP snooping
Querier Role       | Needed when the fabric acts as multicast router on a routed interface (L3) | Needed if no querier exists in the L2 BD (the fabric sends IGMP queries)
Common Use         | L3 multicast (e.g., source/receiver on different subnets)                  | L2 multicast (e.g., endpoints in same subnet)
Applied Where?     | On routed interface profiles in L3Out                                      | On Bridge Domains via Multicast Policy
Querier IP         | Often required if APIC/fabric acts as querier                               | Querier just maintains IGMP group activity in absence of an external router
Configuration Path | Tenant → L3Out → Interface → IGMP Policy                                    | Fabric → Access Policies → Global Policies → IGMP Snooping Policies



Scenario 1: L2 Multicast in ACI (Same BD)

Use Case

  • IPTV server in EPG-Source

  • IPTV clients in EPG-Clients

  • Both EPGs belong to the same Bridge Domain: BD-Media

  • VRF = VRF-Media

Configuration

  1. Enable IGMP Snooping on BD

    • Tenant → VRF-Media → BD-Media → Policy

    • Enable: ✅ IGMP Snooping

    • Apply IGMP Snooping Policy (with querier enabled if no external router present)

  2. Create Multicast Policy (Optional)

    • Fabric → Policies → Protocol → IGMP Snooping

    • Configure timers (query interval, last member query, etc.)

  3. EPG association

    • Ensure both EPG-Source and EPG-Clients are mapped to BD-Media

Validation

show ip igmp snooping groups vrf VRF-Media
show system internal bridge-domain multicast bd all

✅ Expect to see multicast group (e.g., 239.1.1.1) joined by client ports.


Scenario 2: L3 Multicast in ACI (Across BDs/External)

Use Case

  • IPTV server in BD-Source

  • IPTV clients in BD-Receiver

  • Both BDs belong to VRF-TV

  • Multicast must also be available externally via L3Out to a WAN router

  • RP (Rendezvous Point) = 10.0.0.1 in WAN

Configuration

  1. Enable Multicast in VRF

    • Tenant → VRF-TV → Multicast → Enable

    • Under Rendezvous Points, add 10.0.0.1 as Static RP

  2. Assign GIPo to VRF

    • Auto-assigned from fabric pool OR manually set under BD → Policy → Advanced/Troubleshooting

  3. PIM Settings

    • Under VRF → Multicast → PIM Setting

    • Ensure VRF has a GIPo (e.g., 225.1.192.0)

  4. Interface-Level (L3Out)

    • Tenant → L3Out → Logical Interface Profile

    • Apply PIM Interface Policy (default timers, DR priority if needed)

    • Apply IGMP Policy if receivers directly connect here

Validation

show ip pim neighbor vrf VRF-TV
show ip pim rp mapping vrf VRF-TV
show ip mroute vrf VRF-TV

✅ Expect: PIM neighbor with WAN router, RP mapping to 10.0.0.1, multicast routes for 239.x.x.x.


Scenario 3: Inter-VRF Multicast in ACI

Use Case

  • IPTV source in Prod_VRF

  • IPTV clients in Guest_VRF

  • Same fabric, but receivers are isolated in their VRF

  • Need to leak multicast from Prod → Guest

Configuration

  1. Enable Multicast in Both VRFs

    • Tenant → Prod_VRF → Multicast → Enable

    • Tenant → Guest_VRF → Multicast → Enable

  2. Configure Inter-VRF Multicast

    • Tenant → VRF (e.g., Guest_VRF) → Multicast → Inter-VRF Multicast

    • Add:

      • Source VRF = Prod_VRF

      • Route-map (optional: filter which multicast groups to leak)

  3. Rendezvous Points

    • Both VRFs must point to the same RP (static or fabric RP)

Validation

show ip mroute vrf Guest_VRF
show ip mroute vrf Prod_VRF

✅ Expect: Receiver in Guest_VRF able to join and receive traffic from source in Prod_VRF.


Key Differences Between the 3 Scenarios

Feature     | L2 Multicast (BD)    | L3 Multicast (VRF)                 | Inter-VRF Multicast
------------|----------------------|------------------------------------|------------------------------
Scope       | Same subnet only     | Routed subnets/External            | Across VRFs
Policy Used | IGMP Snooping Policy | IGMP Policy + PIM Interface Policy | Inter-VRF Multicast Config
Replication | Within BD            | VRF GIPo replication               | VRF leak via APIC config
RP Needed?  | ❌ No                | ✅ Yes (PIM-SM)                     | ✅ Yes (shared RP)
Use Case    | Local app discovery  | IPTV, WAN multicast                | IPTV stream to multiple VRFs



ACI Multicast Troubleshooting Scenario


Scenario

  • Source: IPTV server in BD-Video (VLAN 22, VRF = JIL-VRF-HUB)

  • Receivers: Clients in the same BD and in another BD within the same VRF

  • Multicast group: 239.1.1.2

  • Observation: Receivers are not getting the stream

You must troubleshoot step by step whether it is an L2 snooping issue, COOP control-plane issue, or PIM/routing issue.


1️⃣ Check if Multicast Routing is Enabled in BD

  • If PIM is disabled → fabric will treat multicast as L2 flood within BD (subject to IGMP snooping).

  • If PIM is enabled → fabric uses the always-route model: even intra-BD multicast is routed (TTL decrements, MAC rewrite).

📌 Command (BD level):

# On APIC, check BD Policy
APIC GUI → Tenant → BD-Video → Policy → General
PIM = Enabled ?
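
The same check can be scripted from the APIC CLI, reusing the moquery pattern shown earlier in this post (attribute names are from the fvBD class; verify on your release):

# Multicast (PIM) and unknown-multicast settings on BD-Video
moquery -c fvBD -f 'fv.BD.name=="BD-Video"' | egrep 'name|mcastAllow|unkMcastAct'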

2️⃣ Verify IGMP Snooping Behavior on Leaf

Check if receivers joined group and snooping table is populated.

Leaf# show ip igmp snooping vlan 22
Leaf# show ip igmp snooping groups vlan 22
Leaf# show ip igmp snooping groups 239.1.1.2 vlan 22

✅ Expected:

  • Group 239.1.1.2 listed

  • Active ports = receiver interfaces

  • Querier present (e.g., 10.1.1.1)

If not:

  • Enable IGMP querier in BD snooping policy

  • Check IGMP joins from host with packet capture


3️⃣ Check Querier Status

Leaf# show ip igmp snooping querier vlan 22

✅ Expected: Querier IP present. If empty → no IGMP queries → receivers won’t sustain group membership.

Fix: Enable querier on BD SVI (via IGMP Snooping Policy).


4️⃣ Check COOP Database on Spine

The Spines maintain:

  • mrouter repo = upstream router ports

  • mgroup repo = multicast group membership per BD VNID

📌 Commands:

# On spine, check mrouter entries
Spine# show coop internal info repo mrouter | grep -A 20 <BD-VNID>

# On spine, check mgroup entries
Spine# show coop internal info repo mgroup | grep -A 20 <BD-VNID>

# Trace COOP logs
Spine# show coop internal event-history trace-detail-mc | more

✅ Expected:

  • BD VNID is present with mrouter + mgroup entries

  • If missing, leaf did not publish join/querier → issue on leaf


5️⃣ Check IGMP Events on Leaf

Trace whether IGMP joins were processed and published to COOP.

Leaf# show ip igmp snooping internal event-history vlan 22
Leaf# show ip igmp snooping internal event-history coop | grep <timestamp>

6️⃣ Check Fabric Multicast (Overlay Replication)

Every BD has a GIPo (e.g., 225.0.140.16). Fabric uses this to replicate BUM traffic.

Leaf# show ip igmp snooping internal fabric-groups
Leaf# show fabric multicast event-history mrib | grep -i 239.1.1.2

✅ Expected:

  • Group maps to BD’s GIPo

  • Spine replicates to correct egress leaves


7️⃣ Check Routed Multicast (If PIM Enabled)

If BD has PIM enabled → check routing (always-route model):

Leaf# show ip pim neighbor vrf common:JIL-VRF-HUB
Leaf# show ip pim rp mapping vrf common:JIL-VRF-HUB
Leaf# show ip mroute vrf common:JIL-VRF-HUB
Leaf# show ip igmp groups vrf common:JIL-VRF-HUB

✅ Expected:

  • PIM neighbors up

  • RP known (static/fabric RP)

  • show ip mroute has (*,G) or (S,G) entry for 239.1.1.2

  • IGMP group membership shown

If not → PIM/RP misconfig or missing contracts between EPGs/VRFs.


Typical Root Causes

  1. IGMP Snooping Disabled → traffic flooded everywhere, inefficient.

  2. No Querier → joins age out, receivers drop stream.

  3. PIM enabled in BD unexpectedly → multicast routed even within same subnet (TTL decrement, receiver confusion).

  4. Contracts Missing → multicast (IGMP/PIM) blocked between tenants/VRFs.

  5. COOP Database Missing Entry → leaf did not publish group or mrouter → bug/misconfig.


Structured Troubleshooting Flow

  1. Step 1: Is PIM enabled in BD? → Yes → Routed, No → Snooping.

  2. Step 2: Check IGMP snooping group entries on receiver leaf.

  3. Step 3: Check querier exists.

  4. Step 4: Check COOP mrouter/mgroup repos on spine.

  5. Step 5: Check fabric replication to BD’s GIPo.

  6. Step 6: If PIM enabled → verify RP, neighbors, mroute.

  7. Step 7: Check contracts (if inter-tenant).




Multicast Stream Not Reaching Clients

Scenario

  • Tenant: Sales

  • VRF: JIL-VRF-HUB

  • Bridge Domain: BD-Video (VLAN 22)

  • Source: IPTV server in EPG-Source

  • Receivers: Clients in EPG-Clients

  • Multicast Group: 239.1.1.2

  • Expected Behavior: Clients should receive stream.


Step 1: Verify if PIM is enabled in BD

APIC → Tenant → Sales → Networking → BD-Video → Policy

Output:

PIM: Enabled
IGMP Snooping: Enabled
Querier: Not configured

⚠️ Problem: Because PIM is enabled, multicast runs in always-route mode. Even source and receivers in the same BD are routed. The querier is missing.


Step 2: Check IGMP Snooping on Receiver Leaf

Leaf102# show ip igmp snooping vlan 22

Output:

IGMP snooping enabled
Lookup mode: IP
IGMP querier: none present
Active ports: Eth1/7 Eth1/8 Po3

⚠️ Problem: No querier → Joins will age out.


Step 3: Check IGMP Groups

Leaf102# show ip igmp snooping groups vlan 22

Output:

Vlan  Group Address    Ver  Type  Port list
22    239.1.1.2        v2   D     <empty>

⚠️ Problem: Group exists but no ports → Receivers dropped.


Step 4: Check COOP on Spine

Spine1# show coop internal info repo mgroup | grep -A 20 15499182

Output:

No entries for BD VNID 15499182

⚠️ Problem: Leaf didn’t publish mgroup → missing querier/joins.


Step 5: Check Multicast Routes (Since PIM is Enabled)

Leaf102# show ip mroute vrf common:JIL-VRF-HUB

Output:

No entries found for group 239.1.1.2

⚠️ No (S,G) or (*,G) routes → multicast tree not built.


Fixes

✅ Fix 1: Enable IGMP Querier

On APIC:

  • Tenant → BD-Video → Policy → IGMP Snooping Policy

  • Enable Querier with SVI IP as source

Now validate:

Leaf102# show ip igmp snooping querier vlan 22
IGMP querier present, address: 10.1.1.1, version: 3

✅ Fix 2: Verify IGMP Groups After Querier

Leaf102# show ip igmp snooping groups vlan 22
Vlan  Group Address    Ver  Type  Port list
22    239.1.1.2        v2   D     Eth1/7 Eth1/8 Po3

Receivers reappear ✅

✅ Fix 3: Check COOP Re-Registration

Spine1# show coop internal info repo mgroup | grep -A 20 15499182
BD-VNID 15499182 → Group 239.1.1.2 with members Leaf102

✅ Fix 4: Check Multicast Route Table

Leaf102# show ip mroute vrf common:JIL-VRF-HUB
(*,239.1.1.2), uptime 00:00:15, RP 10.0.0.1, flags: SPT

✅ Route installed.


Final Validation

  1. Receiver leaf IGMP groups

show ip igmp snooping groups vlan 22

  2. Querier status

show ip igmp snooping querier vlan 22

  3. Spine control-plane

show coop internal info repo mgroup | grep <BD-VNID>

  4. Multicast routing

show ip mroute vrf common:JIL-VRF-HUB

✅ Stream should now flow to receivers.


Key TAC Lessons

  1. If PIM enabled on BD → Always-route model applies (TTL decrement expected).

  2. Querier is critical for maintaining snooping tables.

  3. COOP spine repos (mrouter, mgroup) are gold for checking fabric-wide multicast state.

  4. show ip mroute confirms multicast routing tree.


This is a common confusion point in ACI. Let’s break it down into the concept first, then how to differentiate in both APIC GUI and CLI outputs.



How to differentiate between internal GIPo (fabric-only) and user multicast groups (e.g., 239.x.x.x used by your apps) in both APIC and CLI outputs?



Concept Recap

  • Internal GIPo (Group IP outer)

    • Allocated by the fabric from a multicast pool (e.g., 225.x.x.x)

    • Used only by ACI’s internal VXLAN overlay to replicate BUM traffic (Broadcast, Unknown Unicast, Multicast) between leafs/spines.

    • Never exposed to endpoints or applications.

  • User/Application Multicast Groups (e.g., 239.x.x.x)

    • Chosen by applications (IPTV, financial feeds, etc.).

    • Endpoints join via IGMP to these groups.

    • Routed/forwarded by ACI using IGMP/PIM.


How to Differentiate in APIC GUI


  1. Bridge Domain → Policy → Advanced/Troubleshooting

    • Field: Multicast Address

    • Example: 225.0.140.16

    • ✅ This is a GIPo address, used internally for BD replication.

  2. Tenant → VRF → Multicast → Rendezvous Points / IGMP Setting

    • Here you’ll configure or see user multicast groups (239.x.x.x) for RP/SSM.

    • ✅ These are application-facing groups.

👉 Rule of Thumb:

  • 225.x.x.x (from fabric pool) → Internal GIPo

  • 239.x.x.x → App/User traffic


How to Differentiate in CLI Outputs

1. Check GIPo Assignments

Leaf# show ip igmp snooping internal fabric-groups

Output:

BD VNID: 16383905
  GIPo: 225.0.140.16

✅ This 225.0.140.16 is internal (fabric-only).

2. Check IGMP Groups (User Joins)

Leaf# show ip igmp snooping groups vlan 22

Output:

Vlan  Group Address    Ver  Type  Port list
22    239.1.1.2        v2   D     Eth1/7 Eth1/8 Po3

✅ This 239.1.1.2 is a user multicast group, joined by receivers.

3. Check Multicast Routing Table

Leaf# show ip mroute vrf <VRF_NAME>

Output:

(*,239.1.1.2), uptime 00:00:15, RP 10.0.0.1

✅ Here you’ll only see user multicast groups, never GIPo.

4. Check Fabric Replication (Overlay Only)

Spine# show coop internal info repo mgroup | grep <BD-VNID>

Output:

BD VNID 15499182 → Fabric replication GIPo 225.0.140.16

✅ Spines track GIPo for BUM replication, not user multicast groups.


Key Differentiation Rule

Address Type                | Where You See It                                   | Meaning
----------------------------|----------------------------------------------------|------------------------------------------
225.x.x.x (or pool-defined) | BD policy (APIC), fabric-groups (CLI)              | Internal GIPo (fabric replication only)
239.x.x.x (user apps)       | IGMP groups (CLI), mroute table, endpoint traffic  | Application/User multicast group

⚠️ Tip: You’ll never see 225.x.x.x in show ip igmp groups or show ip mroute. Those commands only show real multicast groups joined by endpoints. If you see 225.x.x.x, you’re looking at fabric replication (GIPo), not app traffic.


Multicast in ACI: Key Concepts

  1. L2 Multicast:

    • IGMP Snooping: Used to manage multicast group memberships within a Bridge Domain. It ensures that multicast traffic is only forwarded to ports with interested hosts.

    • PIM on Bridge Domain: If multicast traffic needs to exit the L2 domain to an L3Out, enabling PIM on the Bridge Domain helps forward the multicast traffic properly.

    • IGMP Querier: Only needed if there’s no external multicast router. It sends IGMP queries to hosts to maintain group memberships. If there’s an external router acting as querier, the fabric querier isn’t necessary.

  2. L3 Multicast:

    • PIM on VRF and L3Out: Enabling PIM on the VRF ensures proper multicast routing across subnets and to external networks.

    • RP (Rendezvous Point): In PIM-Sparse Mode, the RP is crucial for managing multicast group memberships and establishing the multicast distribution tree.

    • IGMP Querier in L3 Multicast: Typically not required, because the external router will handle IGMP queries and group management.

  3. Best Practices:

    • L2 Multicast: Enable IGMP snooping and PIM on the Bridge Domain if traffic needs to cross into the L3 domain.

    • L3 Multicast: Focus on enabling PIM on the VRF and L3Out, and configure the RP as needed.

In summary, you generally only need the fabric to act as IGMP querier in L2 scenarios where there’s no external multicast router. For L3 multicast, the external router handles IGMP management, so you can skip the fabric querier.



