
ACI Basics

  • Writer: Mukesh Chanderia
  • Jun 17, 2023
  • 24 min read

Updated: Mar 24


The Cisco Application Policy Infrastructure Controller (APIC) works as a smart policy manager, sending the required policy settings to the fabric and applying any needed changes.


It is separate from both the control and data planes, meaning it doesn't get involved in the traffic flow or affect traffic flow in any way. This separation allows the APIC to manage policies efficiently without affecting the movement of data in the network.



The rules to connect the APIC server to leaf switches are as follows:

• All ports must have the same speed, either 10-Gigabit or 25-Gigabit.

• ETH2-1 and ETH2-2 form one port-channel pair, corresponding to eth2-1 in the 'ifconfig' output of the APIC OS.

• ETH2-3 and ETH2-4 form the other port-channel pair, corresponding to eth2-2 in the 'ifconfig' output of the APIC OS.

Only one connection is allowed per port-channel pair. For example, connect one cable to either ETH2-1 or ETH2-2, and connect another cable to either ETH2-3 or ETH2-4. Never connect both ports of a port-channel pair; this will lead to fabric discovery issues.


LLDP (Link Layer Discovery Protocol)


  • Purpose: LLDP helps the APIC discover and connect to leaf switches in the Cisco ACI fabric.

  • VIC Adapter Behavior: The Cisco VIC adapter on the APIC can generate its own LLDP packets. By default, LLDP is disabled on the VIC to let the APIC operating system handle discovery.

  • Why Disabled?: If LLDP is enabled on the VIC, the adapter itself will handle LLDP packets instead of passing them to the APIC OS, causing the APIC to fail in discovering leaf switches.

  • Checking LLDP Status (SSH Commands):

    1. scope chassis

    2. scope adapter 1

    3. show detail

    4. Confirm LLDP: Disabled is displayed.

  • Enabling/Disabling LLDP:

    • If LLDP is set to “Enabled” but needs to be disabled, use:

      set lldp disabled
      commit

    • These changes take effect after the server is reset.


TPM (Trusted Platform Module)


  • Purpose: The TPM is a hardware component that verifies and authenticates the server.

  • APIC Requirement: To boot and operate properly (including upgrades), the TPM must be:

    1. Enabled

    2. Activated

    3. Owned (from the BIOS perspective)

  • Consequence of Incorrect TPM State: If the TPM is not in the required state, the APIC may fail to boot or upgrade.

  • How to Verify TPM State: Check the BIOS under Advanced > Trusted Computing to confirm the TPM is enabled, activated, and owned.


leaf101# show discoveryissues

Checking the platform type................LEAF!

Check01 - System state - in-service [ok]

Check02 - DHCP status [ok]

TEP IP: 10.0.72.67 Node Id: 101 Name: leaf101

Check03 - AV details check [ok]

Check04 - IP rechability to apic [ok]

Ping from switch to 10.0.0.1 passed

Check05 - infra VLAN received [ok]

infra vLAN:3967

Check06 - LLDP Adjacency [ok]

Found adjacency with SPINE

Found adjacency with APIC

Check07 - Switch version [ok]

version: n9000-14.2(1j) and apic version: 4.2(1j)

Check08 - FPGA/BIOS out of sync test [ok]

Check09 - SSL check [check]

SSL certificate details are valid

Check10 - Downloading policies [ok]

Check11 - Checking time [ok]

2019-09-11 07:15:53

Check12 - Checking modules, power and fans [ok]



leaf101# moquery -c topSystem

Total Objects shown: 1

# top.System

address : 10.0.72.67

bootstrapState : done

...

serial : FDO20160TPS

serverType : unspecified

siteId : 1

state : in-service

status :

systemUpTime : 00:18:17:41.000

tepPool : 10.0.0.0/16

unicastXrEpLearnDisable : no

version : n9000-14.2(1j)

virtualMode : no


DHCP status


(none)# tcpdump -ni kpm_inb port 67 or 68


The kpm_inb interface lets you see all CPU inband control-plane network traffic.


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on kpm_inb, link-type EN10MB (Ethernet), capture size 65535 bytes

16:40:11.041148 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from a0:36:9f:c7:a1:0c, length 300

^C

1 packets captured

1 packets received by filter

0 packets dropped by kernel


apic1# ps aux | grep dhcp

root 18929 1.3 0.2 818552 288504 ? Ssl Sep26 87:19 /mgmt//bin/dhcpd.bin -f -4 -cf

/data//dhcp/dhcpd.conf -lf /data//dhcp/dhcpd.lease -pf /var/run//dhcpd.pid --no-pid bond0.3967

admin 22770 0.0 0.0 9108 868 pts/0 S+ 19:42 0:00 grep dhcp


IP reachability to APIC


leaf101# iping -V overlay-1 10.0.0.1


LLDP --> Infra Vlan


The leaf determines the infra VLAN from LLDP packets received from other ACI nodes. The first LLDP packet received is accepted while the switch is in discovery.



(none)# moquery -c lldpInst

Total Objects shown: 1

# lldp.Inst

adminSt : enabled

childAction :

ctrl :

dn : sys/lldp/inst

holdTime : 120

infraVlan : 3967

initDelayTime : 2

lcOwn : local

modTs : 2019-09-12T07:25:33.194+00:00

monPolDn : uni/fabric/monfab-default

name :

operErr :

optTlvSel : mgmt-addr,port-desc,port-vlan,sys-cap,sys-desc,sys-name

rn : inst

status :

sysDesc : topology/pod-1/node-101

txFreq : 30


(none)# show vlan encap-id 3967

VLAN Name Status Ports

---- -------------------------------- --------- -------------------------------

8 infra:default active Eth1/1

VLAN Type Vlan-mode

---- ----- ----------

8 enet CE


If the infra VLAN has not been programmed on the switchport interfaces connected to the APICs, check for wiring issues detected by the leaf.


(none)# moquery -c lldpIf -f 'lldp.If.wiringIssues!=""'

Total Objects shown: 1

# lldp.If

id : eth1/1

adminRxSt : enabled

adminSt : enabled

adminTxSt : enabled

childAction :

descr :

dn : sys/lldp/inst/if-[eth1/1]

lcOwn : local

mac : E0:0E:DA:A2:F2:83

modTs : 2019-09-30T18:45:22.323+00:00

monPolDn : uni/fabric/monfab-default

name :

operRxSt : enabled

operTxSt : enabled

portDesc :

portMode : normal

portVlan : unspecified

rn : if-[eth1/1]

status :

sysDesc :

wiringIssues : infra-vlan-mismatch


FPGA/EPLD/BIOS out of sync --> F1582



The FPGA, EPLD, and BIOS versions can affect the leaf node's ability to bring up its modules as expected. If these are too far out of date, the switch interfaces could fail to come up. Validate the running and expected versions of FPGA, EPLD, and BIOS with the following moquery commands.


(none)# moquery -c firmwareCardRunning

Total Objects shown: 2

# firmware.CardRunning

biosVer : v07.66(06/11/2019)

childAction :

descr :

dn : sys/ch/supslot-1/sup/running

expectedVer : v07.65(09/04/2018)

interimVer : 14.2(1j)

internalLabel :

modTs : never

mode : normal

monPolDn : uni/fabric/monfab-default

operSt : ok

rn : running

status :

ts : 1970-01-01T00:00:00.000+00:00

type : switch

version : 14.2(1j)

# firmware.CardRunning

biosVer : v07.66(06/11/2019)

childAction :

descr :

dn : sys/ch/lcslot-1/lc/running

expectedVer : v07.65(09/04/2018)

interimVer :

internalLabel :

modTs : never

mode : normal

monPolDn : uni/fabric/monfab-default

operSt : ok

rn : running

status :

ts : 1970-01-01T00:00:00.000+00:00

type : switch

version : 14.2(1j)


(none)# moquery -c firmwareCompRunning

Total Objects shown: 2

# firmware.CompRunning

childAction :

descr :

dn : sys/ch/supslot-1/sup/fpga-1/running

expectedVer : 0x14

internalLabel :

modTs : never

mode : normal

monPolDn : uni/fabric/monfab-default

operSt : ok

rn : running

status :

ts : 1970-01-01T00:00:00.000+00:00

type : controller

version : 0x14

# firmware.CompRunning

childAction :

descr :

dn : sys/ch/supslot-1/sup/fpga-2/runnin

expectedVer : 0x4

internalLabel :

modTs : never

mode : normal

monPolDn : uni/fabric/monfab-default

operSt : ok

rn : running

status :

ts : 1970-01-01T00:00:00.000+00:00

type : controller

version : 0x4




To validate the SSL certificate during discovery of a switch, use the following command.


(none)# cd /securedata/ssl && openssl x509 -noout -subject -in server.crt

subject= /serialNumber=PID:N9K-C93180YC-EX SN:FDO20432LH1/CN=FDO20432LH1



apic1# acidiag -h


leaf-a# show interface brief


leaf# show ip interface brief vrf overlay-1
IP Interface Status for VRF "overlay-1"(4)
eth1/49      unassigned      protocol-up/link-up/admin-up
eth1/49.7    unnumbered      protocol-up/link-up/admin-up (lo0)
eth1/50      unassigned      protocol-down/link-down/admin-up
eth1/51      unassigned      protocol-down/link-down/admin-up
eth1/52      unassigned      protocol-down/link-down/admin-up
eth1/53      unassigned      protocol-down/link-down/admin-up
eth1/54      unassigned      protocol-down/link-down/admin-up
vlan7        10.0.0.30/27    protocol-up/link-up/admin-up
lo0          10.0.32.64/32   protocol-up/link-up/admin-up
lo1023       10.0.0.32/32    protocol-up/link-up/admin-up


Configuration of Loopback and TEP IP Addresses in Cisco ACI


  1. Loopback 0 Interface

    • Assignment:

      • TEP IP Address: The Loopback 0 interface is assigned the Tunnel Endpoint (TEP) IP address.

      • Source: Obtained via DHCP from the APIC (Application Policy Infrastructure Controller).

      • Designation: Referred to as a Physical Tunnel Endpoint (PTEP).

      • Example Address: In this scenario, the PTEP address is 10.0.32.64.

    • Additional Configuration:

      • The PTEP address is also configured as unnumbered on a subinterface of the spine-facing link.


  2. Fabric TEP (FTEP)

    • Purpose:

      • Utilized for VXLAN encapsulation to a vSwitch TEP, if available.

    • Configuration Details:

      • Unique Address: Cisco ACI specifies a unique FTEP address that remains consistent across all leaf nodes.

      • Mobility Support: This consistency enables downstream TEP device mobility.

      • Example Address: In this scenario, the FTEP address is 10.0.0.32.


  3. Overlay-1 VRF

    • Inclusion:

      • Both the PTEP and FTEP IP addresses are part of the overlay-1 VRF.

    • Functionality:

      • Manages the routing and encapsulation processes for VXLAN tunnels within the Cisco ACI fabric.
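As a quick sanity check, Python's ipaddress module can confirm that the example PTEP and FTEP addresses above fall inside the TEP pool handed out by the APIC (a minimal sketch using the addresses from this scenario):

```python
import ipaddress

# The TEP pool configured in the setup utility, and the two addresses
# from the example above.
tep_pool = ipaddress.ip_network("10.0.0.0/16")
ptep = ipaddress.ip_address("10.0.32.64")   # loopback 0 on the leaf (PTEP)
ftep = ipaddress.ip_address("10.0.0.32")    # FTEP, identical on all leaf nodes

# Both addresses are allocated out of the TEP pool.
assert ptep in tep_pool and ftep in tep_pool
print(f"{ptep} and {ftep} are in {tep_pool}")
```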



APIC Reset


If you need to reset your device, there are two commands you can use: acidiag touch clean and acidiag touch setup.


apic# acidiag touch clean

This command will wipe out this device.


The acidiag touch clean command removes all policy-related data but keeps important network settings like the fabric name, IP address, and login details. This is useful when you want to reset policy settings but keep the core network configurations.


apic# acidiag touch setup

This command will reset the device configuration, Proceed? [y/N] y


The acidiag touch setup command resets the device back to its factory default settings. This is handy if you're planning to repurpose the device for something else, such as moving it between different pods.


Note: you need to reboot the device after either command for it to take effect.


apic# acidiag reboot

This command will restart this device, Proceed? [y/N] y


You could wipe the switches using the following commands:


Switches


switch# setup-clean-config.sh or acidiag touch clean

This command will wipe out this device, Proceed? [y/N] y

switch# reload


APIC initial config


Press Enter at any time to accept the default values. Use Ctrl-D at any time to restart from the beginning.


Cluster configuration ...

Enter the fabric name [ACI Fabric1]: Fabric

Enter the fabric ID (1-128) [1]: 1

Enter the number of active controllers in the fabric (1-9) [3]: 3

Is this a standby controller? [NO]: NO

Is this an APIC-X? [NO]: NO

Enter the controller ID (1-3) [1]: 2

Standalone APIC Cluster ? yes/no [no] no

Enter the POD ID (1-254) [1]: 1

Enter the controller name [apic1]: apic2

Enter address pool for TEP addresses [10.0.0.0/16]: 10.0.0.0/16

Note: The infra VLAN ID should not be used elsewhere in your environment

and should not overlap with any other reserved VLANs on other platforms.

Enter the VLAN ID for infra network (1-4094): 3967


Out-of-band management configuration ...

Enable IPv6 for Out of Band Mgmt Interface? [N]: N

Enter the IPv4 address [192.168.10.1/24]: 192.168.11.2/24

Enter the IPv4 address of the default gateway [None]: 192.168.11.254

Enter the interface speed/duplex mode [auto]: auto


Cluster configuration ...

Fabric name: Fabric

Fabric ID: 1

Number of controllers: 3

Controller name: apic2

POD ID: 1

Controller ID: 2

TEP address pool: 10.0.0.0/16

Infra VLAN ID: 3967


Out-of-band management configuration ...

Management IP address: 192.168.11.2/24

Default gateway: 192.168.11.254

Interface speed/duplex mode: auto


admin user configuration ...

The admin user configuration will be synchronized

from the first controller after this controller joins the cluster.


The above configuration will be applied ...


Warning: TEP address pool and Infra VLAN ID cannot be changed later, these are permanent until the fabric is wiped.


Would you like to edit the configuration? (y/n) [n]: n


apic1# acidiag fnvread

ID   Pod ID  Name    Serial Number  IP Address     Role   State   LastUpdMsgId
------------------------------------------------------------------------------
101  1       leaf1   S/N            10.0.2.64/32   leaf   active  0
102  1       leaf2   S/N            10.0.3.65/32   leaf   active  0
201  1       spine1  S/N            10.0.32.66/32  spine  active  0



On Cisco APIC, verify the LLDP neighbors on the fabric-facing interfaces eth2-1 and eth2-2 using the acidiag run lldptool command.


apic1# acidiag run lldptool in eth2-1

Chassis ID TLV

MAC: 00:3a:9c:7e:58:c2

Port ID TLV

Local: Eth1/2

Time to Live TLV

120

Port Description TLV

topology/pod-1/paths-101/pathep-[eth1/2]

System Name TLV

leaf-a

System Description TLV

topology/pod-1/node-101

System Capabilities TLV

System capabilities: Bridge, Router

Enabled capabilities: Bridge, Router

Management Address TLV

IPv4: 192.168.10.211

Ifindex: 83886080

Cisco 4-wire Power-via-MDI TLV

4-Pair PoE supported

Spare pair Detection/Classification not required

PD Spare pair Desired State: Disabled

PSE Spare pair Operational State: Disabled

Cisco Port Role TLV

4

Cisco Port Mode TLV

0

Cisco Port State TLV

1

Cisco Model TLV

N9K-C93180YC-FX

Cisco Serial Number TLV

FDO23161CZ0

Cisco Firmware Version TLV

n9000-15.2(1g)

Cisco Node Role TLV

1

Cisco Infra VLAN TLV

369

Cisco Name TLV

leaf-a

Cisco Fabric Name TLV

Fabric

Cisco Node IP TLV

IPv4:10.0.32.64

Cisco Node ID TLV

101

Cisco POD ID TLV

1

Cisco Appliance Vector TLV

Id: 1

IPv4: 10.0.0.1

UUID: 9df7d5a0-ca14-33eb-beda-e526c6a0aa53

LLDP-MED Capabilities TLV

Device Type: netcon

Capabilities: LLDP-MED, Network Policy, Extended Power via MDI-PSE

LLDP-MED Network Policy TLV

01400000

End of LLDPDU TLV
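The Cisco-specific TLVs in the lldptool output above carry the fabric parameters a leaf advertises during discovery. A minimal parsing sketch (the SAMPLE string is an abbreviated copy of that output; the function is hypothetical):

```python
import re

# Abbreviated copy of the `acidiag run lldptool` output shown above:
# each Cisco TLV name is followed by its value on the next line.
SAMPLE = """\
Cisco Infra VLAN TLV
\t369
Cisco Node ID TLV
\t101
Cisco Fabric Name TLV
\tFabric
"""

def parse_cisco_tlvs(text):
    """Collect single-value Cisco TLVs into a {name: value} dict."""
    tlvs = {}
    lines = text.splitlines()
    for name_line, value_line in zip(lines, lines[1:]):
        m = re.match(r"Cisco (.+) TLV$", name_line.strip())
        if m:
            tlvs[m.group(1)] = value_line.strip()
    return tlvs

tlvs = parse_cisco_tlvs(SAMPLE)
print(tlvs["Infra VLAN"])   # → 369
print(tlvs["Node ID"])      # → 101
```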



From the APIC, cross-check the chassis ID with the Cisco APIC UUID obtained from the leaf switches.


Leaf: show lldp neighbors detail

Leaf: show lldp traffic


A (none)# prompt means the switch hasn’t been discovered yet.


(none)# moquery -c faultInfo (contains all faults)


TPM Disabled in BIOS → Enable it


LLDP Enabled in CIMC/VIC → Disable it



“show cli list” → to view all available CLI commands


APIC Logs

--------------

/var/log/dme/log

/var/log/dme/oldlog


Switch Logs

--------------

/var/log/dme/log

/var/log/dme/oldlog

/var/sysmgr/tmp_logs


APIC# show epg BLUE detail


Leaf1# iping -V tenant:vrf01 -S 172.16.1.1 172.16.1.22 (source 172.16.1.1 is the BD gateway IP; 172.16.1.22 is the destination)


apic1# acidiag avread

Local appliance ID=1 ADDRESS=10.0.0.1 TEP ADDRESS=10.0.0.0/16 ROUTABLE IP ADDRESS=0.0.0.0 CHASSIS_ID=9df7d5a0-ca14-11eb-beda-e526c7a0aa53

Cluster of 1 lm(t):1(zeroTime) appliances (out of targeted 1 lm(t):1(2021-06-11T09:39:44.787+00:00)) with FABRIC_DOMAIN name=Fabric set to version=5.2(1g) lm(t):1(2021-06-11T09:40:01.215+00:00); discoveryMode=PERMISSIVE lm(t):0(1970-01-01T00:00:00.001+00:00); drrMode=OFF lm(t):0(1970-01-01T00:00:00.001+00:00); kafkaMode=OFF lm(t):0(1970-01-01T00:00:00.001+00:00)

appliance id=1 address=10.0.0.1 lm(t):1(2021-06-10T19:44:55.051+00:00) tep address=10.0.0.0/16 lm(t):1(2021-06-10T19:44:55.051+00:00) routable address=0.0.0.0 lm(t):1(zeroTime) oob address=192.168.11.1/24 lm(t):1(2021-06-10T19:45:00.131+00:00) version=5.2(1g) lm(t):1(2021-06-10T19:45:00.188+00:00) chassisId=9df7d5a0-ca14-11eb-beda-e526c7a0aa53 lm(t):1(2021-06-10T19:45:00.188+00:00) capabilities=0X7EEFFFFFFFFF--0X2020--0X1 lm(t):1(2021-06-11T09:44:04.539+00:00) rK=(stable,present,0X206173722D687373) lm(t):1(2021-06-10T19:45:00.134+00:00) aK=(stable,present,0X206173722D687373) lm(t):1(2021-06-10T19:45:00.134+00:00) oobrK=(stable,present,0X206173722D687373) lm(t):1(2021-06-10T19:45:00.134+00:00) oobaK=(stable,present,0X206173722D687373) lm(t):1(2021-06-10T19:45:00.134+00:00) cntrlSbst=(APPROVED, FCH2128V0F0) lm(t):1(2021-06-10T19:45:00.188+00:00) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):1(2021-06-10T19:44:55.051+00:00) commissioned=YES lm(t):1(zeroTime) registered=YES lm(t):1(2021-06-10T19:44:55.051+00:00) standby=NO lm(t):1(2021-06-10T19:44:55.051+00:00) DRR=NO lm(t):0(zeroTime) apicX=NO lm(t):1(2021-06-10T19:44:55.051+00:00) virtual=NO lm(t):1(2021-06-10T19:44:55.051+00:00) active=YES(2021-06-10T19:44:55.051+00:00) health=(applnc:255 lm(t):1(2021-06-10T19:47:00.737+00:00) svc's)

---------------------------------------------

clusterTime=<diff=-7610 common=2021-06-11T18:30:33.430+00:00 local=2021-06-11T18:30:41.040+00:00 pF=<displForm=0 offsSt=0 offsVlu=0 lm(t):1(2021-06-11T09:39:41.180+00:00)>>

---------------------------------------------


Interfaces in APIC (ifconfig)


bond0: A logical bond that bundles the physical interfaces attached to the fabric (eth2-1 and eth2-2).


bond1: A logical bond that provides OOB connectivity.


bond0.369: Subinterface of bond0 that carries infra traffic, i.e. packets encapsulated with the infra VLAN (369) 802.1Q header. The IP address of this subinterface is 10.0.0.1/32, which belongs to the TEP address pool (10.0.0.0/16) configured in the setup utility.


oobmgmt: Logical interface for OOB management configured during the initial setup.



The bonding mode is set to fault-tolerance (active-backup). In the example below, eth2-2, facing leaf-b, is active.


Identify the active link on Cisco APIC


/proc/net/bonding/bond0


leaf2 must have been discovered first.


APIC’s bond0 is active/standby port-channel


apic1# cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 30, 2023)


Bonding Mode: fault-tolerance (active-backup)

Primary Slave: None

Currently Active Slave: eth2-2

MII Status: up

MII Polling Interval (ms): 60

Up Delay (ms): 0

Down Delay (ms): 0


Slave Interface: eth2-1

MII Status: up

Speed: 10000 Mbps

Duplex: full

Link Failure Count: 1

Permanent HW addr: 38:90:a5:40:76:ea

Slave queue ID: 0


Slave Interface: eth2-2

MII Status: up

Speed: 10000 Mbps

Duplex: full

Link Failure Count: 1

Permanent HW addr: 38:90:a5:40:76:eb

Slave queue ID: 0
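Identifying the active link programmatically comes down to reading one line of /proc/net/bonding/bond0. A minimal sketch (SAMPLE mirrors the output above; on a real APIC you would read the file itself):

```python
# Abbreviated copy of the bond0 status shown above.
SAMPLE = """\
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2-2
MII Status: up
"""

def active_slave(bond_text):
    """Return the currently active slave interface, or None if absent."""
    for line in bond_text.splitlines():
        if line.startswith("Currently Active Slave:"):
            return line.split(":", 1)[1].strip()
    return None

print(active_slave(SAMPLE))  # → eth2-2
```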


Packet Drop


Leaf

SSH to the leaf and run these commands. This example is for ethernet 1/31.

ACI-LEAF# vsh_lc

module-0# show platform internal counters port 31



Spine

A fixed spine (N9K-C9332C and N9K-C9364C) can be checked using the same method as the leaf switches.

For a modular spine (N9K-C9504, etc.), you must attach to the linecard before the platform counters can be viewed. SSH to the spine and run these commands. This example is for ethernet 2/1.

ACI-SPINE# vsh

ACI-SPINE# attach module 2

module-2# show platform internal counters port 1



Queuing stats counters are shown using 'show queuing interface'.


ACI-LEAF# show queuing interface ethernet 1/5



Viewing statistics in GUI

The location is Fabric > Inventory > Leaf/Spine > Physical interface > Stats / Error Counters / QoS Stats.




leaf-a# show vrf

VRF-Name VRF-ID State Reason

black-hole 3 Up --

overlay-1 4 Up --

Note

Cisco ACI uses a dedicated VRF as an infrastructure to carry VXLAN traffic. The transport infrastructure for VXLAN traffic is known as overlay-1, which exists as part of the tenant “infra.” Leaf nodes are known as PTEPs (physical tunnel endpoints).



VRF


Policy Control Enforcement in VRF

  • Default Security:

    • VRF normally blocks communication between different Endpoint Groups (EPGs) unless there are specific rules (contracts) that allow it.


  • Policy Control Enforcement Feature:

    • This feature allows you to turn off the default security settings.

    • When turned off, the rules (contracts) are ignored.

    • EPGs can freely communicate with each other as long as they can connect on the network (Layer 2 or Layer 3).
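In the ACI object model this toggle is the pcEnfPref attribute on the VRF (fvCtx) object. A hedged sketch of the REST payload, assuming the standard class and attribute names (the VRF name is hypothetical):

```python
import json

# Sketch: a VRF object with policy control enforcement turned off,
# so contracts are ignored and EPGs communicate freely.
payload = {
    "fvCtx": {
        "attributes": {
            "name": "vrf01",             # hypothetical VRF name
            "pcEnfPref": "unenforced",   # default is "enforced"
        }
    }
}
print(json.dumps(payload))
```

Such a payload would typically be POSTed under the owning tenant via the APIC REST API.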






Endpoint Groups (EPGs) and Bridge Domains (BDs)



Bridge Domains


Bridge domains are essential components within Cisco ACI that offer the following characteristics:

  • Layer 2 Forwarding Domains: They act as Layer 2 forwarding domains, enabling the seamless transmission of packets within the same network segment.

  • Default Gateway and Subnet Configuration: Bridge domains provide endpoints with default gateway services and subnet configurations, ensuring efficient network communication and IP addressing.

  • Association with a Single VRF: Each bridge domain is linked to a single Virtual Routing and Forwarding (VRF) instance, maintaining network segmentation and isolation at the Layer 3 level.

  • Flexibility in Network Design:

    • Multiple Bridge Domains per Tenant:

      Tenants can incorporate one or more bridge domains, allowing for granular network segmentation within their allocated space.

    • Multiple Bridge Domains per VRF:

      A single VRF can encompass multiple bridge domains, facilitating the grouping of various Layer 2 domains under a unified routing instance.

  • Support for Multiple Subnets: Bridge domains can contain multiple subnets, offering the versatility to host different IP subnets within the same Layer 2 domain.


Characteristics of Endpoint Groups (EPGs) and Bridge Domains (BDs)


  1. Security Isolation with EPGs and BDs

    • Multiple EPGs within a BD:

      • EPGs are defined within a Bridge Domain (BD) to provide security isolation.

      • This adds an extra layer of segmentation beyond traditional Layer 2 separation.

  2. Layer 2 Segmentation Differences

    • Traditional Networks:

      • VLAN ID is the primary method for Layer 2 network separation.

    • Cisco ACI:

      • BDs are not directly tied to VLAN IDs.

      • Introduces EPGs as a finer segmentation layer, with VLAN IDs used for security separation rather than just Layer 2 separation.

      • EPG offers more granular security controls compared to BDs.

  3. Endpoint Definition and Management

    • Endpoint Composition:

      • An endpoint consists of a MAC address and can have one or more IP addresses, representing a single device.

    • Traditional Networks:

      • Use separate tables for managing network addresses:

        1. MAC Address Table: For Layer 2 forwarding.

        2. Routing Information Base (RIB): For Layer 3 forwarding.

        3. ARP Table: For mapping IP addresses to MAC addresses.

    • Cisco ACI:

      • Consolidates MAC Address Table and ARP Table into a single Endpoint Table.

      • Advantages:

        • Reduces the need for separate processing of ARP traffic.

        • Detects IP and MAC address changes quickly without waiting for Gratuitous ARP (GARP).

        • Learns MAC and IP addresses directly from packet inspection in the data plane.

  4. Learning and Forwarding Mechanism

    • Endpoint Learning:

      • MAC and IP addresses are learned in hardware by inspecting the source MAC and source IP of incoming packets.

      • No reliance on ARP for obtaining the MAC address of the next hop.

    • Resource Efficiency:

      • Minimizes processing and generation of ARP traffic.

    • IP/MAC Mobility Detection:

      • Quickly identifies changes in IP and MAC addresses when new traffic is sent from a host.

  5. L3Out Functionality

    • Despite the Endpoint Table:

      • Cisco ACI still uses the RIB and ARP table for L3Out (Layer 3 External) functionalities.

  6. Forwarding Table Lookup Order in Cisco ACI

    • Primary Lookup:

      • Endpoint Table: Accessed using the show endpoint command.

    • Secondary Lookup:

      • Routing Information Base (RIB): Accessed using the show ip route command.
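The two-stage lookup order above can be sketched as a toy forwarding function: the endpoint table is consulted first, and the RIB (longest-prefix match) only on a miss. All table contents and names here are illustrative.

```python
import ipaddress

# Toy tables: /32 host entries in the endpoint table, prefixes in the RIB.
endpoint_table = {"192.168.1.10": "tep-10.0.32.64"}
rib = {
    ipaddress.ip_network("192.168.1.0/24"): "leaf-pair-101-102",
    ipaddress.ip_network("0.0.0.0/0"): "l3out-border-leaf",
}

def forward(dst_ip):
    # Primary lookup: endpoint table (show endpoint).
    if dst_ip in endpoint_table:
        return ("endpoint-table", endpoint_table[dst_ip])
    # Secondary lookup: RIB, longest-prefix match (show ip route).
    addr = ipaddress.ip_address(dst_ip)
    best = max((n for n in rib if addr in n), key=lambda n: n.prefixlen)
    return ("rib", rib[best])

print(forward("192.168.1.10"))  # hits the endpoint table
print(forward("192.168.1.99"))  # falls back to the /24 route
```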


APIC# show epg BLUE detail



Layer 3 Configurations in Cisco ACI


1. Unicast Routing

  • Enable Default Gateway:

    • Acts as the default gateway for the bridge domain.

    • Routes network traffic within the fabric.

  • IP Mapping:

    • When enabled, the endpoint table on leaf switches maps IP addresses to Tunnel Endpoints (TEPs) for the bridge domain.

  • IP Learning:

    • IP addresses are learned even without a subnet configured under the bridge domain.


2. Subnet Address Configuration

  • Purpose:

    • Sets the IP addresses for Switched Virtual Interfaces (SVIs), which serve as default gateways for the bridge domain.

  • Options for Configuring a Subnet:

    1. Private to VRF:

      • The subnet is restricted to its specific Virtual Routing and Forwarding (VRF) within the tenant.

      • It does not extend beyond that VRF.

    2. Advertised Externally:

      • The subnet can be shared with external networks.

      • Makes it accessible through a routed connection.

    3. Shared between VRFs:

      • The subnet can be shared and exported to multiple VRFs within the same tenant or across different tenants.

      • Ideal for shared services, such as connecting to an Endpoint Group (EPG) in another VRF or tenant.

      • Allows bidirectional traffic flow between VRFs.

  • Important Notes:

    • For shared services, configure the subnet under the EPG, not the bridge domain.

    • Set the subnet scope to both "advertised externally" and "shared between VRFs."
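In REST terms, the shared-services note above maps to a subnet (fvSubnet) whose scope carries both flags. A hedged payload sketch, assuming the standard class and attribute names ("public" corresponds to "advertised externally", "shared" to "shared between VRFs"; the IP is hypothetical):

```python
import json

# Sketch: a shared-services subnet configured under the EPG (not the BD),
# with both scope flags set as recommended above.
payload = {
    "fvSubnet": {
        "attributes": {
            "ip": "172.16.1.1/24",     # SVI / default-gateway address
            "scope": "public,shared",  # vs. "private" (private to VRF)
        }
    }
}
print(json.dumps(payload))
```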


3. Default Settings and Best Practices

  • Unicast Routing:

    • Default State: Enabled by default when configuring a default gateway within the Cisco ACI fabric.

  • When to Disable Unicast Routing:

    • If the default gateway is set outside the fabric (e.g., on a firewall).

    • Alternative: Enable ARP flooding when unicast routing is disabled.

  • Reason to Disable:

    • Prevents unnecessary IP learning.

    • Avoids unexpected IP forwarding issues.


Key Takeaways


  • Unicast Routing:

    • Essential for routing traffic within the fabric and mapping IPs to TEPs.

    • Can be disabled if the default gateway is external, but requires enabling ARP flooding.


  • Subnet Address Options:

    • Private to VRF: Limited to a single VRF.

    • Advertised Externally: Accessible from outside networks.

    • Shared between VRFs: Allows multiple VRFs or tenants to use the same subnet for shared services.


  • Configuration Best Practices:

    • Use "advertised externally" and "shared between VRFs" scopes for subnets under EPGs when sharing services.

    • Disable unicast routing only when necessary to avoid IP forwarding issues.


-------------------------------------------------------------------------------------------------------------------------------


General Troubleshooting


avread --> Displays APICs within the cluster.


fnvread --> Displays the address and state of switch nodes registered with the fabric.


fnvreadex --> Displays additional information for switch nodes registered with the fabric.


rvread service --> Summarizes the data layer state. The output shows a summary of the data layer state for each service. The shard view shows replicas in ascending order.


rvread service shard --> Displays the data layer state for a service on a specific shard across all replicas.


rvread service shard replica --> Displays the data layer state for a service on a specific shard and replica.


crashsuspecttracker --> Tracks states of a service or data subset that indicate a crash.


dbgtoken--> Generates a token to permit remote SSH access.


version --> Displays the APIC ISO software version.



The output format of the acidiag rvread command is as follows:


(svcID, shardID, replicID) st: lm(t): le: reSt: voGr: cuTerm: lCoTe: lCoIn: veFiSt: veFiEn: lm(t): lastUpdt


Difference between svcID, shardID, and replicID


1. svcID (Service ID):

  • Definition: The svcID represents a logical service or an instance of a particular service running on the APIC.

  • Purpose: Each service running on the APIC is assigned a unique svcID to distinguish it from other services. This allows the system to manage and route service-specific requests accurately.

  • Examples of Services: Configuration management, telemetry, health monitoring, etc.


2. shardID (Shard ID):

  • Definition: A shardID represents a specific partition or subset of data in the distributed database used by APIC.

  • Purpose:

    • APIC uses a distributed database to store fabric configuration and operational state.

    • To ensure scalability and efficient data management, the database is divided into smaller units called shards.

    • Each shard is responsible for a subset of the total data.

  • Relationship to svcID: Each service (svcID) interacts with one or more shards (shardID) to access or manage the data it needs.


3. replicID (Replica ID):

  • Definition: The replicID identifies a specific replica of a shard in the distributed database.

  • Purpose:

    • For redundancy and high availability, each shard is replicated across multiple APIC nodes in the cluster.

    • Each replica of a shard is assigned a unique replicID to distinguish it from other replicas.

  • Replication Mechanism:

    • Replication ensures that if one APIC node fails, other nodes can continue to provide access to the data in that shard.

    • The system uses consensus algorithms like Raft to synchronize replicas.


Interconnection between svcID, shardID, and replicID:


  • Service Interaction: Services (svcID) query or update data stored in the distributed database. Each query or update is routed to the appropriate shard (shardID).

  • Shard and Replica Mapping: Each shard (shardID) has multiple replicas (replicID) distributed across the APIC cluster. When a service requests data, the system retrieves it from one of the available replicas.

  • High Availability: If a replica (replicID) becomes unavailable, the system uses another replica of the same shard to ensure uninterrupted operation of services.
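The svcID → shardID → replicID flow can be illustrated with a toy model (all names and the cluster layout are hypothetical): a request for a service's data is routed to a shard, then served by the first available replica.

```python
# Toy cluster state: (svcID, shardID) -> {replicID: state}.
cluster = {
    (9, 15): {1: "UP", 2: "DOWN", 3: "UP"},
}

def read(svc_id, shard_id):
    """Serve a read for a service from the first healthy replica of its shard."""
    replicas = cluster[(svc_id, shard_id)]
    for replic_id, state in sorted(replicas.items()):
        if state == "UP":
            return replic_id
    raise RuntimeError("no replica available for this shard")

print(read(9, 15))  # → 1 (replica 2 being DOWN does not block the read)
```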


Note :


  1. svcID identifies the service making a request.

  2. The request is routed to the appropriate shardID that contains the data required by the service.

  3. The shardID directs the request to one of its replicIDs, ensuring that the service can access data even if some replicas are down.


Explanation of Fields:


svcID: Represents the service ID. You can find the full list of service IDs by running the command man acidiag on the APIC. Examples of service IDs include:


APIC# man acidiag


Service IDs:

1 - cliD

2 - controller

3 - eventmgr

4 - extXMLApi

5 - policyelem

6 - policymgr

7 - reader

8 - ae

9 - topomgr

10 - observer

11 - dbgr

12 - observerelem

13 - dbgrelem

14 - vmmmgr

15 - nxosmock

16 - bootmgr

17 - appliancedirector

18 - adrelay

19 - ospaagent

20 - vleafelem

21 - dhcpd

22 - scripthandler

23 - idmgr

24 - ospaelem

25 - osh

26 - opflexagent

27 - opflexelem

28 - confelem

29 - vtap

30 - snmpd

31 - opflexp

32 - analytics

33 - policydist

34 - plgnhandler

35 - domainmgr

36 - licensemgr

37 - no service

38 - platformmgr

39 - edmgr


shardID: Indicates the shard number, ranging from 1 to 32.


replicID: The replica ID, which can be 1, 2, or 3.




st (State): The state of the replica. Possible values are:


0 = UNKNOWN

1 = FAILED

2 = DOWN

3 = STATELESS_RECOVERY

4 = RECOVERY

5 = INITIALIZING

6 = UP


lm(t): Indicates the APIC where the shard leader is running.


reSt (Replica State): Shows the replica state, which can be:


LEADER

FOLLOWER

MINORITY

voGr (Vote Granted): Tracks votes granted.


cuTerm (Current Term): The current term of the replica.


lCoTe (Last Committed Term): The last term that was committed.


lColn (Last Committed Index): The last index that was committed.


veFiSt (Version Field Start): The starting version field.


veFiEn (Version Field End): The ending version field.


If the output from the acidiag rvread command is too large, you can count the occurrences of a specific replica state, such as reSt (Replica State).


For example:


admin@apic:~> acidiag rvread | grep reSt | wc -l

369


This output shows there are 369 replicas in total. To count only unhealthy replicas, filter on a specific state instead, for example: acidiag rvread | grep MINORITY | wc -l
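The same tally can be done programmatically on captured rvread output. A hedged sketch, using abbreviated sample lines rather than real APIC output:

```python
from collections import Counter

# Abbreviated sample of `acidiag rvread` lines (not real APIC output).
sample = """\
(9,15,1) st:6 reSt:LEADER voGr:0
(9,15,2) st:6 reSt:FOLLOWER voGr:0
(9,15,3) st:2 reSt:MINORITY voGr:0
"""

# Count replicas per reSt value.
states = Counter(
    line.split("reSt:")[1].split()[0]
    for line in sample.splitlines()
    if "reSt:" in line
)
print(states)  # Counter({'LEADER': 1, 'FOLLOWER': 1, 'MINORITY': 1})
```

A nonzero MINORITY count would point at the shards to investigate first.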



Data States

COMATOSE: 0

NEWLY_BORN: 1

UNKNOWN: 2

DATA_LAYER_DIVERGED: 11

DATA_LAYER_DEGRADED_LEADERSHIP: 12

DATA_LAYER_ENTIRELY_DIVERGED: 111

DATA_LAYER_PARTIALLY_DIVERGED: 112

DATA_LAYER_ENTIRELY_DEGRADED_LEADERSHIP: 121

DATA_LAYER_PARTIALLY_DEGRADED_LEADERSHIP: 122

FULLY_FIT: 255
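For quick reference when reading health output, the codes above can be kept in a simple lookup table. This is just a decoding aid, not an APIC API:

```python
# Data-state codes as listed above.
DATA_STATES = {
    0: "COMATOSE",
    1: "NEWLY_BORN",
    2: "UNKNOWN",
    11: "DATA_LAYER_DIVERGED",
    12: "DATA_LAYER_DEGRADED_LEADERSHIP",
    111: "DATA_LAYER_ENTIRELY_DIVERGED",
    112: "DATA_LAYER_PARTIALLY_DIVERGED",
    121: "DATA_LAYER_ENTIRELY_DEGRADED_LEADERSHIP",
    122: "DATA_LAYER_PARTIALLY_DEGRADED_LEADERSHIP",
    255: "FULLY_FIT",
}


def decode(code: int) -> str:
    return DATA_STATES.get(code, f"unrecognized code {code}")


print(decode(255))  # FULLY_FIT
print(decode(112))  # DATA_LAYER_PARTIALLY_DIVERGED
```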


APIC# acidiag rvread 9 15

(9,15,1) st:6 lm(t):3(2024-01-06T12:29:47.065+00:00) le: reSt:LEADER voGr:0 cuTerm:0x50 lCoTe:0x4f lCoIn:0x78000000001d9864 veFiSt:0x13 veFiEn:0x13 lm(t):3(2024-01-06T12:29:47.053+00:00) stMmt:1 lm(t):0(zeroTime) ReTx:0 lm(t):0(zeroTime) lastUpdt 2024-01-07T04:44:20.873+00:00


APIC# acidiag rvread 9 11

(9,11,1) st:6 lm(t):2(2024-01-06T12:29:24.547+00:00) le: reSt:LEADER voGr:0 cuTerm:0x52 lCoTe:0x51 lCoIn:0x58000000001e1304 veFiSt:0x29 veFiEn:0x29 lm(t):2(2024-01-06T12:29:24.507+00:00) stMmt:1 lm(t):0(zeroTime) lp: clSt:2 lm(t):2(2024-01-06T12:04:38.6


The remaining replicas of these shards are in the FOLLOWER state, indicating that their leaders are not local to the APIC where the command was run.


Login as root


Since service ID 9 is topomgr


systemctl start topomgr

systemctl stop topomgr

systemctl restart topomgr

systemctl status topomgr


Example: APIC1 is in partial diverge state


APIC# acidiag rvread


\- unexpected state;    /-unexpected mutator;


s->  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32lcl


r->123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123lcl


  1


  2


  3


  4


  5


  6


  7


  8


  9


 10


 11             \                           \                         \


 12


 13


 14


 15


Non optimal leader for shards : 11:1,11:16,11:19,11:25,11:28,11:31


Service ID 11 is dbgr, and the leader for these shards is APIC3.


Action Plan:


Stop and then start the dbgr service on all three APICs; APIC1 then returns to the fully-fit state.


acidiag stop dbgr

acidiag start dbgr


Note : Fault F1419 will be generated for the failed DMEs.


APIC SSD REPLACEMENT PROCEDURE



CIMCServer# scope sol 

  

Server /sol # set enabled yes 

  

Server /sol *# set baud-rate 115200 

  

Server /sol *# commit 

  

Server /sol *# connect host 




APIC CPU and Memory


apic# ps aux --sort -%mem

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND

1000     22836  1.3  4.9 11636484 4790212 ?    Ssl  Jan06  14:06 /etc/alternatives/jre_openjdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.ne

ifc       5775  1.6  2.1 2716716 2121416 ?     Ssl  Jan06  17:49 /mgmt//bin/svc_ifc_reader.bin --x

root      7380  1.8  1.2 1980428 1226688 ?     Ssl  Jan06  19:28 /mgmt//bin/nginx.bin -p /data//nginx/

ifc       5766  2.1  1.0 1695524 1006004 ?     Ssl  Jan06  23:04 /mgmt//bin/svc_ifc_policymgr.bin --x

ifc       5765  1.7  1.0 1642268 995828 ?      Ssl  Jan06  19:02 /mgmt//bin/svc_ifc_observer.bin --x




apic# top -o %MEM

top - 05:39:56 up 17:46,  1 user,  load average: 2.70, 2.54, 2.42

Tasks: 681 total,   1 running, 304 sleeping,   0 stopped,   0 zombie

%Cpu(s):  3.2 us,  2.8 sy,  0.0 ni, 94.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

KiB Mem : 97353248 total, 51438976 free, 19963508 used, 25950764 buff/cache

KiB Swap:        0 total,        0 free,        0 used. 76119576 avail Mem 


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                                                                   

22836 1000      20   0   11.1g   4.6g  25616 S   0.0  4.9  14:06.19 java                                                                                                                                                                                      

 5775 ifc       20   0 2716716   2.0g 166900 S   0.0  2.2  17:50.15 svc_ifc_reader.                                                                                                                                                                           

 7380 root      20   0 1980428   1.2g 198468 S   5.9  1.3  19:28.69 nginx.bin                                                                                                                                                                                 

 5766 ifc       20   0 1695524 982.4m 224212 S   0.0  1.0  23:04.94 svc_ifc_policym                                                                                                                                                                           

                   



apic# ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -n30

  PID  PPID CMD                         %MEM %CPU

22836 22834 /etc/alternatives/jre_openj  4.9  1.3

 5775     1 /mgmt//bin/svc_ifc_reader.b  2.1  1.6

 7380     1 /mgmt//bin/nginx.bin -p /da  1.2  1.8

 5766     1 /mgmt//bin/svc_ifc_policymg  1.0  2.1

 5765     1 /mgmt//bin/svc_ifc_observer  1.0  1.7

 1811 32429 java -Xms1g -Xmx2g -XX:+Hea  0.9  0.9

30639 30450 java -Xmx4096m -Djava.secur  0.8  1.5

 5772     1 /mgmt//bin/svc_ifc_eventmgr  0.7  2.0

32227 32226 /etc/alternatives/jre_1.8.0  0.7 21.5

 1563 31801 java -XX:+UseG1GC -XX:MaxGC  0.6  1.1

 5780     1 /mgmt/opt/controller/decoy/  0.6  0.0



Command Output


apic1# acidiag fnvread
     ID   Pod ID   Name     Serial Number   IP Address      Role    State    LastUpdMsgId
------------------------------------------------------
    101   1        leaf-a   FDO23161CZ0     10.0.32.64/32   leaf    active   0
    102   1        leaf-b   FDO23161MNG     10.0.32.65/32   leaf    active   0
    201   1        spine    FDO231113UJ     10.0.32.66/32   spine   active   0
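If you need the fnvread inventory programmatically, its whitespace-separated rows can be parsed into dictionaries. A minimal sketch, using sample rows copied from the output above (the field names are illustrative):

```python
# Sample data rows from `acidiag fnvread` (header stripped).
raw = """\
101 1 leaf-a FDO23161CZ0 10.0.32.64/32 leaf active 0
102 1 leaf-b FDO23161MNG 10.0.32.65/32 leaf active 0
201 1 spine  FDO231113UJ 10.0.32.66/32 spine active 0
"""

FIELDS = ["id", "pod", "name", "serial", "ip", "role", "state", "last_upd"]

# split() collapses runs of spaces, so the ragged columns don't matter.
nodes = [dict(zip(FIELDS, line.split())) for line in raw.splitlines()]

active_leaves = [
    n["name"] for n in nodes if n["role"] == "leaf" and n["state"] == "active"
]
print(active_leaves)  # ['leaf-a', 'leaf-b']
```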



leaf-a# show lldp neighbors
Device ID           Local Intf   Hold-time   Capability   Port ID
3560-x.dc.local     Eth1/1       120         BR           Gi1/0/3
apic1               Eth1/2       120                      eth2-1
spine               Eth1/49      120         BR           Eth1/1
Total entries displayed: 3


List all the EPGs in the fabric:

admin@apic:~> moquery -c fvAEPg | grep dn
dn                   : uni/tn-infra/ap-access/epg-default
dn                   : uni/tn-infra/ap-ave-ctrl/epg-ave-ctrl
dn                   : uni/tn-Sales/ap-eCommerce_AP/epg-App_EPG
dn                   : uni/tn-Sales/ap-eCommerce_AP/epg-DB_EPG
dn                   : uni/tn-Sales/ap-eCommerce_AP/epg-Web_EPG
dn                   : uni/tn-Sales/ap-eCommerce_AP/epg-Backup_EPG
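The dn strings returned by moquery follow a predictable pattern, so they can be split into tenant, application profile, and EPG components. A sketch; the regex assumes the standard uni/tn-…/ap-…/epg-… layout shown above:

```python
import re

# Split an EPG DN like uni/tn-Sales/ap-eCommerce_AP/epg-Web_EPG
# into its (tenant, application profile, EPG) components.
DN_RE = re.compile(r"uni/tn-([^/]+)/ap-([^/]+)/epg-(.+)")


def parse_epg_dn(dn: str):
    m = DN_RE.fullmatch(dn)
    return m.groups() if m else None


print(parse_epg_dn("uni/tn-Sales/ap-eCommerce_AP/epg-Web_EPG"))
# ('Sales', 'eCommerce_AP', 'Web_EPG')
```

This makes it easy to, say, group all EPGs by tenant when auditing a large fabric.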


List all VLANs used anywhere as encap in the fabric:


admin@apic1:~> moquery -c vlanCktEp | grep '^encap' | sort -u
encap : vlan-10
encap : vlan-112
encap : vlan-120



Where do you use vlan-120?

admin@apic1:~>  moquery -c fvIfConn | egrep "dn.*vlan-120]"
dn          : uni/epp/fv-[uni/tn-DC/ap-App/epg-EPG1]/node-102/stpathatt-[n7k2-vpc]/conndef/conn-[vlan-120]-[0.0.0.0]
dn          : uni/epp/fv-[uni/tn-DC/ap-App/epg-EPG1]/node-101/stpathatt-[n7k2-vpc]/conndef/conn-[vlan-120]-[0.0.0.0]
dn          : uni/epp/fv-[uni/tn-DC/ap-App/epg-EPG1]/node-101/stpathatt-[eth1/33]/conndef/conn-[vlan-120]-[0.0.0.0]


ERROR


An FCS error happens when a switch gets a data packet (called a "frame") with a bad checksum. This checksum, found at the end of the frame, is used to check for errors. If it doesn’t match, it usually means there’s a physical problem, like a bad cable or interference.


In cut-through switching, the switch starts sending the frame to the next device before it fully receives it. Since the checksum is at the end, the switch can’t check it before forwarding. To let the receiving device know the frame is bad, the switch recalculates the checksum and intentionally makes it incorrect (reversing or modifying it). This tells the receiving device that the frame is corrupted and should be discarded.



What is Cyclic Redundancy Check (CRC)?


  • Definition:

    • CRC is a method used in Ethernet to check if data frames have been corrupted during transmission.

    • It uses a mathematical formula (polynomial function) to produce a 4-byte (4B) number called the CRC value.


  • How It Works:

    • When a frame is sent, the CRC value is calculated and added to the end of the frame.

    • The receiving switch also calculates the CRC value for the incoming frame.

    • If the calculated CRC doesn't match the one in the frame, an error is detected.


  • Error Detection:

    • Single Bit Errors: CRC can detect any single bit error.

    • Double Bit Errors: CRC can detect many, but not all, double bit errors.

    • Purpose: Ensures that the frame wasn't corrupted while traveling through the network.
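The sender/receiver check described above can be demonstrated with Python's zlib.crc32. (Ethernet's FCS is also a 4-byte CRC-32, though computed over the full frame with different framing; this is just a conceptual sketch.)

```python
import zlib

# Sender computes a CRC-32 over the payload and appends it.
payload = b"example frame payload"
fcs = zlib.crc32(payload)  # 4-byte check value

# Simulate a single flipped bit on the wire.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]

# Receiver recomputes the CRC and compares against the received value.
print(zlib.crc32(payload) == fcs)     # True  -> frame accepted
print(zlib.crc32(corrupted) == fcs)   # False -> CRC error detected
```

The single-bit flip is always caught, matching the guarantee noted above; some multi-bit patterns can slip through, which is why the standards still budget for a small residual error rate.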


  • Causes of CRC Errors:

    • Duplex Mismatch: When devices are set to different duplex modes (one half, one full).

    • Faulty Cables: Damaged or poor-quality cables can introduce errors.

    • Broken Hardware: Defective network components like switches or ports.


  • Expectations:

    • Some CRC errors are normal.

    • Ethernet standards allow for a very low error rate (about 1 bit error in 10^12 bits).


Store-and-Forward vs. Cut-Through Switching


  • Common Features:

    • Both types of switches use the destination MAC address to decide where to send data packets.

    • They learn and remember MAC addresses by looking at the source MAC addresses in incoming packets.


  • Store-and-Forward Switching:

    • Process:

      1. Receives the entire data frame.

      2. Checks the frame for errors using CRC.

      3. If no errors are found, forwards the frame to the destination.

    • Advantages:

      • Ensures that only error-free frames are sent out.

    • Use Case:

      • Common in most traditional networks.


  • Cut-Through Switching:

    • Process:

      1. Starts forwarding the frame as soon as it reads the destination MAC address.

      2. Continues forwarding the frame while still receiving the rest of it.

      3. Checks the CRC after the frame has already been forwarded.

      4. If CRC fails, the frame is marked as corrupted.


    • Advantages:

      • Faster forwarding with lower latency.


    • Use Case:

      • Ideal for high-speed networks where speed is critical.


  • ACI-Specific Behavior:

    • Cut-Through Switching:

      • Used when the incoming (ingress) port is faster or the same speed as the outgoing (egress) port.

    • Store-and-Forward Switching:

      • Used when the incoming port is slower than the outgoing port.
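The ACI mode-selection rule above can be summarized as a tiny helper. The function name and speed units are illustrative, not an ACI API:

```python
def switching_mode(ingress_gbps: float, egress_gbps: float) -> str:
    """ACI rule: cut-through when ingress speed >= egress speed,
    store-and-forward when ingress is slower than egress."""
    if ingress_gbps >= egress_gbps:
        return "cut-through"
    return "store-and-forward"


print(switching_mode(40, 10))  # cut-through
print(switching_mode(10, 40))  # store-and-forward
```

The intuition: a slower egress port must buffer anyway (the frame arrives faster than it can leave), so the switch might as well hold the whole frame and verify the CRC before forwarding.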


Stomping


  • What is Stomping?

    • When a frame has a CRC error in a cut-through switch, the switch can't drop it immediately.

    • Instead, it alters the CRC value to a known bad value, signalling that the frame is corrupted.


  • Impact of Stomping:

    • The bad frame with the incorrect CRC is sent out to all connected devices.

    • Devices receiving the frame recognise the bad CRC and drop the frame.

    • This ensures that corrupted frames don't continue to circulate in the network.
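A sketch of the stomping idea: once a bad frame is already being forwarded, the switch writes a deliberately wrong FCS so every downstream receiver also rejects it. Inverting the CRC bits is one illustrative way to guarantee a mismatch; the actual stomp value used by the hardware may differ:

```python
import zlib


def stomp(fcs: int) -> int:
    """Deliberately corrupt a CRC-32 value (illustrative stomp)."""
    return fcs ^ 0xFFFFFFFF  # flip all 32 bits -> guaranteed mismatch


payload = b"frame that failed its CRC check"
good_fcs = zlib.crc32(payload)
bad_fcs = stomp(good_fcs)

# Any receiver recomputing the CRC now sees a mismatch and drops the frame.
print(zlib.crc32(payload) == bad_fcs)  # False
```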


ACI and CRC: Identifying Faulty Interfaces


  • Leaf Switches:

    • Downlink Port Errors:

      • If a leaf switch shows CRC errors on a downlink port, the issue is likely with:

        • The downlink SFP (Small Form-factor Pluggable) module.

        • Components on the connected external device or network.


  • Spine Switches:

    • Local Port Errors:

      • If a spine switch shows CRC errors, the problem is usually with:

        • The local port or its SFP module.

        • The fiber cable or the neighbor's SFP module.


      • Note: CRC errors from leaf downlinks are not passed to spines if the frame headers are intact and VXLAN is used. If headers are corrupted, the frame is dropped.


  • Fabric Links on Leaf Switches:

    • If a leaf switch shows CRC errors on fabric links, possible issues include:

      • The local fiber or SFP pair.

      • The spine's incoming fiber.

      • The spine's SFP pair.

      • A stomped frame traveling through the fabric.


Troubleshooting Stomping


  • Identify Problematic Interfaces:

    • Look for interfaces showing FCS (Frame Check Sequence) errors on the fabric.

    • FCS errors are local to a port, so the issue is likely with:

      • The fiber cable.

      • The SFP module on either end of the connection.

  • Understand Error Counts:

    • The CRC error count in the show interface command includes both FCS errors and stomped frames.
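Because the CRC counter includes both locally detected FCS errors and stomped frames forwarded from elsewhere in the fabric, the difference between the two counters approximates how many frames arrived already stomped. A hedged sketch of that arithmetic (the function and counter names are illustrative):

```python
def stomped_frames(crc_errors: int, fcs_errors: int) -> int:
    """Estimate stomped frames on an interface:
    CRC counter = local FCS errors + stomped frames, so
    stomped ~= CRC - FCS (floored at zero for counter skew)."""
    return max(crc_errors - fcs_errors, 0)


print(stomped_frames(120, 20))  # 100 frames arrived already stomped
```

A port with high CRC but near-zero FCS is mostly relaying frames stomped upstream; a port where the two counters track each other closely is likely the local fault (cable or SFP).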


  • Steps to Troubleshoot:


    1. Check Interfaces:

      • Use network commands to view which ports have high FCS or CRC errors.

    2. Inspect Physical Connections:

      • Examine and replace faulty cables or connectors.

    3. Verify SFP Modules:

      • Ensure that SFPs on both ends are functioning correctly.

    4. Monitor and Test:

      • After making changes, monitor the error counters to see if they decrease.

    5. Replace Hardware if Needed:

      • If errors persist, consider replacing the problematic switch port or SFP module.


-------------------------------------------------------------------------------------------------------------------------------


When upgrading a Cisco ACI switch from a 32-bit image to a 64-bit image, the switch will reboot twice during the process. This happens because the system needs to transition from the 32-bit architecture to the 64-bit architecture while preserving the correct state.


Why Does It Reboot Twice?

  1. First Reboot:

    • The switch boots into a temporary transition mode to prepare for the upgrade from 32-bit to 64-bit.

    • It performs necessary system changes to support the new architecture.

  2. Second Reboot:

    • The switch boots into the full 64-bit image, fully transitioning to the new architecture.


Additional Considerations


  • Fabric Impact: Since the switch undergoes multiple reboots, it will temporarily lose connectivity. If upgrading in a production environment, ensure proper redundancy and upgrade switches in a staggered fashion.

  • Data Migration: Certain configurations and stored logs might not carry over between 32-bit and 64-bit images. It's recommended to back up configurations before performing the upgrade.

  • Upgrade Best Practices:

    • Always follow Cisco’s official upgrade guides for your specific ACI version.

    • Ensure compatibility between the APIC version and the new switch firmware.

    • Check upgrade paths—you may need an intermediate step before moving directly to the 64-bit image.


