Introduction
Cisco Application Centric Infrastructure (ACI) Release 4.0(1):
Introduces Cisco Mini ACI Fabric for small-scale deployments.
Components:
APIC Cluster:
1 Physical APIC
2 Virtual APICs (vAPIC) running on virtual machines.
Benefits:
Reduced Physical Footprint: Smaller space requirements.
Cost-Effective: Lower initial investment.
Ideal For:
Colocation facilities.
Single-room data centers.
Scenarios with limited rack space or budget constraints.
Cisco Mini ACI Guidelines and Limitations
Supported Configurations
On-Premises Site:
Supported with Cisco ACI Multi-Site.
Unsupported Features
Cisco ACI Multi-Pod
Cisco ACI Virtual Pod
Remote Leaf Switches
Installing/Running Apps on APIC:
Not supported on Cisco Application Policy Infrastructure Controller (APIC).
Time Synchronization
Requirement:
Physical APIC and ESXi servers hosting virtual APICs must be time-synced using the same NTP server.
Ensures smooth upgrades and cluster convergence.
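The same-NTP-server requirement can be spot-checked from any management host before deployment. A minimal sketch, assuming UDP port 123 is reachable; the server name and the query helper are illustrative, not part of the Cisco tooling:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def build_sntp_request() -> bytes:
    """48-byte SNTP request: LI=0, VN=3, Mode=3 (client)."""
    return bytes([0x1B]) + bytes(47)

def query_offset(server: str, timeout: float = 2.0) -> float:
    """Rough offset (seconds) between local time and the NTP server.
    The physical APIC and every ESXi host should show a near-zero
    offset against the *same* server."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_sntp_request(), (server, 123))
        data, _ = s.recvfrom(48)
    # Transmit timestamp: seconds field sits at bytes 40-43 of the reply.
    tx_secs = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    return tx_secs - time.time()

# Example (needs network access):
# print(query_offset("pool.ntp.org"))
```

Running this from each host against the fabric's NTP server and comparing the offsets gives a quick sanity check before starting an upgrade.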
Connectivity Requirements
Direct Connection:
ESXi hosts hosting virtual APICs must connect directly to Cisco ACI leaf switches.
No Intermediate Switches:
Intermediate switches cannot be used, including UCS Fabric Interconnects as of Cisco APIC Release 6.0(2).
Upgrade Limitations
Policy-Based Upgrades:
Not supported from releases prior to Cisco APIC Release 6.0(2).
Deployment Steps (Release 6.0(2) and Later):
Deploy One Physical APIC via BootX GUI.
Add All Switches after the physical APIC is fully functional.
Deploy Virtual APICs using the GUI.
Configure BootX Deployment for virtual APICs.
Physical APIC
Role and Configuration
Installation:
Must install and configure a physical APIC first.
Functions:
Discovers Spine and Leaf Switches in the fabric.
Discovers and Configures Virtual APICs during their installation.
Facilitates Cluster Upgrades: Manages upgrades for the APIC cluster.
Dependency:
Fabric Discovery Control: Physical APIC controls fabric discovery.
Availability Impact:
If the physical APIC is unavailable, physical fabric changes (e.g., adding or removing switches) cannot be made until it is recovered.
Virtual APIC
Bootstrapping and Cluster Formation
Initial Boot Process:
Discovery: Connects to the physical APIC through the Cisco ACI infra VLAN.
Certificate Signing: Uses a pass-phrase from the physical APIC to get its certificate signed.
Cluster Formation: Exchanges discovery messages and forms the APIC cluster.
Full Functionality: Achieved after data layer synchronization with the physical APIC.
Management Requirements
In-Band Management Constraints:
Subnet Restrictions:
Node Management IP Subnet and Application EPG IP Subnet cannot be the same when using in-band management with a virtual APIC.
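The subnet restriction above is easy to validate in automation before pushing an in-band management configuration. A small sketch using the standard `ipaddress` module; the sample subnets are illustrative:

```python
import ipaddress

def inband_subnets_valid(node_mgmt_subnet: str, app_epg_subnet: str) -> bool:
    """With in-band management and a virtual APIC, the node management
    IP subnet and the application EPG IP subnet cannot be the same."""
    mgmt = ipaddress.ip_network(node_mgmt_subnet, strict=False)
    epg = ipaddress.ip_network(app_epg_subnet, strict=False)
    return mgmt != epg  # use `not mgmt.overlaps(epg)` for a stricter check

print(inband_subnets_valid("10.0.10.0/24", "10.0.10.0/24"))  # False
print(inband_subnets_valid("10.0.10.0/24", "10.0.20.0/24"))  # True
```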
Key Takeaways
Cisco Mini ACI Fabric:
Tailored for small-scale deployments with reduced costs and physical footprint.
Combines one physical APIC with two virtual APICs for efficient management.
Guidelines and Limitations:
Supports on-premises deployments with Multi-Site but excludes Multi-Pod and Virtual Pod features.
Requires strict time synchronization and direct connectivity to leaf switches.
Upgrade paths are limited based on APIC release versions.
APIC Roles:
Physical APIC: Essential for initial setup, discovery, and cluster management.
Virtual APICs: Supplement the physical APIC but rely on it for critical operations.
Deployment Best Practices:
Ensure all APICs are time-synced.
Connect ESXi hosts directly to leaf switches without intermediaries.
Follow the correct deployment sequence, especially for releases 6.0(2) and later.
Installing and Configuring Virtual APIC (vAPIC)
Steps to Install and Configure vAPIC
Configure Leaf Switch Ports for Infra VLAN Trunking
Enable infra VLAN on ports connected to ESXi hosts.
Use the Attachable Access Entity Profile (AEP) to enable infra VLAN via the APIC GUI.
Set Up VMware vSwitch or Distributed Virtual Switch (DVS) and ESXi Host
Ensure VMware ESXi hosts are running version 7.0 or later.
Configure VMware standard vSwitch or DVS for infra VLAN trunking.
Disable the Discovery Protocol in the Distributed Switch settings.
Allow VLAN 0 for infra VLAN trunking in the Distributed Port Group.
Obtain Passphrase from Physical APIC
Log in to the physical APIC.
Navigate to System > System Settings > APIC Passphrase.
Copy the current passphrase (expires after 60 minutes).
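Because the passphrase is only valid for 60 minutes, deployment scripts should track when it was fetched. A trivial sketch of that bookkeeping:

```python
from datetime import datetime, timedelta

PASSPHRASE_LIFETIME = timedelta(minutes=60)

def passphrase_expired(obtained_at: datetime, now: datetime) -> bool:
    """The APIC passphrase expires 60 minutes after it is issued;
    fetch a fresh one if deployment slips past that window."""
    return now - obtained_at >= PASSPHRASE_LIFETIME
```

In practice, record the timestamp when you copy the passphrase from the GUI and re-fetch it if this check trips before the vAPIC deployment is submitted.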
Install and Configure Each Virtual APIC Server
Deployment Options:
Use the provided OVA image for easy deployment via VMware vCenter.
Alternatively, install vAPIC directly on the ESXi host using the physical APIC's ISO file.
Resource Requirements for vAPIC VM:
Memory: 96 GB
CPU: 16 cores
Storage:
HDD 1: 300 GB (standard HDD)
HDD 2: 100 GB (SSD)
Network Interfaces:
NIC 1: Out of Band
NIC 2: ACI Infra VLAN trunking
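A pre-flight script can compare a proposed VM spec against these minimums before deployment. The dict layout is illustrative; the storage minimums match the Select-storage figures given later in this guide (100 GB SSD, 300 GB standard):

```python
# Minimum vAPIC VM resources from the section above; confirm the exact
# figures against the release notes for your APIC version.
REQUIRED = {"memory_gb": 96, "cpu_cores": 16, "hdd_gb": 300,
            "ssd_gb": 100, "nics": 2}

def meets_requirements(vm: dict) -> list:
    """Return the requirement keys the proposed VM spec fails to meet."""
    return [key for key, minimum in REQUIRED.items()
            if vm.get(key, 0) < minimum]

spec = {"memory_gb": 96, "cpu_cores": 16, "hdd_gb": 300,
        "ssd_gb": 100, "nics": 2}
print(meets_requirements(spec))  # []
```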
Virtual APIC Installation Prerequisites
Deploy and Run ACI Fabric with Physical APIC
Ensure the ACI fabric and physical APIC are operational before adding virtual APICs.
Physical APIC handles fabric discovery and vAPIC registration.
Time Synchronization
Configure both physical and ESXi hosts to use the same NTP server.
Prevents time mismatches and synchronization issues during upgrades or restarts.
Direct Connectivity
Connect ESXi hosts directly to Cisco ACI leaf switches.
Do not use intermediate switches (e.g., UCS fabric interconnects).
Storage and Networking Configuration
Ensure combined storage of at least 600 GB on ESXi hosts.
Configure virtual switches for VLAN trunking and disable discovery protocols.
Virtual Switch Configuration
Choose Virtual Switch Type
Use either a standard vSwitch or a Distributed Virtual Switch (DVS).
Configure Uplinks for Infra VLAN Trunking
Set up uplinks to handle the ACI infra VLAN and any additional data VLANs.
Set Up Port Groups
Create port groups configured for VLAN trunking with the infra VLAN ID.
Disable the Discovery Protocol to allow direct LLDP packet reception from APIC.
VMM Domain Integration (Optional)
If integrating with VMM domains, configure port groups through the APIC GUI.
Ensure VLAN trunking is correctly set in the vSphere client.
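The virtual-switch checklist above lends itself to an automated audit of exported port-group settings. A sketch under stated assumptions — the dict shape and the infra VLAN ID 3967 (a common default) are illustrative:

```python
def portgroup_issues(pg: dict) -> list:
    """Audit a port-group description against the checklist above:
    VLAN trunking with the infra VLAN, VLAN 0 allowed so LLDP frames
    from the APIC pass through, and the discovery protocol disabled."""
    issues = []
    if pg.get("vlan_mode") != "trunk":
        issues.append("port group must use VLAN trunking")
    if pg.get("infra_vlan") not in pg.get("allowed_vlans", []):
        issues.append("infra VLAN missing from the allowed VLAN list")
    if 0 not in pg.get("allowed_vlans", []):
        issues.append("VLAN 0 must be allowed for LLDP")
    if pg.get("discovery_protocol_enabled", True):
        issues.append("disable the discovery protocol on the switch")
    return issues

good = {"vlan_mode": "trunk", "infra_vlan": 3967,
        "allowed_vlans": [0, 3967], "discovery_protocol_enabled": False}
print(portgroup_issues(good))  # []
```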
Deployment Procedure
Access Physical APIC GUI
Log in to the physical APIC interface.
Navigate to Controllers
Go to System > Controllers.
Add Virtual APIC
Select Quick Start > Add Virtual APIC.
Click Add Virtual APIC.
Specify Leaf Switches and Ports
Assign leaf switches and corresponding ports for vAPIC connections.
Provide vCenter Information
Enter vCenter details if deploying via VMware vCenter.
Include credentials, host IP, and datacenter information.
Submit and Deploy
Click Submit to initiate the virtual APIC deployment.
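The GUI workflow above can also be driven through the APIC REST API. Only the standard, well-documented login step (`POST /api/aaaLogin.json`) is sketched here; the hostname is illustrative, and the subsequent vAPIC-deployment calls are not shown because this guide describes them only via the GUI:

```python
import json

APIC_HOST = "physical-apic.example.com"  # illustrative hostname

def login_request(username: str, password: str):
    """Build the standard APIC REST login call (POST /api/aaaLogin.json).
    Returns the target URL and the JSON body to POST."""
    url = f"https://{APIC_HOST}/api/aaaLogin.json"
    payload = {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    return url, json.dumps(payload)

url, body = login_request("admin", "secret")
# A real client would POST `body` to `url` (e.g. with `requests`) and
# reuse the returned authentication token for subsequent API calls.
```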
Obtaining Passphrase from Physical APIC
Log In to Physical APIC
Access the physical APIC interface.
Navigate to APIC Passphrase
Go to System > System Settings > APIC Passphrase.
Copy Passphrase
Retrieve the current passphrase for vAPIC deployment.
Note: Passphrase expires after 60 minutes; obtain a new one if needed.
Guidelines and Limitations
Supported Configurations
On-Premises Sites with Cisco ACI Multi-Site support.
Unsupported Features
Cisco ACI Multi-Pod
Cisco ACI Virtual Pod
Remote Leaf Switches
Running Apps on APIC
Upgrade Constraints
Policy-Based Upgrades not supported before APIC Release 6.0(2).
For Release 6.0(2) and later:
Deploy one physical APIC first.
Add switches and then deploy virtual APICs via the GUI.
Operational Dependencies
Physical APIC is essential for fabric discovery and cannot be bypassed.
If the physical APIC fails, fabric changes are restricted until recovery.
Important Notes
vMotion Not Supported:
Current release does not support vMotion for vAPIC virtual machines.
In-Band Management Constraints:
Node Management IP Subnet and Application EPG IP Subnet must be different when using in-band management with vAPIC.
Storage and Connectivity:
Ensure sufficient storage and direct connectivity for seamless vAPIC operation.
Key Takeaways
Cisco Mini ACI Fabric:
Ideal for small-scale deployments with reduced costs and space.
Combines one physical APIC with two virtual APICs for efficient management.
Installation Steps:
Follow a structured four-step process: configure VLAN trunking, set up VMware vSwitch/DVS, obtain passphrase, and deploy vAPIC servers.
Prerequisites:
Ensure ACI fabric is operational, maintain time synchronization, and establish direct connectivity between ESXi hosts and leaf switches.
Configuration Best Practices:
Use Cisco-provided OVA for easy deployment.
Maintain proper virtual switch settings and port group configurations.
Regularly update and manage passphrases to prevent synchronization issues.
Limitations:
Certain advanced features and configurations are not supported in Mini ACI Fabric.
Physical APIC is crucial for fabric management and must remain available.
Deploying Virtual APIC Using an OVA in Cisco Mini ACI Fabric
Prerequisites:
Ensure the ESXi host and virtual machines are configured as per the Virtual APIC Installation Prerequisites.
Verify the ACI fabric and physical APIC are operational.
Make sure the physical APIC and ESXi hosts are time-synced using the same NTP server.
Installation Steps
Download the Virtual APIC OVA Image
Visit the Cisco Software Download page: Cisco APIC Software.
Select APIC Software.
Choose the desired release version.
Download the OVA image to a location accessible by your VMware vCenter server.
Log in to VMware vCenter
Access your VMware vCenter server where the virtual APIC will be deployed.
Deploy the OVF Template
Right-click the ESXi host where you want to deploy the virtual APIC.
Select Deploy OVF Template from the context menu.
Select the OVA File
Browse and choose the virtual Cisco APIC OVA file you downloaded.
Click Next to proceed.
Choose Deployment Location
Select the datacenter or folder where you want to install the virtual APIC.
Click Next.
Review Deployment Details
Check the details of the deployment.
Click Next to continue.
Configure Storage Settings
In the Select storage step, click Advanced.
SSD Storage Disk Group:
Choose a high-performance SSD with at least 100 GB free space.
All Other Disks Disk Group:
Select a storage device with at least 300 GB free space for the main virtual APIC image.
Click Next to continue.
Set Up Networks
In the Select networks step, assign the necessary networks for:
Out-of-Band (OOB) Management
Infra VLAN Trunking
Click Next to proceed.
Customize the Template
Provide the following fabric details:
Controller ID: Assign 2 or 3 (1 is reserved for the physical APIC).
TEP Pool: Enter the pool of Tunnel Endpoint (TEP) addresses.
TEP Netmask: Specify the netmask for TEP addresses.
VLAN ID: Enter the VLAN ID for the Infra network.
IPv4 OOB IP Address: Provide the OOB management IP.
IPv4 OOB Network Mask: Enter the netmask for the OOB network.
IPv4 OOB Gateway: Specify the gateway for the OOB network.
Passphrase: Enter the passphrase obtained from the physical APIC.
Important: The passphrase expires after 60 minutes. If deployment is delayed:
Obtain a new passphrase from the physical APIC.
Update the passphrase in the VM's vApp Options after wiping the VM using the following steps:
Log in to the virtual APIC VM using the "rescue-user" account.
Run
acidiag touch clean
acidiag touch setup
Power down the VM.
Edit Settings of the VM in vCenter.
Go to VM Options > vApp properties and enter the new passphrase.
Power on the VM.
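Before clicking through the template customization, the required fields can be collected and sanity-checked in a script. The key names below are illustrative placeholders, not the OVA's actual vApp property identifiers:

```python
def missing_ova_properties(props: dict) -> list:
    """Check that every field the Customize-the-Template step asks for is
    present, and that the controller ID is valid for Mini ACI."""
    required = ["controller_id", "tep_pool", "tep_netmask", "infra_vlan_id",
                "oob_ip", "oob_netmask", "oob_gateway", "passphrase"]
    problems = [k for k in required if not props.get(k)]
    if props.get("controller_id") not in (2, 3):
        problems.append("controller_id must be 2 or 3 (1 is the physical APIC)")
    return problems

props = {"controller_id": 2, "tep_pool": "10.0.0.0/16",
         "tep_netmask": "255.255.0.0", "infra_vlan_id": 3967,
         "oob_ip": "192.0.2.12", "oob_netmask": "255.255.255.0",
         "oob_gateway": "192.0.2.1", "passphrase": "from-physical-apic"}
print(missing_ova_properties(props))  # []
```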
Finish Deployment
Review all settings and click Finish to start deploying the virtual APIC.
Start the Virtual APIC VM
Once deployment completes, power on the virtual APIC VM.
The virtual APIC will communicate with the physical APIC and join the cluster.
Note: Initial startup and synchronization may take several minutes.
Additional Notes
vMotion Not Supported:
Currently, vMotion cannot be used with virtual APIC VMs.
In-Band Management Constraints:
Node Management IP Subnet and Application EPG IP Subnet must differ when using in-band management with vAPIC.
Virtual Switch Configuration:
Choose between Standard vSwitch or Distributed Virtual Switch (DVS).
Configure uplinks for Infra VLAN trunking and any additional data VLANs.
Disable the Discovery Protocol to allow direct LLDP packet reception from APIC.
Create port groups with VLAN trunking for the Infra VLAN ID.
Physical APIC Requirements:
Must be installed first and remain available for fabric management.
vAPICs rely on the physical APIC for fabric discovery and cluster management.
Key Takeaways
Cisco Mini ACI Fabric:
Ideal for small deployments with limited space and budget.
Combines one physical APIC with two virtual APICs for efficient management.
Installation Process:
Involves downloading the OVA, deploying it via vCenter, configuring storage and networks, and finalizing with passphrase and cluster synchronization.
Best Practices:
Ensure time synchronization between physical and virtual APICs.
Directly connect ESXi hosts to leaf switches without intermediaries.
Follow the correct deployment sequence, especially for newer APIC releases.
Limitations:
Certain advanced features are not supported in Mini ACI Fabric.
Physical APIC is crucial for cluster operations and fabric changes.
Deploying Virtual APIC Directly in ESXi
Overview
Deploying a Virtual APIC (vAPIC) directly on an ESXi host involves installing the APIC software using an ISO file. This setup is part of the Cisco Mini ACI Fabric for smaller deployments.
Before You Begin
Prerequisites:
Ensure the ESXi host and switches are configured as per the Virtual APIC Installation Prerequisites.
Verify that the ACI fabric and physical APIC are up and running.
Make sure the physical APIC and ESXi hosts are time-synced using the same NTP server to prevent synchronization issues.
Procedure
Download the APIC ISO Image
Visit the Cisco Software Download page.
Select APIC Software.
Choose the desired release version.
Download the APIC ISO image to a location accessible by your ESXi server.
Copy the APIC ISO to ESXi Host
Transfer the downloaded ISO image to your ESXi host.
Log in to VMware ESXi Host
Use the vSphere client to access your VMware ESXi host.
Create a Virtual Machine (VM) for vAPIC
In the vSphere client, create a new VM with the required hardware specifications as outlined in the Virtual APIC Installation Prerequisites.
Configure the VM with the APIC ISO
Set the APIC ISO image as the boot device for the newly created VM.
Power on the VM to start the installation.
The installation will proceed similarly to a physical APIC setup.
After installation, the VM will automatically power down.
Power On the VM
Start the VM again to begin the initial configuration.
Provide Fabric Information During Initial Boot
When the vAPIC VM boots up for the first time, it will request fabric details to complete the setup:
Fabric Name: Enter the name of your ACI fabric.
Fabric ID: Provide a unique identifier for your fabric.
Number of Active Controllers: Enter 1, since only a single POD is supported in Mini ACI fabric.
Standby Controller: Select NO, as you are configuring an active controller.
APIC-X: Select NO, as vAPIC is not an APIC-X.
Controller ID: Assign 2 or 3, depending on whether this is the second or third controller. Controller ID 1 is reserved for the physical APIC.
Controller Name: Provide a hostname for the vAPIC.
TEP Pool: Specify the pool of Tunnel Endpoint (TEP) addresses.
VLAN ID: Enter the VLAN ID used for the Infra network.
Out-of-Band (OOB) Management Information:
IPv4 OOB IP Address: Enter the management IP address.
IPv4 OOB Network Mask: Provide the network mask for OOB management.
IPv4 OOB Gateway: Specify the gateway for the OOB network.
Passphrase: Input the passphrase obtained from the physical APIC.
Important Notes:
The passphrase expires after 60 minutes. If deployment is delayed, obtain a new passphrase from the physical APIC.
To update the passphrase after deployment:
Log in to the vAPIC VM using the "rescue-user" account.
Run the following commands:
acidiag touch clean
acidiag touch setup
Power down the VM.
Edit Settings in vCenter:
Go to VM Options > vApp properties.
Enter the new passphrase.
Power on the VM.
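The initial-boot answers listed above can likewise be sanity-checked before they are typed in. A sketch with an illustrative dict layout:

```python
def setup_answer_issues(answers: dict) -> list:
    """Check the initial-boot answers for a Mini ACI vAPIC against the
    guidance above: one active controller (single pod), not a standby
    controller, not APIC-X, and controller ID 2 or 3."""
    issues = []
    if answers.get("active_controllers") != 1:
        issues.append("enter 1 active controller (single pod only)")
    if answers.get("standby"):
        issues.append("answer NO to standby controller")
    if answers.get("apic_x"):
        issues.append("answer NO to APIC-X")
    if answers.get("controller_id") not in (2, 3):
        issues.append("controller ID must be 2 or 3 (1 is the physical APIC)")
    return issues

good = {"active_controllers": 1, "standby": False,
        "apic_x": False, "controller_id": 3}
print(setup_answer_issues(good))  # []
```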
Confirm and Deploy
Review all the entered details.
Click Finish to start the deployment process.
The virtual APIC will communicate with the physical APIC and join the cluster.
Note: The initial startup and synchronization may take several minutes.
Additional Notes
vMotion Not Supported:
Currently, vMotion cannot be used with vAPIC virtual machines.
In-Band Management Constraints:
Node Management IP Subnet and Application EPG IP Subnet must be different when using in-band management with vAPIC.
Virtual Switch Configuration:
Choose Virtual Switch Type: Use either a Standard vSwitch or a Distributed Virtual Switch (DVS).
Configure Uplinks: Set up uplinks for Infra VLAN trunking and any additional data VLANs.
Disable Discovery Protocol: Disable the Discovery Protocol so that LLDP packets from the APIC are received directly.
Create Port Groups: Configure port groups with VLAN trunking for the Infra VLAN ID.
Physical APIC Requirements:
Must be installed first and remain available for fabric management.
vAPICs rely on the physical APIC for fabric discovery and cluster management.
Key Takeaways
Cisco Mini ACI Fabric:
Designed for small deployments with limited space and budget.
Utilizes one physical APIC and two virtual APICs for efficient management.
Installation Steps:
Involve downloading the OVA/ISO, deploying it via vCenter or directly on ESXi, configuring storage and networks, and finalizing with passphrase and cluster synchronization.
Best Practices:
Ensure time synchronization between physical and virtual APICs.
Directly connect ESXi hosts to leaf switches without using intermediate switches.
Follow the correct deployment sequence, especially for newer APIC releases.
Limitations:
Some advanced features are not supported in Mini ACI Fabric.
Physical APIC is essential for cluster operations and fabric changes; its availability is critical.
Upgrading or Downgrading Virtual APIC in Cisco Mini ACI Fabric
Overview
Virtual APICs (vAPIC): Manage network policies in a Cisco Mini ACI Fabric.
Physical APIC: Central controller that manages vAPICs.
Upgrading or Downgrading vAPIC
Upgrade Process:
Do Not Upgrade vAPIC Directly:
Instead, upgrade the physical APIC.
The physical APIC will automatically update the virtual APICs.
Use the Same ISO:
Utilize the same Cisco ACI ISO image used for upgrading a fully physical APIC cluster.
Downgrade Limitations:
No Downgrades to Pre-4.0(1):
You cannot downgrade to any version earlier than Cisco APIC 4.0(1).
Upgrading Mini ACI Fabric to Release 6.0(2) or Later
Limitations After Upgrade:
No Downgrades Post-6.0(2):
Once upgraded to release 6.0(2), downgrading to versions before 6.0(2) is not supported.
Supported Cluster Size:
Only one physical APIC and two virtual APICs on an ESXi host are supported.
Upgrade Restrictions:
Policy-based upgrades are not supported for Mini ACI before release 6.0(2).
VMM Domain Specific:
After upgrading to 6.0(2), only CDP adjacency is supported for VMM domains enabled on the DVS where the vAPICs are deployed.
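The downgrade limits above amount to two floors on the target version. A small helper that encodes them, with an illustrative version-string parser (APIC versions are written like `4.0(1)` or `6.0(2)`; any trailing patch letter is ignored here):

```python
import re

def parse_apic_version(version: str) -> tuple:
    """Parse an APIC version string such as '4.0(1)' or '6.0(2)' into a
    comparable (major, minor, maintenance) tuple."""
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)[a-z]?\)", version.strip())
    if not m:
        raise ValueError(f"unrecognized APIC version: {version!r}")
    return tuple(int(g) for g in m.groups())

def downgrade_allowed(current: str, target: str) -> bool:
    """Apply the limits above: never below 4.0(1), and once the fabric is
    on 6.0(2) or later, never below 6.0(2)."""
    cur, tgt = parse_apic_version(current), parse_apic_version(target)
    if tgt < (4, 0, 1):
        return False
    if cur >= (6, 0, 2) and tgt < (6, 0, 2):
        return False
    return tgt <= cur

print(downgrade_allowed("6.0(2)", "5.2(8)"))  # False
print(downgrade_allowed("5.2(8)", "4.0(1)"))  # True
```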
Before You Begin
Check Prerequisites:
Ensure all Virtual APIC Installation Prerequisites are met.
Verify the ACI fabric and physical APIC are operational.
Confirm that the physical APIC and ESXi hosts are time-synced using the same NTP server.
Upgrade Procedure
Verify Cluster Health:
Use commands like acidiag avread, show version, and show controller to ensure the cluster is healthy.
Reduce Cluster Size:
In the Cisco APIC GUI, reduce the cluster size from 3 to 1.
Confirm the reduction using the acidiag avread command.
Decommission Virtual APIC Nodes:
Decommission Node 3:
Remove node 3 from the virtual APIC cluster.
Wait a few minutes for the process to complete.
Refer to the Cisco APIC Cluster Management guide for detailed steps.
Decommission Node 2:
Remove node 2 from the virtual APIC cluster.
Wait a few minutes for the process to complete.
Refer to the Cisco APIC Cluster Management guide for detailed steps.
Delete Old Virtual APICs:
In the VMware vCenter GUI, delete the old virtual APIC instances (nodes 2 and 3) running the older APIC software.
Reject and Delete Unauthorized APICs (If Needed):
In the Cisco APIC GUI, navigate to System > Controllers.
Expand Controllers > apic_controller_name > Cluster as Seen by Node.
If any APICs are listed under Unauthorized Controllers, perform reject and delete actions.
Confirm Cluster Size:
Run acidiag avread and show controller commands to ensure the cluster size is now 1 and the cluster is healthy.
Upgrade Physical APIC:
Upgrade the physical APIC to release 6.0(2).
Wait and verify the upgrade using acidiag avread, show version, and show controller commands.
Upgrade Fabric Nodes:
Upgrade the leaf and spine switches to release 6.0(2).
Ensure all fabric nodes and the physical APIC are upgraded before adding other APICs.
Deploy New Virtual APICs:
After upgrading, deploy new virtual APICs on nodes 2 and 3.
Deploying Virtual APIC Directly in ESXi: Step-by-Step Guide
Before You Begin
Ensure Prerequisites:
ESXi Host and Switches: Configure as per Virtual APIC Installation Prerequisites.
ACI Fabric and Physical APIC: Must be up and running.
Time Synchronization: Physical APIC and ESXi hosts should use the same NTP server to prevent sync issues.
Procedure
Download the APIC ISO Image
Visit the Cisco Software Download page.
Click on APIC Software.
Select the desired release version.
Download the APIC ISO image to a location accessible by your ESXi server.
Copy the APIC ISO to ESXi Host
Transfer the downloaded ISO image to your ESXi host.
Log in to VMware ESXi Host
Use the vSphere client to access your VMware ESXi host.
Create a Virtual Machine (VM) for vAPIC
In the vSphere client, create a new VM.
Ensure the VM meets the hardware requirements listed in Virtual APIC Installation Prerequisites.
Configure the VM with the APIC ISO
Set the APIC ISO image as the boot device for the VM.
Power on the VM to start the installation.
The installation process will mimic that of a physical APIC.
After installation, the VM will power down automatically.
Power On the VM
Start the VM again to begin the initial configuration.
Provide Fabric Information During Initial Boot
When the vAPIC VM starts, enter the following details:
Fabric Name: Your ACI fabric’s name.
Fabric ID: Unique identifier for your fabric.
Number of Active Controllers: Enter 1 (only single POD supported).
Standby Controller: Select NO (configuring an active controller).
APIC-X: Select NO (vAPIC is not APIC-X).
Controller ID: Assign 2 or 3 (1 is reserved for physical APIC).
Controller Name: Hostname for the vAPIC.
TEP Pool: Pool of Tunnel Endpoint (TEP) addresses.
VLAN ID: VLAN ID for the Infra network.
Out-of-Band (OOB) Management:
IPv4 OOB IP Address
IPv4 OOB Network Mask
IPv4 OOB Gateway
Passphrase: Enter the passphrase obtained from the physical APIC.
Important:
Passphrase Expiration: The passphrase expires after 60 minutes.
If Deployment Delayed:
Obtain a new passphrase from the physical APIC.
Update the passphrase in the VM’s vApp Options after wiping the VM:
Log in to the vAPIC VM using the "rescue-user" account.
Run the following commands:
acidiag touch clean
acidiag touch setup
Power down the VM.
Edit Settings in vCenter:
Go to VM Options > vApp properties.
Enter the new passphrase.
Power on the VM.
Confirm and Deploy
Review all entered details.
Click Finish to start deploying the virtual APIC.
The virtual APIC will communicate with the physical APIC and join the cluster.
Note: Initial startup and synchronization may take several minutes.
Additional Configuration
Check Deployment Status:
In the vCenter GUI, go to the Monitor tab.
Ensure the deployment shows as successful.
Enable CDP on Interface Policy Groups:
Access Cisco APIC GUI:
Navigate to Fabric > Access Policies > Interfaces > VPC Interface.
Configure Settings:
Link Aggregation Type: Select Virtual Port Channel.
CDP Policy: Choose system-cdp-enabled.
Enable CDP and Disable LLDP:
Based on VMM Domain Configuration:
A. If VMM Domain is NOT Configured:
Use VMware vCenter GUI:
Enable CDP and disable LLDP.
Allow VLAN 0: Add VLAN 0 to the port group to permit LLDP packets.
B. If VMM Domain is Configured:
Modify vSwitch Policy in Cisco APIC GUI:
Go to Virtual Networking > VMware > VMM Domain > vSwitch Policy.
Enable CDP.
Adjust LLDP Policy:
Navigate to Fabric > Access Policies > Policies > Interface > LLDP Interface.
Set Received State to Enabled and Transmit State to Disabled.
Verify CDP Adjacency:
Run the command on the leaf switch:
sw1-leaf1# show cdp nei interf eth1/37
Ensure devices are correctly listed and connected.
Disable LLDP on vSwitch Policy:
Modify the vSwitch policy to disable LLDP.
Allow VLAN 0: Add VLAN 0 to the port group to allow LLDP packets.
Final Steps
Power On Virtual APIC Instances 2 and 3
Start the virtual APIC VMs.
Confirmation Message: Look for “System pre-configured successfully.”
Access Bootstrapping:
Visit: https://<VAPIC_INSTANCE_IP_ADDRESS> to complete setup.
Add Virtual APICs to the Cluster
In Cisco APIC GUI:
Navigate to System > Controllers.
Expand and select Controller 1 > Cluster as Seen by Node.
Add Nodes 2 and 3:
Follow the Add Node procedure from the Cisco APIC Cluster Management chapter in the Getting Started Guide.
Wait for Cluster to Fully Fit:
Allow some time for the cluster to stabilize and confirm all nodes are active.
Verify Cluster Health
Use Commands:
acidiag avread
show version
show controller
Ensure:
The Mini ACI cluster is healthy and fully operational.
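The health check above can be automated against the cluster view returned by the APIC (for example, a REST query of the cluster-node class, or parsed `acidiag avread` output). The dict shape below is illustrative; only the "fully-fit" criterion and the 1 physical + 2 virtual node count come from this guide:

```python
def cluster_fully_fit(nodes: list) -> bool:
    """Report whether a Mini ACI cluster is complete and healthy:
    exactly nodes 1-3 present (one physical APIC, two vAPICs) and
    every node reporting 'fully-fit'."""
    expected_ids = {1, 2, 3}
    ids = {n["id"] for n in nodes}
    return ids == expected_ids and all(
        n.get("health") == "fully-fit" for n in nodes)

nodes = [{"id": 1, "health": "fully-fit"},
         {"id": 2, "health": "fully-fit"},
         {"id": 3, "health": "fully-fit"}]
print(cluster_fully_fit(nodes))  # True
```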
Important Notes
vMotion Not Supported:
Currently, vMotion cannot be used with virtual APIC virtual machines.
Time Synchronization:
Ensure all APICs and ESXi hosts are time-synced to prevent synchronization issues.
Direct Connectivity:
ESXi hosts must connect directly to leaf switches without using intermediate switches.
Key Takeaways
Upgrade via Physical APIC:
Always upgrade the physical APIC to update virtual APICs.
No Downgrades to Older Versions:
Downgrading to versions before Cisco APIC 4.0(1) is not supported.
Limited Cluster Size:
Mini ACI Fabric supports only one physical APIC and two virtual APICs.
Follow Proper Procedure:
Adhere to the step-by-step upgrade process to ensure a smooth transition.
Ensure Time Sync and Direct Connections:
Proper time synchronization and direct connectivity between ESXi hosts and leaf switches are crucial for successful upgrades.
Configuration Essentials:
CDP Enabled: Essential for device discovery and communication.
LLDP Disabled (if needed): Prevents conflicts with CDP.
Be Aware of Limitations:
Understand the limitations post-upgrade, such as unsupported downgrade paths and specific feature constraints.