
Tuesday, 28 April 2026

Leaf Node ID Swap in Cisco ACI: Risks, Precautions, and Steps

Cisco ACI Leaf Node ID Swap Steps When Leaves Are Part of vPC

Step 0 – Preconditions
Confirm maintenance window is approved. Ensure alternate connectivity or downtime is acceptable. Make sure you have console or OOB access to both leaf switches.

Step 1 – Drain Traffic and Clear Endpoints
Shut down or migrate all server-facing interfaces connected to the vPC pair.
From APIC, navigate to Fabric → Inventory → Pod → Node → Leaf → Endpoints.
Verify endpoint count is zero on both leaves.
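The same endpoint check can be scripted against the APIC REST API instead of clicking through the GUI. This is a minimal sketch: fvCEp is ACI's learned-endpoint class and its child fvRsCEpToPathEp carries the path an endpoint was learned on, but the hostname and the client-side filtering approach are illustrative assumptions to adapt to your fabric.

```python
# Minimal sketch (assumptions: APIC hostname, client-side path filtering).
# fvCEp = learned endpoint; its child fvRsCEpToPathEp holds the learned path.

def endpoint_query(apic: str) -> str:
    """GET URL listing all learned endpoints together with their paths."""
    return (f"https://{apic}/api/node/class/fvCEp.json"
            "?rsp-subtree=children&rsp-subtree-class=fvRsCEpToPathEp")

def endpoints_on_nodes(imdata: list, node_a: int, node_b: int) -> list:
    """Return learned-path DNs that sit on either leaf or on their vPC paths."""
    tokens = (f"node-{node_a}", f"node-{node_b}",
              f"protpaths-{node_a}-{node_b}")  # vPC paths use protpaths-A-B
    hits = []
    for obj in imdata:
        for child in obj.get("fvCEp", {}).get("children", []):
            tdn = (child.get("fvRsCEpToPathEp", {})
                        .get("attributes", {}).get("tDn", ""))
            if any(t in tdn for t in tokens):
                hits.append(tdn)
    return hits
```

Proceed to Step 2 only when the filter returns an empty list for both leaves.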

Step 2 – Remove vPC and Port-Channel Configuration
Delete vPC protection group policies.
Delete all vPC port-channels.
Remove interface policy associations.
Remove all static EPG bindings that reference the vPC or either leaf.
At this stage, the leaves must have no access policy dependencies.

Step 3 – Remove L3Out (If Leaves Are Border Leaves)
If the vPC pair is used for L3Out, remove both leaves from the L3Out logical node profile and logical interface profile.
Confirm external routing is stable via remaining border leaves.

Step 4 – Decommission First Leaf (Leaf A)
In APIC, go to Fabric → Inventory → Fabric Membership.
Select Leaf A and perform Decommission.
Wait until the status shows Decommissioned.
Do not power off yet.

Step 5 – Clean Leaf A
Connect to Leaf A using console or OOB.
Run acidiag touch clean and then reload the switch.
This removes old node ID, certificates, and fabric identity.

Step 6 – Decommission Second Leaf (Leaf B)
In APIC, again go to Fabric → Inventory → Fabric Membership.
Select Leaf B and perform Decommission.
Wait until the status shows Decommissioned.

Step 7 – Clean Leaf B
Connect to Leaf B using console or OOB.
Run acidiag touch clean and reload the switch.
Both leaves are now clean and discovery-ready.

Step 8 – Re-add Leaf A with New Node ID
Power on Leaf A only.
Ensure fabric uplinks to spines are connected.
From APIC Fabric Membership, approve the switch and assign the new desired node ID.
Wait until Leaf A is fully discovered and stable.

Step 9 – Re-add Leaf B with New Node ID
Power on Leaf B.
From APIC Fabric Membership, approve it and assign the other node ID.
Wait until Leaf B is fully discovered and stable.

Step 10 – Rebuild vPC Configuration
After both leaves are healthy, recreate the vPC protection group.
Recreate vPC port-channels and interface policies.
Reapply static EPG bindings to the vPC.
Do not rebuild vPC if only one leaf is active.

Step 11 – Validation
Verify fabric health is green.
Ensure no vPC, access, or infra faults exist.
Confirm port-channels are up on both leaves.

Step 12 – Restore Traffic
Enable server-facing interfaces.
Bring servers or upstream devices back online.
Verify endpoint learning and confirm no MAC flapping or faults.
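The validation in Steps 11–12 can also be automated through the APIC fault API. A minimal sketch, assuming the APIC hostname is a placeholder: faultInst is the fault-instance class, and severity values include "critical", "major", "minor", and "warning".

```python
# Minimal sketch (assumption: APIC hostname is a placeholder).
# faultInst is the APIC fault-instance class; one query per severity level.
from urllib.parse import quote

def fault_query(apic: str, severity: str) -> str:
    """GET URL returning all faults at one severity level."""
    flt = f'eq(faultInst.severity,"{severity}")'
    return (f"https://{apic}/api/node/class/faultInst.json"
            f"?query-target-filter={quote(flt)}")

def fabric_is_clean(reply: dict) -> bool:
    """True when the APIC reply reports zero matching faults."""
    return int(reply.get("totalCount", "0")) == 0
```

Run the query for "critical" and "major" before re-enabling server-facing interfaces.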

Final Rule

Never attempt a live node ID swap. Always decommission, clean, and re-add both vPC peer leaves in a controlled sequence.

Precautions

Swapping node IDs between Cisco ACI leaf switches is a sensitive operation, especially when the leaves are configured as a vPC pair. Unlike traditional networks, Cisco ACI tightly binds policies, forwarding state, and infrastructure objects to node IDs, making a node ID swap a planned maintenance activity, not a live change. When vPC is involved, the risk multiplies because both leaves act as a single logical endpoint for servers and network devices.

This article explains the critical precautions you must follow when performing a Cisco ACI leaf node ID swap in a vPC environment, based on real production experience and Cisco‑accepted operational practices.

Why Node ID Swap Is Risky in vPC‑Based ACI Fabrics

In Cisco ACI, a leaf’s node ID is not just an identifier; it is embedded into multiple internal constructs such as vPC identifiers, static EPG bindings, endpoint tables, and forwarding databases. In a vPC pair, both leaves jointly provide forwarding for a single logical port‑channel. Swapping node IDs without proper preparation can cause MAC flapping, endpoint blackholing, broken port‑channels, and fabric faults.

There is no supported in‑place node ID change in Cisco ACI. The only supported method to swap node IDs is to decommission, clean, and re‑add the leaf switches with the desired node IDs.

Precaution 1: Treat the vPC Pair as a Single Failure Domain

The most important rule is to treat both vPC peers as a single unit, even though they are two physical switches. Never attempt a node ID swap on only one vPC peer while the other peer is actively forwarding traffic. ACI vPC forwarding relies on consistent node information across both leaves. Any mismatch can result in unpredictable traffic loss.

Before starting, ensure:

  • All connected servers or upstream devices are drained or shut down.
  • No single‑homed devices depend on the vPC pair.
  • Maintenance is scheduled during a proper change window.

Precaution 2: Ensure Zero Active Endpoints on Both Leaves

A node ID swap must never be performed while endpoints are active. In ACI, endpoints can be learned dynamically through traffic, and their state is tied to the leaf node ID. If endpoints remain on either vPC peer, swapping node IDs will cause immediate disruption.

From APIC, verify that both leaves show zero endpoints before proceeding. If endpoints are present, migrate workloads, shut down interfaces, or disconnect cables until endpoint learning is cleared.

Precaution 3: Remove vPC and Port‑Channel Policies Before Decommissioning

ACI does not automatically clean up vPC policies during decommissioning. All vPC‑related constructs must be removed manually. This includes:

  • vPC protection group
  • Port‑channel policies
  • Interface policy associations
  • Static EPG bindings referencing the vPC

Leaving these objects in place can block decommissioning or result in orphaned configuration that causes faults after the swap. A clean policy removal ensures that the fabric does not retain references to the old node IDs.

Precaution 4: If the vPC Pair Is Also a Border Leaf, Remove L3Out First

When a vPC pair is serving as a border leaf for L3Out, the risk is even higher. External routing protocols such as BGP or OSPF depend on stable leaf identities. Before any node ID swap:

  • Remove the leaves from all L3Out logical node profiles.
  • Ensure routing is fully operational on alternate border leaves.
  • Validate external reachability before continuing.

Failure to do this can result in complete north‑south traffic outages.

Precaution 5: Always Clean Both Leaves Using acidiag

After decommissioning each leaf, it is mandatory to run:

acidiag touch clean
reload

on both vPC peers. Cleaning only one switch is a common and dangerous mistake. If one leaf still retains fabric identity or certificates, the fabric may encounter node ID conflicts, discovery failures, or inconsistent vPC behavior when the switches are re‑added.

Cleaning ensures that the switch boots in a discovery‑ready state with no residual ACI identity.

Precaution 6: Re‑Add Leaves Sequentially, Not in Parallel

When re‑adding switches with swapped node IDs, never power up or approve both leaves at the same time. Always follow a controlled order:

  1. Bring up the first leaf and assign its new node ID.
  2. Wait for full fabric stability and health.
  3. Bring up the second leaf and assign its new node ID.

This approach avoids node ID collisions, partial vPC instantiation, and confusing APIC fault scenarios.

Precaution 7: Rebuild vPC Only After Both Leaves Are Fully Healthy

Do not recreate vPC configurations until both leaves are fully discovered, healthy, and visible in the fabric. Building vPC with only one peer active leads to port‑channel inconsistencies and deployment failures.

Once both leaves are stable:

  • Recreate vPC protection groups.
  • Recreate port‑channels.
  • Reapply static EPG bindings.
  • Validate that both leaves appear in all bindings.

Only after this should server ports or network devices be reconnected.

Precaution 8: Validate vPC Health Before Allowing Traffic

Before reintroducing traffic, perform strict validation:

  • No vPC‑related faults in APIC.
  • Port‑channels show operational status.
  • No access, fabric, or infra faults.
  • Leaf interfaces are up and error‑free.

Once validation is complete, gradually restore server or upstream connectivity and observe endpoint learning behavior.

Common Mistakes to Avoid

The most common mistakes during node ID swap in vPC environments include attempting a live swap, forgetting to remove vPC policies, cleaning only one leaf, or restoring traffic before full validation. Each of these can result in extended outages and complex recovery procedures.

Final Takeaway

A Cisco ACI leaf node ID swap in a vPC environment is a full teardown and rebuild operation, not a minor change. Success depends on treating both leaves as a single unit, removing all dependencies, cleaning both switches, and performing a controlled re‑addition process. When executed correctly, the swap is safe and fully supported, but shortcuts almost always lead to problems.

One‑Line Summary

In Cisco ACI, swapping node IDs on vPC‑connected leaf switches requires full vPC teardown, clean decommissioning of both leaves, and a controlled rebuild to avoid traffic loss and fabric instability.

Tuesday, 5 August 2025

Concept of vPC in ACI


In Cisco ACI, a Virtual Port Channel (vPC) enables two separate leaf switches to present a unified port channel to a connected endpoint—such as a server, firewall, or another switch that supports link aggregation protocols like LACP.

In this setup, two ACI leaf nodes (e.g., Leaf201 and Leaf202) act as vPC peers, forming a logical construct known as a vPC domain. One of these peers is elected as the primary, while the other assumes the secondary role.




ACI’s MCT-Based Architecture

Unlike traditional vPC implementations that rely on a dedicated peer-link, ACI leverages the fabric itself to manage synchronization and control-plane communication. This architecture is referred to as Multichassis EtherChannel Trunk (MCT).

🔧 Key Characteristics:

  • No physical peer-link is required between Leaf201 and Leaf202.
  • Instead, the ACI fabric handles all peer communication and synchronization.
  • ZMQ (Zero Message Queue) replaces traditional CFS (Cisco Fabric Services) for messaging between vPC peers.

How Peer Communication Works in ACI

  • ZMQ, a high-performance messaging library using TCP, is embedded as libzmq on each switch.
  • Applications that require peer communication (like the vPC manager) use this library to exchange messages.

🔄 Peer Reachability Mechanism:

  • The vPC manager subscribes to routing updates via URIB.
  • When IS-IS discovers a route to the peer (e.g., Leaf202 sees Leaf201), URIB notifies the vPC manager.
  • The manager then attempts to establish a ZMQ socket with the peer.
  • If the route is withdrawn (e.g., due to link failure), the vPC manager is notified and the MCT link is brought down accordingly.

Upgrade Best Practices with vPC

To ensure high availability during fabric upgrades, it's recommended to divide switches into at least two upgrade groups. For example:

  • Group A: Leaf201, Leaf203, Spine101
  • Group B: Leaf202, Leaf204, Spine102

This strategy ensures that at least one vPC peer remains active during the upgrade, preventing service disruption for connected endpoints.


Glossary

ACI: Application Centric Infrastructure
vPC: Virtual Port Channel
MCT: Multichassis EtherChannel Trunk
ZMQ: Zero Message Queue
URIB: Unicast Routing Information Base
IS-IS: Intermediate System to Intermediate System
LACP: Link Aggregation Control Protocol


vPC Design Options

Option 1 – vPC with the SAME leaf interfaces across two leaves, with combined profiles

Option 2 – vPC with the SAME leaf interfaces across two leaves, with individual profiles

Option 3 – vPC with DIFFERENT leaf interfaces across two leaves, with individual profiles

Monday, 4 August 2025

Complete Steps to Create vPC in Cisco ACI (via APIC GUI)

 Understanding vPC in Cisco ACI: A Modern Approach to High Availability

In the evolving landscape of data center networking, Virtual Port Channel (vPC) stands out as a cornerstone of high availability and link redundancy. While traditional NX-OS environments rely on CLI-driven configurations, Cisco ACI reimagines vPC through a policy-driven, intent-based model that aligns with the fabric’s overarching design philosophy.

Unlike legacy setups, ACI abstracts physical connectivity into logical constructs, allowing administrators to define vPC behavior through interface policy groups, switch profiles, and attachable access entity profiles (AAEPs). This not only simplifies deployment but also ensures consistency across the fabric.

At its core, a vPC in ACI enables two leaf switches to present a unified uplink to a downstream device—be it a server, firewall, or load balancer—without relying on spanning tree protocols. The result is active-active forwarding, improved bandwidth utilization, and seamless failover.

In this guide, we’ll walk through the step-by-step configuration of vPC in Cisco ACI, demystifying each component and highlighting best practices to ensure a robust and scalable deployment.

Note:- In Cisco ACI, a Fabric Extender (FEX) can be integrated using a port channel in a straight-through topology, where each FEX connects directly to a leaf switch. While vPCs can be established between hosts and the FEX for redundancy and load balancing, the FEX itself does not support vPC connectivity to multiple leaf switches.

 Complete Steps to Create vPC in Cisco ACI (via APIC GUI)

Step 1: Leaf Onboarding (One-by-One)

🔍 Monitor Discovery in APIC

  1. Log in to the APIC GUI
  2. Navigate to:
    Fabric → Inventory → Fabric Membership → Nodes Pending Registration
  3. Wait for Leaf101 to appear
    • You’ll see its Serial Number
    • Node Role: Leaf
    • Status: Blank / Not Registered

📝 Register Leaf101

  1. Right-click on Leaf101’s serial number
  2. Click Register
  3. In the registration window, enter:
    • Node ID: 101
    • Node Name: Leaf101
    • Click Register
    • Wait for it to appear in Registered Nodes

📝 Register Leaf102

  1. Repeat the same steps for Leaf102:
    • Wait for it to appear in Nodes Pending Registration
    • Right-click → Register
    • Enter:
      • Node ID: 102
      • Node Name: Leaf102
    • Click Register
    • Wait for it to appear in Registered Nodes

🔢 Step-by-Step ACI Configuration Flow

2. VLAN Pool (VLAN 113)

  • Navigate to:
    Fabric → Access Policies → Pools → Right-click on VLAN and click Create VLAN Pool
  • Create VLAN Pool:
    • Name: VLAN_113_Pool
    • Allocation Mode: Static
    • Click + under Encap Blocks
      • Range: 113 – 113
      • Allocation Mode: Static
    • Click OK → Submit
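The GUI steps above can also be expressed as a REST payload. A minimal sketch, assuming the DN convention "vlanns-[name]-static" (verify it on your release): fvnsVlanInstP is the VLAN pool class and fvnsEncapBlk is one encap range.

```python
# Minimal sketch of the VLAN pool REST payload (DN convention is an
# assumption). fvnsVlanInstP = VLAN pool; fvnsEncapBlk = one encap block.

def vlan_pool_payload(name: str, start: int, end: int) -> dict:
    return {
        "fvnsVlanInstP": {
            "attributes": {
                "dn": f"uni/infra/vlanns-[{name}]-static",
                "name": name,
                "allocMode": "static",
            },
            "children": [{
                "fvnsEncapBlk": {
                    "attributes": {
                        "from": f"vlan-{start}",
                        "to": f"vlan-{end}",
                        "allocMode": "static",
                    }
                }
            }],
        }
    }
```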

3. Domain (Physical Domain)

  • Go to:
    Fabric → Access Policies → Physical and External Domains → Right-click Physical Domains and select Create Physical Domain
  • Create Physical Domain:
    • Name: PhysDom_VLAN113
    • VLAN Pool: VLAN_113_Pool
    • Click Submit

4. AEP (Attachable Access Entity Profile)

  • Navigate to:
    Fabric → Access Policies → Policies → Global → Right-click Attachable Access Entity Profiles and select Create Attachable Access Entity Profile
  • Create AEP:
    • Name: AEP_VLAN113
    • Click + under Domains and associate Domain: PhysDom_VLAN113
    • Click Update → Next → Finish

5. Interface Policy Group (vPC)

  • Go to:
    Fabric → Access Policies → Interface → Leaf Interfaces → Policy Groups → Right-click VPC Interface and select Create VPC Interface Policy Group
  • Create VPC Interface Policy Group:
    • Name:  vPC_LF101_LF102_1_1
    • AEP: AEP_VLAN113
    • Port Channel Policy: system-lacp-Active
    • Link Level Policy: system-link-level-XG-Auto
  • Click Next > Finish

6. Create vPC Protection Group

  • Go to:
    Fabric → Access Policies → Policies → Switch
  • Right-click on Virtual Port Channel default and create the explicit protection group:
    • Name: VPC_101_102
    • ID: 10
    • VPC Domain Policy: default
    • Switch 1: Leaf101
    • Switch 2: Leaf102

This step ensures the vPC behavior is defined at the switch policy level.
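For reference, the protection group created above corresponds to a small REST payload. This is a minimal sketch using the standard class names fabricExplicitGEp and fabricNodePEp; the DN convention "uni/fabric/protpol/expgep-&lt;name&gt;" is an assumption to verify against your APIC.

```python
# Minimal sketch of the vPC explicit protection group payload
# (fabricExplicitGEp with one fabricNodePEp child per peer leaf;
# the DN convention is an assumption).

def vpc_protection_group(name: str, group_id: int,
                         node_a: int, node_b: int) -> dict:
    return {
        "fabricExplicitGEp": {
            "attributes": {
                "dn": f"uni/fabric/protpol/expgep-{name}",
                "name": name,
                "id": str(group_id),  # logical pair ID shared by both peers
            },
            "children": [
                {"fabricNodePEp": {"attributes": {"id": str(node_a)}}},
                {"fabricNodePEp": {"attributes": {"id": str(node_b)}}},
            ],
        }
    }
```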

7. Interface Profile

  • Navigate to:
    Fabric → Access Policies → Interface → Leaf Interface -> Profiles
  • Right click on the interface profile and click Create Interface Profile:
    • Name: IntProf_Leaf101_102
  • Click + under Interface Selector:
    • Name: Eth1_4
    • Interface ID: 1/4
    • Policy Group: vPC_LF101_LF102_1_1
  • Click OK → Submit

 

8. Switch Profile

  • Go to:
    Fabric → Access Policies → Switches → Profiles
  • Right Click on Profile and click Create Leaf Profile:
    • Name: LeafProf_101_102
  • Click + under Leaf Selector:
    • Name: Leaf101_102
    • Node Block: From 101 to 102
  • Click Update -> Next
  • Attach Interface Profile:
    • IntProf_Leaf101_102

9. Create Tenant

  • Navigate to:
    Tenants
  • Click Add Tenant
    • Name: Tenant_WebApp
  • Click Submit

10. Create VRF

  • Navigate to:
    Tenants
  • Click Networking -> VRF -> Right click on VRF -> click Create VRF
    • Name: WebApp_VRF
    • Uncheck Create A Bridge Domain
  • Click Finish

11 Create BD

  • Navigate to:
    Tenants
  • Click Networking -> Bridge Domain -> Right click on Bridge Domain-> click Create Bridge Domain
    • Name: WebApp_BD
    • VRF: WebApp_VRF
    • Click Next
    • Click + under Subnet
      • Gateway IP: 10.1.1.1/24
      • Check “Make this IP address Primary”
      • Scope: check “Advertised Externally”
  • Click OK → Next → Finish

 

12. Create Application Profile (AP)

  • Inside Tenant_WebApp, go to:
    Application Profiles
  • Right Click on Application Profile  and Create Application Profile:
    • Name: WebApp_AP
  • Click Submit

13. Create Endpoint Group (EPG)

  • Inside WebApp_AP, go to:
    EPGs
  • Right Click on Application EPG and click Create Application EPG:
    • Name: WebApp_EPG
    • Bridge Domain: WebApp_BD
    • Click Finish
  • Right Click on WebApp_EPG and click ADD Physical Domain Association:
    • Domain Association: PhysDom_VLAN113
    • Click Submit

14. Create Contract (Allow TCP Port 80)

  • Go to:
    Tenant_WebApp → Contracts -> Standard
  • Right Click on Standard -> Click Create Contract:
    • Name: Allow_HTTP
  • Click + under Subjects:
    • Name: HTTP_Subject
    • Click + under Filters, then + under Name to create a new filter:
      • Name: HTTP_Filter
      • Click + under Entries:
        • Name: HTTP_Entry
        • EtherType: IP
        • IP Protocol: tcp
        • Destination Port: From http – To http
      • Click Update → Submit
    • Click Update → OK → Submit
  • Provide or consume the contract on the EPGs as needed

15. Static Binding of EPG to Port

  • Go to Tenant_WebApp → Application Profiles → WebApp_AP → Application EPGs → WebApp_EPG
  • Right-click WebApp_EPG → Click Deploy Static EPG on PC, VPC, or Interface
    • Path Type: Virtual Port Channel
    • Path: vPC_LF101_LF102_1_1 (covering Leaf101 eth1/4 and Leaf102 eth1/4)
    • Mode: Trunk
    • Encapsulation: vlan-113
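Under the hood, a static vPC binding is an fvRsPathAtt object whose target DN uses the "protpaths-A-B" form shared by both peers. A minimal sketch using the names from the steps above; the DN layout should be verified against your APIC.

```python
# Minimal sketch: static vPC binding as an fvRsPathAtt payload.
# "protpaths-A-B" is the shared vPC path; mode "regular" = trunk (tagged).

def vpc_static_binding(tenant: str, ap: str, epg: str, pod: int,
                       node_a: int, node_b: int, pg: str, vlan: int) -> dict:
    tdn = f"topology/pod-{pod}/protpaths-{node_a}-{node_b}/pathep-[{pg}]"
    return {
        "fvRsPathAtt": {
            "attributes": {
                "dn": f"uni/tn-{tenant}/ap-{ap}/epg-{epg}/rspathAtt-[{tdn}]",
                "tDn": tdn,
                "encap": f"vlan-{vlan}",
                "mode": "regular",  # trunk mode in the GUI
            }
        }
    }
```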

 

Static Binding Deployment Options in ACI – Immediate vs On-Demand


When you statically bind an EPG to a port (interface), you’ll see a Deployment Immediacy setting. This determines when the configuration is pushed to the fabric.

1. Immediate

  • Definition: The configuration is deployed to the switch as soon as you save it.
  • Use Case: Use this when the endpoint is already connected or expected to connect soon.
  • Behavior: The policy is applied to the interface regardless of whether an endpoint is present.

2. On-Demand

  • Definition: The configuration is deployed only when an endpoint is detected on the interface.
  • Use Case: Useful for reducing unnecessary configuration on interfaces until needed.
  • Behavior: The policy is not pushed until the APIC sees a MAC address or IP on that port.

📌 Where You Set This

When creating a static binding in the APIC GUI:

  • Navigate to the EPG → Static Ports
  • Add a static port binding
  • Choose the Deployment Immediacy: Immediate or On-Demand

Best Practices

  • Use Immediate for known, always-on devices (like servers).
  • Use On-Demand for dynamic environments (like user laptops or VMs that move).
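In the object model, Deployment Immediacy is the instrImedcy attribute on the static-path object (fvRsPathAtt); the GUI's "On Demand" choice is stored as the value "lazy". A minimal sketch of that mapping:

```python
# Minimal sketch: GUI Deployment Immediacy -> fvRsPathAtt.instrImedcy.
# "On Demand" is stored as "lazy" in the object model.

GUI_TO_API = {"Immediate": "immediate", "On Demand": "lazy"}

def with_immediacy(binding: dict, gui_choice: str) -> dict:
    """Stamp a static-path payload with the chosen deployment immediacy."""
    binding["fvRsPathAtt"]["attributes"]["instrImedcy"] = GUI_TO_API[gui_choice]
    return binding

binding = {"fvRsPathAtt": {"attributes": {"encap": "vlan-113"}}}
binding = with_immediacy(binding, "On Demand")
```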

 

Sunday, 3 August 2025

Forward Error Correction (FEC) in Cisco ACI

In Cisco ACI, Forward Error Correction (FEC) is a mechanism used to improve the reliability of high-speed data transmission across physical links, especially in environments using 25G, 40G, 100G, or 400G interfaces.

🔍 What Is Forward Error Correction?

FEC is a technique where the sender adds redundant data (parity bits) to each transmission. If some bits are corrupted during transit, the receiver can detect and correct those errors without needing a retransmission. Think of it like sending a puzzle with extra pieces so the receiver can still complete it even if a few pieces go missing.

🧠 How FEC Works in Cisco ACI

In ACI, FEC is negotiated between switches and endpoints during auto-negotiation. The devices advertise their supported FEC modes and agree on the best one. Common FEC modes include:

  • FC-FEC (Firecode FEC): Used for 25G links.
  • RS-FEC (Reed-Solomon FEC): Used for 25G, 100G, and 400G links.
  • CL91-RS-FEC and IEEE-RS-FEC: Advanced versions for higher speeds.
  • AUTO-FEC: Automatically selects the best FEC mode based on link capabilities.

⚙️ Why It Matters

FEC is especially important in Cisco ACI because:

  • High-speed links (like 25G or 100G) are more prone to bit errors.
  • Breakout ports (e.g., 4x25G from a 100G port) often require FEC to maintain link stability.
  • Copper DAC cables used in short-distance connections rely on FEC to compensate for signal degradation.

Use Cases

  • Ensuring error-free transmission over high-speed links.
  • Supporting auto-negotiation on breakout ports.
  • Enhancing link reliability without increasing latency or requiring retransmissions.
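In the policy model, FEC is carried on the link-level policy object (fabricHIfPol) via its fecMode attribute. A minimal sketch; the mode tokens below follow APIC naming conventions but are assumptions to confirm for your release.

```python
# Minimal sketch: FEC selection via the link-level policy (fabricHIfPol).
# The fecMode token list is an assumption - verify on your APIC release.

FEC_MODES = {"inherit", "auto-fec", "cl74-fc-fec", "cl91-rs-fec",
             "ieee-rs-fec", "disable-fec"}

def link_level_policy(name: str, speed: str, fec: str) -> dict:
    if fec not in FEC_MODES:
        raise ValueError(f"unknown FEC mode: {fec}")
    return {
        "fabricHIfPol": {
            "attributes": {
                "dn": f"uni/infra/hintfpol-{name}",
                "name": name,
                "speed": speed,
                "fecMode": fec,
            }
        }
    }
```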

 

Symmetric hashing in Cisco ACI

 

🔄 Symmetric Hashing in Cisco ACI: A Traffic Balancing Philosophy

Imagine a highway with multiple lanes, and cars (data packets) trying to reach their destination. Normally, each car chooses a lane based on its starting point and destination. But what if the return journey picks a different lane? That’s what happens with asymmetric hashing — the forward and reverse paths of a data flow may travel through different physical links.

In Cisco ACI, symmetric hashing is like a rule that says: “If you go out through lane 3, you must come back through lane 3.” It ensures that both directions of a traffic flow — from source to destination and back — follow the same physical path within a port channel.

This matters a lot when you're dealing with devices like firewalls, load balancers, or any system that tracks sessions. If traffic enters through one link and exits through another, it can confuse these devices, leading to dropped packets or broken connections.


Symmetric hashing is not supported on the following switches:
  • Cisco Nexus 93128TX
  • Cisco Nexus 9372PX
  • Cisco Nexus 9372PX-E
  • Cisco Nexus 9372TX
  • Cisco Nexus 9372TX-E
  • Cisco Nexus 9396PX
  • Cisco Nexus 9396TX

🧠 Why Cisco ACI Made It Optional

Cisco ACI’s default behavior is asymmetric — it spreads traffic across links based on a hash of various packet fields (IP, MAC, ports). This works well for general load balancing. But when precision and consistency are needed, ACI gives you the option to enable symmetric hashing in the port-channel policy.

Once enabled, you can choose the hashing algorithm — like using only IP addresses or including Layer 4 ports — to fine-tune how traffic is distributed.

Use Cases That Benefit

  • Firewall clusters that expect consistent ingress/egress paths.
  • Load balancers that rely on session stickiness.
  • Troubleshooting scenarios where symmetric paths simplify packet tracing.
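In the object model, symmetric hashing is enabled through the port-channel policy (lacpLagPol) by adding an extra control token to its ctrl list. A minimal sketch; the token name "symmetric-hash" and the default control set are assumptions to verify on your APIC release.

```python
# Minimal sketch: enabling symmetric hashing in the port-channel policy
# (lacpLagPol). The "symmetric-hash" ctrl token is an assumption.

def lacp_policy(name: str, symmetric: bool) -> dict:
    ctrl = ["fast-sel-hot-stdby", "graceful-conv", "susp-individual"]
    if symmetric:
        ctrl.append("symmetric-hash")  # assumed token for symmetric hashing
    return {
        "lacpLagPol": {
            "attributes": {
                "dn": f"uni/infra/lacplagp-{name}",
                "name": name,
                "mode": "active",
                "ctrl": ",".join(ctrl),
            }
        }
    }
```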

 

Difference between “Treat as Virtual IP Address” and “Make this IP Address Primary” in Cisco ACI

 


🧠 Cisco ACI Demystified: “Treat as Virtual IP Address” vs “Make this IP Address Primary”

In the world of Cisco ACI, Bridge Domains (BDs) are the backbone of Layer 2 networking. But when configuring subnets within a BD, two deceptively similar options often confuse engineers:

  •  Make this IP Address Primary
  • 🌐 Treat as Virtual IP Address

Let’s break down what each of these means, when to use them, and how they impact your ACI fabric.


🔹 What is “Make this IP Address Primary”?

This option is used to define the default gateway for endpoints within the Bridge Domain.

Key Characteristics:

  • Only one primary IP per BD.
  • Used for routing traffic between subnets or to external networks.
  • Responds to ARP requests from endpoints.
  • Can be advertised externally if route advertisement is enabled.

📌 When to Use:

  • In single-site ACI deployments.
  • When you want the fabric to act as the default gateway for endpoints.
  • For standard BD configurations where no multi-site or stretched fabric is involved.

🔹 What is “Treat as Virtual IP Address”?

This option is designed for multi-site or stretched fabric deployments where you want a consistent gateway IP and MAC address across multiple locations.

🌐 Key Characteristics:

  • Requires a Virtual MAC address.
  • Enables Common Pervasive Gateway (CPG) functionality.
  • Ensures seamless endpoint mobility across sites.
  • Can coexist with a primary IP in the same BD.

📌 When to Use:

  • In multi-pod or multi-site ACI environments.
  • When you need Layer 3 gateway consistency across data centers.
  • For active-active data center designs.

🔁 Side-by-Side Comparison

Feature                    | Make this IP Primary | Treat as Virtual IP Address
---------------------------+----------------------+-------------------------------
Default Gateway Role       | Yes                  | Yes (in multi-site)
Number per BD              | One                  | Multiple (with virtual MAC)
Requires Virtual MAC       | No                   | Yes
Use Case                   | Single-site routing  | Multi-site gateway consistency
Supports Endpoint Mobility | Limited              | Seamless
Route Advertisement        | Yes (if enabled)     | Yes (if enabled)


🧪 Real-World Example

Imagine you have two data centers—DC1 and DC2—connected via ACI Multi-Site. You want VMs to move between them without changing their default gateway.

  • You’d configure the same subnet in both sites.
  • Use “Treat as Virtual IP Address” with a shared virtual MAC.
  • This ensures the gateway IP and MAC remain consistent, avoiding disruptions.
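Both checkboxes map to flags on the BD subnet object (fvSubnet): "Make this IP Address Primary" sets preferred="yes" and "Treat as Virtual IP Address" sets virtual="yes". A minimal sketch with illustrative tenant/BD names:

```python
# Minimal sketch: BD subnet flags on fvSubnet (names are illustrative).
# preferred = "Make this IP Address Primary"; virtual = "Treat as Virtual
# IP Address"; scope "public" = "Advertised Externally".

def bd_subnet(tenant: str, bd: str, gw_cidr: str,
              primary: bool = False, virtual: bool = False,
              advertise: bool = False) -> dict:
    return {
        "fvSubnet": {
            "attributes": {
                "dn": f"uni/tn-{tenant}/BD-{bd}/subnet-[{gw_cidr}]",
                "ip": gw_cidr,
                "preferred": "yes" if primary else "no",
                "virtual": "yes" if virtual else "no",
                "scope": "public" if advertise else "private",
            }
        }
    }
```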

🧩 Final Thoughts

Both options serve critical but distinct purposes. Choosing the right one depends on your ACI topology and traffic flow requirements. For most single-site deployments, “Make this IP Address Primary” is sufficient. But for advanced, distributed environments, “Treat as Virtual IP Address” is your go-to for seamless mobility and high availability.

 

Sunday, 20 July 2025

Cisco ACI – Port Channel (eth1/4 & eth1/5) Trunk Configuration for VLAN 420

 

Cisco ACI – Port Channel (eth1/4 & eth1/5) Trunk Configuration for VLAN 420 – Complete Guide


In modern data center architectures, Cisco ACI (Application Centric Infrastructure) plays a vital role in automating and simplifying complex network configurations. One such common scenario is setting up a Port Channel trunk to carry specific VLAN traffic—like VLAN 420—across fabric leaf switches. This step-by-step guide walks you through the complete configuration of a Port Channel using interface eth1/4 and eth1/5 on Leaf 101, allowing VLANs 400–500, and deploying VLAN 420 in production.

Note: Multiple VLANs on the same port, on the same switch, in the same EPG are not supported.


✅ Objective

Configure a Port Channel (eth1/4 & eth1/5) on Leaf 101 in trunk mode to carry VLAN 420, using a static EPG binding, and associate it with the necessary ACI components like VLAN Pool, Physical Domain, AAEP, Bridge Domain, EPG, and Contract.


✅ Prerequisites

  • Cisco ACI Fabric running with APIC access.

  • Leaf 101 is discovered and operational.

  • End host (e.g., server or hypervisor) connected to eth1/4 and eth1/5.

  • Basic understanding of ACI policies and constructs.


Step-by-Step Summary

1. Create VLAN Pool (400–500, static): Fabric > Access Policies > Pools > VLAN
2. Create Physical Domain linked to VLAN Pool: Fabric > Access Policies > Physical and External Domains > Physical Domains
3. Create Interface Policies (Link Level, CDP, LLDP): Fabric > Access Policies > Policies > Interface
4. Create AAEP and associate Physical Domain: Fabric > Access Policies > Policies > Global > Attachable Access Entity Profiles
5. Create Leaf Port Channel Policy Group: Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups > Port Channel
6. Create Leaf Interface Profile and assign eth1/4 & eth1/5: Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles
7. Create Leaf Switch Profile and assign Node 101 and Interface Profile: Fabric > Access Policies > Switches > Leaf Switch Profiles
8. Create Tenant, VRF, and Bridge Domain: Tenants
9. Create Application Profile and EPG: Tenants > Tenant Name > Application Profiles
10. Deploy Static EPG on Port Channel (Trunk mode, VLAN 420): Tenants > Tenant Name > Application Profile > EPG > Static Ports
11. Associate EPG with Physical Domain: Tenants > Tenant Name > Application Profile > EPG > Domains
12. Create Contract, add Subject, Filters, and associate with EPG: Tenants > Tenant > Contracts & Application Profile > EPG > Contracts
13. Associate Contract with EPG: Tenants > Tenant > Contracts & Application Profile > EPG


Step 1 – Create VLAN Pool (VLANs 400–500)

  • Path: Fabric > Access Policies > Pools > VLAN
  • Action:
    • Right-click on "VLAN" > Create VLAN Pool
    • Name: VLANPool-400-500
    • Allocation Mode: Static Allocation
    • Add Encap Block:
      • From: 400
      • To: 500
      • Allocation Type: Static
    • Click OK > Submit

Step 2 – Create Physical Domain

  • Path: Fabric > Access Policies > Physical and External Domains > Physical Domains
  • Action:
    • Right-click Physical Domains > Create Physical Domain
    • Name: physDom-400-500
    • Associate VLAN Pool: VLANPool-400-500
    • Click Submit

Step 3 – Create Interface Policies

  • Path: Fabric > Access Policies > Policies > Interface
  • Create the interface policies that define the port behavior:
    • Link Level Policy: 10G-Auto
    • CDP Policy: CDP-Enabled
    • LLDP Policy: LLDP-Enabled
    • Port Channel Policy: PCP_101_1_4_1_5
      • Mode: LACP Active
    • Click Submit

 


Step 4 – Create AAEP

  • Path: Fabric > Access Policies > Policies > Global > Attachable Access Entity Profiles
  • Action:
    • Right-click Attachable Access Entity Profiles > Create AAEP
    • Name: AAEP_400-500
    • Click+ under Domain and Associate Domain: physDom-400-500
    • Click Update > Next > Finish

Step 5 – Create Leaf Port Channel Policy Group

  • Path: Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups > PC Interface
  • Action:
    • Right-click PC Interface > Create PC Interface Policy Group
    • Name: PCPG_101_1_4_and_1_5
    • Interface Type: PC (Port Channel)
    • Policies:
      • Link Level: 10G-Auto
      • CDP: CDP-Enabled
      • LLDP: LLDP-Enabled
      • Portchannel: PCP_101_1_4_1_5
      • AAEP: AAEP_400-500
  • Click Next - > Finish

⚠️ Note: VLAN Trunking is controlled through Static Binding and Domain VLAN Range, not inside the PC Policy Group.


Step 6 – Create Leaf Interface Profile

  • Path: Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles
  • Action:
    • Right Click on Profiles and Create Leaf Interface Profile: Leaf101_IntProf_PC
    • Add Interface Selector: Click + under Interface Selectors
      • Name: PC-eth1_4-1_5
      • Interface IDs: 1/4,1/5
      • Interface Policy Group: PCPG_101_1_4_and_1_5
  • Click Ok and then Submit

Step 7 – Create Leaf Switch Profile

  • Path: Fabric > Access Policies > Switches > Leaf Switch > Profiles
  • Action:
    • Right-click on Profiles and Create Leaf Profile: Leaf101-SWProf-PC
    • Click + under Leaf Selectors
      • Name: LS101
      • Blocks: 101
    • Click Update, then Next, and associate the Interface Selector Profile: Leaf101_IntProf_PC
  • Click Finish

Step 8 – Create Tenant, VRF, and Bridge Domain

  • Path: Tenants
  • Action:
    • Click Add Tenant, create Tenant: T1, and click Submit
    • Create VRF – Path: Tenants > Networking > VRFs
      • Right-click on VRFs and Create VRF: VRF-T1, uncheck “Create A Bridge Domain”, and click Finish
    • Create Bridge Domain – Path: Tenants > Networking > Bridge Domains
      • Right-click on Bridge Domains > Create Bridge Domain: BD-420
      • Associate with VRF-T1 and click Next
      • Click + on Subnets and add Gateway IP: 192.168.42.1/24
  • Click OK, Next, and then Finish

Step 9 – Create Application Profile and EPG

  • Path: Tenants > T1 > Application Profiles
  • Action: Right-click on Application Profiles
    • Create Application Profile: App420 and click Submit
  • Create EPG – Path: Tenants > T1 > Application Profiles > App420
    • Right-click on Application EPG > Create Application EPG:
      • Name: EPG-420
      • Associate with Bridge Domain: BD-420
      • Click Finish


Step 10 – Deploy Static EPG on Port Channel (Trunk, VLAN 420)

  • Path: Tenants > T1 > Application Profiles > App420 > Application EPGs > EPG-420
  • Action:
    • Right-click EPG-420 > Click Deploy Static EPG on PC, VPC, or Interface
    • Path Type: Direct Port Channel
    • Path: PCPG_101_1_4_and_1_5
    • Port Encap: 420
    • Mode: Trunk
  • Click Next>Finish

Step 11 – Associate EPG with Physical Domain

  • Path: Tenants > T1 > App420 > EPG-420
  • Action:
    • Right Click EPG-420 and click on Add Physical Domain Association
    • Domain: physDom-400-500
  • Click Submit

 

Step 12 – Create Contract and Associate with EPG

🔹 12.1 – Create Filter

  • Path: Tenants > T1 > Contracts
  • Right-click Filters > Create Filter: Filter-TCP80
  • Click + under Entries
    • Name: Entry_TCP80
    • EtherType: IP
    • IP Protocol: tcp
    • Stateful: checked
    • Destination Port/Range: From/To:http
    • Click Update and then Submit
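The filter entry above maps to a vzEntry object in the REST API. A minimal sketch: "http" is APIC's named alias for TCP port 80, while the DN convention "flt-&lt;filter&gt;/e-&lt;entry&gt;" is an assumption to verify on your release.

```python
# Minimal sketch: TCP/80 filter entry as a vzEntry payload.
# "http" is the APIC port alias for 80; DN convention is an assumption.

def http_filter_entry(tenant: str, flt: str, entry: str) -> dict:
    return {
        "vzEntry": {
            "attributes": {
                "dn": f"uni/tn-{tenant}/flt-{flt}/e-{entry}",
                "name": entry,
                "etherT": "ip",
                "prot": "tcp",
                "dFromPort": "http",
                "dToPort": "http",
                "stateful": "yes",
            }
        }
    }
```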

🔹 12.2 – Create Contract

  • Path: Tenants > T1 > Contracts
  • Right-click Standard > Create Contract: Contract-420
  • Click + under Subjects, Name: Subject-420
  • Click + under Filters
    • Name: choose T1/Filter-TCP80
    • Action: Permit
    • Click Update, then OK, then Submit

🔹 12.3 – Associate Contract with EPG

  • Path: Tenants > T1 > Application Profile>App420 >Application EPG> EPG-420
  • Right Click on EPG-420
  • Click Add Provided Contracts
    • Select: Contract-420
  • Click Add, then Submit