Tuesday, 5 August 2025

Twinax vs DAC Cable: What's the Difference?

 

In high-speed data center environments, Twinax and Direct Attach Copper (DAC) cables are often mentioned interchangeably—but they’re not exactly the same. Understanding their distinctions helps in selecting the right connectivity solution for your Cisco ACI fabric or any modern network deployment.

🔌 Twinax Cable

  • Definition: Twinax (short for twin axial) is a type of cable that uses two conductors within a single shielded cable to transmit differential signals.
  • Use Case: It’s the physical medium used in many short-range, high-speed connections, especially in data centers.
  • Form Factor: Twinax is the underlying cable technology used in DAC cables.

🔗 DAC Cable (Direct Attach Copper)

  • Definition: DAC is a complete cable assembly that includes Twinax cabling with integrated transceivers at both ends (usually SFP+, QSFP+, or QSFP28).
  • Use Case: Commonly used for short-distance connections between switches, servers, and storage devices—typically up to 7 meters.
  • Types:
    • Passive DAC: No signal amplification; ideal for short distances (up to ~5m).
    • Active DAC: Includes signal conditioning electronics; supports slightly longer distances (up to ~10m).

🆚 Key Differences

Feature               | Twinax Cable                      | DAC Cable
----------------------|-----------------------------------|---------------------------------------
Definition            | Cable type with twin conductors   | Cable assembly with connectors
Includes transceivers | No                                | Yes
Application           | Used inside DAC or other assemblies | Plug-and-play for switch/server links
Distance support      | Depends on implementation         | Typically 1 – 10 meters

 

Monday, 4 August 2025

Complete Steps to Create vPC in Cisco ACI (via APIC GUI)

 Understanding vPC in Cisco ACI: A Modern Approach to High Availability

In the evolving landscape of data center networking, Virtual Port Channel (vPC) stands out as a cornerstone of high availability and link redundancy. While traditional NX-OS environments rely on CLI-driven configurations, Cisco ACI reimagines vPC through a policy-driven, intent-based model that aligns with the fabric’s overarching design philosophy.

Unlike legacy setups, ACI abstracts physical connectivity into logical constructs, allowing administrators to define vPC behavior through interface policy groups, switch profiles, and attachable access entity profiles (AAEPs). This not only simplifies deployment but also ensures consistency across the fabric.

At its core, a vPC in ACI enables two leaf switches to present a unified uplink to a downstream device—be it a server, firewall, or load balancer—without relying on spanning tree protocols. The result is active-active forwarding, improved bandwidth utilization, and seamless failover.

In this guide, we’ll walk through the step-by-step configuration of vPC in Cisco ACI, demystifying each component and highlighting best practices to ensure a robust and scalable deployment.

Note:- In Cisco ACI, a Fabric Extender (FEX) can be integrated using a port channel in a straight-through topology, where each FEX connects directly to a leaf switch. While vPCs can be established between hosts and the FEX for redundancy and load balancing, the FEX itself does not support vPC connectivity to multiple leaf switches.

 Complete Steps to Create vPC in Cisco ACI (via APIC GUI)

Step 1: Leaf Onboarding (One-by-One)

🔍 Monitor Discovery in APIC

  1. Log in to the APIC GUI
  2. Navigate to:
    Fabric → Inventory → Fabric Membership → Nodes Pending Registration
  3. Wait for Leaf101 to appear
    • You’ll see its Serial Number
    • Node Role: Leaf
    • Status: Blank / Not Registered

📝 Register Leaf101

  1. Right-click on Leaf101’s serial number
  2. Click Register
  3. In the registration window, enter:
    • Node ID: 101
    • Node Name: Leaf101
    • Click Register
    • Wait for it to appear in Registered Nodes

📝 Register Leaf102

  1. Repeat the same steps for Leaf102:
    • Wait for it to appear in Nodes Pending Registration
    • Right-click → Register
    • Enter:
      • Node ID: 102
      • Node Name: Leaf102
    • Click Register
    • Wait for it to appear in Registered Nodes
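For automation, the GUI registration above maps to a single APIC REST object. Below is a hedged sketch, not the author's method; the serial number is a placeholder you must replace with the one shown under Nodes Pending Registration:

```python
# Hedged sketch: REST equivalent of registering Leaf101.
# POST to https://<apic>/api/mo/uni/controller/nodeidentpol.json
# "FDO00000000" is a placeholder serial, not a value from this guide.
register_leaf101 = {
    "fabricNodeIdentP": {
        "attributes": {
            "serial": "FDO00000000",  # placeholder: copy the real serial from the GUI
            "nodeId": "101",
            "name": "Leaf101",
        }
    }
}
```

The same payload with nodeId 102 and name Leaf102 registers the second leaf.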

🔢 Step-by-Step ACI Configuration Flow

2. VLAN Pool (VLAN 113)

  • Navigate to:
    Fabric → Access Policies → Pools → Right-click on VLAN and click Create VLAN Pool
  • Create VLAN Pool:
    • Name: VLAN_113_Pool
    • Allocation Mode: Static
    • Click + under Encap Blocks:
      • Range: 113 – 113
      • Allocation Mode: Static
    • Click OK -> Submit
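The guide configures this via the GUI; as a hedged sketch, the same pool corresponds to the following object in the ACI REST API:

```python
# Hedged sketch: REST equivalent of the VLAN pool step.
# POST to https://<apic>/api/mo/uni/infra.json
vlan_pool = {
    "fvnsVlanInstP": {
        "attributes": {"name": "VLAN_113_Pool", "allocMode": "static"},
        "children": [
            # One encap block covering just VLAN 113
            {"fvnsEncapBlk": {"attributes": {
                "from": "vlan-113", "to": "vlan-113", "allocMode": "static"}}}
        ],
    }
}
```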

3. Domain (Physical Domain)

  • Go to:
    Fabric → Access Policies → Physical and External Domains → Right-click on Physical Domains and click Create Physical Domain
  • Create Physical Domain:
    • Name: PhysDom_VLAN113
    • VLAN Pool: VLAN_113_Pool
    • Click Submit
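The equivalent REST object binds the domain to the VLAN pool created above. A hedged sketch:

```python
# Hedged sketch: REST equivalent of the physical domain step.
# POST to https://<apic>/api/mo/uni.json
phys_dom = {
    "physDomP": {
        "attributes": {"name": "PhysDom_VLAN113"},
        "children": [
            # Associate the VLAN pool (static allocation) with the domain
            {"infraRsVlanNs": {"attributes": {
                "tDn": "uni/infra/vlanns-[VLAN_113_Pool]-static"}}}
        ],
    }
}
```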

4. AEP (Attachable Access Entity Profile)

  • Navigate to:
    Fabric → Access Policies → Policies-> Global → Right Click on Attachable Access Entity Profiles -> Click Create Attachable Access Entity Profiles
  • Create AEP:
    • Name: AEP_VLAN113
    • Click + under Domains and Associated Domain: PhysDom_VLAN113
    • Click Update ->Next -> Finish
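As a hedged REST sketch, the AEP is an `infraAttEntityP` object whose child relation points at the physical domain:

```python
# Hedged sketch: REST equivalent of the AEP step.
# POST to https://<apic>/api/mo/uni/infra.json
aep = {
    "infraAttEntityP": {
        "attributes": {"name": "AEP_VLAN113"},
        "children": [
            # Associated Domain from the GUI wizard
            {"infraRsDomP": {"attributes": {"tDn": "uni/phys-PhysDom_VLAN113"}}}
        ],
    }
}
```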

5. Interface Policy Group (vPC)

  • Go to:
    Fabric → Access Policies → Interface → Leaf Interfaces → Policy Groups → Right-click on VPC Interface and click Create VPC Interface Policy Group
  • Create VPC Interface Policy Group:
    • Name:  vPC_LF101_LF102_1_1
    • AEP: AEP_VLAN113
    • Port Channel Policy: system-lacp-Active
    • Link Level Policy: system-link-level-XG-Auto
  • Click Next > Finish
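In the object model, what makes this policy group a vPC rather than a plain port channel is the `lagT="node"` attribute. A hedged REST sketch (the LACP policy name is taken from this guide and assumed to exist on your fabric):

```python
# Hedged sketch: REST equivalent of the vPC interface policy group.
# POST to https://<apic>/api/mo/uni/infra/funcprof.json
vpc_pol_grp = {
    "infraAccBndlGrp": {
        # lagT="node" => vPC; lagT="link" would be a regular port channel
        "attributes": {"name": "vPC_LF101_LF102_1_1", "lagT": "node"},
        "children": [
            {"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-AEP_VLAN113"}}},
            {"infraRsLacpPol": {"attributes": {"tnLacpLagPolName": "system-lacp-active"}}},
        ],
    }
}
```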

6. Create vPC Explicit Protection Group

  • Go to:
    Fabric → Access Policies → Policies → Switch → Virtual Port Channel default
  • Right-click on Virtual Port Channel default and add an Explicit VPC Protection Group:
    • Name: VPC_101_102
    • ID: 10
    • VPC Domain Policy: default
    • Switch 1: Leaf101
    • Switch 2: Leaf102
  • Click Submit

This step defines the vPC domain pairing at the switch policy level.
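The protection group maps to a `fabricExplicitGEp` object that names the two leaf nodes. A hedged REST sketch:

```python
# Hedged sketch: REST equivalent of the explicit vPC protection group.
# POST to https://<apic>/api/mo/uni/fabric/protpol.json
vpc_prot_grp = {
    "fabricExplicitGEp": {
        "attributes": {"name": "VPC_101_102", "id": "10"},
        "children": [
            {"fabricNodePEp": {"attributes": {"id": "101"}}},  # Leaf101
            {"fabricNodePEp": {"attributes": {"id": "102"}}},  # Leaf102
        ],
    }
}
```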

7. Interface Profile

  • Navigate to:
    Fabric → Access Policies → Interface → Leaf Interfaces → Profiles
  • Right-click on Profiles and click Create Leaf Interface Profile:
    • Name: IntProf_Leaf101_102
  • Click + under Interface Selectors:
    • Name: Eth1_4
    • Interface IDs: 1/4
    • Interface Policy Group: vPC_LF101_LF102_1_1
  • Click OK -> Submit
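As a hedged sketch, the interface profile is an `infraAccPortP` tree: a port selector, a port block for 1/4, and a relation to the vPC policy group:

```python
# Hedged sketch: REST equivalent of the interface profile step.
# POST to https://<apic>/api/mo/uni.json
int_prof = {
    "infraAccPortP": {
        "attributes": {"name": "IntProf_Leaf101_102"},
        "children": [
            {"infraHPortS": {
                "attributes": {"name": "Eth1_4", "type": "range"},
                "children": [
                    # Port block: module 1, port 4 only
                    {"infraPortBlk": {"attributes": {
                        "name": "blk1", "fromCard": "1", "toCard": "1",
                        "fromPort": "4", "toPort": "4"}}},
                    # Tie the selected port to the vPC policy group
                    {"infraRsAccBaseGrp": {"attributes": {
                        "tDn": "uni/infra/funcprof/accbundle-vPC_LF101_LF102_1_1"}}},
                ],
            }},
        ],
    }
}
```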

 

8. Switch Profile

  • Go to:
    Fabric → Access Policies → Switches → Profiles
  • Right Click on Profile and click Create Leaf Profile:
    • Name: LeafProf_101_102
  • Click + under Leaf Selector:
    • Name: Leaf101_102
    • Node Block: From 101 to 102
  • Click Update -> Next
  • Attach Interface Profile:
    • Select IntProf_Leaf101_102
  • Click Finish
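The switch profile maps to an `infraNodeP` tree selecting the node range 101 – 102 and attaching the interface profile from step 7. A hedged REST sketch:

```python
# Hedged sketch: REST equivalent of the leaf switch profile step.
# POST to https://<apic>/api/mo/uni.json
leaf_prof = {
    "infraNodeP": {
        "attributes": {"name": "LeafProf_101_102"},
        "children": [
            {"infraLeafS": {
                "attributes": {"name": "Leaf101_102", "type": "range"},
                "children": [
                    # Node block spanning both vPC peers
                    {"infraNodeBlk": {"attributes": {
                        "name": "blk1", "from_": "101", "to_": "102"}}},
                ],
            }},
            # Attach the interface profile created in step 7
            {"infraRsAccPortP": {"attributes": {
                "tDn": "uni/infra/accportprof-IntProf_Leaf101_102"}}},
        ],
    }
}
```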

9. Create Tenant

  • Navigate to:
    Tenants
  • Click Add Tenant
    • Name: Tenant_WebApp
  • Click Submit

10. Create VRF

  • Navigate to:
    Tenant_WebApp → Networking → VRFs
  • Right-click on VRFs and click Create VRF:
    • Name: WebApp_VRF
    • Uncheck Create A Bridge Domain
  • Click Finish

11. Create BD

  • Navigate to:
    Tenant_WebApp → Networking → Bridge Domains
  • Right-click on Bridge Domains and click Create Bridge Domain:
    • Name: WebApp_BD
    • VRF: WebApp_VRF
    • Click Next
    • Click + under Subnets:
      • Gateway IP: 10.1.1.1/24
      • Check “Make this IP address Primary”
      • Scope: check “Advertised Externally”
  • Click OK -> Next -> Finish
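Steps 9 through 11 can be pushed as a single REST object tree: tenant, VRF, and BD with its subnet. A hedged sketch (in the object model, `scope="public"` corresponds to “Advertised Externally” in the GUI):

```python
# Hedged sketch: tenant, VRF, and BD in one POST.
# POST to https://<apic>/api/mo/uni.json
tenant = {
    "fvTenant": {
        "attributes": {"name": "Tenant_WebApp"},
        "children": [
            {"fvCtx": {"attributes": {"name": "WebApp_VRF"}}},
            {"fvBD": {
                "attributes": {"name": "WebApp_BD"},
                "children": [
                    # Bind the BD to the VRF
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "WebApp_VRF"}}},
                    # Gateway subnet; scope "public" == Advertised Externally
                    {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24", "scope": "public"}}},
                ],
            }},
        ],
    }
}
```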

 

12. Create Application Profile (AP)

  • Inside Tenant_WebApp, go to:
    Application Profiles
  • Right Click on Application Profile  and Create Application Profile:
    • Name: WebApp_AP
  • Click Submit

13. Create Endpoint Group (EPG)

  • Inside WebApp_AP, go to:
    EPGs
  • Right Click on Application EPG and click Create Application EPG:
    • Name: WebApp_EPG
    • Bridge Domain: WebApp_BD
    • Click Finish
  • Right Click on WebApp_EPG and click ADD Physical Domain Association:
    • Domain Association: PhysDom_VLAN113
    • Click Submit
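Steps 12 and 13 likewise reduce to one object tree under the tenant: an application profile containing the EPG, with relations to the BD and the physical domain. A hedged REST sketch:

```python
# Hedged sketch: application profile + EPG with BD and domain bindings.
# POST to https://<apic>/api/mo/uni/tn-Tenant_WebApp.json
app_profile = {
    "fvAp": {
        "attributes": {"name": "WebApp_AP"},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": "WebApp_EPG"},
                "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "WebApp_BD"}}},
                    # Physical domain association from the GUI step
                    {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-PhysDom_VLAN113"}}},
                ],
            }},
        ],
    }
}
```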

14. Create Contract (Allow TCP Port 80)

  • Go to:
    Tenant_WebApp → Contracts -> Standard
  • Right-click on Standard and click Create Contract:
    • Name: Allow_HTTP
  • Click + under Subjects:
    • Name: HTTP_Subject
    • Click + under Filters, then + next to the filter Name drop-down to create a new filter:
      • Name: HTTP_Filter
      • Click + under Entries:
        • Name: HTTP_Entry
        • EtherType: IP
        • IP Protocol: TCP
        • Destination Port: From http To http
      • Click Update -> Submit
    • Click Update -> OK -> Submit
  • Provide and consume the contract on the relevant EPGs as needed
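In REST terms this is a `vzFilter` (with its TCP/80 entry) plus a `vzBrCP` contract whose subject references the filter. A hedged sketch:

```python
# Hedged sketch: HTTP filter and contract under the tenant.
# POST to https://<apic>/api/mo/uni.json
contract = {
    "fvTenant": {
        "attributes": {"name": "Tenant_WebApp"},
        "children": [
            {"vzFilter": {
                "attributes": {"name": "HTTP_Filter"},
                "children": [
                    # Match TCP with destination port http (80)
                    {"vzEntry": {"attributes": {
                        "name": "HTTP_Entry", "etherT": "ip", "prot": "tcp",
                        "dFromPort": "http", "dToPort": "http"}}},
                ],
            }},
            {"vzBrCP": {
                "attributes": {"name": "Allow_HTTP"},
                "children": [
                    {"vzSubj": {
                        "attributes": {"name": "HTTP_Subject"},
                        "children": [
                            {"vzRsSubjFiltAtt": {"attributes": {
                                "tnVzFilterName": "HTTP_Filter"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}
```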

15. Static Binding of EPG to Port

  • Go to Tenant_WebApp → Application Profiles → WebApp_AP → Application EPGs → WebApp_EPG
  • Right-click on WebApp_EPG and click Deploy Static EPG on PC, VPC, or Interface
    • Path Type: Virtual Port Channel
    • Path: the vPC policy group vPC_LF101_LF102_1_1 (covering Leaf101 eth1/4 and Leaf102 eth1/4)
    • Mode: Trunk
    • Encapsulation: vlan-113
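As a hedged sketch, the static binding is a single `fvRsPathAtt` child on the EPG; note that a vPC path references the protection-group pair (`protpaths-101-102`), not an individual leaf port:

```python
# Hedged sketch: static vPC binding on the EPG.
# POST to https://<apic>/api/mo/uni/tn-Tenant_WebApp/ap-WebApp_AP/epg-WebApp_EPG.json
static_path = {
    "fvRsPathAtt": {
        "attributes": {
            # vPC paths use the node-pair DN, not a single leaf
            "tDn": "topology/pod-1/protpaths-101-102/pathep-[vPC_LF101_LF102_1_1]",
            "encap": "vlan-113",
            "mode": "regular",            # "regular" == Trunk (tagged) in the GUI
            "instrImmedcy": "immediate",  # "lazy" == On-Demand deployment immediacy
        }
    }
}
```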

 

Static Binding Deployment Options in ACI - Immediate vs On-Demand

 🔧 Static Binding Deployment Options in ACI

When you statically bind an EPG to a port (interface), you’ll see a Deployment Immediacy setting. This determines when the configuration is pushed to the fabric.

1. Immediate

  • Definition: The configuration is deployed to the switch as soon as you save it.
  • Use Case: Use this when the endpoint is already connected or expected to connect soon.
  • Behavior: The policy is applied to the interface regardless of whether an endpoint is present.

2. On-Demand

  • Definition: The configuration is deployed only when an endpoint is detected on the interface.
  • Use Case: Useful for reducing unnecessary configuration on interfaces until needed.
  • Behavior: The policy is not pushed until the APIC sees a MAC address or IP on that port.

📌 Where You Set This

When creating a static binding in the APIC GUI:

  • Navigate to the EPG → Static Ports
  • Add a static port binding
  • Choose the Deployment Immediacy: Immediate or On-Demand

Best Practices

  • Use Immediate for known, always-on devices (like servers).
  • Use On-Demand for dynamic environments (like user laptops or VMs that move).

 

Sunday, 3 August 2025

Peer Dead Interval vs Delay Restore Timer in Cisco ACI

 

⏱️ Peer Dead Interval vs Delay Restore Timer in Cisco ACI: Timing the Trust

In Cisco ACI, timing is everything — especially when it comes to maintaining stable vPC (Virtual Port Channel) peer relationships. Two critical timers help manage how ACI reacts to peer disruptions and recoveries: Peer Dead Interval and Delay Restore Timer. Though they sound similar, they serve very different purposes.


🔍 Peer Dead Interval: Watching for Silence

Think of the Peer Dead Interval as a watchdog timer. It defines how long a switch should wait before declaring its vPC peer as dead — meaning unreachable or non-responsive.

  • Purpose: Detect peer failure.
  • Trigger: Lack of heartbeat (keepalive) messages.
  • Default: Typically 3.5 seconds in ACI.
  • Impact: If the peer is declared dead, the switch may take over certain roles or shut down vPC member ports to avoid split-brain scenarios.

🧠 Analogy: It’s like waiting for a friend to reply to your message. If they don’t respond within a few seconds, you assume something’s wrong.


Delay Restore Timer: Holding Back the Comeback

The Delay Restore Timer is used after a peer recovers. It delays the reactivation of vPC member ports or SVIs (Switched Virtual Interfaces) on the recovering switch.

  • Purpose: Prevent flapping and ensure stable reconvergence.
  • Trigger: Peer switch reboot or recovery.
  • Default: 10 seconds (can be customized).
  • Impact: Gives time for control plane protocols (like STP, routing) to settle before data plane resumes.

🧠 Analogy: It’s like giving your friend a moment to catch their breath after they’ve returned from a sprint — before asking them to jump back into a conversation.


🔄 Why Both Matter

Together, these timers ensure that:

  • ACI doesn’t overreact to temporary glitches.
  • Recovery is graceful, avoiding packet loss or loops.
  • Network stability is maintained even during failures and reboots.

 

Forward Error Correction (FEC) in Cisco ACI

In Cisco ACI, Forward Error Correction (FEC) is a mechanism used to improve the reliability of high-speed data transmission across physical links, especially in environments using 25G, 40G, 100G, or 400G interfaces.

🔍 What Is Forward Error Correction?

FEC is a technique where the sender adds redundant data (parity bits) to each transmission. If some bits are corrupted during transit, the receiver can detect and correct those errors without needing a retransmission. Think of it like sending a puzzle with extra pieces so the receiver can still complete it even if a few pieces go missing.
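To make the idea concrete, here is a toy illustration using a Hamming(7,4) code, the textbook single-error-correcting scheme. Real 25G/100G links use Firecode or Reed-Solomon FEC over much larger blocks, but the principle (adding parity so the receiver can repair corrupted bits without retransmission) is the same:

```python
# Toy FEC illustration: Hamming(7,4) encodes 4 data bits into 7 bits
# and can correct any single flipped bit at the receiver.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d0, p3, d1, d2, d3]."""
    p1 = d[0] ^ d[1] ^ d[3]   # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """c: list of 7 bits; corrects up to one bit error, returns 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit; 0 if clean
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # repair the corrupted bit
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[3] ^= 1                                   # corrupt one bit "in transit"
assert hamming74_decode(word) == [1, 0, 1, 1]  # receiver still recovers the data
```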

🧠 How FEC Works in Cisco ACI

In ACI, FEC is negotiated between switches and endpoints during auto-negotiation. The devices advertise their supported FEC modes and agree on the best one. Common FEC modes include:

  • FC-FEC (Firecode FEC): Used for 25G links.
  • RS-FEC (Reed-Solomon FEC): Used for 25G, 100G, and 400G links.
  • CL91-RS-FEC and IEEE-RS-FEC: Advanced versions for higher speeds.
  • AUTO-FEC: Automatically selects the best FEC mode based on link capabilities.

⚙️ Why It Matters

FEC is especially important in Cisco ACI because:

  • High-speed links (like 25G or 100G) are more prone to bit errors.
  • Breakout ports (e.g., 4x25G from a 100G port) often require FEC to maintain link stability.
  • Copper DAC cables used in short-distance connections rely on FEC to compensate for signal degradation.

Use Cases

  • Ensuring error-free transmission over high-speed links.
  • Supporting auto-negotiation on breakout ports.
  • Enhancing link reliability without increasing latency or requiring retransmissions.

 

Symmetric hashing in Cisco ACI

 

🔄 Symmetric Hashing in Cisco ACI: A Traffic Balancing Philosophy

Imagine a highway with multiple lanes, and cars (data packets) trying to reach their destination. Normally, each car chooses a lane based on its starting point and destination. But what if the return journey picks a different lane? That’s what happens with asymmetric hashing — the forward and reverse paths of a data flow may travel through different physical links.

In Cisco ACI, symmetric hashing is like a rule that says: “If you go out through lane 3, you must come back through lane 3.” It ensures that both directions of a traffic flow — from source to destination and back — follow the same physical path within a port channel.

This matters a lot when you're dealing with devices like firewalls, load balancers, or any system that tracks sessions. If traffic enters through one link and exits through another, it can confuse these devices, leading to dropped packets or broken connections.


Symmetric hashing is not supported on the following switches:
  • Cisco Nexus 93128TX
  • Cisco Nexus 9372PX
  • Cisco Nexus 9372PX-E
  • Cisco Nexus 9372TX
  • Cisco Nexus 9372TX-E
  • Cisco Nexus 9396PX
  • Cisco Nexus 9396TX

🧠 Why Cisco ACI Made It Optional

Cisco ACI’s default behavior is asymmetric — it spreads traffic across links based on a hash of various packet fields (IP, MAC, ports). This works well for general load balancing. But when precision and consistency are needed, ACI gives you the option to enable symmetric hashing in the port-channel policy.

Once enabled, you can choose the hashing algorithm — like using only IP addresses or including Layer 4 ports — to fine-tune how traffic is distributed.
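The idea can be sketched in a few lines: if the hash key is built from the sorted endpoint pair rather than the raw (source, destination) order, the A-to-B and B-to-A directions of a flow land on the same member link. The field choice and CRC hash below are illustrative only, not ACI's actual algorithm:

```python
# Illustrative sketch of symmetric vs. asymmetric link selection in a
# port channel. Not ACI's real hash; it just shows why sorting the
# endpoints makes forward and reverse traffic pick the same link.
import zlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links, symmetric=True):
    if symmetric:
        # Sort the endpoint tuples so A->B and B->A hash identically
        ends = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    else:
        ends = [(src_ip, src_port), (dst_ip, dst_port)]
    key = repr(ends).encode()
    return zlib.crc32(key) % n_links

fwd = pick_link("10.1.1.10", "10.1.1.20", 40000, 80, 4)
rev = pick_link("10.1.1.20", "10.1.1.10", 80, 40000, 4)
assert fwd == rev  # both directions use the same member link
```

With `symmetric=False` the two directions may hash to different links, which is exactly the behavior that confuses stateful devices like firewalls.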

Use Cases That Benefit

  • Firewall clusters that expect consistent ingress/egress paths.
  • Load balancers that rely on session stickiness.
  • Troubleshooting scenarios where symmetric paths simplify packet tracing.

 

What “Multiple (with virtual MAC)” Means in the Context of “Treat as Virtual IP Address” in Cisco ACI

 🧩 What Does “Multiple (with virtual MAC)” Mean?

When you select “Treat as Virtual IP Address”, you're telling Cisco ACI that this IP address should be used as a shared gateway across multiple sites or pods. To make this work, ACI uses a Virtual MAC address.

🔹 Why a Virtual MAC?

In a multi-site or stretched fabric, the same IP address (e.g., 192.168.10.1) might be configured in multiple locations. But MAC addresses are normally unique to each site. If the same IP has different MACs in different sites, it can confuse endpoints and break mobility.

So, ACI allows you to assign a Virtual MAC to the VIP. This ensures:

  • All sites use the same IP and MAC for the gateway.
  • Endpoints can move between sites without needing to relearn the gateway MAC.
  • Traffic flows seamlessly, even across geographically separated data centers.

🧠 “Multiple” Refers To:

  • You can have multiple subnets in a BD marked as Virtual IPs.
  • Each of these VIPs can share the same Virtual MAC.
  • This setup supports multiple gateway IPs across sites, all behaving consistently.

📌 Example Scenario

Let’s say you have:

  • DC1 and DC2 connected via ACI Multi-Site.
  • A BD with subnet 10.1.1.1/24 used as the gateway in both sites.
  • You mark 10.1.1.1 as “Treat as Virtual IP Address” and assign a Virtual MAC like 00:11:22:33:44:55.

Now:

  • Both DC1 and DC2 advertise 10.1.1.1 with the same MAC.
  • VMs can move between sites without changing their gateway.
  • Network traffic remains stable and predictable.