Showing posts with label CCIE Data Center. Show all posts

Sunday, 26 April 2026

Cisco ACI “Unknown” Leaf State Explained: Certificates, LLDP, Software, and Hardware Issues

  In a Cisco ACI fabric, one of the most frustrating issues during initial fabric bring‑up, expansion, or node replacement is seeing a leaf switch stuck in an “Unknown” state. When a leaf is in an unknown state, it means the APIC cannot fully discover, authenticate, or manage the node, preventing it from joining the fabric and participating in traffic forwarding.

This issue can occur during initial fabric deployment, adding a new leaf to an existing fabric, replacing failed hardware, performing software upgrades, or moving switches between fabrics.

Understanding why a leaf enters the “Unknown” state is critical for fast recovery. In most cases, the root cause is not a single configuration mistake but a failure in communication, authentication, compatibility, or initialization.

This article explains the most common causes of the “Unknown” leaf state in Cisco ACI, why they happen, and how to systematically troubleshoot them in real‑world environments.

1. What Does “Unknown” Leaf State Mean in Cisco ACI?

When a leaf is shown as “Unknown” in the APIC GUI, it indicates that the APIC can see the node attempting discovery, but the node either cannot complete secure authentication or has failed critical control‑plane messaging.

At this stage, the leaf is not operational, not programmable, and cannot forward production traffic.

2. Certificate Issues Between Leaf and APIC

Cisco ACI uses mutual certificate‑based authentication between the APIC controllers and fabric nodes. Every leaf switch must present a valid certificate chain that is signed and trusted by the APIC.

If the certificate exchange fails, the leaf cannot authenticate correctly, and APIC marks it as Unknown.

Common certificate‑related problems include an invalid or corrupted certificate on the leaf, the leaf previously belonging to another ACI fabric, expired or mismatched certificates due to time drift, or incomplete cleanup after node replacement.

These issues are often seen when hardware is reused without full re‑initialization.

The most reliable resolution is to completely wipe and reinitialize the leaf switch, ensure it boots in ACI mode, and allow APIC to generate and install a fresh certificate.
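Certificate validity is ultimately a time‑window check, which is why clock drift alone can break authentication even when the certificate itself is fine. The following minimal Python sketch illustrates the idea; the dates are purely illustrative and not tied to any real APIC API:

```python
# Illustrative sketch: a certificate is only accepted if the current
# fabric time falls inside its validity window. Dates are invented.
from datetime import datetime, timezone

def cert_is_valid(not_before, not_after, now=None):
    """Return True if `now` falls inside the certificate validity window."""
    now = now or datetime.now(timezone.utc)
    return not_before <= now <= not_after

nb = datetime(2025, 1, 1, tzinfo=timezone.utc)
na = datetime(2035, 1, 1, tzinfo=timezone.utc)

# A leaf whose clock drifted years into the past fails validation even
# though the certificate is not actually expired.
print(cert_is_valid(nb, na, datetime(2020, 6, 1, tzinfo=timezone.utc)))  # False
```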

3. LLDP Mismatch or LLDP Failure

Cisco ACI relies heavily on LLDP for fabric discovery and adjacency validation. LLDP is mandatory in ACI for identifying correct topological relationships between leaf and spine switches.

If LLDP is not exchanged correctly, discovery fails and the leaf remains in an Unknown state.

Typical LLDP problems include LLDP being disabled on connected devices, LLDP filtered due to security policies, incorrect cabling such as connecting a leaf to something other than a spine, or the switch running in NX‑OS mode instead of ACI mode.

Symptoms include missing neighbor information, partial discovery, or interfaces appearing operationally down.

To resolve LLDP issues, ensure LLDP is enabled end‑to‑end, verify correct cabling from leaf to spine only, confirm the switch is running in ACI mode, and check optics and interfaces on both ends.
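The cabling rule above, that every fabric uplink on a leaf must see a spine as its LLDP neighbor, can be expressed as a simple check. This is an illustrative sketch with invented device names and roles, not parsed APIC output:

```python
# Illustrative sketch: flag leaf uplinks whose LLDP neighbor is not a spine.
def check_lldp_neighbors(neighbors):
    """neighbors: dict of local interface -> (neighbor_name, neighbor_role).
    Returns a list of human-readable problems for non-spine neighbors."""
    problems = []
    for intf, (name, role) in sorted(neighbors.items()):
        if role != "spine":
            problems.append(f"{intf}: neighbor {name} has role '{role}', expected 'spine'")
    return problems

# Example: eth1/49 is miscabled to another leaf, eth1/50 is correct.
neighbors = {
    "eth1/49": ("leaf-102", "leaf"),
    "eth1/50": ("spine-201", "spine"),
}
for issue in check_lldp_neighbors(neighbors):
    print(issue)
```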

4. Firmware or Software Incompatibility

ACI fabric components are designed to work within a compatible software matrix. Significant software mismatches between the APIC, leaf, and spine can prevent successful node onboarding.

This often occurs when a leaf is running an unsupported ACI version, the APIC has been upgraded but the leaf image was not updated, or an incorrect software image is installed on the switch.

Typical symptoms include the leaf being detected but never transitioning from Unknown to Active, along with compatibility or image‑related faults.

Resolution requires verifying Cisco’s supported version matrix and ensuring that the leaf software version is compatible with both the APIC and spine versions.
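Conceptually, a support‑matrix check is a lookup of the leaf release against the releases allowed for the running APIC release. The matrix below is invented purely for illustration; the authoritative source is always Cisco's published compatibility matrix:

```python
# Illustrative sketch: compare a leaf image against a (fictional)
# APIC-to-switch support matrix. Real versions come from Cisco's matrix.
SUPPORTED = {
    "6.0": {"16.0"},          # APIC release -> supported switch releases
    "5.2": {"15.2", "16.0"},  # (all values here are invented)
}

def leaf_supported(apic_version, leaf_version):
    """True if the leaf's major.minor release is listed for this APIC release."""
    apic_major = ".".join(apic_version.split(".")[:2])
    leaf_major = ".".join(leaf_version.split(".")[:2])
    return leaf_major in SUPPORTED.get(apic_major, set())

print(leaf_supported("5.2.8", "15.2.8"))  # True under this illustrative matrix
print(leaf_supported("6.0.2", "14.2.7"))  # False: leaf image too old
```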

5. Hardware Problems

Physical layer issues are a common but frequently overlooked cause of Unknown leaf state. Even a simple faulty optic can completely prevent discovery.

Common hardware causes include defective or unsupported transceivers, damaged fiber or copper cables, faulty ports on the leaf or spine, or mismatched speed or media types.

Indicators include interfaces staying down, intermittent connectivity, missing LLDP information, or hardware‑related faults in APIC.

Troubleshooting involves replacing suspect cables and optics, using only Cisco‑supported transceivers, testing alternate ports, and validating interface status on both leaf and spine.

6. Time Synchronization Issues

Certificate validation in ACI is time‑sensitive. If the system time on the leaf is significantly out of sync with the APIC, certificate authentication can fail even if the configuration and connectivity are correct.

This is common in environments where NTP is misconfigured, unavailable, or the device has been powered off for an extended period.

Symptoms include authentication failures and persistent Unknown leaf state with no obvious physical or configuration issues.

Resolution involves verifying NTP configuration on APIC, ensuring the leaf can synchronize time, and reinitiating discovery after time correction.
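A quick drift check is just a comparison of the two clocks against a tolerance. The five‑minute threshold below is an illustrative value chosen for the sketch, not an official Cisco figure:

```python
# Illustrative sketch: flag APIC-to-leaf clock drift large enough to
# endanger certificate validation. Tolerance value is an assumption.
from datetime import datetime, timedelta, timezone

MAX_DRIFT = timedelta(minutes=5)  # illustrative threshold

def drift_ok(apic_time, leaf_time, tolerance=MAX_DRIFT):
    return abs(apic_time - leaf_time) <= tolerance

apic = datetime(2026, 4, 26, 12, 0, 0, tzinfo=timezone.utc)
leaf = datetime(2026, 4, 26, 12, 47, 0, tzinfo=timezone.utc)
print(drift_ok(apic, leaf))  # False: 47 minutes of drift
```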

7. Incorrect Node ID or Serial Number Issues

ACI uniquely identifies nodes using a combination of node ID, serial number, and certificates. If these identifiers do not match what APIC expects, the leaf will fail authentication.

This commonly occurs when a switch was previously part of another fabric, reused after RMA without proper cleanup, or when a node ID conflict exists.

Symptoms include the leaf appearing with unexpected identity information or being rejected during registration.

The safest resolution is to fully wipe the leaf configuration, reboot the device, and allow APIC to assign a fresh node identity.

8. Recommended Troubleshooting Sequence

When a leaf is stuck in Unknown state, follow this sequence:

First, verify physical connectivity and optics.
Second, confirm LLDP adjacency and cabling.
Third, check software compatibility.
Fourth, validate certificates and authentication.
Fifth, ensure correct time synchronization.
Finally, reinitialize the leaf if needed.

Following this order avoids unnecessary configuration changes and reduces downtime.

9. Best Practices to Prevent Unknown Leaf State

Always wipe reused hardware before deployment.
Keep APIC, spine, and leaf software versions compatible.
Use supported Cisco optics and cables.
Ensure stable NTP configuration.
Verify LLDP connectivity during installation.
Document node IDs and serial numbers carefully.

Most Unknown leaf issues are preventable with proper procedures.

10. Conclusion

An Unknown leaf state in Cisco ACI is always a symptom of a failed discovery, authentication, compatibility, or initialization process. Certificate issues, LLDP failures, firmware incompatibility, hardware problems, time synchronization issues, and incorrect node identity are the most common causes.

By understanding these root causes and following a structured troubleshooting approach, engineers can resolve Unknown leaf issues quickly and avoid prolonged deployment delays.

A clean initialization and methodical verification remain the most effective solution in Cisco ACI environments.

Cisco ACI L3Out Interview Questions Explained – Design, Implementation, and Troubleshooting

  

Section 1: Basic Cisco ACI L3Out Interview Questions

1. What is L3Out in Cisco ACI?

L3Out (Layer‑3 Outside) is the ACI construct that provides external Layer‑3 connectivity between the ACI fabric and networks outside the fabric.


2. Why do we need L3Out?

L3Out is used to:

  • Connect ACI to external routers
  • Integrate firewalls
  • Provide north‑south traffic
  • Advertise routes between ACI and external networks

3. Is L3Out mandatory in ACI?

No. L3Out is required only if the ACI fabric needs external Layer‑3 communication.


4. Where is L3Out configured?

L3Out is configured under a Tenant, associated with a VRF, and deployed on leaf switches.


5. Is L3Out Layer‑2 or Layer‑3?

L3Out is strictly a Layer‑3 construct.


Section 2: L3Out Components Interview Questions

6. What are the main components of L3Out?

  • L3Out object
  • Logical Node Profile
  • Logical Interface Profile
  • External EPG
  • Contracts

7. What is a Logical Node Profile?

It defines which leaf nodes participate in the L3Out.


8. What is a Logical Interface Profile?

It defines:

  • Interface type (routed, SVI)
  • IP addressing
  • Encapsulation (VLAN)
  • Connectivity to external device

9. Can L3Out be deployed on multiple leafs?

Yes. L3Out is commonly deployed on multiple leaf switches for redundancy.


10. What happens if an L3Out leaf fails?

Traffic fails over to other L3Out‑enabled leafs, assuming proper design (ECMP / routing).


Section 3: L3Out and Routing Protocol Interview Questions

11. Which routing protocols are supported with L3Out?

  • Static routing
  • OSPF
  • BGP

12. Which routing protocol is most commonly used?

BGP, due to scalability and flexibility.


13. Is OSPF supported in L3Out?

Yes, but less commonly used in large deployments.


14. Can static routes be used in L3Out?

Yes, for simple or small environments.


15. Can L3Out support ECMP?

Yes. ACI supports ECMP for L3Out when routing protocols allow it.


Section 4: L3Out and VRF Association Questions

16. Is L3Out associated with a VRF?

Yes. Every L3Out must be associated with exactly one VRF.


17. Can one L3Out be shared across multiple VRFs?

No. One L3Out belongs to only one VRF.


18. Can multiple L3Outs exist in the same VRF?

Yes. A VRF can have multiple L3Outs.


19. Why would you create multiple L3Outs in one VRF?

  • Multiple external devices
  • Separate routing domains
  • Different security or routing policies

20. What happens if VRF association is wrong?

External routing will fail and traffic will be dropped.


Section 5: External EPG Interview Questions

21. What is an External EPG?

An External EPG represents external networks outside the ACI fabric.


22. Why is an External EPG required?

Because ACI is deny‑by‑default, and external networks must also follow ACI security policy.


23. How is traffic allowed between internal EPGs and External EPGs?

Using contracts.


24. Is External EPG similar to internal EPG?

Conceptually yes, but it represents external endpoints.


25. Can there be multiple External EPGs under one L3Out?

Yes.
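When several External EPGs exist under one L3Out, ACI classifies incoming external traffic by longest‑prefix match against the subnets configured on each External EPG. The sketch below illustrates that classification logic with invented EPG names and subnets:

```python
# Illustrative sketch: classify an external source IP into an External
# EPG by longest-prefix match. EPG names and subnets are invented.
import ipaddress

ext_epgs = {
    "Ext-All":     ["0.0.0.0/0"],
    "Ext-Partner": ["10.20.0.0/16"],
}

def classify(ip):
    ip = ipaddress.ip_address(ip)
    best, best_len = None, -1
    for epg, subnets in ext_epgs.items():
        for s in subnets:
            net = ipaddress.ip_network(s)
            if ip in net and net.prefixlen > best_len:
                best, best_len = epg, net.prefixlen
    return best

print(classify("10.20.1.5"))  # Ext-Partner (/16 beats /0)
print(classify("8.8.8.8"))    # Ext-All
```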


Section 6: L3Out and Contracts (Very Important)

26. Is traffic allowed by default between ACI and external networks?

No. Traffic is denied by default.


27. How do you allow internal traffic to external networks?

Apply contracts between internal EPG and External EPG.


28. Can External EPG be provider or consumer?

It can be either or both, depending on traffic flow.


29. What happens if no contract is applied?

Traffic will be dropped, even though routing is correct.


30. Why do many L3Out issues occur?

Because routing works, but contracts are missing or incorrect.
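The deny‑by‑default behavior behind these questions can be modeled in a few lines: a packet between an internal EPG and an External EPG is permitted only if a contract relationship exists for that pair. All names here are illustrative:

```python
# Illustrative sketch of ACI's deny-by-default model between EPGs.
# Contract and EPG names are invented for the example.
contracts = {("Web-EPG", "Ext-Internet"): "web-to-internet"}

def traffic_allowed(src_epg, dst_epg):
    """Routing may be perfect, but without a contract the zoning rules drop it."""
    return (src_epg, dst_epg) in contracts or (dst_epg, src_epg) in contracts

print(traffic_allowed("Web-EPG", "Ext-Internet"))  # True: contract present
print(traffic_allowed("DB-EPG", "Ext-Internet"))   # False: dropped despite valid routes
```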


Section 7: L3Out Design Interview Questions

31. Routed Interface vs SVI – what is preferred?

Routed interfaces are preferred for simplicity and scale.


32. When would you use SVI‑based L3Out?

When connecting to:

  • Traditional VLAN‑based networks
  • Legacy firewalls

33. Can L3Out connect to firewalls?

Yes, very commonly.


34. Can one firewall connect to multiple L3Outs?

Yes, depending on design.


35. Should L3Out be deployed on border leafs?

Yes. Deploying L3Out on dedicated border leafs is considered best practice.


Section 8: Advanced L3Out Interview Questions

36. How is route leaking handled in ACI?

Using Shared Services VRF and contracts.


37. Can L3Out be used with Shared Services VRF?

Yes, very commonly.


38. Can L3Out be stretched across sites?

  • Multi‑Pod: Yes
  • Multi‑Site: Via individual site L3Outs

39. How does L3Out behave in Multi‑Pod?

L3Out is shared across pods.


40. How does L3Out behave in Multi‑Site?

Each site has its own L3Out, orchestrated by NDO.


Section 9: L3Out and External Connectivity Troubleshooting Questions

41. Routing is correct but traffic fails – why?

Most likely contract or filter issue.


42. Endpoint can ping gateway but not internet – why?

External EPG contract missing or incorrect.


43. How to verify routes learned from L3Out?

  • APIC routes view
  • Leaf show commands
  • moquery

44. How do you verify contract programming?

Use:

show zoning-rule

45. How do you verify L3Out operational status?

  • APIC Health score
  • Faults
  • Leaf CLI

Section 10: MoQuery Commands for L3Out Verification

46. Verify L3Out configuration

moquery -c l3extOut

47. Verify External EPGs

moquery -c l3extInstP

48. Verify L3Out subnets

moquery -c l3extSubnet

49. Verify VRF association

moquery -c fvCtx

50. Check faults related to L3Out

moquery -c faultInst
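Each moquery class query above has a REST equivalent on the APIC of the form /api/class/&lt;class&gt;.json, optionally narrowed with a query-target-filter. The helper below only builds such URLs; the hostname is a placeholder, and actually issuing the request would require authentication against a reachable APIC:

```python
# Sketch: map a moquery-style class name onto an APIC REST class-query URL.
# Hostname and filter strings in the examples are placeholders.
def class_query_url(apic_host, mo_class, query_filter=None):
    url = f"https://{apic_host}/api/class/{mo_class}.json"
    if query_filter:
        url += f"?query-target-filter={query_filter}"
    return url

print(class_query_url("apic1.example.com", "l3extOut"))
print(class_query_url("apic1.example.com", "faultInst",
                      'eq(faultInst.severity,"critical")'))
```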

Section 11: Common L3Out Mistakes (Interview Favorite)

51. Forgetting contracts

Most common mistake.

52. Wrong VRF association

Causes route blackholing.

53. Deploying L3Out on wrong leaf

Traffic won’t exit properly.

54. Using SVI instead of routed interface unnecessarily

Adds complexity.

55. Not planning for redundancy

Leads to single‑point failures.


Section 12: Scenario‑Based L3Out Interview Questions

56. When should you create multiple External EPGs?

When different external networks need different security policies.


57. Can multiple L3Outs advertise the same prefix?

Yes, but routing behavior must be carefully designed.


58. Can L3Out connect to non‑Cisco devices?

Yes. ACI is vendor‑agnostic at Layer‑3.


59. Can L3Out be used for Internet access?

Yes, with proper NAT/firewall integration.


60. What is the biggest design challenge in L3Out?

Balancing security, simplicity, and scalability.


Conclusion

Cisco ACI L3Out is the gateway between the ACI fabric and the external world. Interviews around L3Out focus on design understanding, security enforcement, VRF association, and troubleshooting approach, not just configuration steps.

If you understand:

  • How routing works
  • Why contracts are mandatory
  • Where L3Out should be placed
  • How to verify and troubleshoot

you will handle most Cisco ACI L3Out interview questions confidently.


✅ Interview Tip

When answering L3Out questions, always explain:

  1. Routing
  2. Security (contracts)
  3. Placement (leafs)
  4. Verification

Cisco ACI Interview Questions and Answers (ESG, Multi‑Site, NDO, MoQuery Explained)

Cisco Application Centric Infrastructure (ACI) is a cornerstone technology in modern enterprise data centers. As a result, Cisco ACI interview questions appear frequently in interviews for Network Engineers, Data Center Specialists, ACI Architects, and CCIE Data Center candidates.

This comprehensive guide brings together basic, intermediate, and advanced ACI interview questions, including Endpoint Security Groups (ESG), Multi‑Pod, Multi‑Site, and Nexus Dashboard / NDO, with concise, practical answers. It also includes comparison tables frequently used by interviewers to test real‑world understanding.


Section 1: Cisco ACI Fundamentals – Core Interview Questions

1. What is Cisco ACI?
Cisco ACI is a policy‑based data center networking solution that centralizes management and enforces application‑centric policies across a fabric.

2. What problem does Cisco ACI solve?
It reduces operational complexity, configuration drift, and scalability issues found in traditional networking.

3. Which switches are used in ACI?
Cisco Nexus 9000 series switches running in ACI mode.

4. What is APIC?
APIC (Application Policy Infrastructure Controller) is the centralized control and management platform for the ACI fabric.

5. Is APIC part of the data path?
No. APIC is out of the data path; traffic continues even if APIC is unavailable.


Section 2: ACI Architecture Interview Questions

6. What topology does ACI use?
Leaf–spine architecture.

7. What connects to leaf switches?
Endpoints such as servers, firewalls, load balancers, and L3Outs.

8. What is the role of spine switches?
High‑speed packet forwarding between leaf switches.

9. Can endpoints connect to spine switches?
No.

10. What happens if a spine fails?
Traffic reroutes through remaining spines without impact.


Section 3: ACI Logical Model Questions

11. What is a Tenant?
An administrative boundary representing an organization or business unit.

12. What is a VRF in ACI?
A Layer‑3 routing domain providing IP isolation.

13. What is a Bridge Domain (BD)?
A Layer‑2 forwarding domain that defines flooding and gateway behavior.

14. What is an Endpoint Group (EPG)?
A logical group of endpoints that share the same policy.

15. Is an EPG the same as a VLAN?
No. EPGs are policy objects, not VLANs.


Section 4: Traffic Flow and Contracts Interview Questions

16. What is the default traffic behavior in ACI?
Traffic between EPGs is denied by default.

17. How is traffic allowed?
Using contracts.

18. What is a contract?
A policy object that defines who talks, what traffic is allowed, and direction.

19. What are subjects?
Logical groupings of filters within a contract.

20. What is a filter?
Defines protocol, port, and direction.


Section 5: Advanced Policy – vzAny and Taboo

21. What is vzAny?
A special object that represents all EPGs within a VRF.

22. Why use vzAny?
To simplify policy and reduce TCAM usage.

23. What is a Taboo Contract?
A deny contract used to explicitly block traffic.

24. Does Taboo override permit contracts?
Yes. Deny always takes precedence.

25. When should Taboo be used?
Only for specific, unavoidable deny cases.


Section 6: Endpoint Security Group (ESG) Interview Questions

26. What is an ESG?
Endpoint Security Group is a policy‑based security construct independent of topology.

27. How is ESG different from EPG?
EPG is topology‑based; ESG is security‑policy‑based.

28. Can ESG span multiple EPGs?
Yes.

29. Does ESG use contracts?
Yes, contracts are applied directly between ESGs.

30. Is ESG mandatory?
No, it is optional and mainly used for zero‑trust designs.


🔍 Comparison Table: EPG vs ESG

Feature       EPG              ESG
Based on      Topology         Security policy
Dependency    BD / VLAN        Independent
Scope         Limited          Cross‑EPG
Zero‑Trust    Basic            Strong
Use Case      General policy   Advanced security

Section 7: ACI Multi‑Pod Interview Questions

31. What is ACI Multi‑Pod?
A single ACI fabric stretched across multiple locations (pods).

32. Is Multi‑Pod one fabric?
Yes.

33. How many APICs manage Multi‑Pod?
One APIC cluster.

34. Are L2 and L3 stretched?
Yes.

35. What is IPN?
Inter‑Pod Network connecting pods.

36. What is the main risk of Multi‑Pod?
Increased fault domain.


Section 8: ACI Multi‑Site Interview Questions

37. What is ACI Multi‑Site?
Multiple independent ACI fabrics managed under common policy.

38. Are fabrics independent?
Yes.

39. Is Layer‑2 stretched in Multi‑Site?
No, Multi‑Site is primarily Layer‑3.

40. What is Multi‑Site mainly used for?
Disaster recovery and geo‑redundancy.


🔍 Comparison Table: Multi‑Pod vs Multi‑Site

Feature               Multi‑Pod    Multi‑Site
Fabric                Single       Multiple
APIC                  Shared       Separate
L2 Stretch            Yes          No
Latency Requirement   Strict       Relaxed
Fault Isolation       Low          High
Best Use Case         Metro DC     Geo‑redundancy

Section 9: Nexus Dashboard & NDO Interview Questions

41. What is Nexus Dashboard (ND)?
A unified platform hosting ACI‑related services like NDO, NDI, and Insights.

42. What is Nexus Dashboard Orchestrator (NDO)?
A tool used to orchestrate policies across multiple ACI sites.

43. What was NDO previously called?
MSO (Multi‑Site Orchestrator).

44. Does NDO replace APIC?
No.

45. What is a schema in NDO?
A logical template defining tenants and policies.


Section 10: Nexus Dashboard Insights (NDI)

46. What is NDI?
Nexus Dashboard Insights provides health analytics, anomaly detection, and assurance.

47. Does NDI configure the fabric?
No. It is analytics only.

48. Is NDI mandatory?
No, but highly recommended for large environments.


🔍 Comparison Table: ND vs NDO vs NDI

Component         Purpose
Nexus Dashboard   Platform
NDO               Policy orchestration
NDI               Analytics & assurance
APIC              Fabric control

Section 11: Troubleshooting Interview Questions

49. What is a health score?
A numeric representation of object health.

50. What is a fault?
An abnormal condition detected in the fabric.

51. What is moquery?
A read‑only CLI tool to query ACI managed objects.

52. Is moquery safe in production?
Yes.

53. Why prefer moquery over GUI?
Speed and accuracy.


Section 12: Automation and Operations

54. Does ACI support automation?
Yes, via native REST APIs.

55. Can Ansible be used with ACI?
Yes.

56. What is Day‑0?
Fabric deployment.

57. What is Day‑1?
Policy configuration.

58. What is Day‑2?
Operations and troubleshooting.


Section 13: ACI Disadvantages (Interview Favorite)

59. What is the biggest challenge in ACI?
Learning curve.

60. Is ACI expensive?
Yes, compared to traditional designs.

61. Is ACI vendor locked?
Yes.

62. Can ACI be over‑engineered?
Yes, with poor design.


Conclusion

Cisco ACI interviews test more than definitions—they assess design thinking, security understanding, architecture choice, and operational awareness. A clear grasp of EPG vs ESG, Multi‑Pod vs Multi‑Site, and NDO vs NDI is critical for senior‑level roles.

If you understand why ACI behaves the way it does, not just how to configure it, you will stand out in interviews.

Sunday, 15 March 2026

Cisco ACI MoQuery – Advanced Commands for Day‑to‑Day Operations

Cisco ACI provides a powerful graphical interface through APIC, but experienced ACI engineers rarely rely only on the GUI during daily operations. In real production environments, engineers prefer moquery because it offers fast, accurate, and read‑only access to the Cisco ACI Management Information Tree (MIT).

Moquery is safe to use in production, does not impact traffic, and does not program hardware. It exposes the real‑time state of the fabric and eliminates guesswork during troubleshooting. For day‑to‑day ACI operations, moquery is often the first tool engineers reach for.


What Is MoQuery in Cisco ACI?

Moquery is a command‑line utility available directly on the APIC that allows engineers to query managed objects (MOs) stored in the ACI database. Unlike the APIC GUI, moquery does not hide relationships or simplify outputs. It shows raw and authoritative information exactly as it exists in the fabric.

Moquery is commonly used for:

  • Endpoint troubleshooting
  • Contract and policy validation
  • VRF and bridge domain verification
  • Fault analysis
  • Fabric and node health checks

Endpoint Troubleshooting Using MoQuery

Endpoint‑related issues are the most common problems in Cisco ACI environments. When endpoints are not reachable or behave unexpectedly, moquery provides immediate visibility.

To display all learned endpoints:

moquery -c fvCEp

This command shows:

  • MAC address
  • IP address
  • EPG association
  • Bridge Domain
  • Leaf and interface where the endpoint is learned

To find a specific IP address:

moquery -c fvCEp | grep 10.10.10.25

To find a specific MAC address:

moquery -c fvCEp | grep 00:50:56

These commands are used daily to identify incorrect endpoint learning, endpoint mobility events, duplicate IPs, and static path misconfigurations.


Validating Application Profiles and EPGs

To list all Endpoint Groups (EPGs) in a tenant:

moquery -c fvAEPg

This command is helpful when:

  • EPGs do not appear in the GUI
  • Verifying naming conventions
  • Confirming EPG existence during migrations

To identify which application profile an EPG belongs to:

moquery -c fvAEPg | grep dn

This is especially useful in environments with many application profiles and similarly named EPGs.
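The dn values moquery prints follow a predictable format; for an EPG it is uni/tn-&lt;tenant&gt;/ap-&lt;profile&gt;/epg-&lt;name&gt;, so the components can be parsed out instead of eyeballed. A small sketch:

```python
# Sketch: extract tenant, application profile, and EPG name from an
# fvAEPg dn string such as "uni/tn-Prod/ap-Web-AP/epg-Web".
import re

DN_RE = re.compile(r"uni/tn-(?P<tenant>[^/]+)/ap-(?P<ap>[^/]+)/epg-(?P<epg>[^/]+)")

def parse_epg_dn(dn):
    """Return a dict of dn components, or None if the dn is not an EPG dn."""
    m = DN_RE.match(dn)
    return m.groupdict() if m else None

print(parse_epg_dn("uni/tn-Prod/ap-Web-AP/epg-Web"))
# {'tenant': 'Prod', 'ap': 'Web-AP', 'epg': 'Web'}
```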


Contract Troubleshooting Using MoQuery

Contracts are one of the most frequent causes of traffic drops in Cisco ACI. Moquery allows engineers to validate contract relationships without relying on GUI assumptions.

To list all contracts:

moquery -c vzBrCP

To check which EPGs are providers of a contract:

moquery -c fvRsProv

To check which EPGs are consumers of a contract:

moquery -c fvRsCons

These commands confirm whether the correct EPGs are actually providing and consuming the intended contracts.


Validating Contract Subjects and Filters

Many contract issues occur not because the contract is missing, but because the filter is wrong.

To inspect contract subjects:

moquery -c vzSubj

To list filters:

moquery -c vzFilter

To validate filter entries (ports, protocol, and direction):

moquery -c vzEntry

These commands remove ambiguity and clearly show whether the contract allows the required traffic.


Taboo Contract Verification

Taboo Contracts explicitly deny traffic and override permit contracts. They should be used sparingly, as misconfiguration can cause outages.

To list all Taboo Contracts:

moquery -c vzTaboo

To inspect Taboo contract subjects:

moquery -c vzTSubj

If traffic is unexpectedly denied, these commands should always be checked early in troubleshooting.


Validating vzAny and VRF‑Level Policies

vzAny represents all EPGs within a single VRF and is commonly used for shared services or broad policy application.

To list all VRFs:

moquery -c fvCtx

To confirm vzAny configuration:

moquery -c vzAny

This is critical in environments using:

  • Shared‑services architectures
  • Permit‑all designs
  • Contract Preferred Groups

Many production incidents occur because engineers are unaware of an existing vzAny contract.


Bridge Domain Troubleshooting

Bridge Domain issues can silently break connectivity.

To list all bridge domains:

moquery -c fvBD

To display bridge domain subnets:

moquery -c fvSubnet

To validate Bridge Domain to VRF mapping:

moquery -c fvRsCtx

These commands help identify:

  • Missing gateways
  • Incorrect VRF bindings
  • Wrong subnet scope

L3Out and External Connectivity Validation

To list all Layer‑3 Outs:

moquery -c l3extOut

To view external EPGs:

moquery -c l3extInstP

To check external subnets:

moquery -c l3extSubnet

These are essential when troubleshooting:

  • North‑south traffic issues
  • Firewall integration
  • Route advertisement problems

Fault and Fabric Health Troubleshooting

To display all active faults:

moquery -c faultInst

To see only critical faults:

moquery -c faultInst | grep critical

To find operational faults:

moquery -c faultInst | grep oper

These commands are faster and often more actionable than navigating the APIC fault dashboard.


Fabric and Node Health Validation

To list all fabric nodes:

moquery -c fabricNode

To check fabric health scores:

moquery -c fabricHealthTotal

These commands are commonly used before and after production changes to ensure stability.


Interface and Path Troubleshooting

To list physical interfaces:

moquery -c ethpmPhysIf

To check interface operational state:

moquery -c ethpmPhysIf | grep operSt

To validate static path bindings:

moquery -c fvRsPathAtt

These commands explain many partial connectivity issues, link‑state problems, and unexpected traffic drops.


Best Practices for Daily MoQuery Usage

  • Use moquery during incidents, not after
  • Save outputs for RCA and audits
  • Combine moquery with grep for faster analysis
  • Learn common managed object classes such as fvCEp, fvAEPg, fvBD, fvCtx, and faultInst
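As a repeatable alternative to ad‑hoc grep, saved moquery output can be filtered in a few lines of Python, which makes the same check reusable for RCA and audits. The sample output below is invented for illustration:

```python
# Sketch: grep-style filtering of saved `moquery -c faultInst` output.
# The sample text is invented; real output has many more attributes.
sample_output = """\
severity     : critical
descr        : Port is down
severity     : warning
descr        : Config drift detected
"""

def grep(text, needle):
    """Return every line of `text` containing `needle`."""
    return [line for line in text.splitlines() if needle in line]

print(grep(sample_output, "critical"))
```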

Why Every ACI Engineer Should Master MoQuery

Moquery significantly reduces MTTR, increases confidence during incidents, and exposes the actual state of the fabric. Engineers who master moquery troubleshoot faster, avoid mistakes, and operate more effectively in large ACI environments.


Conclusion

Moquery is one of the most powerful yet underutilized tools in Cisco ACI. While the APIC GUI is excellent for visualization, moquery provides the facts. For serious ACI operations, moquery should be part of every engineer’s daily workflow.