Thursday 30 January 2020

3850 stack requirement

3850 StackWise-480 supports mixed stacking, which means any model of Catalyst 3850 can be used in the same stack.

However, the license level (LAN Base, IP Base, or IP Services) and the IOS XE version must be the same on all switches in the stack. For example, Catalyst 3850 switches with the LAN Base feature set can only be stacked with other 3850 LAN Base switches.

A maximum of 8 switches can be part of a single stack.
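
A quick way to verify these points on a live stack is from the CLI: `show switch` lists the stack members with their roles and priorities, and `show version` shows the IOS XE version and license level so you can confirm they match on all members (output formats vary by release):

```
show switch
show version
```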

Cisco IPN configuration and hardware requirements


Below are the configuration requirements for the IPN network.

1. Routed sub-interface with VLAN 4: the IPN device interface connected to the Spine must be a sub-interface tagged with VLAN 4. We cannot use a routed port or an SVI for the interconnection, and no other VLAN tag can be used.

2. The IPN device must support a 9150-byte MTU; this is a mandatory requirement. Make sure every device in the path supports jumbo frames, otherwise MP-BGP will flap between the Spines of the different PODs.

3. The IPN device must support PIM BiDir, which is used to carry BUM (broadcast, unknown unicast, and multicast) traffic.

4. OSPF protocol: only OSPF can be configured between the IPN and the ACI fabric (Spines). No other protocol can be used.

5. DHCP relay must be configured if you want to perform zero-touch deployment of POD 2.

6. QoS policy: this is not a mandatory requirement, but it is good practice to prioritize the Multi-Pod control packets.
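
The requirements above can be sketched as an NX-OS style IPN configuration. This is only a sketch: the interface number, IP addresses, OSPF process name, and RP address are illustrative assumptions, and exact commands vary by platform and release.

```
! Enable the required features (requirements 3, 4, and 5)
feature ospf
feature pim
feature dhcp

! (1)(2) Routed sub-interface towards the Spine, tagged VLAN 4, jumbo MTU
interface Ethernet1/1.4
  mtu 9150
  encapsulation dot1q 4
  ip address 172.16.1.1/30
  ip ospf network point-to-point
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1      ! (5) relay towards an APIC for POD-2 bring-up
  no shutdown

! (3) RP for PIM BiDir to carry BUM traffic between PODs
ip pim rp-address 192.168.100.1 group-list 225.0.0.0/15 bidir
```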


Hardware requirements:-

The IPN device can be any box that supports the aforementioned features. Generally, the hardware below is used for the IPN.

1. Nexus 7000
2. ASR 1K
3. N3K-C3548P-10GX

The N3K-C3172PQ-10GE cannot be used as an IPN device.

Saturday 25 January 2020

SSH accessibility check on multiple Cisco routers, saving the output to a file

import paramiko
import time
import socket
import logging

# SSH client; automatically accept unknown host keys
remote_conn_pre = paramiko.SSHClient()
remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Output file for the reachability results
f = open("output.txt", "w+")

# One router IP address per line in ip.txt
ips = [i.strip() for i in open("ip.txt")]

# Send paramiko debug logging to a file named "logs"
logging.getLogger('paramiko.transport').setLevel(logging.DEBUG)
paramiko.util.log_to_file("logs")

for ip in ips:
    try:
        remote_conn_pre.connect(ip, username='test', password='test', timeout=4,
                                look_for_keys=False, allow_agent=False)
        remote_conn = remote_conn_pre.invoke_shell()
        print(ip + ' === Device Reachable')
        f.write(ip + ' === Device Reachable\n')
        remote_conn_pre.close()
        time.sleep(2)
    except paramiko.AuthenticationException:
        print(ip + ' === Bad credentials')
        f.write(ip + ' === Bad credentials\n')
        time.sleep(2)
    except paramiko.SSHException:
        print(ip + ' === Issues with ssh service')
        f.write(ip + ' === Issues with ssh service\n')
        time.sleep(2)
    except socket.error:
        print(ip + ' === Device unreachable')
        f.write(ip + ' === Device unreachable\n')
        time.sleep(2)
f.close()

SSH accessibility check on multiple routers


Create a text file (ip.txt) containing the IP addresses of the routers on which we need to check SSH accessibility.

import paramiko
import time
import socket

# SSH client; automatically accept unknown host keys
remote_conn_pre = paramiko.SSHClient()
remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# One router IP address per line in ip.txt
ips = [i.strip() for i in open("ip.txt")]

for ip in ips:
    try:
        remote_conn_pre.connect(ip, username='test', password='test', timeout=4,
                                look_for_keys=False, allow_agent=False)
        remote_conn = remote_conn_pre.invoke_shell()
        print(ip + ' === Device Reachable')
        remote_conn.send("\n")
        time.sleep(2)
        remote_conn_pre.close()
    except paramiko.AuthenticationException:
        print(ip + ' === Bad credentials')
    except paramiko.SSHException:
        print(ip + ' === Issues with ssh service')
    except socket.error:
        print(ip + ' === Device unreachable')


Friday 24 January 2020

Jumbo frame configuration on Nexus


I have tried to explain the MTU configuration on the Nexus platform. MTU configuration varies based on the port type and hardware platform.

1. Layer 3 MTU Configurations

The MTU configuration on an L3 port is quite straightforward, and the configuration is the same on all platforms. We just need to give the MTU command in interface configuration mode.

Configure MTU on a Switched Virtual Interface (SVI)
interface vlan 1
mtu 9216

Configure MTU on a Layer 3 Port
interface ethernet 1/1
no switchport
mtu 9216

2. Layer 2 MTU Configurations

The MTU configuration on an L2 port varies based on the hardware. On some platforms we have to modify the MTU under a network-qos policy, whereas on others the MTU command can be given in interface configuration mode.

2.1 Using QOS policy
On the hardware below, jumbo frames are configured using a QoS policy.
Nexus 3000: Includes Nexus 3048, 3064, 3132Q, 3132Q-X, 3132Q-XL, 3172, and 3500 Series switches
Nexus 5000: All Nexus 5000 and 5500 Series switches
Nexus 6000: All Nexus 6000 Series switches

Configuration:-
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

2.2 Per-Port MTU Configuration
On the hardware below, jumbo frames can be configured directly under the interface.
Nexus 3000: Includes Nexus 3132Q-V, 3164, 31108, 31128PQ, 3200 Series, and 36180YC-R switches
Nexus 7000: All Nexus 7000 and 7700 Series switches
Nexus 9000: All Nexus 9200 Series switches (includes 92xxx), 9300 Series switches (includes 93xxx), and 9500 Series switches

Nexus(config)#interface ethernet 1/1
Nexus(config-if)#mtu 9216




Wednesday 15 January 2020

How to create BD in Cisco ACI

In traditional networking, we used to have the VLAN as the Layer 2 forwarding domain, which also defined the broadcast domain. In ACI, a VLAN is just an identifier and does not define the L2 domain; the bridge domain (BD) defines the broadcast and L2 domain in ACI.

Like a VLAN SVI in a traditional network, subnets can be defined under the BD. We can also define more than one subnet under a BD.


1. Go to TENANT => HK TENANT => right-click on BRIDGE DOMAIN => select “CREATE BRIDGE DOMAIN”.


2. Name the BD and map the VRF to it. Press NEXT to proceed.


3. Configure the L3 parameters and click + to configure the subnet information under the BD.


4. Enter the BD IP address with its subnet mask. Check “MAKE THIS IP ADDRESS PRIMARY”.

Private to VRF—Subnet remains local to the VRF and Tenant.
Advertised Externally—Subnet can be advertised out of ACI Fabric over L3 out.
Shared between VRFs—Subnet can be exported to other VRFs in case inter-VRF routing is required.

"Treat as Virtual IP Address" option is only checked in case of multiple Fabrics.


5. The subnet details will be displayed in the L3 configuration section. Click NEXT to proceed.


6. Click Finish to create the BD.
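
The GUI steps above can also be scripted against the APIC REST API. A minimal sketch using the requests library: the APIC URL, tenant, VRF, and subnet values are illustrative assumptions, and the session is assumed to already hold an authenticated APIC login cookie. The fvSubnet `preferred` attribute corresponds to the “Make this IP address primary” checkbox.

```python
import requests

APIC = "https://apic"  # hypothetical APIC address


def bd_payload(name, vrf, subnet):
    """Build the fvBD managed-object body: the BD itself, its VRF
    association (fvRsCtx), and one subnet (fvSubnet)."""
    return {
        "fvBD": {
            "attributes": {"name": name},
            "children": [
                {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
                {"fvSubnet": {"attributes": {"ip": subnet, "preferred": "yes"}}},
            ],
        }
    }


def create_bd(session, tenant, name, vrf, subnet):
    # POST the BD under the tenant managed object
    url = f"{APIC}/api/node/mo/uni/tn-{tenant}.json"
    return session.post(url, json=bd_payload(name, vrf, subnet), verify=False)
```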


Tuesday 14 January 2020

How to create VRF in Cisco ACI


In ACI, the VRF concept is still the same as in a traditional network; the only difference is that a VRF is not fabric-wide unique but has significance within its tenant.

1. To create a VRF, go to TENANT -> HK -> NETWORKING -> right-click on VRF -> press CREATE VRF.



2. Enter the VRF name and keep the other settings at their defaults. In this step you also get an option to create the BD, which is checked by default. Press Next to proceed.

3. If you don’t want to create a BD in this step, uncheck the option and press Finish to create the VRF.



4. VRF is created once you click the Finish button.
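
As in the GUI, the VRF maps to a single fvCtx managed object in the APIC REST API. A minimal payload sketch; the tenant name HK and the URL in the comment are illustrative assumptions:

```python
def vrf_payload(name):
    """Build the fvCtx managed-object body for a new VRF."""
    return {"fvCtx": {"attributes": {"name": name}}}


# POSTing this body (with an authenticated session) to
#   https://<apic>/api/node/mo/uni/tn-HK.json
# creates the VRF inside tenant HK.
```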



How to Create Tenant on Cisco ACI

I assume that your basic fabric is up, with all the APICs, leafs, and Spines joined to the fabric.

The first step after the fabric is up is to create the tenant. Don’t confuse the tenant concept with the Nexus VDC: a tenant cannot provide physical separation within the fabric; it is just a container of policies. A few default tenants are present in the fabric: the infra, management, and common tenants.

The Infra tenant controls infrastructure-related policies such as the VXLAN overlay. It is present in the system by default but can be modified by the administrator.

The Management tenant controls fabric-wide management access (in-band or out-of-band) related policies. It is also present by default and can be modified by the administrator.

The Common tenant contains resources that are shared among the user tenants, such as the connection to the Internet, firewalls, load balancers, etc. Again, it is present by default and its policies can be modified by the administrator.

A user tenant is one created by the fabric administrator for the policies used by workloads such as application and web EPGs. The method below shows how to create a user tenant.

1. Log in to the APIC. At the top left of the APIC home page there is an option to add a new tenant to the fabric. Click “ADD TENANT”.

Adding a tenant to the fabric is a non-impacting change and can be done at any time.


2. The screen below will appear when you click ADD TENANT. Enter the tenant name and click Submit.


3. The HK tenant will be created once you click the Submit button.
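
The ADD TENANT workflow can likewise be driven over the REST API: authenticate against aaaLogin, then POST an fvTenant object under uni. A sketch using the requests library; the APIC address and credentials are illustrative assumptions:

```python
import requests

APIC = "https://apic"  # hypothetical APIC address


def login(session, user, pwd):
    # aaaLogin returns a token cookie that the session reuses on later calls
    body = {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}
    return session.post(f"{APIC}/api/aaaLogin.json", json=body, verify=False)


def tenant_payload(name):
    """Build the fvTenant managed-object body."""
    return {"fvTenant": {"attributes": {"name": name}}}


def create_tenant(session, name):
    # Tenants live directly under the policy universe (uni)
    return session.post(f"{APIC}/api/node/mo/uni.json",
                        json=tenant_payload(name), verify=False)
```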