Monday, May 25, 2009

Nexus 7000 Notes - 3 (commands - this list will grow)

(It seems the command reference for NX-OS 4.1 on CCO is not complete. All the commands marked with * below are missing from it - not that they look like undocumented commands.)

show hardware fabric-utilization timestamp *
- gives the peak fabric utilization with a timestamp. On the 6500, the timestamp option was not available.

show hardware capacity *
- gives a lot of useful information, including the output of the command above

show port-channel traffic
- gives the utilization of each link in a port-channel

show license host-id
- do not forget this when activating a license!!
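
For completeness, a minimal sketch of the license activation workflow (the server and file names are placeholders of mine, not from the original notes):

  show license host-id
  ! note the host-id string - it is needed to generate the license file
  copy tftp://server/n7k_license.lic bootflash:
  install license bootflash:n7k_license.lic
  show license usage
  ! confirm the feature is now licensed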

Nexus 7000 Notes - 2 (Linecards)

Nexus linecards come with an integrated M-series forwarding engine. The first generation is referred to as the M1 forwarding engine. M1 can deliver 60Mpps of L2 and L3 IPv4 unicast forwarding (30Mpps for IPv6) across all ports on a single linecard. A 10-slot chassis with 8 M1 forwarding engines therefore delivers up to 8 x 60 = 480Mpps of IPv4 unicast forwarding (Sup720 claims up to 400Mpps).

32-port 10G card (N7K-M132XP-12)
The part number can be dissected as follows -
N7K - Nexus 7000
M1 - forwarding engine
32 - number of ports
X - port speed (10G, in Roman numerals?)
P - ??
1 - fabric version
2 - number of fabric modules required for full bandwidth (w/o redundancy)

Each fabric connection is 40Gbps (dCEF720 has 20Gbps), with a maximum of 2 fabric connections per slot. N7K-M132XP-12 uses both fabric connections (i.e. 80Gbps of bandwidth to the fabric), which makes a fully populated linecard 4:1 oversubscribed (32 x 10G = 320Gbps of port capacity over 80Gbps of fabric). The card can be run in non-oversubscribed mode by dedicating the fabric access to 1 port in each port group of 4. The first port in each port group becomes the dedicated port - ports 1, 2, 9, 10, 17, 18, 25, 26.
The "rate-mode dedicated" interface command puts the first port of a port group into dedicated mode - the rest of the ports in the port group are disabled.


48-port 1G card (N7K-M148GS-11 or N7K-M148GT-11)
N7K - Nexus 7000
M1 - forwarding engine
48 - number of ports
G - port speed (Gigabit?)
S/T - S for SFP and T for copper?
1 - fabric version
1 - number of fabric modules required for full bandwidth (w/o redundancy)

The 48-port linecard has 40Gbps of fabric access and is thus 1.2:1 oversubscribed (48 x 1G = 48Gbps of port capacity over 40Gbps of fabric).

Nexus 7000 Notes - 1

I just got my hands on a Nexus 7010 last week.
The first thing I noticed is that the box is very deep. Its back protrudes into the aisle between data center rows.
It is 33.1" deep (as opposed to the 6509's 18.2").

Nexus decouples the fabric from the supervisor, and the fabric is scalable (it can be upgraded up to 5 fabric modules). Fabric cards are inserted from the back.
It has front-bottom to rear-top airflow.

There are -
- 2 fan trays for the supervisor and linecards (6 fans in each tray)
- 2 fans for the fabric.
All the fans are hot-swappable.

Cisco claims the first generation fabric card (N7K-C7010-FAB-1) can forward 46Gbps per slot - thus a fully populated system with 5 fabric cards can forward up to 46 x 5 = 230Gbps per slot.
The currently shipping supervisor 1 has 115Gbps/slot of bandwidth, and an I/O linecard slot has 230Gbps.

According to Cisco maths, the 7010 has a maximum bandwidth of -
230Gbps / slot x 8 slots = 1840Gbps (I/O linecard)
115Gbps / slot x 2 slots = 230Gbps (sup)
(1840 + 230) x 2 (for full duplex operations) = 4.1 Tbps system.

(The 6500 with Sup720 is, as its name implies, a 720Gbps system.)

Sunday, May 10, 2009

Buffers, Queues and Thresholds

When QoS is enabled on a 65xx switch, queues are automatically allocated based on the architecture of the line card.

For example -
1p3q8t for 6748 and 6724 (10/100/1000 linecards)
1p7q8t for 6704 (4-port 10G card)
1p7q4t for 6708 (8-port 10G card)
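
To decode the shorthand (my note, not part of the original list): XpYqZt means X strict-priority queues, Y standard queues, and Z drop thresholds per standard queue. So 1p3q8t is 1 priority queue plus 3 standard queues with 8 thresholds each.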

Queue sizes, numbers and architecture differ from line card to line card.
Here is the detailed list as of 2009.

The "show queueing interface" command will give away a lot of information about the port and the linecard.

Queue configuration is applied to a block of ports per ASIC (Rohini on the 6724 and 6748). On the 6724, wrr algorithm and qos-map configuration applied to one port will affect all 12 ports on the same ASIC; on the 6748, all 8 ports on the same ASIC.
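
A minimal sketch in 6500 IOS to make the point (the interface and weights are made-up examples):

  interface GigabitEthernet1/1
    wrr-queue bandwidth 20 30 50
    ! WRR weights for the 3 standard queues of a 1p3q8t port
    wrr-queue cos-map 2 1 3 4
    ! map CoS 3 and 4 to queue 2, threshold 1

  show queueing interface GigabitEthernet1/2
  ! Gi1/2 shows the same weights even though only Gi1/1 was configured -
  ! both ports sit behind the same ASIC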

"default interface" on one of the ports will reset wrr allocation of queues but not the qos-map, if it were altered from default *