Brocade TurboIron 24x two port trunk - switch 2 shows one port blocked, switch one all ports forwarding

  • Question
  • Updated 2 months ago
  • Answered

I understand on a Brocade TurboIron 24x there are two ways to create a trunk:

Method 1: trunk ethe 1 to 2

Method 2: int e 1 to 2
link-aggregate configure key <key id>
link-aggregate active
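For reference, these are the commands I use to check each type of trunk afterwards (as I understand them on this platform; exact syntax may vary by software release):

```
show trunk              ! static and LACP trunk state
show link-aggregate     ! LACP negotiation detail for Method 2
show interfaces brief   ! per-port link state
```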


I have both methods configured across two TurboIron 24x switches: Method 1 for two 10GbE Twinax cables between the switches, and Method 2 for two 10GbE fiber links between a switch and a Tegile storage array serving NFS shares.


The issue I have is that on switch 2, the Method 1 (static) trunk between the switches shows the second port, port 19, blocked, and sure enough the LED on that port is not illuminated. However, on switch 1 the same port 19 shows in a "Forward" state, and the LED on that physical switch port is lit solid. How can the link be forwarding on one switch and blocked on the other if they are configured the same? Thinking it was a bad Twinax cable, I replaced the port 19 connection between the switches with 850nm SFP+ modules and a short fiber optic cable. I got the same result: switch 1 showed the port forwarding, but switch 2 showed the port blocked.


Here's an output of show trunk on switch 1

Configured trunks:

Trunk ID: 18
Hw Trunk ID: 1
Ports_Configured: 2
Primary Port Monitored: Jointly

Ports   PortName Port_Status Monitor Rx_Mirr Tx_Mirr Monitor_Dir
18      10gbe1*  enable      off     N/A     N/A     N/A
19      none     enable      off     N/A     N/A     N/A

Trunk ID: 21
Hw Trunk ID: 2
Ports_Configured: 2
Primary Port Monitored: Jointly

Ports   PortName Port_Status Monitor Rx_Mirr Tx_Mirr Monitor_Dir
21      Tegile*  enable      off     N/A     N/A     N/A
22      Tegile*  enable      off     N/A     N/A     N/A

Operational trunks:

Trunk ID: 18
Hw Trunk ID: 1
Duplex: Full
Speed: 10G
Tag: No
Priority: level0
Active Ports: 2

Ports   Link_Status port_state
18      active      Forward
19      active      Forward

Trunk ID: 21
Hw Trunk ID: 2
Duplex: Full
Speed: 10G
Tag: No
Priority: level0
Active Ports: 2

Ports   Link_Status port_state LACP_Status
21      active      Forward    ready
22      active      Forward    ready

Here's an output of show trunk on switch 2

Configured trunks:

Trunk ID: 18
Hw Trunk ID: 1
Ports_Configured: 2
Primary Port Monitored: Jointly

Ports   PortName Port_Status Monitor Rx_Mirr Tx_Mirr Monitor_Dir
18      10gbe1*  enable      off     N/A     N/A     N/A
19      none     enable      off     N/A     N/A     N/A

Trunk ID: 21
Hw Trunk ID: 2
Ports_Configured: 2
Primary Port Monitored: Jointly

Ports   PortName Port_Status Monitor Rx_Mirr Tx_Mirr Monitor_Dir
21      Tegile*  enable      off     N/A     N/A     N/A
22      Tegile*  enable      off     N/A     N/A     N/A

Operational trunks:

Trunk ID: 18
Hw Trunk ID: 1
Duplex: Full
Speed: 10G
Tag: No
Priority: level0
Active Ports: 1

Ports   Link_Status port_state
18      active      Forward
19      down        Blocked

Trunk ID: 21
Hw Trunk ID: 2
Duplex: Full
Speed: 10G
Tag: No
Priority: level0
Active Ports: 2

Ports   Link_Status port_state LACP_Status
21      active      Forward    ready
22      active      Forward    ready

Here is how that trunk is configured on both switches... at the very top of the config on both it shows:

trunk ethe 18 to 19 
 port-name "10gbe1 to 10gbe2 A" ethernet 18

Here's how the trunk to the operational Tegile storage array looks on switch 1

interface ethernet 21
 port-name Tegile Controller A Port 1
 no spanning-tree
 link-aggregate configure timeout short
 link-aggregate configure key 21001
 link-aggregate active
!
interface ethernet 22
 port-name Tegile Controller A Port 2
 no spanning-tree
 link-aggregate configure key 21001
 link-aggregate configure timeout short
 link-aggregate active

And how the trunk to the other Tegile storage array controller looks on switch 2

interface ethernet 21
 port-name Tegile Controller B Port 1
 no spanning-tree
 link-aggregate configure timeout short
 link-aggregate configure key 21002
 link-aggregate active
!
interface ethernet 22
 port-name Tegile Controller B Port 2
 no spanning-tree
 link-aggregate configure key 21002
 link-aggregate configure timeout short
 link-aggregate active

The issue is that yesterday I failed over the Tegile storage array from controller A to controller B. This means the NFS storage traffic to 8 ESXi servers would now originate off of switch 2, so that traffic would have to traverse the switch 2 to switch 1 trunk (ports 18 and 19) back to the "active" VMware adapters. Those VMware storage adapters remain active unless there's a link failure; only then would VMware try to talk off of switch 2. I can't use beacon probing instead of link state for failover because I've read that for stability you need 3 adapters, and I do not have a third adapter.

So the problem I had was that the two IPs on the Tegile storage array claimed to have moved over to controller B, but VMware could only ping ONE of those IPs. All storage mapped via the second IP went inaccessible, and SSH to an ESXi server confirmed I could only ping one of the Tegile IPs.

I'm trying to rule out a networking issue because so far Tegile took our config, put it on one of their lab systems, and both IPs we have programmed moved properly to their second controller. The difference is that they just spun their test system up for us, whereas we have 400+ days of uptime on our controller, so they suggest I reboot controller B and try again. But rather than cause another outage, I want to investigate why this inter-switch trunk has one port showing blocked on only one switch.
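For what it's worth, the ping test from the ESXi host over SSH looked roughly like this (the vmk interface name and addresses here are placeholders, not our real values):

```
vmkping -I vmk1 -c 3 <tegile-ip-1>   ! this IP replied after failover
vmkping -I vmk1 -c 3 <tegile-ip-2>   ! this IP did not reply after failover
```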

We have money in the budget to replace the Brocades with Arista, however I only have enough money to do 1 Arista switch. Then we would be running just 1 switch, or 2 switches from two different vendors (1 Arista primary, 1 Brocade backup). Next year I can request more money and, if approved, get a second Arista switch. The Brocade/Foundry stuff seems a little foreign to me and limited.


Thanks for any info. I'm used to Cisco and Extreme Networks.
Keith

Posted 2 months ago

Keith
OK, after further testing: no matter which group of ports or which media (active Twinax, passive Twinax, fiber optic), the second Brocade switch would NEVER link up on the second port of a static trunk.


So instead of a static trunk, I created an active (LACP) trunk on two unused ports:

Switch1: 
interface e16
port-name Brocade switch trunk e16
no spanning-tree
link-aggregate configure timeout short
link-aggregate configure key 16001
link-aggregate active
interface e17
port-name Brocade switch trunk e17
no spanning-tree
link-aggregate configure timeout short
link-aggregate configure key 16001
link-aggregate active

Switch2:
interface e16
port-name Brocade switch trunk e16
no spanning-tree
link-aggregate configure timeout short
link-aggregate configure key 16002
link-aggregate active
interface e17
port-name Brocade switch trunk e17
no spanning-tree
link-aggregate configure timeout short
link-aggregate configure key 16002
link-aggregate active
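After bringing these up, I checked on both switches that LACP had actually converged and both member ports were forwarding (output format varies by software release):

```
show link-aggregate     ! confirm both ports report ready on each switch
show trunk              ! confirm the operational trunk shows 2 active ports
```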

The issue seems to be resolved. A host on the top switch can now ping a host on the bottom switch (10.250.10.106) that it could not ping when two ports of the static trunk were connected.

I was initially worried about loops, but realized that other working configs between access switches and cores also have spanning-tree and BPDU filters enabled on the ports for their uplink trunks.

16      Up      Forward Full 10G   16    No  1    0   748e.f86c.c7db  Brocade
17      Up      Forward Full 10G   16    No  1    0   748e.f86c.c7db  Brocade

LACP active trunks seem to be the way to go on this aging switch platform.

We're still going to rip it out for Arista in the future.