update:
Juniper has sent a one-off OS build to try that fixes the issue; we are waiting on a time frame for the official release.
Are you running your MC-LAG in prod?
I'll have to open a ticket with Juniper to find the 5200 limits.
Hi everyone,
I've got the basics down on Virtual Chassis technology on EX switches. I am trying to understand Virtual Chassis Fabric technology: why would one want to use Virtual Chassis Fabric instead of Virtual Chassis? I know it is a very loaded question.
Virtual Chassis can support 10 switches while Virtual Chassis Fabric can support 20. It appears to me that we use Fabric technology where more ports and throughput are needed. Is that it?
Thanks and have a nice weekend!!
I see interfaces xe-0/0/1 through xe-0/0/11 show up in terse output, but when I show the configuration I see extra interfaces with a colon and a number, which adds a ton of extra config. When I try to delete all the interfaces and then recreate them without the colons, the interfaces don't show up in terse output and traffic stops working. I have to zeroize to get it working again.
What does the colon mean?
xe-0/0/1 {
    unit 0 {
        family inet {
            dhcp {
                vendor-id Juniper-qfx10002-72q;
            }
        }
    }
}
xe-0/0/1:0 {
    unit 0 {
        family inet {
            dhcp {
                vendor-id Juniper-qfx5100-48s-6q;
            }
        }
    }
}
xe-0/0/1:1 {
    unit 0 {
        family inet {
            dhcp {
                vendor-id Juniper-qfx5100-48s-6q;
            }
        }
    }
}
xe-0/0/1:2 {
    unit 0 {
        family inet {
            dhcp {
                vendor-id Juniper-qfx5100-48s-6q;
            }
        }
    }
}
They are 40Gb interfaces channelized to operate as individual 10Gb interfaces.
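For what it's worth, channelization is typically controlled under the chassis hierarchy. A minimal sketch (the FPC/PIC/port numbers here are illustrative, and the exact knobs can vary by platform and release):

```
# Split QSFP+ port 1 on FPC 0 into four 10Gb channels
set chassis fpc 0 pic 0 port 1 channel-speed 10g
```

After a commit, the port comes up as four interfaces named xe-0/0/1:0 through xe-0/0/1:3; deleting the statement returns it to native 40Gb operation.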
Essentially, VCF is tuned for east-west traffic while VC is better suited for north-south user traffic.
https://www.juniper.net/assets/de/de/local/pdf/books/day-one-poster-vcf.pdf
Hi everyone,
1) The Virtual Chassis protocol factors in two things when computing the shortest path from the rooted PFE to the target PFE:
a) bandwidth
b) hop count.
https://www.oreilly.com/library/view/junos-enterprise-switching/9780596804244/ch04.html
From the above link:
When all interfaces have the same bandwidth, as is the case here, the SPF result is effectively based on hop count.
This is what I understand:
SPF first factors in bandwidth when computing the shortest path; if there is a tie, it then considers hop count to compute the shortest path from the "rooted" PFE to the target PFE.
Is my understanding correct?
2) VCCP uses the IS-IS protocol.
Cisco's implementation uses a default metric of 10 for all IS-IS-enabled links; it does not matter whether the link is 10G or 1G, and this metric is then used to compute the shortest path.
In Junos' implementation, is there a default value too, regardless of link bandwidth, just as we see in Cisco?
Below I see a metric value of 15; I am not sure if this is the default value or if it was computed from bandwidth as we see in OSPF.
From the above link:
user@host> show virtual-chassis protocol database member 0 detail
member0:
--------------------------------------------------------------------------
001d.b510.0800.00-00  Sequence: 0x9f5, Checksum: 0x5b2b, Lifetime: 116 secs
   Neighbor: b0c6.9abf.6800.00  Interface: vcp-1/3/0.32768  Metric: 15
b0c6.9abf.6800.00-00  Sequence: 0x9f8, Checksum: 0x326e, Lifetime: 117 secs
   Neighbor: 001d.b510.0800.00  Interface: vcp-5/0/0.32768  Metric: 15
Thanks and have a nice weekend!!
I can't find documentation that details this, but it seems to be based on the port bandwidth. On a VC using the built-in ports I see a metric of 7.
Hi everyone,
Please consider the following cases:
Case#1
We have three switches in a Virtual Chassis using the preprovisioned method:
SW1 Master RE
SW2 Backup RE
SW3 Line card
We insert another switch, SW4, into the Virtual Chassis with priority 255. What will happen next?
Will SW4 become the master because of its high priority?
If yes, how can we ensure that no newly inserted switch can take the Master or Backup RE role from the current Master and Backup RE?
Case#2
We have three switches in a Virtual Chassis using the dynamic method.
After the election, the switches have been assigned the following roles:
SW1 Master RE
SW2 Backup RE
SW3 Line card
We insert another switch, SW4, into the Virtual Chassis with priority 255. What will happen next?
Will SW4 become the master because of its high priority?
If yes, how can we ensure that no newly inserted switch can take the Master or Backup RE role from the current Master and Backup RE?
Thanks and have a nice weekend!!
Case 1: You cannot commit a configuration with more than two members assigned the routing-engine role.
Case 2: Yes, switch 4 will become new master. You can deterministically assign mastership by either using preprovisioned mode or assigning correct priorities. Physically adding a new member will not affect master status without also committing a higher priority to that new member.
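For reference, a preprovisioned member list pins both the membership and the roles, so a newly cabled switch cannot claim mastership. A minimal sketch (serial numbers are placeholders):

```
set virtual-chassis preprovisioned
set virtual-chassis member 0 serial-number AAAA1111 role routing-engine
set virtual-chassis member 1 serial-number BBBB2222 role routing-engine
set virtual-chassis member 2 serial-number CCCC3333 role line-card
set virtual-chassis member 3 serial-number DDDD4444 role line-card
```

A switch whose serial number is not listed will not join the VC, and a member configured with the line-card role cannot be elected master regardless of its priority.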
Case#1
We have three switches in a Virtual Chassis using the preprovisioned method:
SW1 Master RE
SW2 Backup RE
SW3 Line card
We insert another switch, SW4, into the Virtual Chassis with priority 255. What will happen next?
Will SW4 become the master because of its high priority?
Case 1: You cannot commit a configuration with more than two members assigned the routing-engine role.
###################
Not sure if I misunderstand you, but in case 1, SW1 is assigned the Master RE role and SW2 is assigned the Backup RE role; they are not acting as Master RE at the same time.
I’ve looked at the LACP and LLDP output and I don’t see any indication of loops between the four switches. It looks like you’re using ge-0/0/46 on sw2a to extend the mgmt network through your core, and my guess is that you have a mgmt loop somewhere on your upstream or downstream equipment, or you’ve set a mgmt port to layer 2 somewhere while also trunking it. You might try disconnecting each mgmt interface and re-adding them one at a time until the issue reappears.
Hi all,
I'm facing a problem on an EX4300 switch. I can't ping host 10.59.54.143; the switch displays the output below:
admin@R2-SW4300-NOC> ping 10.59.54.143
PING 10.59.54.143 (10.59.54.143): 56 data bytes
ping: sendto: Cannot allocate memory
ping: sendto: Cannot allocate memory
From the switch, though, I can ping other hosts normally:
admin@R2-SW4300-NOC> ping 10.59.54.141
PING 10.59.54.141 (10.59.54.141): 56 data bytes
64 bytes from 10.59.54.141: icmp_seq=0 ttl=64 time=1.052 ms
64 bytes from 10.59.54.141: icmp_seq=1 ttl=64 time=9.138 ms
Please help me to troubleshoot the problem.
Thanks all.
Hoangnh510
Hi,
What is your software version?
Is the ping problem the only issue on your switch? Are there any other errors or logs related to memory?
Thanks,
Alex
I have confirmed with JTAC: the LACP system-id needs to be different between the two MC-LAG sides.
After implementing BLD and prefer-status-control-active on both MC-LAG peers, as well as the hold timers on the ISL link suggested above, I am still experiencing the same situation. If we manually shut down all the interfaces on either chassis, or reboot the standby chassis, we see no traffic interruption. However, issuing a "request system reboot" command on the primary node causes the VRRP gateways to become unreachable immediately, and endpoints on downstream switches become unreachable after 10 seconds. Everything comes back up at the same time, 50 seconds after issuing the reboot command. I'm guessing the same would be true in a hard failure situation, although I haven't tried physically powering down the node without the reboot command.
I'm still a little confused over which link to adjust the hold-down timers on (ISL vs. ICCP). In our environment we have an ICCP link configured, but we're peering with an IP address on an IRB. Is the link described as "ICCP" actually doing any MC-LAG-related work for us, since we're peering with an IP associated with an IRB? See the attached config for more details:
admin@TEST-GSJA-re0> show configuration interfaces ae0
apply-groups-except global-AE-PARAM;
description "[TEST-GSJA-to-TEST-GSJB ICCP Inter-chassis communications link ae0 ]";
aggregated-ether-options {
    lacp {
        active;
    }
}
unit 0 {
    family inet {
        address 10.144.200.1/30;
    }
}

{master}
admin@TEST-GSJA-re0> show configuration interfaces ae10
apply-groups-except global-AE-PARAM;
description "[TEST-GSJA-to-TEST-GSJB ICL Inter-chassis communications link ae10 ]";
mtu 1518;
aggregated-ether-options {
    lacp {
        active;
    }
}
unit 0 {
    family ethernet-switching {
        interface-mode trunk;
        vlan {
            members all;
        }
    }
}

admin@TEST-GSJA-re0> show configuration interfaces irb.10
family inet {
    address 10.144.100.1/30 {
        arp 10.144.100.2 l2-interface ae10.0 mac ec:13:db:11:8f:f0;
    }
}

admin@TEST-GSJA-re0> show configuration protocols iccp
local-ip-addr 10.144.100.1;
peer 10.144.100.2 {
    session-establishment-hold-time 50;
    redundancy-group-id-list 10;
    backup-liveness-detection {
        backup-peer-ip 10.121.121.11;
    }
    liveness-detection {
        minimum-receive-interval 500;
        transmit-interval {
            minimum-interval 500;
        }
    }
}

admin@TEST-GSJA-re0> show configuration groups global-AE-PARAM
interfaces {
    "<ae[1-9]*>" {
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:01:02:03:04:05;
                admin-key 5;
            }
            mc-ae {
                redundancy-group 10;
                chassis-id 0;
                mode active-active;
                status-control active;
                events {
                    iccp-peer-down {
                        prefer-status-control-active;
                    }
                }
            }
        }
    }
}
Does it work if you remove LACP from ae10? I've not seen LACP on the ISL in any of the Juniper recommended configs for MC-LAG; perhaps it interacts in some way with mc-ae?
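If you want to test that quickly, something like the following should turn ae10 into a static (non-LACP) LAG while leaving the trunk configuration intact. This is just a sketch based on the ae10 config posted above:

```
delete interfaces ae10 aggregated-ether-options lacp
commit
```

The peer chassis would need the matching change at the same time, since a static LAG generally will not form cleanly against an LACP partner.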
Hello,
Is there anything preventing me from using the QSFP+ ports on an EX4300/EX3400 to connect a server? Juniper refers to these ports as "uplink" ports, and I'm wondering if that's just a term they use, or whether it will prevent me from connecting a server.
(I understand the Virtual Chassis setting needs to be turned off to use the QSFP+ ports in this fashion, but is there anything otherwise "special" about these ports?)
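On the EX4300 the QSFP+ ports come up as Virtual Chassis ports by default, so before any interface config they first need to be converted to network ports. A sketch, assuming the QSFP+ ports sit on PIC 1 as on a standalone EX4300 (port numbers are illustrative; check the hardware guide for your model):

```
request virtual-chassis vc-port delete pic-slot 1 port 0
request virtual-chassis vc-port delete pic-slot 1 port 1
```

After that they should appear as regular 40Gb et- network interfaces that can face a server like any other port.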
Thanks