Not to belabor this, but it's so interesting... I need a FEC-for-Dummies, or FEC-for-IP/Ethernet-Engineers... Shown below, my 400G interface with NO config at all. The interface has no traffic at all, no packets at all... BUT, lots of FEC hits. Interesting, this FEC thing. I'd love to have a fiber splitter and see if Wireshark could read it and show me what FEC looks like... but something tells me I would need a 400G sniffer to read it, lol.

It's like FEC (fec119 in this case) is this automatic thing running between interfaces (hardware, I guess), with no protocols and nothing needed at all in order to function.

-Aaron
{master}
me@mx960> show configuration interfaces et-7/1/4 | display set

{master}
me@mx960>

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                            28209
    FEC Uncorrected Errors                              0
    FEC Corrected Errors Rate                        2347
    FEC Uncorrected Errors Rate                         0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                            45153
    FEC Uncorrected Errors                              0
    FEC Corrected Errors Rate                          29
    FEC Uncorrected Errors Rate                         0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                            57339
    FEC Uncorrected Errors                              0
    FEC Corrected Errors Rate                        2378
    FEC Uncorrected Errors Rate                         0

{master}
me@mx960>
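For context: "fec119" is the Clause 119 RS(544,514) Reed-Solomon FEC that IEEE 802.3 specifies for 400GBASE-R. It lives entirely in the PHY, below the MAC. The PCS transmits continuously (idle blocks included) and every block passes through the FEC encoder, so the far end is constantly correcting symbol errors on its own; that is why the corrected-error counters climb even with zero packets and zero configuration, and why there is no protocol for a sniffer to capture, since a capturing NIC only hands Wireshark the already-decoded frames. The sketch below is a deliberately tiny Hamming(7,4) code in Python, not the real Reed-Solomon code, just to illustrate the principle: parity is added at transmit, the receiver repairs errors locally, and nothing is negotiated or retransmitted.

    # Toy FEC illustration (Hamming(7,4), *not* the RS(544,514) code a real
    # 400G PHY uses): the transmitter appends parity bits, the receiver uses
    # them to locate and flip a corrupted bit on its own -- no protocol, no
    # retransmission. Each such repair is what a "Corrected Error" counts.

    def hamming74_encode(d):                 # d = [d1, d2, d3, d4] data bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                    # parity over codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                    # parity over codeword positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4                    # parity over codeword positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]  # 7-bit codeword

    def hamming74_decode(c):                 # c = 7 received bits (maybe corrupted)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # recompute each parity check
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3      # 0 = clean, else 1-based error position
        corrected = 0
        if syndrome:
            c[syndrome - 1] ^= 1             # flip the bad bit: a "corrected error"
            corrected = 1
        return [c[2], c[4], c[5], c[6]], corrected

    cw = hamming74_encode([1, 0, 1, 1])
    cw[5] ^= 1                               # simulate one bit error on the wire
    data, fixed = hamming74_decode(cw)
    print(data, "corrected:", fixed)         # -> [1, 0, 1, 1] corrected: 1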
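And a rough back-of-the-envelope on the counters above. The assumptions here (mine, not anything JTAC or the thread confirms) are that "FEC Corrected Errors Rate" counts corrected RS symbols per second, and that this is a standard 400GBASE-R PHY: roughly 425 Gb/s of raw line rate through RS(544,514), 10-bit symbols, 5440-bit codewords, up to 15 correctable symbols per codeword.

    # Scale check for the CLI counters above.
    # Assumption: the rate counter is corrected RS symbols per second.
    line_rate_bps = 425e9            # approx. raw 400GBASE-R rate, FEC parity included
    symbol_bits   = 10               # RS(544,514) uses 10-bit symbols
    codeword_bits = 544 * 10         # 5440 bits per FEC codeword

    symbols_per_sec   = line_rate_bps / symbol_bits     # ~4.3e10 symbols/s
    codewords_per_sec = line_rate_bps / codeword_bits   # ~7.8e7 codewords/s

    corrected_rate = 2347                                # from the output above
    ratio = corrected_rate / symbols_per_sec             # ~5.5e-8

    print(f"symbols/s   ~ {symbols_per_sec:.2e}")
    print(f"codewords/s ~ {codewords_per_sec:.2e}")
    print(f"corrected symbol ratio ~ {ratio:.1e}")

A corrected ratio in the 1e-8 range is several orders of magnitude below the low-1e-4 pre-FEC error ratio that KP4-class FEC is designed to absorb, which lines up with TAC calling it a non-issue. The counter that actually matters for traffic is "FEC Uncorrected Errors", and that stays at 0 here.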
On 4/17/24 23:24, Aaron Gould wrote:
Well, JTAC just said that it seems OK, and that 400G is going to show 4x more than 100G: "This is due to having to synchronize much more to support higher data."
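For what it's worth, that 4x is roughly what the line rates alone would predict: a 400GBASE-R PHY pushes about 425 Gb/s through its FEC versus about 103 Gb/s for 100GBASE-R with RS-FEC, so at the same per-bit error probability you would expect roughly 425/103 ≈ 4x the corrected-error count, before any difference in signaling (PAM4 vs. NRZ) is even considered.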
We've seen the same between Juniper and Arista boxes in the same rack running at 100G, despite cleaning fibres, swapping optics, moving ports, moving line cards, etc. TAC said it's a non-issue and to be expected, and shared the same KBs.
It's a bit disconcerting when you plot the data on your NMS, but it's not material.
Mark.
--
-Aaron