constant FEC errors juniper mpc10e 400g
We recently added MPC10E-15C-MRATE cards to our MX960s to upgrade our core to 400G. During initial testing of the 400G interface (400GBASE-FR4), I see constant FEC errors. FEC is new to me. Anyone know why this is occurring? Shown below is an interface with no traffic, but seeing constant FEC errors. This is two MX960s cabled directly, no DWDM or anything between them... just a fiber patch cable.

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                    0
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate               0
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:18:55 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                 4302
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate               8
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:18:57 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                 8796
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate             146
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:18:59 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                15582
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate             111
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:19:01 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                20342
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate             256
    FEC Uncorrected Errors Rate             0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
  Interface index: 226, SNMP ifIndex: 800
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Enabled
  Pad to minimum frame size: Disabled
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x4000
  Link flags     : None
  CoS queues     : 8 supported, 8 maximum usable queues
  Schedulers     : 0
  Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)
  Active alarms  : None
  Active defects : None
  PCS statistics                      Seconds
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC Mode  :                 FEC119
  Ethernet FEC statistics              Errors
    FEC Corrected Errors               801787
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate            2054
    FEC Uncorrected Errors Rate             0
  Link Degrade :
    Link Monitoring                   :  Disable
  Interface transmit statistics: Disabled

  Logical interface et-7/1/4.0 (Index 420) (SNMP ifIndex 815)
    Flags: Up SNMP-Traps 0x4004000 Encapsulation: ENET2
    Input packets : 1
    Output packets: 1
    Protocol inet, MTU: 1500
    Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold cnt: 0, NH drop cnt: 0
    Flags: Sendbcast-pkt-to-re
    Addresses, Flags: Is-Preferred Is-Primary
      Destination: 10.10.10.76/30, Local: 10.10.10.77, Broadcast: 10.10.10.79

--
-Aaron
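A quick way to sanity-check counters like these is to difference the cumulative "FEC Corrected Errors" value across the 2-second refreshes above. A minimal sketch in Python, assuming the counter is cumulative since the clear and the snapshots are exactly 2 s apart:

# (seconds since clear, cumulative "FEC Corrected Errors"), taken from
# the refresh loop above
samples = [(0, 0), (2, 4302), (4, 8796), (6, 15582), (8, 20342)]

# Average corrected errors per second between consecutive snapshots.
for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
    print(f"t={t0}..{t1}s: {(c1 - c0) / (t1 - t0):.0f} corrected/sec")
# Prints a steady couple of thousand corrected errors per second -- on a
# link carrying zero packets, which is the puzzle the thread digs into.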
Open a JTAC case; that looks like work for them.

Kind Regards,
Dominik
I did. Usually my NANOG and J-NSP email lists get me a quicker solution than JTAC.

-Aaron
Isn't FEC required by the 400G spec?
In my reading, the 400GBASE-R Physical Coding Sublayer (PCS) always includes the FEC. This is defined in clause 119 of IEEE Std 802.3-2022, and is most easily seen in "Figure 119–2—Functional block diagram" if you don't want to get buried in the prose. Nothing there seems to imply that the FEC is optional. I'd be happy to be corrected, though; it may well be that there is a method to reading these tomes that I have not discovered yet. This is the first time I've dived deep into any IEEE standard.

Best regards
Joel
--
Joel Busch, Network SWITCH
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 30, direct +41 44 268 16 58
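For concreteness, the clause 119 FEC Joel points at (and that shows up as "FEC119" in the Junos output above) is RS(544,514) over 10-bit symbols. A short sketch of the numbers that fall out of that; the 425 Gb/s figure is the standard 400GbE PMA rate (8 lanes x 53.125 Gb/s):

# RS(544,514) over GF(2^10): 544 ten-bit symbols per codeword, 514 of payload.
n_sym, k_sym, bits_per_symbol = 544, 514, 10

t = (n_sym - k_sym) // 2          # symbols correctable per codeword
overhead = n_sym / k_sym - 1      # extra line rate spent on parity

codeword_bits = n_sym * bits_per_symbol     # 5440 bits per codeword
pma_rate_bps = 425e9                        # 8 lanes x 53.125 Gb/s
codewords_per_second = pma_rate_bps / codeword_bits

print(f"correctable symbols per codeword: t = {t}")                 # 15
print(f"parity overhead: {overhead:.1%}")                           # ~5.8%
print(f"codewords per second at 400G: {codewords_per_second:.2e}")  # ~7.8e7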
I'm no TAC engineer, but the purpose of FEC is to catch and correct errors when the port is going so fast that errors are simply inevitable. Working as Intended: it's easier (read: cheaper) to build in some error correction than to make the bits wiggle more reliably.

No idea whether that rate of increment is alarming or not, but you've not yet hit your FEC cliff, so you appear to be fine.

-Matt
--
Matt Erculiani
FEC cliff? Is there a level of FEC errors that I should be worried about, then? Not sure what you mean.

-Aaron
At some point, an error rate would exceed the ability of the forward error correction (FEC) overhead to compensate, resulting in CRC errors. You're not seeing those, so all is technically well. It's not so much how many packets come in with errors that causes a problem, but what percentage of each packet is corrupted; the former is usually indicative of the latter, though.

As Tom said, we're talking about a whole different animal from the NRZ we're used to inside the building. Long-haul and DCI folks deal with this stuff pretty regularly. The secret is to keep everything clean and mind your bend radii. We won't get away with some of what we used to get away with.

-Matt
--
Matt Erculiani
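To put a number on Matt's "cliff": RS(544,514) corrects up to 15 bad symbols per 544-symbol codeword; at 16 or more, the codeword goes out uncorrected and you get the CRC/frame errors he describes. A toy Monte Carlo sketch of that counting argument (not the real decoder):

import random

N_SYM, T = 544, 15   # RS(544,514): 544 symbols per codeword, 15 correctable

def classify(symbol_error_prob: float) -> str:
    """Classify one simulated codeword the way the Junos counters would."""
    bad = sum(random.random() < symbol_error_prob for _ in range(N_SYM))
    if bad == 0:
        return "clean"
    return "corrected" if bad <= T else "uncorrected"   # >15 -> frame loss

random.seed(1)
for p in (1e-4, 1e-2, 5e-2):
    outcomes = [classify(p) for _ in range(5000)]
    counts = {k: outcomes.count(k) for k in ("clean", "corrected", "uncorrected")}
    print(f"symbol error prob {p}: {counts}")
# At low error rates everything lands in "corrected" and nothing is lost;
# "uncorrected" only appears once errors pile past t=15 -- the cliff.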
Interesting, thanks all. The JTAC rep got back to me and also pretty much said it's not an issue and is expected... the rep also cited two KBs, shown here, both using 100G as an example. Question, please: should I understand that this is also true of 400G, even though his KBs speak about 100G?

KB77305
KB35145

https://supportportal.juniper.net/s/article/What-is-the-acceptable-rate-of-F...
https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increas...

-Aaron
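Those KBs frame "acceptable" in terms of the corrected-error rate staying far below what the code can absorb. A rough conversion of the MX960's numbers into an approximate pre-FEC BER, as a hedged sketch: it assumes each counter increment is one errored bit/symbol and uses the 425 Gb/s PMA rate as the denominator, and the 2.4e-4 "KP4 limit" is a commonly quoted rule of thumb, not a figure from the KBs:

corrected_per_sec = 2054      # "FEC Corrected Errors Rate" seen on the MX960
pma_rate_bps = 425e9          # 400GbE PMA rate (8 x 53.125 Gb/s)

pre_fec_ber = corrected_per_sec / pma_rate_bps
print(f"approximate pre-FEC BER: {pre_fec_ber:.1e}")        # ~4.8e-9

ASSUMED_KP4_LIMIT = 2.4e-4    # rule-of-thumb threshold; verify against vendor docs
print(f"margin below assumed limit: ~{ASSUMED_KP4_LIMIT / pre_fec_ber:.0f}x")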
Well, JTAC just said that it seems OK, and that 400G is going to show 4x more than 100G: "This is due to having to synchronize much more to support higher data."
--
-Aaron
On 4/17/24 23:24, Aaron Gould wrote:
> Well, JTAC just said that it seems OK, and that 400G is going to show 4x more than 100G: "This is due to having to synchronize much more to support higher data."

We've seen the same between Juniper and Arista boxes in the same rack running at 100G, despite cleaning fibres, swapping optics, moving ports, moving line cards, etc. TAC said it's a non-issue and to be expected, and shared the same KBs.

It's a bit disconcerting when you plot the data on your NMS, but it's not material.

Mark.
Standard deviation is now your friend. I've learned to alert on FEC counters sitting outside a standard deviation, and on CRCs, although the second should already be alerting. (A sketch of that kind of check follows below.)
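A minimal sketch of that deviation-based alert, assuming 5-minute polls of the corrected-rate counter; the window size and the 3-sigma multiplier are arbitrary knobs for illustration, not anything from an NMS vendor:

from collections import deque
from statistics import mean, stdev

def make_fec_alerter(window=288, k=3.0, min_baseline=30):
    """Return a checker that flags a corrected-FEC-rate sample sitting more
    than k standard deviations above its recent baseline."""
    history = deque(maxlen=window)          # window=288 ~ one day of 5-min polls

    def check(rate: float) -> bool:
        alert = False
        if len(history) >= min_baseline:    # need some baseline first
            mu, sd = mean(history), stdev(history)
            alert = rate > mu + k * max(sd, 1.0)   # floor sd for very quiet links
        history.append(rate)
        return alert

    return check

check = make_fec_alerter()
samples = [2054 + (i * 37) % 300 for i in range(40)] + [60000]  # last one jumps
for rate in samples:
    if check(rate):
        print(f"ALERT: corrected FEC rate {rate}/s is far above baseline")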
Just for extra clarity on those KBs: this probably has nothing to do with vendor interop, as implied in at least one of them. You will see some volume of corrected FEC errors on 400G FR4 with the same router hardware and transceiver vendor on both ends and a 3 m patch; short of duct-taping the transceivers together, you're not going to get much more optimal than that.

As far as I can suss out from my reading and what Smart People have told me, certain combinations of modulation and lambda are just more susceptible to transmission noise, so for those FEC is required by the standard. PAM4 modulation does seem to be a common thread, but there are some PAM2/NRZ types that FEC is also required for (100GBASE-CWDM4, for example).
Not to belabor this, but so interesting... I need a FEC-for-Dummies or FEC-for-IP/Ethernet-Engineers...

Shown below is my 400G interface with NO config at all... the interface has no traffic at all, no packets at all... BUT lots of FEC hits. Interesting, this FEC thing. I'd love to have a fiber splitter and see if Wireshark could read it and show me what FEC looks like... but something tells me I would need a 400G sniffer to read it, lol.

It's like FEC (fec119 in this case) is this automatic thing running between interfaces (hardware, I guess), with no protocols and nothing needed at all in order to function.

-Aaron

{master}
me@mx960> show configuration interfaces et-7/1/4 | display set

{master}
me@mx960>

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                28209
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate            2347
    FEC Uncorrected Errors Rate             0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                45153
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate              29
    FEC Uncorrected Errors Rate             0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                57339
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate            2378
    FEC Uncorrected Errors Rate             0

{master}
me@mx960>
--
-Aaron
FEC is occurring at the PHY, below the PCS. Even if you're not sending any traffic, all the Ethernet control frame juju is still going back and forth, which FEC may have to correct.

I *think* (but not 100% sure) that for anything that requires FEC by spec, there is a default RS-FEC type that will be used, which *may* be able to be changed by the device. It could be fixed, though; I honestly cannot remember.
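This also explains the zero-traffic counters: below the PCS, the RS encoder runs over the whole transcoded bit stream, idles included, so codewords flow continuously whenever the link is up. A back-of-envelope using the clause 119 numbers from earlier (the assumption that one counter increment equals one corrected 10-bit symbol is mine, not Juniper's):

codeword_bits = 544 * 10          # RS(544,514) codeword size on the wire
pma_rate_bps = 425e9              # 400GbE PMA rate; flows even when idle
cw_per_sec = pma_rate_bps / codeword_bits

corrected_per_sec = 2347          # rate from Aaron's zero-traffic interface
print(f"{cw_per_sec:.2e} codewords/sec whenever the link is up")     # ~7.8e7
print(f"~1 corrected symbol per {cw_per_sec / corrected_per_sec:,.0f} codewords")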
Thanks. What "ethernet control frame juju" might you be referring to? I don't recall Ethernet, in and of itself, just sending stuff back and forth. Does anyone know if this FEC stuff I see occurring is actually contained in Ethernet frames? If so, please send a link showing the Ethernet frame structure as it pertains to this 400g FEC stuff; I'd really like to know the header format, etc.
-Aaron
On 4/18/2024 1:17 PM, Tom Beecher wrote:
FEC is occurring at the PHY, below the PCS.
Even if you're not sending any traffic, all the ethernet control frame juju is still going back and forth, which FEC may have to correct.
I *think* (but not 100% sure) that for anything that by spec requires FEC, there is a default RS-FEC type that will be used, which *may* be able to be changed by the device. Could be fixed though, I honestly cannot remember.
-- -Aaron
What "all the ethernet control frame juju" might you be referring to? I don't recall Ethernet, in and of itself, just sending stuff back and forth.
I did not read the 100G Ethernet specs, but as far as I remember, Fast Ethernet (e.g. 100BASE-FX) uses 4B/5B coding on the line, borrowed from FDDI. Octets of Ethernet frames are encoded into these 5-bit codewords, and there are also valid codewords for other things, like the idle symbols transmitted continuously between frames. Gigabit Ethernet (1000BASE-X) uses 8B/10B coding on the line (from Fibre Channel). In GE there are also special (non-frame-octet) PCS codewords used for auto-negotiation, frame bursting, etc.
So I guess what you see is not frames, but codewords representing other data, outside Ethernet frames.
András
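To make András's point concrete: even an "idle" 100BASE-X line is continuously sending 5-bit code-groups. A small sketch below, using the standard FDDI/100BASE-X 4B/5B data table; the code itself is just an illustration, not from the thread.

# 4B/5B encoder sketch (100BASE-X / FDDI). Each data nibble maps to a
# 5-bit code-group; IDLE is a control code-group that exists only on
# the line, never inside an Ethernet frame.
DATA_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}
IDLE = "11111"  # transmitted back-to-back between frames

def encode(payload):
    groups = []
    for octet in payload:
        groups.append(DATA_4B5B[octet >> 4])   # high nibble first
        groups.append(DATA_4B5B[octet & 0xF])  # then low nibble
    return " ".join(groups)

print(encode(b"\xAA"))        # 10110 10110
print(" ".join([IDLE] * 3))   # what a "quiet" link is actually sending

So a link carrying zero frames is never actually dark at L1, and on FEC-protected PHYs all of that idle signalling rides inside the FEC codewords too.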
On 4/18/24 11:45, Aaron Gould wrote:
Thanks. What "all the ethernet control frame juju" might you be referring to? I don't recall Ethernet, in and of itself, just sending stuff back and forth. Does anyone know if this FEC stuff I see occurring is actually contained in Ethernet frames? If so, please send a link to show the ethernet frame structure as it pertains to this 400g fec stuff. If so, I'd really like to know the header format, etc.
-Aaron
IEEE Std 802.3-2022, Standard for Ethernet (§65.2.3.2, "FEC frame format", p. 2943): https://ieeexplore.ieee.org/browse/standards/get-program/page/series?id=68
Also helpful, generally: ITU-T Recommendation G.975 (10/2000), Forward Error Correction for Submarine Systems: https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-G.975-200010-I!!PDF-E&type=items
I'm being sloppy with my verbiage; it's just been a long time since I thought about this in detail, sorry.
The MAC layer hands bits to the Media Independent Interface, which connects the MAC to the PHY. The PHY converts the digital 1/0 into the form required by the transmission medium: the "what goes over the wire" L1 stuff. The method of encoding always adds SOME number of bits as overhead. For example, 64b/66b means that for every 64 bits of data to transmit, 2 bits are added, so 66 actual bits are transmitted. This encoding overhead is what I meant by "ethernet control frame juju". This starts getting into the weeds on symbol/baud rates and such, which I don't want to do now because I'm even rustier there.
When FEC is enabled, the overhead added to the transmission increases. For 400G-FR4, for example, you start with 256b/257b encoding, which is doubled to 512b/514b for ($reason I cannot remember), then RS-FEC(544,514) is applied, adding 30 parity symbols (10 bits each) per codeword. Following the example, this means 544 symbols are transmitted for every 514 symbols of payload data. So, more overhead. Those parity symbols can correct up to 15 corrupted symbols of the payload.
All of these overhead bits are added in the PHY on the way out and removed on the way in, so you'll never see them in a packet capture unless you're using something that actually grabs the bits off the wire.
(Pretty sure this is right; anyone please correct me if I munged any of it up.)
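To put numbers on that overhead, a back-of-the-envelope sketch (my arithmetic, not from the thread): 256b/257b transcoding plus RS(544,514) is exactly how a 400 Gb/s MAC rate becomes the 425 Gb/s 400GBASE-R line rate.

# 400GBASE-R FEC overhead, back of the envelope.
mac_rate = 400e9            # payload bit rate at the MAC, bit/s

transcode = 257 / 256       # 256b/257b transcoding expansion
rs_fec    = 544 / 514       # RS(544,514) codeword expansion

line_rate = mac_rate * transcode * rs_fec
print("line rate ~= %.1f Gb/s" % (line_rate / 1e9))   # 425.0 Gb/s

# RS(544,514) works on 10-bit symbols ("KP4" FEC):
n, k = 544, 514
parity_syms = n - k              # 30 parity symbols = 300 bits/codeword
correctable = (n - k) // 2       # up to 15 symbol errors per codeword
print("%d parity symbols, corrects up to %d symbols per codeword"
      % (parity_syms, correctable))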
On Thu, 18 Apr 2024 at 21:49, Aaron Gould <aaron1@gvtc.com> wrote:
Thanks. What "all the ethernet control frame juju" might you be referring to? I don't recall Ethernet, in and of itself, just sending stuff back and forth. Does anyone know if this FEC stuff I see occurring is actually contained in Ethernet frames? If so, please send a link to show the ethernet frame structure as it pertains to this 400g fec stuff. If so, I'd really like to know the header format, etc.
The frames in FEC are idle frames between actual Ethernet frames. So you recall correctly: without FEC, you won't see this idle traffic.
It's very, very good, because now you actually know, before putting the circuit in production, whether the circuit works or not. Lots of people have processes to ping from router to router for N time, trying to determine circuit correctness before putting traffic on it, which looks absolutely childish compared to FEC, both in terms of how reliable the presumed outcome is and how long it takes to get to that presumed outcome.
-- ++ytti
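And because those counters exist, the corrected-error rate doubles as a pre-deployment quality gauge. Roughly (my arithmetic; the ~2.4e-4 figure is the commonly quoted KP4 pre-FEC BER limit, not something cited in this thread), divide corrected errors per second by the ~425 Gb/s line rate and compare against what RS(544,514) is engineered to absorb:

# Rough pre-FEC BER estimate from Junos-style FEC counters. Assumes,
# as a worst case, one corrupted bit per corrected FEC error.
LINE_RATE = 425e9      # 400GBASE-R line rate, bit/s
KP4_LIMIT = 2.4e-4     # commonly quoted pre-FEC BER limit for RS(544,514)

def pre_fec_ber(corrected_per_sec):
    return corrected_per_sec / LINE_RATE

for rate in (8, 256, 2378):    # "FEC Corrected Errors Rate" samples above
    ber = pre_fec_ber(rate)
    print("%5d/s -> pre-FEC BER ~ %.1e (%.0fx below the KP4 limit)"
          % (rate, ber, KP4_LIMIT / ber))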
On 4/19/24 08:01, Saku Ytti wrote:
The frames in FEC are idle frames between actual Ethernet frames. So you recall correctly: without FEC, you won't see this idle traffic.
It's very, very good, because now you actually know, before putting the circuit in production, whether the circuit works or not.
Lots of people have processes to ping from router to router for N time, trying to determine circuit correctness before putting traffic on it, which looks absolutely childish compared to FEC, both in terms of how reliable the presumed outcome is and how long it takes to get to that presumed outcome.
FEC is amazing. At higher data rates (100G and 400G) for long and ultra long haul optical networks, SD-FEC (Soft Decision FEC) carries a higher overhead penalty compared to HD-FEC (Hard Decision FEC), but the net OSNR gain more than compensates for that, and makes it worth it to increase transmission distance without compromising throughput. Mark.
On Fri, 19 Apr 2024 at 10:55, Mark Tinka <mark@tinka.africa> wrote:
FEC is amazing.
At higher data rates (100G and 400G) for long and ultra long haul optical networks, SD-FEC (Soft Decision FEC) carries a higher overhead penalty compared to HD-FEC (Hard Decision FEC), but the net OSNR gain more than compensates for that, and makes it worth it to increase transmission distance without compromising throughput.
Of course there are limits to this: as FEC is hop-by-hop, in long-haul you'll know about circuit quality to the transponder, not end-to-end. Unlike WAN-PHY or OTN, where you know both.
Technically, optical transport could induce FEC errors on the client hand-off if there are FEC errors on any hop, so that consumers of optical networks would not need access to the optical network to know whether it's end-to-end clean. Much like cut-through switching can induce errors, via symbols that communicate that a CRC error happened earlier, so the receiver doesn't have to wonder whether the problem is on their end.
-- ++ytti
On 4/19/24 10:08, Saku Ytti wrote:
Of course there are limits to this: as FEC is hop-by-hop, in long-haul you'll know about circuit quality to the transponder, not end-to-end. Unlike WAN-PHY or OTN, where you know both.
Technically, optical transport could induce FEC errors on the client hand-off if there are FEC errors on any hop, so that consumers of optical networks would not need access to the optical network to know whether it's end-to-end clean.
This would only matter on ultra long haul optical spans where the signal would need to be regenerated, where - among many other values - FEC would need to be decoded, corrected and re-applied.
SD-FEC already allows for a significant improvement in optical reach for a given modulation. This negates the need for early regeneration, assuming other optical penalties and impairments are satisfactorily compensated for.
Of course, what a market defines as long haul or ultra long haul may vary; add to that the variability of regeneration spacing in such scenarios being quite wide, on the order of 600km - 1,000km. Much of this will come down to fibre, ROADM and coherent pluggable quality.
Mark.
On Sat, 20 Apr 2024 at 10:00, Mark Tinka <mark@tinka.africa> wrote:
This would only matter on ultra long haul optical spans where the signal would need to be regenerated, where - among many other values - FEC would need to be decoded, corrected and re-applied.
In most cases, modern optical long-haul has a transponder which terminates your FEC, because clients hand off gray optics and you'd like something a bit less depressing, like 1570.42nm.
This is not just FEC-terminating but also, to a degree, autoneg-terminating: an RFI signal, for example, would run between you and the transponder. So these connections can be, and regularly are, provided without proper end-to-end hardware liveness. Even if they were delivered and tested with proper end-to-end HW liveness, that may change during operation. So line faults may or may not be propagated to both ends as RFI assertion, and even if they are, they may be delayed to allow optical protection to engage, which may be undesirable, as it eats into your convergence budget.
Of course, the higher we go in abstraction, the less likely you are to get things like HW liveness detection. I don't really see anyone asking for this in their pseudowire services, even though it's something that actually can be delivered; in Junos it's a single config stanza on the interface to assert RFI to the client port if the pseudowire goes down in the operator network.
-- ++ytti
On 4/20/24 13:25, Saku Ytti wrote:
In most cases, modern optical long-haul has a transponder which terminates your FEC, because clients hand off gray optics and you'd like something a bit less depressing, like 1570.42nm.
This is not just FEC-terminating but also, to a degree, autoneg-terminating: an RFI signal, for example, would run between you and the transponder. So these connections can be, and regularly are, provided without proper end-to-end hardware liveness, and even if they were delivered and tested with proper end-to-end HW liveness, that may change during operation. So line faults may or may not be propagated to both ends as RFI assertion, and even if they are, they may be delayed to allow optical protection to engage, which may be undesirable, as it eats into your convergence budget.
Of course, the higher we go in abstraction, the less likely you are to get things like HW liveness detection. I don't really see anyone asking for this in their pseudowire services, even though it's something that actually can be delivered; in Junos it's a single config stanza on the interface to assert RFI to the client port if the pseudowire goes down in the operator network.
In our market (Africa), for both terrestrial and submarine services, OTN-type circuits are not typically ordered. Network operators are not really interested in receiving the additional link data that OTN or WAN-PHY provides; they truly want to leave the operation of the underlying transport backbone to the transport operator.
The few times we have come across the market asking for OTN is when they want to groom 10x 10G into 1x 100G, for example, to deliver structured services downstream. Even when our market seeks OTN from European backhaul providers to extend submarine access into Europe and Asia-Pac, it is often for structured capacity grooming, not for the OAM benefit.
It would be interesting to learn whether other markets in the world still prefer OTN over Ethernet, for the OAM benefit, en masse. When I worked in Malaysia back in the day (2007 - 2012), WAN-PHY was generally asked for on 10G services until about 2010, when folk started to choose LAN-PHY. The reason, back then, was to get that extra 1% of pipe bandwidth :-).
Mark.
FEC on 400G is required and expected. As long as it is "corrected", you have nothing to worry about. We had the same realisation recently when upgrading to 400G.
-Schylar
Corrected FEC errors are pretty normal for 400G FR4.
Thanks Joe and Schylar, that's reassuring. Tom, yes, I believe FEC is required for 400g, as you see fec119 listed in that output... and I understand you can't (or perhaps shouldn't) change it.
-Aaron
On 4/17/2024 2:43 PM, Joe Antkowiak wrote:
Corrected FEC errors are pretty normal for 400G FR4
-- -Aaron
Hi. Looks like normal behavior:
https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increas...
"An incrementing FEC Corrected Errors counter is normal for a link that is running FEC. It just indicates that the errored bits have been corrected by FEC."
"Therefore, the incrementing FEC Corrected Errors counter might only be indicating an interoperability issue between the optics from ......"
--- Fredrik Holmqvist, I2B & BBTA tjänster AB, 08-590 90 000
Notes I found that I took from smart optical people: "PAM4 runs at much lower SNRs than NRZ, because you're trying to read 4 distinct voltage levels instead of 2. Even the cleanest system will have some of that, so the only way to make it usable is to have FEC in place."
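A quick sketch of why that is (standard signal arithmetic, not from those notes): for the same transmit swing, PAM4's three stacked eyes are each one third the height of the single NRZ eye, which costs roughly 9.5 dB of SNR before any other impairment is counted.

import math

# PAM4 carries 2 bits/symbol across 4 levels, so for the same
# peak-to-peak swing each eye opening is 1/3 of the NRZ eye.
eye_ratio = 1 / 3
penalty_db = -20 * math.log10(eye_ratio)   # voltage ratio in dB

print("PAM4 vs NRZ eye-height penalty: %.2f dB" % penalty_db)
# That lost margin shows up as a non-zero raw bit error rate even on a
# clean patch cable, which is why the 400G specs bake RS(544,514) in.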
participants (13)
- Aaron Gould
- Charles Polisher
- Dominik Dobrowolski
- Fredrik Holmqvist / I2B
- Joe Antkowiak
- Joel Busch
- JÁKÓ András
- Mark Tinka
- Matt Erculiani
- Saku Ytti
- Schylar Utley
- Thomas Scott
- Tom Beecher