Interesting, thanks all. The JTAC rep got back to me and pretty much said it's not an issue and is expected. He also cited two KBs, shown here, both using 100G as an example. Question, please: should I understand that this is also true of 400G, even though his KBs speak about 100G?

KB77305
KB35145

https://supportportal.juniper.net/s/article/What-is-the-acceptable-rate-of-F...
https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increas...

-Aaron

On 4/17/2024 3:58 PM, Matt Erculiani wrote:
At some point, an error rate would exceed the ability of the forward error correction (FEC) overhead to compensate, resulting in CRC errors. You're not seeing those, so all is technically well.
It's not so much how many packets come in with errors that causes a problem, but what percentage of each packet is corrupted. The former is usually indicative of the latter, though.
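For a rough sense of how far away that cliff is, here is a back-of-the-envelope sketch in Python. The constants are my assumptions, not Juniper's: that each corrected error is one 10-bit RS symbol, that the 400GBASE-R line rate is roughly 425 Gb/s including FEC overhead, and that ~2.4e-4 pre-FEC BER is the approximate KP4 correction limit.

    # Back-of-the-envelope FEC-cliff margin check (illustrative only).
    # Assumptions: one corrected error = one corrected 10-bit RS symbol,
    # ~425 Gb/s line rate including FEC overhead, and ~2.4e-4 pre-FEC BER
    # as the rough KP4 correction limit. None of these are Juniper numbers.
    LINE_RATE_BPS = 425e9
    SYMBOL_BITS = 10
    CLIFF_PRE_FEC_BER = 2.4e-4

    def est_pre_fec_ber(corrected_per_sec):
        # Worst case: every corrected symbol had all 10 bits flipped.
        return corrected_per_sec * SYMBOL_BITS / LINE_RATE_BPS

    ber = est_pre_fec_ber(2054)  # the steady rate seen later in this thread
    print("estimated pre-FEC BER: %.1e" % ber)                    # ~4.8e-08
    print("margin to cliff: ~%.0fx" % (CLIFF_PRE_FEC_BER / ber))  # ~5000x

Even treating every corrected symbol as ten flipped bits, that rate sits orders of magnitude below the cliff.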
Just as Tom said, we're talking about a whole new animal compared to the NRZ signaling we're used to inside the building. Long-haul and DCI folks deal with this stuff pretty regularly. The secret is to keep everything clean and mind your bend radii. We won't get away with some of what we used to get away with.
-Matt
On Wed, Apr 17, 2024 at 1:49 PM Aaron Gould <aaron1@gvtc.com> wrote:
FEC cliff? Is there a level of FEC errors that I should be worried about, then? Not sure what you mean.
-Aaron
On 4/17/2024 2:46 PM, Matt Erculiani wrote:
I'm no TAC engineer, but the purpose of FEC is to detect and correct errors when the port is going so fast that errors are simply inevitable. Working as Intended.
Easier (read: cheaper) to build in some error correction than to make the bits wiggle more reliably.
No idea if that rate of increment is alarming or not, but you've not yet hit your FEC cliff, so you appear to be fine.
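To put numbers on that trade-off: 400GBASE-R runs the RS(544,514) "KP4" FEC from IEEE 802.3 Clause 119 (Junos reports it as FEC119 in the output below), and the key figures fall straight out of the code parameters:

    # RS(544,514) FEC, IEEE 802.3 Clause 119 (400GBASE-R), 10-bit symbols.
    n, k = 544, 514              # codeword / payload size in symbols
    t = (n - k) // 2             # correctable symbols per codeword
    overhead = n / k - 1         # extra line-rate bits paid for correction
    print("corrects up to %d symbol errors per codeword" % t)  # 15
    print("line-rate overhead: %.1f%%" % (overhead * 100))     # ~5.8%

Roughly 6% more wire speed buys the ability to scrub up to 15 bad symbols out of every 544, which is why a steady trickle of corrected errors is normal rather than alarming.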
-Matt
On Wed, Apr 17, 2024 at 1:40 PM Dominik Dobrowolski <dobrowolski.domino@gmail.com> wrote:
Open a JTAC case, that looks like work for them.
Kind Regards,
Dominik
On Wed, Apr 17, 2024 at 9:36 PM Aaron Gould <aaron1@gvtc.com> wrote:
We recently added MPC10E-15C-MRATE cards to our MX960s to upgrade our core to 400G. During initial testing of the 400G interface (400GBASE-FR4), I see constant FEC errors. FEC is new to me. Anyone know why this is occurring? Shown below is an interface with no traffic but constant FEC errors. This is two MX960s cabled directly, no DWDM or anything between them... just a fiber patch cable.
{master}
me@mx960> clear interfaces statistics et-7/1/4
{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                    0
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate               0
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:18:55 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                 4302
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate               8
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:18:57 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                 8796
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate             146
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:18:59 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                15582
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate             111
    FEC Uncorrected Errors Rate             0
---(refreshed at 2024-04-17 14:19:01 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled,
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC statistics              Errors
    FEC Corrected Errors                20342
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate             256
    FEC Uncorrected Errors Rate             0
{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)
{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
  Interface index: 226, SNMP ifIndex: 800
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Enabled
  Pad to minimum frame size: Disabled
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x4000
  Link flags     : None
  CoS queues     : 8 supported, 8 maximum usable queues
  Schedulers     : 0
  Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)
  Active alarms  : None
  Active defects : None
  PCS statistics                      Seconds
    Bit errors                             0
    Errored blocks                         0
  Ethernet FEC Mode  : FEC119
  Ethernet FEC statistics              Errors
    FEC Corrected Errors               801787
    FEC Uncorrected Errors                  0
    FEC Corrected Errors Rate            2054
    FEC Uncorrected Errors Rate             0
  Link Degrade :
    Link Monitoring                   :  Disable
  Interface transmit statistics: Disabled
  Logical interface et-7/1/4.0 (Index 420) (SNMP ifIndex 815)
    Flags: Up SNMP-Traps 0x4004000 Encapsulation: ENET2
    Input packets : 1
    Output packets: 1
    Protocol inet, MTU: 1500
    Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold cnt: 0, NH drop cnt: 0
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: 10.10.10.76/30, Local: 10.10.10.77, Broadcast: 10.10.10.79
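Rather than eyeballing refresh output, the counters above could be scraped periodically. Here's a hypothetical Python sketch (the hostname, SSH transport, and alarm threshold are placeholders, not Juniper recommendations) that flags the two things that would actually matter: any uncorrected errors, or a corrected-rate spike.

    # Hypothetical FEC watcher; scrapes the same counters shown above.
    import re, subprocess, time

    IFACE = "et-7/1/4"
    CORRECTED_RATE_ALARM = 1_000_000  # arbitrary example threshold

    def fec_counters(iface):
        # "mx960" is a placeholder SSH host whose login shell is the CLI.
        out = subprocess.run(
            ["ssh", "mx960", "show interfaces %s" % iface],
            capture_output=True, text=True).stdout
        def grab(label):
            return int(re.search(label + r"\s+(\d+)", out).group(1))
        return {
            "uncorrected": grab("FEC Uncorrected Errors"),
            "corrected_rate": grab("FEC Corrected Errors Rate"),
        }

    while True:
        c = fec_counters(IFACE)
        if c["uncorrected"] or c["corrected_rate"] > CORRECTED_RATE_ALARM:
            print("FEC trouble:", c)
        time.sleep(10)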
--
-Aaron