AT&T/Lumen interconnection disruption in ATL
I lost access to the Lumen network from AT&T in Atlanta for 27 minutes (2025-12-16, 15:50 to 16:17 UTC) and wanted to check whether anyone else saw this from other networks/locations. It restored on its own. Comparing the traceroutes I ran while it was down against those run afterward, traffic appeared to be stuck making the jump between AT&T and Lumen at 76.239.207.188 (AT&T per ARIN whois). Once things restored, the path progressed to 4.68.62.225 (ae8.edge4.atl2.sp.lumen.tech) and then on to the final destination. In one instance there was an extra hop between those two at 32.130.89.13 (AT&T per ARIN whois), but it no longer shows up on retries.
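For anyone who wants to automate the during-vs-after traceroute comparison described above, here is a minimal sketch. Only the two quoted hops (76.239.207.188 and 4.68.62.225) come from the report; the helper name, the `"*"` timeout markers, and the final hop are illustrative assumptions.

```python
# Minimal sketch: find the first hop index at which two traceroute
# paths diverge. Hop lists are plain strings, with "*" standing in
# for a timed-out hop, as traceroute prints them.

def first_divergent_hop(path_a, path_b):
    """Return the index of the first differing hop, or None if one
    path is simply a prefix of the other."""
    for i, (a, b) in enumerate(zip(path_a, path_b)):
        if a != b:
            return i
    return None

# During the outage the path stalled at the AT&T-side interconnect hop;
# after restoration it continued onto the Lumen edge. The last "after"
# hop below is a hypothetical destination, not from the original report.
during = ["76.239.207.188", "*", "*"]
after = ["76.239.207.188", "4.68.62.225", "198.51.100.10"]

print(first_divergent_hop(during, after))  # -> 1 (paths split right after the AT&T hop)
```

Feeding it real traceroute output would just mean splitting each hop line and keeping the address column before comparing.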
We saw our SD-WAN appliances in ATL lose connectivity to some things in GCP (where the Velocloud gateways live). Our SD-WAN in ATL peers with Lumen. The start time aligned with yours, but ours was resolved only 6 minutes later.
Chuck
Interesting. I also saw a number of spikes on Downdetector for various services, which would line up with a broader routing issue within a big provider's network. Thankfully it's been stable since then, so I'll just file a ticket to see whether it was a known issue that might have an RFO later.
participants (2)
- chuckchurch@gmail.com
- nanog@fleish.org