On Fri, 4 May 2001, Dominic J. Eidson wrote:
> So who replaced Intermedia's routers with a lump of cheese? :
[snip]

Personally, I'm still trying to figure out why Exodus, in all their apparent wisdom (or lack thereof), has stopped using the GBLX OC-48's in the former GlobalCenter facilities (or at least SNV3), and is now shuttling all its traffic out a single Exodus OC-12. Prior to yesterday these traces would've shown gblx.net routers (on different IPs), and would never have touched an Exodus backbone...

fs1(1)# traceroute ops.sj.ipixmedia.com
traceroute to ops.sj.ipixmedia.com (64.209.175.20), 30 hops max, 40 byte packets
 1  fw.i.eng.bamboo.com (192.168.12.254)  0.872 ms  0.756 ms  0.716 ms
 2  gw.eng.bamboo.com (63.78.93.1)  1.961 ms  2.275 ms  2.076 ms
 3  240.ATM3-0.GW4.PAO1.ALTER.NET (157.130.195.13)  2.960 ms  3.109 ms  2.935 ms
 4  124.ATM2-0.XR2.PAO1.ALTER.NET (146.188.148.86)  3.175 ms  3.491 ms  3.251 ms
 5  188.at-1-0-0.XR4.SCL1.ALTER.NET (152.63.51.138)  3.960 ms  4.013 ms  4.339 ms
 6  194.ATM5-0.GW6.SCL1.ALTER.NET (152.63.52.53)  4.347 ms  4.280 ms  4.068 ms
 7  exodus-OC12-gw.customer.alter.net (157.130.203.90)  4.523 ms  4.673 ms  4.879 ms
 8  66.35.194.18 (66.35.194.18)  4.908 ms  4.647 ms  4.830 ms
 9  bbr01-p4-1.snva03.exodus.net (209.185.9.85)  5.470 ms  5.497 ms  5.541 ms
10  64.15.192.3 (64.15.192.3)  5.761 ms  5.606 ms  5.458 ms
11  64.209.177.30 (64.209.177.30)  5.334 ms  5.737 ms  5.738 ms
12  ops.sj.ipixmedia.com (64.209.175.20)  5.648 ms  5.751 ms  5.697 ms

Reverse:

ops:~# traceroute fs.eng.bamboo.com
traceroute to fs.eng.bamboo.com (63.78.93.3), 30 hops max, 40 byte packets
 1  wr1.sj.ipixmedia.com (64.209.175.3)  0.213 ms  0.155 ms  0.144 ms
 2  64.209.177.18 (64.209.177.18)  0.196 ms  0.206 ms  0.162 ms
 3  64.15.192.17 (64.15.192.17)  0.279 ms  0.23 ms  0.191 ms
 4  bbr02-p0-3.sntc08.exodus.net (209.185.9.86)  0.927 ms  0.986 ms  0.852 ms
 5  66.35.194.5 (66.35.194.5)  1.002 ms  0.902 ms  0.843 ms
 6  POS2-0.GW6.SCL1.ALTER.NET (157.130.203.89)  2.571 ms  1.548 ms  1.43 ms
 7  168.at-6-0-0.XR4.SCL1.ALTER.NET (152.63.52.62)  1.706 ms  1.653 ms  1.911 ms
 8  152.63.51.141 (152.63.51.141)  2.414 ms  2.781 ms  2.405 ms
 9  188.ATM5-0.GW4.PAO1.ALTER.NET (146.188.148.85)  2.544 ms  2.572 ms  2.791 ms
10  ipix-gw.customer.ALTER.NET (157.130.195.14)  4.946 ms  5.457 ms  4.969 ms
11  fs.eng.bamboo.com (63.78.93.3)  4.573 ms  4.984 ms  5.023 ms

Of course, this is probably a move I should've expected from Exodus, after the mongolian flustercluck that was the AS change in SNV3. You'd think they would do something like that carefully, since you can -seriously- bone customers. But noooooo.

One of our junior admins made the change (since I was out of town, but hey, it's cut and paste!). He, and all of the other affected customers in SNV3 on the conference call, were left on hold for about half an hour (and the call itself started half an hour late), whereupon the Exodus engineering team popped back in and said "We're done with our side, you guys go ahead!".

Now. Does it seem logical to kill connectivity over BOTH of your hosting routers at once, thus killing every single BGP-running customer you have who isn't physically in their cage at the time? Or would it seem better to do what I assumed they'd do: take down one router, wait for everyone to make their changes, then do the other? I guess this is what happens when I assume intelligence at a hosting/backbone provider.

*returns to watching the lights blink*

-j

--
-Jonathan Disher
-Sr. Systems and Network Engineer, Web Operations
-Internet Pictures Corporation, Palo Alto, CA
-[v] (650) 388-0497 | [p] (877) 446-9311 | [e] jdisher@eng.ipix.com
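The path change in traces like the ones above is easy to spot by eye; for longer traces, a minimal Python sketch (a hypothetical helper, not from the post — it only assumes the standard one-hop-per-line traceroute output format) can pull out each hop's name so you can see at a glance which backbone (alter.net, exodus.net, gblx.net) a path transits:

```python
import re

def hop_names(traceroute_output):
    """Return (hop_number, name_or_ip) pairs parsed from traceroute output.

    Matches lines of the form: " N  name (addr)  t1 ms  t2 ms  t3 ms".
    """
    hops = []
    for line in traceroute_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(\S+)\s+\(", line)
        if m:
            hops.append((int(m.group(1)), m.group(2)))
    return hops

# A few hops from the forward trace above:
sample = """\
 7  exodus-OC12-gw.customer.alter.net (157.130.203.90)  4.523 ms
 8  66.35.194.18 (66.35.194.18)  4.908 ms
 9  bbr01-p4-1.snva03.exodus.net (209.185.9.85)  5.470 ms
"""

for hop, name in hop_names(sample):
    print(hop, name)
```

Diffing the name lists from a trace taken before a maintenance window against one taken after makes a transit change like this obvious even across long paths.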