Let me summarize, then ask a question:

a) BoA uses the public internet for its ATM transactions. The public internet was so dead that every one of their ATM machines was dead for many hours, even many hours longer than the public internet itself was dead.

b) BoA uses its own network for its own ATM transactions. Somewhere on a public-to-private connection, a firewall wasn't doing its job, or there wasn't a firewall at all. Things were broken for a while, until they were able to fix all their SQL servers.

I guess my point is, if it were a), not every ATM would have been dead the whole time, and things would have been fixed in only a little while. Not many internet 'backbones' (at least ones BoA would have used for this application) were down as long as BoA's ATMs were.

On the other hand, I think it's more likely that BoA had unprotected SQL servers, and they got hit. It took a long while for BoA IT people to make it out of bed Saturday morning to fix the problem. (A quick check for that kind of exposure is sketched after Ray's quoted message below.)

I still clearly say that I don't know what happened, and I did make assumptions (as I said in the original mail) -- but I'd still place my money on b).

On Sun, 26 Jan 2003, Ray Burkholder wrote:
Actually, I think too many assumptions were made.
Let's simplify.
We know UUNET's traffic capabilities were reduced significantly. UUNET has many big customers. Other big carriers saw similar effects on their networks, probably particularly at peering points.
We know many companies buy public or private VPN services from major carriers like these, and that either type of VPN may ride over public internet carriers.
I think, therefore, that the only real conclusion we can draw is this: if BoA's traffic was not prioritized, it suffered collateral damage, primarily because traffic could not get through between the ATMs and the central processing center.
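
P.S. For what it's worth, the "unprotected SQL servers" theory is the kind of thing you can sanity-check from outside the perimeter. Below is a minimal sketch in Python, assuming the worm in question is Slammer, which spread over UDP/1434 (the SQL Server Resolution Service port); the probe just sends the one-byte SSRP "ping" (0x02) and reports whether anything answers. The function name and address are placeholders, not anything of BoA's.

import socket

# Sketch only: see if a host answers the SQL Server Resolution Service
# "ping" (a single 0x02 byte on UDP/1434), the port Slammer abused.
# A reply from across the perimeter means the port is reachable;
# silence means it's filtered -- or the box just isn't listening.
def ssrp_reachable(host, timeout=3.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(b"\x02", (host, 1434))
        s.recvfrom(4096)              # any reply at all counts
        return True
    except OSError:                   # timeout or ICMP unreachable
        return False
    finally:
        s.close()

print(ssrp_reachable("192.0.2.10"))   # placeholder address

A firewall doing its job on the public-to-private link would make this come back False from the outside while it still comes back True from inside the ATM network.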
-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben -- -- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --