Re: Westnet and Utah outage
So this is to imply that you do have a problem w/ Sprint and that you are picking on them in a specific way? -scott
From list-admin@merit.edu Wed Nov 22 09:23:19 1995
Date: Wed, 22 Nov 1995 09:20:38 -0500 (EST)
From: Gordon Cook <gcook@tigger.jvnc.net>
Reply-To: cook@cookreport.com
To: nanog@merit.edu
Subject: Westnet and Utah outage
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Content-Length: 762
Small clarification: I have no problem with Westnet and do not mean to be picking on it in any way. I have a friend who lives in Westnet's service area who gets bothered by these outages and passes them on to me.
********************************************************************
Gordon Cook, Editor & Publisher       Subscript.: Individ-ascii $85
The COOK Report on Internet           Non Profit. $150
431 Greenway Ave, Ewing, NJ 08618     Small Corp & Gov't $200
(609) 882-2572                        Corporate $350
Internet: cook@cookreport.com         Corporate. Site Lic $650
Newly expanded COOK Report Web Pages  http://pobox.com/cook/
********************************************************************
Scott asks:
So this is to imply that you do have a problem w/ Sprint and that you are picking on them in a specific way?
COOK: Your suggestion, Scott, not mine... If I remember correctly, on my one previous query 6 weeks ago the problem seemed to be more MCI's than Sprint's. I am in the midst of writing a long cover story on how backbones are responding to Internet growth pressure, and when something breaks I am interested in understanding what happened. It looks in this case, however, like people want me to yell on the Sprint outage list rather than here, so I'll check out the possibility of doing that. Do you guys have an outage list I can join? My apologies if I have offended anyone.
I really hate to make Gordon's points here, but the network is so broken at times, it is hard to get interactive work done. Even an FTP between two NSF supercomputer centers ((so far) idle 266MHz machines at the end points) went at a whopping:

3320903 bytes sent in 1.1e+03 seconds (3.1 Kbytes/s)

And that was already the second try, as the uncompressed file version just took way too long. The packet losses were between 8 and 10 percent.

This kind of performance is way too regular for me these days. And as a "user" I have very little means to find out what the hell is wrong with this network. I am sometimes so sick and tired of this that I am tempted to use the tools I have (ping and traceroute) and broadly post to people as to where things seem broken. And I will not care at all if you guys tell me "well, that's unfair, as ping and traceroute go to the main processor." Give me a working network, better tools, or SHUT THE HELL UP AND GO BACK TO FARMING. I will be glad to shut up myself, once you get your act together and provide smooth and transparent network services.

Obviously I am able to send much more polite notes, but I am really getting sick and tired of this lousy performance and degrading network service quality.

I suspect MANY will increase their amplitude over the next few months if this continues.

And I don't want to hear this bullshit about regular 10% packet losses being just fine, and 100% being just marginal.

*At least* let people know if things are broken, so they look for alternatives (be it a cup of tea if short term, or another service provider if persistent).

I think this problem is widespread and not confined to specific service providers. So if someone points a finger at your competitor, don't be too happy about it. You may be next.

Geez.
In message <199511221821.KAA15673@upeksa.sdsc.edu>, Hans-Werner Braun writes:
This kind of performance is way too regular for me these days. And as a "user" I have very little means to find out what the hell is wrong with this network. I am sometimes so sick and tired of this that I am tempted to use the tools I have (ping and traceroute) and broadly post to people as to where things seem broken. And I will not care at all if you guys tell me "well, that's unfair, as ping and traceroute go to the main processor." Give me a working network, better tools, or SHUT THE HELL UP AND GO BACK TO FARMING. I will be glad to shut up myself, once you get your act together and provide smooth and transparent network services.
Since when can't you use ping and traceroute? You just have to ignore the results from routers that are probably too heavily loaded. (And avoid loading them further by pinging them, but the few packets that determine which way your traffic is headed should be no problem.)

Curtis
This kind of performance is way too regular for me these days. And as a "user" I have very little means to find out what the hell is wrong with this network. I am sometimes so sick and tired of this that I am tempted to use the tools I have (ping and traceroute) and broadly post to people as to where things seem broken. And I will not care at all if you guys tell me "well, that's unfair, as ping and traceroute go to the main processor." Give me a working network, better tools, or SHUT THE HELL UP AND GO BACK TO FARMING. I will be glad to shut up myself, once you get your act together and provide smooth and transparent network services.
Well, it's not terribly useful to complain to nanog about specific & temporary problems. Unfortunately, it's often really only the Network Operation Centers (NOCs) of the various providers, equipped with knowledge of the topology of the various networks, who can properly diagnose network problems.

[So it's not a problem to use ping & traceroute - acknowledging that they may not be useful when seeing round trip times from (Cisco) routers. The problem is that you should pick a productive forum, such as your provider's NOC, to complain to.]

For example, it's easy to say "My provider sucked this morning - my traceroute stopped in DC and there were huge packet losses". But what you don't see is that the AC power was *off* @ MAE-East this morning from 4am or so until 9:30am or so. And mailing to NANOG is not the right way to find out these things.

Avi
Question: Which RFC should I consult to determine acceptable delay and packet loss?

- jeff -

On Wed, 22 Nov 95, hwb@upeksa.sdsc.edu (Hans-Werner Braun) wrote:
Scott asks:
So this is to imply that you do have a problem w/ Sprint and that you are picking on them in a specific way?
COOK: Your suggestion, Scott, not mine... If I remember correctly, on my one previous query 6 weeks ago the problem seemed to be more MCI's than Sprint's. I am in the midst of writing a long cover story on how backbones are responding to Internet growth pressure, and when something breaks I am interested in understanding what happened. It looks in this case, however, like people want me to yell on the Sprint outage list rather than here, so I'll check out the possibility of doing that. Do you guys have an outage list I can join? My apologies if I have offended anyone.
I really hate to make Gordon's points here, but the network is so broken at times, it is hard to get interactive work done. Even an FTP between two NSF supercomputer centers ((so far) idle 266MHz machines at the end points) went at a whopping:
3320903 bytes sent in 1.1e+03 seconds (3.1 Kbytes/s)
And that was already the second try, as the uncompressed file version just took way too long. The packet losses were between 8 and 10 percent.
This kind of performance is way too regular for me these days. And as a "user" I have very little means to find out what the hell is wrong with this network. I am sometimes so sick and tired of this that I am tempted to use the tools I have (ping and traceroute) and broadly post to people as to where things seem broken. And I will not care at all if you guys tell me "well, that's unfair, as ping and traceroute go to the main processor." Give me a working network, better tools, or SHUT THE HELL UP AND GO BACK TO FARMING. I will be glad to shut up myself, once you get your act together and provide smooth and transparent network services.
Obviously I am able to send much more polite notes, but I am really getting sick and tired of this lousy performance and degrading network service quality.
I suspect MANY will increase their amplitude over the next few months if this continues.
And I don't want to hear this bullshit about regular 10% packet losses being just fine, and 100% being just marginal.
*At least* let people know if things are broken, so they look for alternatives (be it a cup of tea if short term, or another service provider if persistent).
I think this problem is widespread and not confined to specific service providers. So if someone points a finger at your competitor, don't be too happy about it. You may be next.
Geez.
Question: Which RFC should I consult to determine acceptable delay and packet loss?
RFCs are the result of IETF activities. The IETF is essentially a protocol standardization group, not an operations group. I don't think you perceive the IETF as "running" your network, or? There may not be much of an alternative, though, which to a large extent is the issue at hand. Nobody is responsible (individually or as a consortium or whatever) for this anarchically organized and largely uncoordinated (at a systemic level) global operational environment. While IETF/RFCs could be utilized somehow, this is not really an issue of theirs. I sure would not blame the IETF for not delivering here, as this is not their mandate.

In other email I have seen, it seems that the important issues are hard for some to understand. I (and I suspect several others) don't really care much about a specific tactical issue (be it an outage or whatever). The issue is how to make the system work with predictable performance and a fate-sharing attitude at a global level, in a commercial and competitive environment that is still extremely young at that, and attempts to accommodate everything from mom'n'pop shops to multi-billion-dollar industry. And exhibits exponential usage and ubiquity growth, without the resources to upgrade quickly to satisfy all the demands. And no control over in-flows, and major disparities across the applications. And TCP flow control not working that well, as the aggregation of transactions is very heavy, and the packet-per-transaction count is so low on average that TCP may not be all that much better to the network than UDP (in terms of adjusting to jitter in available resources). Not to mention the age-old problem with routing table sizes and routing table updates.
In message <199511231608.IAA27230@upeksa.sdsc.edu>, Hans-Werner Braun writes:
Question: Which RFC should I consult to determine acceptable delay and packet loss?
RFCs are the result of IETF activities. The IETF is essentially a protocol standardization group, not an operations group. I don't think you perceive the IETF as "running" your network, or? There may not be much of an alternative, though, which to a large extent is the issue at hand. Nobody is responsible (individually or as a consortium or whatever) for this anarchically organized and largely uncoordinated (at a systemic level) global operational environment. While IETF/RFCs could be utilized somehow, this is not really an issue of theirs. I sure would not blame the IETF for not delivering here, as this is not their mandate.

In other email I have seen, it seems that the important issues are hard for some to understand. I (and I suspect several others) don't really care much about a specific tactical issue (be it an outage or whatever). The issue is how to make the system work with predictable performance and a fate-sharing attitude at a global level, in a commercial and competitive environment that is still extremely young at that, and attempts to accommodate everything from mom'n'pop shops to multi-billion-dollar industry. And exhibits exponential usage and ubiquity growth, without the resources to upgrade quickly to satisfy all the demands. And no control over in-flows, and major disparities across the applications. And TCP flow control not working that well, as the aggregation of transactions is very heavy, and the packet-per-transaction count is so low on average that TCP may not be all that much better to the network than UDP (in terms of adjusting to jitter in available resources). Not to mention the age-old problem with routing table sizes and routing table updates.
This belongs on the end2end-interest list or IPPM or elsewhere, but I'll save a lot of people going through the archives.

In order to get X bandwidth on a given TCP flow you need to have an average window size of X * RTT. This is expressed in terms of TCP segments N = (X * RTT) / MSS (or more correctly the segment size in use rather than MSS). To sustain an average window of N segments, you must ideally reach a steady state where you cut cwnd (current window) in half, then grow linearly, fluctuating between 2/3 and 4/3 of the target size. This would mean one drop in 2/3 N windows or DropRate in terms of time is 2/3 N * RTT. In one RTT on average X * RTT amount of data flows. In practice, you rarely drop at the perfect time, so the constant 2/3 (call it K) can be raised to 1-2. Since N = (X * RTT) / MSS, DropRate = K * X * RTT * X * RTT / MSS. Units are b/s * sec * b/s * sec / b, or b. The DropRate expressed in bits can be converted to seconds or packets (divide by X or by MSS). This type of analysis is courtesy of the good folks at PSC (Matt, Jamshid, et al).

For example, to get 40 Mb/s at 70 msec RTT and 4096 MSS, you get one error about every 6 seconds (K=1) or 1 in 7,300 packets. If you look at 56 Kb/s and 512 MSS you get a very interesting result. You need one error every 66 msec or 1 error in 0.9 packets. This gives a good incentive to increase delay. At 250 msec, you get a result of one error in 11.7 packets (much better!).

Another interesting point to note is that you need 3 duplicate ACKs for TCP fast retransmit to work, so your window must be at least 4 segments (and should be more). If you have a very large number of TCP flows, where on average people get less than 1200 baud or so, the delay you need to make TCP work well starts to exceed the magic 3 second boundary. This was discussed ad nauseam on end2end-interest. An important result is that you need more queueing than the delay bandwidth product for severely congested links. Another is that there is a limit to the number of active TCP flows that can be supported per bandwidth. One suggestion to address the latter problem is to further drop segment size if cwnd is less than 4 segments in size and/or when estimated RTT gets into the seconds range.

This analysis of how much loss is acceptable to TCP may not be outside the bounds of an informational RFC, but so far none exists.

Curtis
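The back-of-envelope figures above are straightforward to reproduce. Below is a minimal Python sketch of the same calculation, assuming MSS is given in bytes and taking K=1 as in Curtis's example; the function name and output format are illustrative and not part of the original posting.

def tcp_loss_budget(rate_bps, rtt_s, mss_bytes, k=1.0):
    """Loss budget for one TCP flow, per the analysis above: with a window
    of N = X*RTT/MSS segments, roughly one drop can be tolerated per
    k*N^2 packets, i.e. one drop per k*X*RTT^2/MSS seconds."""
    mss_bits = mss_bytes * 8
    window_segments = rate_bps * rtt_s / mss_bits
    pkts_between_drops = k * window_segments ** 2
    secs_between_drops = k * rate_bps * rtt_s ** 2 / mss_bits
    return window_segments, pkts_between_drops, secs_between_drops

cases = [
    ("40 Mb/s, 70 msec, 4096-byte MSS", 40e6, 0.070, 4096),
    ("56 Kb/s, 70 msec,  512-byte MSS", 56e3, 0.070, 512),
    ("56 Kb/s, 250 msec, 512-byte MSS", 56e3, 0.250, 512),
]
for label, rate, rtt, mss in cases:
    n, pkts, secs = tcp_loss_budget(rate, rtt, mss)
    print(f"{label}: window ~{n:.1f} segments, "
          f"one drop per ~{pkts:.1f} packets (~every {secs:.3f} s)")

# Expected output matches the figures quoted above: about one drop per
# 7,300 packets (every ~6 s) for the 40 Mb/s case, one drop per ~0.9
# packets for 56 Kb/s at 70 msec, and one per ~11.7 packets at 250 msec.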
I made a private reply to Curtis on his posting earlier this week, and he gave a nice analysis and cc'd end2end-interest rather than nanog. For those that don't care to read all this, here's the summary:
Which would you prefer? 140 msec and 0% loss or 70 msec and 5% loss?
So we get to choose between large delay or large lossage. Doesn't sound wonderful... I thought you folks in nanog might be interested, so with Curtis' permission, here's the full exchange (the original posting by Curtis is at the very end).

-- Jim

Here's what I wrote:
In message <199511272220.OAA01151@stilton.cisco.com>, Jim Forster writes:
Curtis,
I think these days for lots of folks the interesting question is not what happens when a single or a few high-rate TCPs get in equilibrium, but rather what happens when a DS-3 or higher is filled with 56k or slower flows, each of which only lasts for an average of 20 packets or so. Unfortunately, these 20-packet TCP flows are what's driving the stats these days, due I guess to the silly WWW (TCP per file; file per graphic; many graphics per page) that's been so successful.
And Curtis's reply:
The analysis below also applies to just under 800 TCP flows each getting 1/800th of a DS3 link or about 56Kb/s. The loss rate on the link should be about one packet in 11 if the delay can be increased to 250 msec. If the delay is held at 70 msec, lots of timeouts and terrible fairness and poor overall performance will result.
Do we need an ISP to prove this to you by exhibiting terrible performance? If so, please speak to Jon Crowcroft. His case is 400 flows on 4 Mb/s which is far worse, since delay would have to be increased over 3 seconds or segment size reduced below 552. :-(
I could try to derive the results but I'm sure you or others would do better :-). How many of the packets in the 20-packet flow are at equilibrium? What's the drop rate? Hmmm, very simple-minded analysis says that it will be large: exponential growth (doubling cwnd every ack) should get above best case pretty quickly, certainly within the 20-packet flow. Assume it's only above optimum once, then the packet loss rate is 1 in 20. Sounds grim. Vegas TCP sounds better for these reasons, since it tracks actual bw, but I'm not really qualified to judge.
-- Jim
Jim,
The end2end-interest thread was quite long and I didn't want to repeat the whole thing. The initial topic was very tiny TCP flows of 3 to 4 packets. That is a really bad problem, but should no longer be a realistic problem once HTTP is modified to allow it to pick up both the HTML page and all inline images in one TCP connection.
Your example is quite reasonable. At 20 packets per flow, with no loss you get 1, 2, 4, 8, 5 packets per RTT, or complete transfer in about 5 RTT. On average each TCP flow will get 20 packets / 5 RTT of bandwidth until congestion of 4 packets/RTT (for 552/70 msec, this is about 64 Kb/s). If the connection is temporarily overloaded by a factor of 2, this must be reduced to 2 packets/RTT. If we drop 1 packet in 20, roughly 35% of the flows go completely untouched (0.95^20). Some 15% will drop one packet of the first 3 and timeout and slow start, resulting in less than 20 packets / 3 seconds (3 seconds >> 5*RTT). Some 60% will drop one packet of the 4th through 20th, resulting in fast retransmit, no timeout, and linear growth in window. If the 4th is dropped, the window is cut to 2, so the next few RTTs you get 2, 3, 4, 5, 3, or 8 RTTs (2 initial, 1 drop, 5 more). This is probably not quite enough to slow things down.
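A quick sketch of the arithmetic in the preceding paragraph, assuming cwnd starts at one segment and doubles each RTT; the helper function and exact percentages are illustrative (the binomial values land near the rough 35/15/60 split quoted above, with the last figure coming out closer to 50%):

def slow_start_schedule(total_packets):
    """Packets sent per RTT during loss-free slow start (cwnd starts at
    one segment and doubles each RTT) until total_packets are sent."""
    schedule, cwnd, sent = [], 1, 0
    while sent < total_packets:
        burst = min(cwnd, total_packets - sent)
        schedule.append(burst)
        sent += burst
        cwnd *= 2
    return schedule

FLOW_PKTS = 20     # one WWW-style transfer
P_DROP = 0.05      # one packet in 20 dropped
p_ok = 1 - P_DROP

print("per-RTT schedule :", slow_start_schedule(FLOW_PKTS))               # [1, 2, 4, 8, 5] -> ~5 RTTs
print(f"untouched flows  : {100 * p_ok ** FLOW_PKTS:.0f}%")               # ~36%: no drop at all
print(f"drop in first 3  : {100 * (1 - p_ok ** 3):.0f}%")                 # ~14%: timeout and slow start
print(f"drop in pkts 4-20: {100 * (p_ok ** 3 - p_ok ** FLOW_PKTS):.0f}%") # ~50%: fast retransmit, no timeout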
On a DS3 with 70 msec RTT and 1500 simultaneous flows of 20 packets each (steady state such that the number of active flows remains about 1500, roughly twice what a DS3 could support) you would need a drop rate on the order of 5% or more. Alternately, you could queue things up, doubling the delay to 140 msec, and give every flow the same slower rate (perfect fairness in your example) and have a zero drop rate.
Which would you prefer? 140 msec and 0% loss or 70 msec and 5% loss? Delay is good. We want delay for elastic traffic! But not for real time - use RSVP, admission control, police at the ingress and stick it on the front of the queue.
In practice, I'd expect overload to be due to lots of flows, but not enough little guys to overload the link (if so, get a bigger pipe; we can say that and put it in practice). The overload will be due to a high baseline of little guys (20-packet flows, or a range of fairly small ones), plus some percentage of longer-duration flows capable of sucking up the better part of a T1, given half a chance. It is the latter that you want to slow down, and these are the ones that you *can* slow down with a fairly low drop rate.
I leave it as an exercise to the reader to determine how RED fits into this picture (either one, my overload scenario or Jim's where all the flows are 20 packets in duration).
The 400 flows on 4 Mb/s is an interesting (and difficult) case. I've suggested both allowing delay to get very large (ie: as high as 2 seconds) and hacking the host implementation to reduce segment size to as low as 128 bytes when RTT gets huge or cwnd drops below 4 segments, holding the window to no less than 512 (4 segments) in hopes that fast retransmit will almost always work even in 15-20% loss situations.
Curtis
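As a rough cross-check of the 400-flows-on-4 Mb/s case discussed above, here is a minimal Python sketch of the window-floor arithmetic, using the relation N = X * RTT / MSS and the 4-segment minimum needed for fast retransmit; the function and the alternative segment sizes tried are illustrative assumptions:

def min_rtt_for_window(per_flow_bps, mss_bytes, min_segments=4):
    """Smallest RTT at which a flow of the given rate can sustain a window
    of min_segments (the floor needed for 3 duplicate ACKs and hence fast
    retransmit), from the window relation N = X * RTT / MSS."""
    return min_segments * mss_bytes * 8 / per_flow_bps

per_flow = 4e6 / 400            # 400 flows sharing 4 Mb/s -> 10 Kb/s each
for mss in (552, 256, 128):     # segment sizes in bytes
    rtt = min_rtt_for_window(per_flow, mss)
    print(f"MSS {mss:3d} bytes: RTT must exceed {rtt:.2f} s for a 4-segment window")

# With 552-byte segments the bare 4-segment floor is already ~1.8 s, and any
# comfortable margin above 4 segments pushes the needed delay toward the ~3 s
# figure mentioned above; shrinking the segment size to 128 bytes brings the
# floor down to ~0.4 s, which is the motivation for the segment-size hack.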
Curtis's original posting:
In order to get X bandwidth on a given TCP flow you need to have an average window size of X * RTT. This is expressed in terms of TCP segments N = (X * RTT) / MSS (or more correctly the segment size in use rather than MSS). To sustain an average window of N segments, you must ideally reach a steady state where you cut cwnd (current window) in half, then grow linearly, fluctuating between 2/3 and 4/3 of the target size. This would mean one drop in 2/3 N windows or DropRate in terms of time is 2/3 N * RTT. In one RTT on average X * RTT amount of data flows. In practice, you rarely drop at the perfect time, so the constant 2/3 (call it K) can be raised to 1-2. Since N = (X * RTT) / MSS, DropRate = K * X * RTT * X * RTT / MSS. Units are b/s * sec * b/s * sec / b, or b. The DropRate expressed in bits can be converted to seconds or packets (divide by X or by MSS). This type of analysis is courtesy of the good folks at PSC (Matt, Jamshid, et al).
For example, to get 40 Mb/s at 70 msec RTT and 4096 MSS, you get one error about every 6 seconds (K=1) or 1 in 7,300 packets. If you look at 56 Kb/s and 512 MSS you get a very interesting result. You need one error every 66 msec or 1 error in 0.9 packets. This gives a good incentive to increase delay. At 250 msec, you get a result of one error in 11.7 packets (much better!).
Another interesting point to note is that you need 3 duplicate ACKs for TCP fast retransmit to work, so your window must be at least 4 segments (and should be more). If you have a very large number of TCP flows, where on average people get less than 1200 baud or so, the delay you need to make TCP work well starts to exceed the magic 3 second boundary. This was discussed ad nauseam on end2end-interest. An important result is that you need more queueing than the delay bandwidth product for severely congested links. Another is that there is a limit to the number of active TCP flows that can be supported per bandwidth. One suggestion to address the latter problem is to further drop segment size if cwnd is less than 4 segments in size and/or when estimated RTT gets into the seconds range.
This analysis of how much loss is acceptable to TCP may not be outside the bounds of an informational RFC, but so far none exists.
Curtis
Jim,
I made a private reply to Curtis on his posting earlier this week, and he gave a nice analysis and cc'd end2end-interest rather than nanog. For those that don't care to read all this, here's the summary:
[ .. summary deleted .. ]

I did mention that I didn't mind you forwarding that note. There was subsequent discussion on the end2end-interest list. I may be overstating the buffering requirements since the assumption is made that a high degree of synchronization could occur. This really needs to be backed up by simulations.

Curtis
Hans;

Sorry...I waited for additional replies but you seemed to be the only one to take my bait. My question was rhetorical.

I hear all this complaining on this forum about unacceptable delay and packet loss by the ISP Community, yet no "respected" industry standards body has yet set QOS guidelines for ISPs! An old management dictum says "if it's important, measure it".

I know where to look for QOS criteria on my physical plant (T1/DS3s); I even know where to look for QOS criteria for my old X.25 network. If we want things to get better w/i the ISP Community, let's define what better is.

- jeff -

On Thu, 23 Nov 95, hwb@upeksa.sdsc.edu (Hans-Werner Braun) wrote:

Question: Which RFC should I consult to determine acceptable delay and packet loss?
RFCs are the result of IETF activities. The IETF is essentially a protocol standardization group, not an operations group. I don't think you perceive the IETF as "running" your network, or? There may not be much of an alternative, though, which to a large extent is the issue at hand. Nobody is responsible (individually or as a consortium or whatever) for this anarchically organized and largely uncoordinated (at a systemic level) global operational environment. While IETF/RFCs could be utilized somehow, this is not really an issue of theirs. I sure would not blame the IETF for not delivering here, as this is not their mandate.

In other email I have seen, it seems that the important issues are hard for some to understand. I (and I suspect several others) don't really care much about a specific tactical issue (be it an outage or whatever). The issue is how to make the system work with predictable performance and a fate-sharing attitude at a global level, in a commercial and competitive environment that is still extremely young at that, and attempts to accommodate everything from mom'n'pop shops to multi-billion-dollar industry. And exhibits exponential usage and ubiquity growth, without the resources to upgrade quickly to satisfy all the demands. And no control over in-flows, and major disparities across the applications. And TCP flow control not working that well, as the aggregation of transactions is very heavy, and the packet-per-transaction count is so low on average that TCP may not be all that much better to the network than UDP (in terms of adjusting to jitter in available resources). Not to mention the age-old problem with routing table sizes and routing table updates.
-----------------------------------------------------------------------------
Jeff Oliveto                    | Phone: +1.703.760.1764
Sr.Mgr Opns Technical Services  | Fax:   +1.703.760.3321
Cable & Wireless, Inc           | Email: joliveto@cwi.net
1919 Gallows Road               | URL:   http://www.cwi.net/
Vienna, VA 22182                | NOC:   +1.800.486.9999
-----------------------------------------------------------------------------
Other people have touched on it, but I'd like to re-iterate: the quality that someone can expect out of their Internet connection, as a practical matter, will somewhat vary with how much they're willing to pay.

It seems to me that giving someone <<1% downtime is an expensive level of service. The Internet market today is not one where most customers question the providers on the level of service; quite contrarily, they question the providers on how cheap they can go. This type of market will be cost driven, and for my $19.95 unlimited PPP account, do you think my ISP will be able to give me <<1% inaccessibility? Not without operating in the red, I don't think.

I think most ISPs would be *delighted* to offer customers Very High Quality service, but few customers are willing to pay for that service. As a result, the final judgement of "how good is good enough" will be "whatever the customer can live with," as compared to anything that engineers like (ie 1%, 5%, etc).

Ed
ed@texas.net

(p.s. you notice I'm brushing aside the first question, being "how do I *measure* the quality of service." Offhand, a weighted average of all of the components that a given customer needs for a connection makes the most sense to me.)

On Tue, 28 Nov 1995 joliveto@cwi.net wrote:
Hans;
Sorry...I waited for additional replies but you seemed to be the only one to take my bait. My question was rhetorical.
I hear all this complaining on this forum about unacceptable delay and packet loss by the ISP Community, yet no "respected" industry standards body has yet set QOS guidelines for ISPs! An old management dictum says "if it's important, measure it".

I know where to look for QOS criteria on my physical plant (T1/DS3s); I even know where to look for QOS criteria for my old X.25 network. If we want things to get better w/i the ISP Community, let's define what better is.
- jeff -
Gordon Cook writes:

COOK: Your suggestion, Scott, not mine... If I remember correctly, on my one previous query 6 weeks ago the problem seemed to be more MCI's than Sprint's. I am in the midst of writing a long cover story on how backbones are responding to Internet growth pressure, and when something breaks I am interested in understanding what happened.

Gordon,

Why would this be a backbone issue? It sounds to me like Westnet relies on a single connection from its NSP, and that this failed for some period of time. Any single component, be it circuit, router, etc., *will fail*. If a regional depends on a single point of failure the outcome is inevitable.

-- Jeff Hayward
participants (9)

- Avi Freedman
- Curtis Villamizar
- Edward Henigin
- Gordon Cook
- hwb@upeksa.sdsc.edu
- Jeff Hayward
- Jim Forster
- joliveto@cwi.net
- Scott Huddle