Global BGP - 2001-06-23
Out of curiosity - did anyone see a period of significant instability in the global routing tables on Saturday afternoon? Without violating NDA, all I can say is that it resembled a historic event involving a bad route, Ciscos, and Bay routers (only this time, it was a bad route, Ciscos, and <X>, a vendor whom I cannot name but who is being soundly beaten with wet noodles to resolve the issue). The bad route, and the instability, were seen across all of our transit vendors (all "household" names of transit service). Did anyone else see this sort of event, or have further details on the cause?
--
***************************************************************************
Joel Baker                          System Administrator - lightbearer.com
lucifer@lightbearer.com             http://www.lightbearer.com/~lucifer
We definitely felt it... does the mystery vendor rhyme with "clowndree"? We had most of our routers drop each other with invalid AS_PATH errors at about 12:30 PM PST yesterday.

Matt
--
Matt Levine
@Home: matt@deliver3.com
ICQ:   17080004
PGP:   http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x6C0D04CF

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of lucifer@lightbearer.com
Sent: Sunday, June 24, 2001 5:06 PM
To: nanog@merit.edu
Subject: Global BGP - 2001-06-23
Out of curiosity - did anyone see a period of significant instability in the global routing tables on Saturday afternoon? Without violating NDA, all I can say is that it resembled a historic event involving a bad route, Ciscos, and Bay routers (only this time, it was a bad route, Ciscos, and <X>, a vendor whom I cannot name but who is being soundly beaten with wet noodles to resolve the issue). The bad route, and the instability, were seen across all of our transit vendors (all "household" names of transit service).
Hmm ... why is <X> being beaten? Was the problem reversed this time?

The only historic event I can recall involving a bad route, Cisco, and Bay (actually, "events" would be better, since it happened at least twice) was a case of (a) someone injecting a bad route, (b) the Cisco at the other end accepting it in violation of the RFC, (c) Ciscos passing that bad route all around the Internet, all in violation of the RFC, (d) that route eventually hitting a Cisco<->Bay peering connection, and (e) the Bay (although the problem wasn't limited to Bay, as gated, and possibly other implementations as well, behaved the same way) properly sending a NOTIFICATION and taking down the BGP session, as required by the RFC.

It only took two major outages before Cisco fixed the problem. (The BGP advertisement was posted to NANOG both times, as was the BugID the second time.) So if this is the same issue, Cisco would be the vendor to flog; although, assuming they didn't re-introduce it, the flogging might more correctly be directed at providers still running code old enough to have this particular problem.

Both my transits (Bay on my end, Cisco on the other end) made it through just fine, though. (This time. The last two times it happened, the Ciscos on the other end happily passed the invalid route to me and the Bay on my end happily dropped the BGP session, and this was repeated ad infinitum until the bogus route was removed from the other end.)
-- Brett
Brett Frankenberger wrote:
Out of curiosity - did anyone see a period of significant instability in the global routing tables on Saturday afternoon? Without violating NDA, all I can say is that it resembled a historic event involving a bad route, Ciscos, and Bay routers (only this time, it was a bad route, Ciscos, and <X>, a vendor whom I cannot name but who is being soundly beaten with wet noodles to resolve the issue). The bad route, and the instability, were seen across all of our transit vendors (all "household" names of transit service).
Hmm ... why is <X> being beaten? Was the problem reversed this time?
The only historic event I can recall involving a bad route, Cisco, and Bay (actually, "events" would be better, since it happened at least twice) was a case of (a) someone injecting a bad route, (b) the Cisco at the other end accepting it in violation of the RFC, (c) Ciscos passing that bad route all around the Internet, all in violation of the RFC, (d) that route eventually hitting a Cisco<->Bay peering connection, and (e) the Bay (although the problem wasn't limited to Bay, as gated, and possibly other implementations as well, behaved the same way) properly sending a NOTIFICATION and taking down the BGP session, as required by the RFC.
A) Ciscos flap sessions, according to the only reports I've heard.

B) <X> routers were crashing, either due to the bug, or due to the session resets. Thus, <X> is being flogged. I have reports of at least one <Y> having problems as well.

C) I would post the BugID, but the only source I have is under NDA. However, having now heard this much in a public forum (i.e., not covered), I can say "invalid AS path data bug".
It only took two major outages before Cisco fixed the problem. (The BGP advertisement was posted to NANOG both times, as was the BugID the second time.)
I have the guilty announcement, but again, it's under NDA. However, I can say that we are now seeing this announcement from all of our upstreams, non-blocked, so it appears that they fixed the originating point.
So if this is the same issue, Cisco would be the vendor to flog, although assuming they didn't re-introduce it, the flogging might more correctly be directed at providers still running code old enough to have this particular problem.
I would flog Cisco as well, but A) they have a bug filed on it already, and B) we're not using Ciscos for our core (note: this is my personal email, and I am not speaking for my employer; however, this is publicly documented on my employer's website, so it's not NDAed).
Both my transits (Bay on my end, Cisco on the other end) made it through just fine, though. (This time. The last two times it happened, the cisco's on the other end happily passed the invalid route to me and the Bay on my end happily dropped the BGP session, and this was repeated ad infinitum until the bogus route was removed from the other end.)
I have no data on Bay; my apologies if this wasn't clear. Bay was *only* being referenced as a historical point of note. No attempt at FUD, and my apologies if anyone read it that way.
--
***************************************************************************
Joel Baker                          System Administrator - lightbearer.com
lucifer@lightbearer.com             http://www.lightbearer.com/~lucifer
A) Ciscos flap sessions, according to the only reports I've heard.
Is it an invalid AS_PATH? If so, if such is received by a Cisco, the Cisco is required by the RFC to drop the session. Failing to do so (and then propagating the bogus advertisement) was the cause of the original problem ... AFAIK, the fix (which was released a long time ago, but may not yet be running everywhere) causes the Cisco to behave properly, which is to drop the session.
B) <X> routers were crashing, either due to the bug, or the session resets. Thus, <X> is being flogged. I have reports of at least one <Y> having problems, as well.
Well, OK. If <X> is crashing, then <X> has a problem. And I didn't mean to imply that they didn't. Mostly, I was posting because I frequently hear the "Bay vs. Cisco" crashes of yore reported as "Bays were dropping BGP sessions". That implies that the Bay was broken, when in reality Bay (and most other non-Cisco implementations) was doing what was required by the RFC.

The reason for my post, not knowing who <X> is (although I could probably guess) or what <X> was doing, was to clarify that routers that drop BGP sessions upon receiving invalid advertisements are not broken; rather, they are doing what is required.
I have no data on Bay; my apologies if this wasn't clear. Bay was *only* being referenced as a historical point of note. No attempt at FUD, and my apologies if anyone read it that way.
And I wasn't attempting to defend them, either -- I'm just curious about the problem.

Anyway, someone had to be passing this advertisement around ... if the Ciscos were dropping the session in response to it, and <X>'s were crashing, who's left to pass the bad advertisement around? Ciscos with older code that propagated the advertisement upon receipt, instead of issuing a NOTIFICATION and tearing the session down?

Naturally, you might be unable to answer the above, due to NDA ... mostly, I'm just fishing for details (from anywhere) on what happened.
-- Brett
Brett Frankenberger wrote:
A) Ciscos flap sessions, according to the only reports I've heard.
Is it an invalid AS_PATH? If so, if such is received by a Cisco, the Cisco is required by the RFC to drop the session. Failing to do so (and then propagating the bogus advertisement) was the cause of the original problem ... AFAIK, the fix (which was released a long time ago, but may not yet be running everywhere) causes the Cisco to behave properly, which is to drop the session.
Clarification: Ciscos take a buggy route and turn it into an invalid one. This causes Cisco peers to flap the session (yes, as they should), and some other vendors (B, below) appear to have more serious issues.
B) <X> routers were crashing, either due to the bug, or the session resets. Thus, <X> is being flogged. I have reports of at least one <Y> having problems, as well.
Well, OK. If <X> is crashing, then <X> has a problem. And I didn't mean to imply that they didn't. Mostly, I was posting because I frequently hear the "Bay vs. Cisco" crashes of yore reported as "Bays were dropping BGP sessions". That implies that the Bay was broken, when in reality Bay (and most other non-Cisco implementations) was doing what was required by the RFC.
The reason for my post, not knowing who <X> is (although I could probably guess) or what <X> was doing, was to clarify that routers that drop BGP sessions upon receiving invalid advertisements are not broken; rather, they are doing what is required.
A good point, and entirely true. I apologize for not being clear about the bug, but I was/am trying to step carefully around the NDAs. And yes, they're annoying, and there are probably some people who believe I'm violating it even now. (Hopefully not the lawyers...)
I have no data on Bay; my apologies if this wasn't clear. Bay was *only* being referenced as a historical point of note. No attempt at FUD, and my apologies if anyone read it that way.
And I wasn't attempting to defend them, either -- I'm just curious about the problem.
Anyway, someone had to be passing this advertisement around ... if the Ciscos were dropping the session in response to it, and <X>'s were crashing, who's left to pass the bad advertisement around? Ciscos with older code that propagated the advertisement upon receipt, instead of issuing a NOTIFICATION and tearing the session down?
I'm not entirely clear on this; the bug ID implies that iBGP may be treated differently than external peers (specifically, part of it appears to involve appending one's own ASN; again, I'm not entirely clear on it, even after reading the bug report).
Naturally, you might be unable to answer the above, due to NDA ... mostly, I'm just fishing for details (from anywhere) on what happened.
Sorry. As Sean said... most of it is covered by NDAs, and this is exactly what will lead to required outage reporting for everyone, if they don't start relaxing them some.

From our point of view (here), a lot of the issues were second-order, caused by the number of flaps in the global table from various directions, and/or by the bug in vendor <X>'s equipment causing rapid reboots. Though, to their credit, <X> was good about handling the ticket, had engineers talking to us rapidly, and so on. Reasonable handling, IMO.
--
***************************************************************************
Joel Baker                          System Administrator - lightbearer.com
lucifer@lightbearer.com             http://www.lightbearer.com/~lucifer
lucifer@lightbearer.com wrote:
Brett Frankenberger wrote:
I have no data on Bay; my apologies if this wasn't clear. Bay was *only* being referenced as a historical point of note. No attempt at FUD, and my apologies if anyone read it that way.
And I wasn't attempting to defend them, either -- I'm just curious about the problem.
Anyway, someone had to be passing this advertisement around ... if the Ciscos were dropping the session in response to it, and <X>'s were crashing, who's left to pass the bad advertisement around? Ciscos with older code that propagated the advertisement upon receipt, instead of issuing a NOTIFICATION and tearing the session down?
I'm not entirely clear on this; the bug ID implies that iBGP may be treated differently than external peers (specifically, part of it appears to involve appending one's own ASN; again, I'm not entirely clear on it, even after reading the bug report).
Actually, upon reviewing the whole thread from the Bay routers incident, a large part of what caused that issue to spread is that the Ciscos were, in fact, sending a NOTIFICATION and closing the session - but only *after* they had already propagated the route. If this hasn't changed, then last Saturday's issues could once again be characterized as: "A Cisco bug caused bad routes to enter the routing table, and Cisco's handling of bad received routes caused them to propagate throughout the network rapidly, while some non-zero number of other vendors closed the session upon receiving the bad route, and did not propagate it." (I can verify the last, at least, for our core; no router that did not have an external BGP feed was affected at all, except for its sessions resetting when some of the peers vanished.)

Can anyone verify whether Cisco still does BGP this way (propagate, then kill the originating session)? If so, it rather clearly answers the question of how this managed to make it throughout the network...

(For the record: I'm not trying to Cisco-bash here. All vendors have problems, and when you have a huge market share, your problems tend to show up much more obviously when they appear. However, Cisco does still have a huge market share, meaning this affected a whole lot of people, if true... so, I'm curious.)
--
***************************************************************************
Joel Baker                          System Administrator - lightbearer.com
lucifer@lightbearer.com             http://www.lightbearer.com/~lucifer
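The ordering question raised above - does an implementation re-advertise a bad route before tearing down the source session, or validate first and never re-advertise - is the whole difference between a contained failure and a global one. A toy sketch of the two orderings (purely illustrative; no vendor's actual code, and "peers", "session", and "log" are stand-ins):

```python
def broken_handling(update, is_valid, peers, session, log):
    """Buggy order: re-advertise first, tear down the source session after.
    The bad route escapes to every peer before the session dies."""
    for p in peers:
        log.append(("advertised", p, update))   # bad route already propagated
    if not is_valid(update):
        log.append(("notification", session))
        log.append(("closed", session))


def compliant_handling(update, is_valid, peers, session, log):
    """RFC 1771 order: validate on receipt; a malformed UPDATE triggers a
    NOTIFICATION and a session close, and is never re-advertised."""
    if not is_valid(update):
        log.append(("notification", session))
        log.append(("closed", session))
        return
    for p in peers:
        log.append(("advertised", p, update))
```

Under the first ordering, every compliant downstream then closes *its* session on receipt, which is exactly the cascade of flaps the thread describes.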
On Mon, Jun 25, 2001 at 02:38:32PM -0700, lucifer@lightbearer.com wrote:
Can anyone verify whether Cisco still does BGP this way (propagate, then kill the originating session)? If so, it rather clearly answers the question of how this managed to make it throughout the network...
I'm fairly sure that is not the case anymore.
(For the record: I'm not trying to Cisco-bash here. All vendors have problems, and when you have a huge market share, your problems tend to show up much more obviously when they appear. However, Cisco does still have a huge market share, meaning this affected a whole lot of people, if true... so, I'm curious.)
From what I can tell, this time it was not Cisco's fault. It appears that the vendor that had the problem simply had an issue with a specific "valid" announcement that others propagated to it.

What is interesting is that one could use this to see which providers are using vendor "X" at exchange points.

- Jared
--
Jared Mauch  | pgp key available via finger from jared@puck.nether.net
clue++;      | http://puck.nether.net/~jared/  My statements are only mine.
Jared Mauch wrote:
On Mon, Jun 25, 2001 at 02:38:32PM -0700, lucifer@lightbearer.com wrote:
Can anyone verify whether Cisco still does BGP this way (propagate, then kill the originating session)? If so, it rather clearly answers the question of how this managed to make it throughout the network...
I'm fairly sure that is not the case anymore.
(For the record: I'm not trying to Cisco-bash here. All vendors have problems, and when you have a huge market share, your problems tend to show up much more obviously when they appear. However, Cisco does still have a huge market share, meaning this affected a whole lot of people, if true... so, I'm curious.)
From what I can tell, this time it was not Cisco's fault. It appears that the vendor that had the problem simply had an issue with a specific "valid" announcement that others propagated to it.
All I can say is that the only report I have had about what caused the whole mess to start was a Cisco BugID regarding a mangling done by some IOS versions on a particular sort of route update that made it invalid (or perhaps, "more invalid"). And if Cisco is no longer propagating routes before shutting down the source session, then we're back to wondering how this particular issue managed to cause flaps simultaneously across at least 5 "big player" networks that I've had reports about (including 3 by direct observation).

This person must have some pretty impressive connectivity, if they managed to get what appears to be well over a dozen routers at the absolute minimum - and more likely in the range of "hundreds", if the rumor volume is at all accurate - to each display the bug (since, if a bad announcement isn't propagated, it will never reach anything but the direct peers; thus, this person would have to be directly peered with every router that anyone saw flapping sessions to a customer).

Now, I'll grant, it would be possible to do this, but for them to have hit just *our* network, they would have to be on 3 major carriers in 3 states, including some places where a normal class B-type announcer just isn't terribly likely to have a peering session.
What is interesting is one could use this to see what providers are using vendor "X" at exchange points.
Quite true. Though I suspect that in some cases, this might only tell you what routing code they use. Making too many inferences is probably unwise - especially given the number of folks who thought they knew who "X" was, only to state their guess and come out wrong...
--
***************************************************************************
Joel Baker                          System Administrator - lightbearer.com
lucifer@lightbearer.com             http://www.lightbearer.com/~lucifer
Vendor X released a limited statement to their customers describing the issue - and their view on it. The large incumbent vendor that we all know and love has confirmed the issue, and released a "patch" to some of their customers. Vendor X also went on to state that at no time did their boxes crash, mis-forward, reset, or have any issue resulting from the events of the past weekend.
From Vendor X's statement:
1. Another vendor's implementation of BGP contained a bug that caused EBGP peers to leak CONFEDERATION information across AS boundaries, interpreted as malformed AS_PATH announcements.

2. Vendor X's implementation of BGP-4 fully complies with the BGP-4 specification (RFC 1771) and, accordingly, terminates a BGP session to a BGP peer that forwards a malformed AS_PATH announcement.

3. Unfortunately, this other vendor does not adhere to the standard in the same manner, and as a result, malformed AS_PATH announcements are propagated to other BGP peers. This is contrary to RFC 1771. Vendor X believes that these vendors should modify their implementations to adhere to the guidelines stated in RFC 1771 (see section 6 - BGP Error Handling).

4. In light of the events of the past weekend, and with input from a number of the affected service providers (point #1 above), Vendor X has concluded that a review of our BGP implementation is unnecessary at this time.

If you happen to be running Vendor X's software and think you may have experienced the issue, you can use the following to verify:

ssh@vendor-x-119.chi03#sh ip bgp neighbor xxx.xxx.xxx.xx last
BGP4: 86 bytes hex dump of packet received from neighbor that contains Error
ffffffff ffffffff ffffffff ffffffff 00560200 00003bff ffffffff ffffffff
ffffffff ffffff00 2d0104fd e8005ac0 a8803a10 02060104 00010001 02028000
02020200 ffffffff ffffffff ffffffff ffffffff 001318d0 f20118d0 c10e18d0
f20018d0

.chance
(not speaking for Vendor X in any way, shape, or form. Just passing along info that I was sent.)
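Assuming the dump above is a raw RFC 1771 message (16-octet marker, 2-octet length, 1-octet type), its first 19 octets already decode to an 86-byte UPDATE, matching the "86 bytes" in the router's output; the confederation segment type codes in point 1 come from RFC 1965. The following is an illustrative sketch, not Vendor X's verification tool:

```python
import struct

BGP_TYPES = {1: "OPEN", 2: "UPDATE", 3: "NOTIFICATION", 4: "KEEPALIVE"}
AS_SET, AS_SEQUENCE = 1, 2                # RFC 1771 AS_PATH segment types
AS_CONFED_SEQUENCE, AS_CONFED_SET = 3, 4  # RFC 1965 confederation segment types


def parse_bgp_header(raw: bytes) -> dict:
    """Unpack the fixed 19-octet BGP message header (network byte order)."""
    marker, length, msg_type = struct.unpack("!16sHB", raw[:19])
    return {
        "marker_all_ones": marker == b"\xff" * 16,
        "length": length,
        "type": BGP_TYPES.get(msg_type, "unknown"),
    }


def leaked_confed_segments(as_path_segments):
    """Segments that must never cross a plain EBGP boundary; a conforming
    receiver treats a path containing them as a malformed AS_PATH."""
    return [s for s in as_path_segments
            if s[0] in (AS_CONFED_SEQUENCE, AS_CONFED_SET)]


# First 19 octets of the dump shown above: 16 bytes of 0xff (the marker),
# then 0x0056 (= 86, the reported message length), then 0x02 (UPDATE).
header = parse_bgp_header(bytes.fromhex("ff" * 16 + "005602"))
```

The rest of the dump (the path attributes, where the leaked segments would live) isn't decoded here; the point is just that the header is a perfectly ordinary UPDATE, and the offense is down in the AS_PATH segment types.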
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu]On Behalf Of lucifer@lightbearer.com Sent: Monday, June 25, 2001 9:59 PM To: Jared Mauch Cc: lucifer@lightbearer.com; Brett Frankenberger; nanog@merit.edu Subject: Re: Global BGP - 2001-06-23
<sigh>... If the RFC jumped off a cliff...
--
Matt Levine
@Home: matt@deliver3.com
@Work: matt@eldosales.com
ICQ:   17080004
PGP:   http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x6C0D04CF

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Chance Whaley
Sent: Tuesday, June 26, 2001 12:36 PM
To: lucifer@lightbearer.com; 'Jared Mauch'
Cc: 'Brett Frankenberger'; nanog@merit.edu
Subject: RE: Global BGP - 2001-06-23 - Vendor X's statement...
On Tue, 26 Jun 2001 14:30:13 EDT, Matt Levine <matt@deliver3.com> said:
<sigh>... If the RFC jumped off a cliff...
The correct analogy is "If the RFC said 'stop, drop, and roll if you're on fire'". Which, incidentally, *IS* approximately what it says.
--
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech
On Tue, 26 June 2001, "Matt Levine" wrote:
<sigh>... If the RFC jumped off a cliff...
Pointless and irrelevant. Do you follow the accepted standard or not - that is what it comes down to. Bugs are bugs and everyone has them; big deal. However, there is a general consensus about how things are supposed to work - interoperability is somewhat difficult in this day and age without it.

So which is it? Follow the standards - be they RFC, STD, draft, de facto, or de jure - or roll your own and pray? No one has stated that closing the session is a bad thing, and the general feeling is that it's a good thing. So what is it that you want?

.chance
(rambling on only for himself and not representing anyone else)
What I would like is for my routers to not drop 4 of our 6 transit providers - RFC, standard, not standard, whatever. We've suggested to our vendor that there at least be some option to control this; we are not at the core, we are an end user. When following the RFC dictates that our routing equipment loses connectivity to the Internet, then I say that there is a problem. It's really nice that they can say "it's not a bug, it's a feature", but this is a feature I'd at the very least like the ability to turn off.

Matt
--
Matt Levine
@Home: matt@deliver3.com
@Work: matt@eldosales.com
ICQ:   17080004
PGP:   http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x6C0D04CF

-----Original Message-----
From: Chance Whaley [mailto:chance@dreamscope.com]
Sent: Tuesday, June 26, 2001 2:51 PM
To: 'Matt Levine'; nanog@merit.edu
Subject: RE: Global BGP - 2001-06-23 - Vendor X's statement...
participants (6)
- Brett Frankenberger
- Chance Whaley
- Jared Mauch
- lucifer@lightbearer.com
- Matt Levine
- Valdis.Kletnieks@vt.edu