RE: Tier-1 without their own backbone?
One of the providers we are looking at is Level-3. Any comments good/bad on reliability and clue? We already have UU, Sprint, and AT&T. I also realize that the "they suck less" list changes continuously... :)
I have about 5 GB of IP transit connections from Level3 across 8 markets (plus using their facilities for our backbone). Level3 has been very solid on the IP transit side. MFN/AboveNet has also been very good to us. -Sean Sean P. Crandall VP Engineering Operations MegaPath Networks Inc. 6691 Owens Drive Pleasanton, CA 94588 (925) 201-2530 (office) (925) 201-2550 (fax)
I hear that Level 3 is good but do they handle small stuff like T-1? We may be looking to dual-home soon and will be looking around. ----- Original Message ----- From: "Sean Crandall" <sean@megapath.net> To: "'Rick Ernst'" <erond@legendz.com>; <nanog@merit.edu> Sent: Wednesday, August 27, 2003 15:48 Subject: RE: Tier-1 without their own backbone?
One of the providers we are looking at is Level-3. Any comments good/bad on reliability and clue? We already have UU, Sprint, and AT&T. I also realize that the "they suck less" list changes continuously... :)
I have about 5 GB of IP transit connections from Level3 across 8 markets (plus using their facilities for our backbone). Level3 has been very solid on the IP transit side.
MFN/AboveNet has also been very good to us.
-Sean
Sean P. Crandall VP Engineering Operations MegaPath Networks Inc. 6691 Owens Drive Pleasanton, CA 94588 (925) 201-2530 (office) (925) 201-2550 (fax)
--On Wednesday, August 27, 2003 15:53:44 -0500 John Palmer <nanog@adns.net> wrote:
I hear that Level 3 is good but do they handle small stuff like T-1? We may be looking to dual-home soon and will be looking around.
Remember, Level(3) bought (at least some of) Genuity/BBN. I was always impressed with the Genuity folks. We just switched a DS3 to the AS3356 backbone from AS1 on Monday. Smoothest turn-up I've ever had. LER
-- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 972-414-9812 E-Mail: ler@lerctr.org US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
Well, don't send messages to a list from an address that you don't want to receive responses to... After sending an off-list response, I got this bounce:
This is probably because this is an internal account that no one is supposed to be sending mail to. If you are sending it mail, you are probably a low-life, bottom feeding scum sucking spammer who will burn in hell. NO addresses at this domain EVER want to hear from you.
What part of that don't you understand?
<nanog@adns.net>: Message rejected by recipient.
I believe this particular case has been brought to the attention of the list before, no? That said, I remember hearing that Level3 tried not to do the smaller stuff directly, but that situation may be different now. -- "Since when is skepticism un-American? Dissent's not treason but they talk like it's the same..." (Sleater-Kinney - "Combat Rock")
On Wed, 27 Aug 2003, Sean Crandall wrote:
I have about 5 GB of IP transit connections from Level3 across 8 markets (plus using their facilities for our backbone). Level3 has been very solid on the IP transit side.
MFN/AboveNet has also been very good to us.
Another happy Level3 customer. We have a similarly sized connection to MFN/AboveNet, which I won't recommend at this time due to some very questionable null routing they're doing (propagating routes to destinations, then bitbucketing traffic sent to them), which is causing complaints from some of our customers and forcing us to make routing adjustments as the customers notice MFN/AboveNet has broken our connectivity to these destinations. Or as they say, I encourage my competitors to buy from them. ---------------------------------------------------------------------- Jon Lewis *jlewis@lewis.org*| I route System Administrator | therefore you are Atlantic Net | _________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
On Wed, 27 Aug 2003, jlewis@lewis.org wrote:
We have a similarly sized connection to MFN/AboveNet, which I won't recommend at this time due to some very questionable null routing they're doing (propagating routes to destinations, then bitbucketing traffic sent to them) which is causing complaints from some of our customers and forcing us to make routing adjustments as the customers notice MFN/AboveNet has broken our connectivity to these destinations.
We've noticed that one of our upstreams (Global Crossing) has introduced ICMP rate limiting 4/5 days ago. This means that any traceroutes/pings through them look awful (up to 60% apparent packet loss). After contacting their NOC, they said that the directive to install the ICMP rate limiting was from the Homeland Security folks and that they would not remove them or change the rate at which they limit in the foreseeable future. What are other transit providers doing about this or is it just GLBX? Cheers, Rich
On Thu, Aug 28, 2003 at 01:23:40PM +0100, variable@ednet.co.uk wrote:
On Wed, 27 Aug 2003, jlewis@lewis.org wrote:
We have a similarly sized connection to MFN/AboveNet, which I won't recommend at this time due to some very questionable null routing they're doing (propagating routes to destinations, then bitbucketing traffic sent to them) which is causing complaints from some of our customers and forcing us to make routing adjustments as the customers notice MFN/AboveNet has broken our connectivity to these destinations.
We've noticed that one of our upstreams (Global Crossing) has introduced ICMP rate limiting 4/5 days ago. This means that any traceroutes/pings through them look awful (up to 60% apparent packet loss). After contacting their NOC, they said that the directive to install the ICMP rate limiting was from the Homeland Security folks and that they would not remove them or change the rate at which they limit in the foreseeable future.
I guess this depends on the type of interconnect you have with them. If you're speaking across a public IX or private (or even paid) peering link, it doesn't seem unreasonable that they would limit traffic to a particular percentage across that circuit. I think the key is to determine what is 'normal' and what obviously constitutes an out-of-the-ordinary amount of ICMP traffic. If you're a customer, there's not really a good reason to rate-limit your ICMP traffic. Customers tend to notice and gripe. They expect a bit of loss when transiting a peering circuit or public fabric, and if the loss is only of ICMP they tend not to care. This is why, when I receive escalated tickets, I check using non-ICMP-based tools as well as ICMP-based tools.
What are other transit providers doing about this or is it just GLBX?
Here's one of many I've posted in the past; note it's also related to securing machines. http://www.ultraviolet.org/mail-archives/nanog.2002/0168.html I recommend everyone apply such ICMP rate-limits on their peering circuits and public exchange fabrics, tuned to what is a 'normal' traffic flow on your network. The above message from the archives is from Jan 2002; if these were a problem then and still are now, perhaps people should either 1) accept that this is part of normal internet operations, or 2) decide that this is enough and it's time to seriously do something about these things. - Jared -- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
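The rate-limit Jared recommends would be configured with vendor-specific router features, but the mechanism underneath is typically a token bucket: a sustained byte rate plus a burst allowance, with anything over the bucket dropped. A minimal sketch of that behavior (the rate and burst numbers are invented for illustration, not a recommendation):

```python
class TokenBucket:
    """Token-bucket policer, roughly how a router's ICMP rate-limit behaves."""

    def __init__(self, rate_bps, burst_bytes, start=0.0):
        self.rate = rate_bps          # sustained allowance, bytes/second
        self.capacity = burst_bytes   # burst allowance, bytes
        self.tokens = burst_bytes     # bucket starts full
        self.last = start

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # over the limit: packet is dropped

# A burst of one hundred 1000-byte pings arriving at the same instant:
# only the 10 KB burst allowance gets through.
bucket = TokenBucket(rate_bps=125_000, burst_bytes=10_000)  # ~1 Mbit/s sustained
passed = sum(bucket.allow(1000, now=0.0) for _ in range(100))
print(passed)  # 10
```

The "periodic tuning" discussed elsewhere in the thread amounts to revisiting `rate_bps` and `burst_bytes` as normal traffic levels shift.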
On Thu, Aug 28, 2003 at 08:48:50AM -0400, Jared Mauch wrote:
they [customers] expect a bit of loss when transiting a peering circuit or public fabric, and if the loss is only of icmp they tend to not care.
Um, since when? My customers expect perfection and if they don't get it, they're gonna gripe. Even if it's just the appearance of a problem (through traceroute and ICMP echo or similar), I'm going to hear about it. Personally, I tolerate a little loss. But I'm an engineer. I'm not a customer who has little or no concept of how the internet works and who doesn't really want to. The customer just wants it to work, and when it doesn't they expect me to fix it, not explain to them that there really isn't a problem and that it's all in their head.
What are other transit providers doing about this or is it just GLBX?
here's one of many i've posted in the past, note it's also related to securing machines.
http://www.ultraviolet.org/mail-archives/nanog.2002/0168.html
I recommend everyone do such icmp rate-limits on their peering circuits and public exchange fabrics to what is a 'normal' traffic flow on your network. The above message from the archives is from Jan 2002, if these were a problem then and still are now, perhaps people should either 1) accept that this is part of normal internet operations, or 2) decide that this is enough and it's time to seriously do something about these things.
While rate limiting ICMP can be a good thing, it has to be done carefully and probably can't be uniform across the backbone. (Think of a common site that gets pinged whenever someone wants to test whether their connection went down or is just loaded: limit ICMP into them improperly and lots of folks notice.) Such limiting also has to undergo periodic tuning as traffic levels increase, traffic patterns shift, and so forth. If a provider is willing to put the effort into doing it right, I'm all for it. If they're just gonna arbitrarily decide that the allowable flow rate is 200k across an OC48 and never touch it again, then that policy is going to cause problems. --- Wayne Bouchard web@typo.org Network Dude http://www.typo.org/~web/
On Thu, 28 Aug 2003, Wayne E. Bouchard wrote:
While rate limiting ICMP can be a good thing, it has to be done carefully and probably can't be uniform across the backbone. (Think of a common site that gets pinged whenever someone wants to test whether their connection went down or is just loaded: limit ICMP into them improperly and lots of folks notice.) Such limiting also has to undergo periodic tuning as traffic levels increase, traffic patterns shift, and so forth.
Along these lines, how does this limiting affect akamai or other 'ping for distance' type localization services? I'd think their data would get somewhat skewed, right?
On Thu, Aug 28, 2003 at 03:55:26PM +0000, Christopher L. Morrow wrote:
On Thu, 28 Aug 2003, Wayne E. Bouchard wrote:
While rate limiting ICMP can be a good thing, it has to be done carefully and probably can't be uniform across the backbone. (Think of a common site that gets pinged whenever someone wants to test whether their connection went down or is just loaded: limit ICMP into them improperly and lots of folks notice.) Such limiting also has to undergo periodic tuning as traffic levels increase, traffic patterns shift, and so forth.
Along these lines, how does this limiting affect akamai or other 'ping for distance' type localization services? I'd think their data would get somewhat skewed, right?
Perhaps they'll come up with a more advanced system of monitoring? Probably the best way to do that is to track the download speed, either with cookies (with subnet info) or by subnet only, to determine the best localization. With an imperfect system of tracking localization, you will get imperfect results. - jared -- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
At 12:39 PM 8/28/2003, you wrote:
Along these lines, how does this limiting affect akamai or other 'ping for distance' type localization services? I'd think their data would get somewhat skewed, right?
Perhaps they'll come up with a more advanced system of monitoring?
Probably the best way to do that is to track the download speed, either with cookies (with subnet info) or by subnet only, to determine the best localization.
With an imperfect system of tracking localization, you will get imperfect results.
I'm not sure about other implementations, but our Akamai boxes in our datacenter receive all traffic requests which originate from our address space as predefined with Akamai. I believe they also somehow factor in address space announcements originated via our AS as well since they asked for our AS when we originally started working with them. -Robert Tellurian Networks - The Ultimate Internet Connection http://www.tellurian.com | 888-TELLURIAN | 973-300-9211 "Good will, like a good name, is got by many actions, and lost by one." - Francis Jeffrey
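The mapping Robert describes (requests from predefined address space going to the on-net cluster) is essentially a longest-prefix match against the prefixes the ISP registers. A toy sketch of that idea; the prefixes, cluster names, and two-entry table are all invented, and Akamai's real mapping system is of course proprietary and far more elaborate:

```python
import ipaddress

# Hypothetical table: prefixes an ISP announces -> the cluster on its network.
PREFIX_MAP = {
    ipaddress.ip_network("192.0.2.0/24"): "cluster-on-net",
    ipaddress.ip_network("198.51.100.0/22"): "cluster-on-net",
}
DEFAULT = "cluster-global"

def pick_cluster(client_ip: str) -> str:
    """Longest-prefix match of the client against the announced-space table."""
    addr = ipaddress.ip_address(client_ip)
    best_len, cluster = -1, DEFAULT
    for net, c in PREFIX_MAP.items():
        if addr in net and net.prefixlen > best_len:
            best_len, cluster = net.prefixlen, c
    return cluster

print(pick_cluster("192.0.2.7"))    # cluster-on-net
print(pick_cluster("203.0.113.9"))  # cluster-global
```

Unlike ping-based localization, this mapping is unaffected by ICMP rate limits, which may be why it degrades more gracefully.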
Along these lines, how does this limiting affect akamai or other 'ping for distance' type localization services? I'd think their data would get somewhat skewed, right?
using icmp to predict tcp performance has always been a silly idea; it doesn't take any icmp rate limit policy changes to make it silly. other silly ways to try to predict tcp performance include aspath length comparisons, stupid dns tricks, or geographic distance comparisons. the only reliable way to know what tcp will do is execute it. not just the syn/synack as in some blast protocols i know of, but the whole session. and the predictive value of the information you'll gain from this decays rather quickly unless you have a lot of it for trending/aggregation. "gee, ping was faster to A but tcp was faster to B, do you s'pose there could be a satellite link, or a 9600 baud modem, in the system somewhere?" -- Paul Vixie
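Vixie's satellite/modem point can be made concrete with a crude steady-state model: TCP throughput is capped both by the bottleneck link and by one receive window per round trip, so the path ping prefers is not necessarily the path TCP prefers. A sketch (all numbers invented; the model ignores slow start, loss, and window scaling):

```python
def tcp_time(size_bytes, rtt_s, bottleneck_bps, window_bytes=65_535):
    """Seconds to move size_bytes under a one-window-per-RTT steady-state model."""
    throughput_bps = min(bottleneck_bps, window_bytes * 8 / rtt_s)
    return size_bytes * 8 / throughput_bps

# Path A: 9600 bps modem a few hops away (ping RTT ~40 ms).
# Path B: 10 Mbit/s via satellite (ping RTT ~600 ms).
a = tcp_time(1_000_000, rtt_s=0.04, bottleneck_bps=9_600)
b = tcp_time(1_000_000, rtt_s=0.60, bottleneck_bps=10_000_000)
print(round(a), round(b))  # ~833 s vs ~9 s: ping picks A, TCP wants B
```

Even this model only works if you already know the bottleneck bandwidth, which is exactly the thing ping does not measure; hence Vixie's point that the only reliable predictor of TCP is running TCP.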
NAC is not a global intercontinental super-duper backbone, but we do the same. It takes some educating of the customers, but after they understand why, most are receptive. Especially when they get DOS'ed. On Thu, 28 Aug 2003 variable@ednet.co.uk wrote:
On Wed, 27 Aug 2003, jlewis@lewis.org wrote:
We have a similarly sized connection to MFN/AboveNet, which I won't recommend at this time due to some very questionable null routing they're doing (propagating routes to destinations, then bitbucketing traffic sent to them) which is causing complaints from some of our customers and forcing us to make routing adjustments as the customers notice MFN/AboveNet has broken our connectivity to these destinations.
We've noticed that one of our upstreams (Global Crossing) has introduced ICMP rate limiting 4/5 days ago. This means that any traceroutes/pings through them look awful (up to 60% apparent packet loss). After contacting their NOC, they said that the directive to install the ICMP rate limiting was from the Homeland Security folks and that they would not remove them or change the rate at which they limit in the foreseeable future.
What are other transit providers doing about this or is it just GLBX?
Cheers,
Rich
* variable@ednet.co.uk said:
On Wed, 27 Aug 2003, jlewis@lewis.org wrote:
We have a similarly sized connection to MFN/AboveNet, which I won't recommend at this time due to some very questionable null routing they're doing (propagating routes to destinations, then bitbucketing traffic sent to them) which is causing complaints from some of our customers and forcing us to make routing adjustments as the customers notice MFN/AboveNet has broken our connectivity to these destinations.
We've noticed that one of our upstreams (Global Crossing) has introduced ICMP rate limiting 4/5 days ago. This means that any traceroutes/pings through them look awful (up to 60% apparent packet loss). After contacting their NOC, they said that the directive to install the ICMP rate limiting was from the Homeland Security folks and that they would not remove them or change the rate at which they limit in the foreseeable future.
Homeland Security recommended the filtering of ports 137-139 but have not, to my knowledge, recommended rate limiting ICMP.

I speak for Global Crossing when I say that ICMP rate limiting has existed on the Global Crossing network, inbound from peers, for a long time ... we learned our lesson from the Yahoo DDoS attack (when they were one of our customers) back in the day, and it was shortly thereafter that we implemented the rate limiters. Over the past 24 hours we've performed some experimentation that shows outbound rate limiters to also be of value, and we're looking at the specifics of differentiating between happy ICMP and naughty 92-byte-packet ICMP and treating the latter with very strict rules ... like we would dump it on the floor. This, I believe, will stomp on the bad traffic but allow the happy traffic to pass unmolested.

The rate-limiters have become more interesting recently, meaning they've actually started dropping packets (quite a lot in some cases) because of the widespread exploitation of unpatched Windows machines. Our results show that were we to raise the size of the queues, the quantity of ICMP is such that it would just fill them up, and if we permit all ICMP to pass unfettered we would find some peering circuits becoming congested. Our customers would not appreciate the latter either.

-Steve
On Thu, 28 Aug 2003, Steve Carter wrote:
The rate-limiters have become more interesting recently, meaning they've actually started dropping packets (quite a lot in some cases) because of the widespread exploitation of unpatched windows machines.
Yep, the amount of ICMP traffic seems to be increasing on most backbones due to worm activity. It probably hasn't exceeded HTTP yet, but it is surpassing many other protocols. Some providers have seen ICMP increase by over 1,000% over the last two weeks. Unfortunately, the question sometimes becomes which packets do you care about more? Ping or HTTP? Patch your Windows boxes. Get your neighbors to patch their Windows boxes. Microsoft, make a CD so people can fix their Windows machines before they connect them to the network.
* Sean Donelan said:
On Thu, 28 Aug 2003, Steve Carter wrote:
The rate-limiters have become more interesting recently, meaning they've actually started dropping packets (quite a lot in some cases) because of the widespread exploitation of unpatched windows machines.
Yep, the amount of ICMP traffic seems to be increasing on most backbones due to worm activity. It probably hasn't exceeded HTTP yet, but it is surpassing many other protocols. Some providers have seen ICMP increase by over 1,000% over the last two weeks.
The results of our data collection are almost unbelievable. I've had it rechecked multiple times because I had a hard time even grokking the scale. Like, dude, is your calculator broken? It appears that the volume is still growing ... even with the widespread publicity. Those of us that are sourcing this traffic need to protect ourselves and the community by rate limiting, because the exploited are not. I agree with Wayne that we need to be smart (read: very specific) about how we rate limit during this event. When the event is over we can go back to just a simple rate limit that protects us in a very general way until the next event jumps up. <private message> Yuh, Jay, I changed my tune ... you were right. </private message> -Steve
Inline. On Thu, Aug 28, 2003 at 12:01:16PM -0400, Sean Donelan said something to the effect of:
On Thu, 28 Aug 2003, Steve Carter wrote:
The rate-limiters have become more interesting recently, meaning they've actually started dropping packets (quite a lot in some cases) because of the widespread exploitation of unpatched windows machines.
Yep, the amount of ICMP traffic seems to be increasing on most backbones due to worm activity. It probably hasn't exceeded HTTP yet, but it is surpassing many other protocols. Some providers have seen ICMP increase by over 1,000% over the last two weeks.
I fear that all this has been a conspiracy machinated by an amalgam of coffee purveyors and aspirin/analgesic manufacturers. This is most definitely true. I work on GBLX's Internet Security team and had the dubious fortune of being the on-call engineer this week. The sheer volume of ICMP I've seen just as a result of slurping traffic off customer interfaces (not peering points) related to security incident reports is staggering. Facing facts, people are _not_ patching their stuff, in spite of pervasive pleas and warnings from vendors and media geeks.

Many of the infected customers, presenting initially with symptoms of circuit saturation and latency, are shocked to learn that they are in effect DoSing themselves, and only then are they even mildly motivated to seek out sub-par OS builds and patch their boxen. While a rate limit doesn't do anything to restore link health to those customers, it prevents them from flooding the playground for the rest of us. Others remain more or less clueless that they're throttling unholy quantities of ICMP (among other things) until a node threatens to go unstable and we start filtering and swinging traffic in a flurry of damage control, subsequently calling _them_ and asking that the issue be investigated.

Having a router reload or an upstream circuit become saturated is far more rigorous for the customers downstream than pruning back their capacity for ICMP. We are operating in an unusual time, where these solutions may seem less than elegant, but are appropriate when overall network health and general responsibility dictate that more aggressive praxes of risk mitigation be deployed. When the din dies down to a more manageable roar, perhaps the caps can be re-evaluated. In the interim, these measures are levied in the name of customer/non-customer/device protection, and not enacted without great thought to the impact on our customers and downstreams.
Unfortunately, the question sometimes becomes which packets do you care about more? Ping or HTTP?
Unfortunate ultimatum, but cheers. It's true.
Patch your Windows boxes. Get your neighbors to patch their Windows boxes.
Simple, but brilliant. Please. If I could find my friggin fairy dust, I'd conjure up a trojan that went out and reloaded infected hosts with a new OS. Call it *poof*BSD perhaps? Just till this thing blows over... ;)
Microsoft, make a CD so people can fix their Windows machines before they connect them to the network.
And this is a great idea...
ymmv, --ra -- K. Rachael Treu rara at navigo dot com ..Fata viam invenient.. -- I am an employee of, but do not necessarily represent herein, Global Crossing, Ltd. --
On Thu, 28 Aug 2003, Rachael Treu wrote:
Facing facts, people are _not_ patching their stuff, in spite of pervasive pleas and warnings from vendors and media geeks.
There need to be more serious consequences for not patching. Like, having their ports turned down until they decide that patching might not be such a bad idea after all. -Dan -- [-] Omae no subete no kichi wa ore no mono da. [-]
We have been doing that. During quiet times our Customer Service Reps (CSRs) are calling infected users, telling them a) their computer has been compromised -- in its current state it can potentially be taken over by others, or other users can look at the contents of their private files, etc., and b) it is currently interfering with other users' connections; particularly, our DSL users can blast out at a fast enough rate to hamper dialup users.

If the user is not home (often broadband users leave their computers on), the CSRs leave a message stating the customer can call in any time they like and they will be reactivated. Once doing so, they need to clean their machine ASAP -- there are several FREE point-and-click tools now.

The majority comply and are understanding. I think the key is to emphasize that it's in their best interest and that we did it for THEIR protection (i.e. someone can potentially take over your machine, look at your private files, delete things, etc.). Also emphasize that they need to be a responsible Internet participant -- e.g. how would they like it if another user was hampering their connection because that other user had a virus and we didn't get them to clean it up. Give your CSRs a script or talking points to follow and it should be smooth for the most part.

---Mike

At 12:42 PM 28/08/2003 -0700, Dan Hollis wrote:
On Thu, 28 Aug 2003, Rachael Treu wrote:
Facing facts, people are _not_ patching their stuff, in spite of pervasive pleas and warnings from vendors and media geeks.
There need to be more serious consequences for not patching. Like, having their ports turned down until they decide that patching might not be such a bad idea after all.
-Dan -- [-] Omae no subete no kichi wa ore no mono da. [-]
At 01:57 PM 28/08/2003 -0700, Dan Hollis wrote:
On Thu, 28 Aug 2003, Mike Tancsa wrote:
The majority comply and are understanding.
and the rest?
There will always be troublesome customers, but the VAST majority have been compliant. If they don't want to comply with something as reasonable as this, they will go to my competitors, who will then have to deal with the flood of abuse hate mail ("I am calling the FBI if you don't fix this"), retaliatory attacks, black listings, etc etc ... i.e. they will become a headache for my competitors. For other sites who are large and don't necessarily have the resources to immediately find and kill the offending host (with sobig.f the headers will often show the NETBIOS name of the sending machine, so it's not THAT hard to find), we will add local rules to contain them for now until they have their IT consultants clean it up. But like I said before, give your CSRs a script. Explain to the customer how this is in their best interest... Most people are reasonable. We have all talked to people who say things like, "I have had 10 different ISPs and none have made me do something like this! I demand..." ... remember to ask yourself, why have they gone through 10 different ISPs..... ---Mike
It should be pointed out that the ISPs have their share of blame for the quick-spreading worms, because they neglected very simple precautions -- such as shipping customers pre-configured routers or DSL/cable modems with firewalls disabled by default (instead of the standard "end-user, let only outgoing connections thru" configuration), and providing insufficient information to end-users on configuring these firewalls. --vadim
Vadim Antonov wrote:
It should be pointed out that the ISPs have their share of blame for the quick-spreading worms, because they neglected very simple precautions -- such as shipping customers pre-configured routers or DSL/cable modems with firewalls disabled by default (instead of the standard "end-user, let only outgoing connections thru" configuration), and providing insufficient information to end-users on configuring these firewalls.
And you're willing to pay all the helpdesk staff helping these people adjust their configurations to accommodate KaZaa, BitTorrent, Quake3, Counter Strike, etc.? It would be much easier and more centralized if the networking interfaces in operating systems did not expose services by default. But we already went there. Pete
On Thu, Aug 28, 2003 at 03:09:22PM -0700, Vadim Antonov wrote:
It should be pointed out that the ISPs have their share of blame for the quick-spreading worms, because they neglected very simple precautions -- such as shipping customers pre-configured routers or DSL/cable modems with firewalls disabled by default (instead of the standard "end-user, let only outgoing connections thru" configuration), and providing insufficient information to end-users on configuring these firewalls.
Yes, fingerpointing fixes everything. I remember a quote my music teacher used to tell the class, "for every one finger you point at someone else, four fingers are pointing back at you".
On Thu, 2003-08-28 at 17:37, Steve Carter wrote:
I speak for Global Crossing when I say that ICMP rate limiting has existed on the Global Crossing network, inbound from peers, for a long time ... we learned our lesson from the Yahoo DDoS attack (when they were one of our customers) back in the day and it was shortly thereafter that we implemented the rate limiters. Over the past 24 hours we've performed some experimentation that shows outbound rate limiters being also of value and we're looking at the specifics of differentiating between happy ICMP and naughty 92 byte packet ICMP and treating the latter with very strict rules ... like we would dump it on the floor. This, I believe, will stomp on the bad traffic but allow the happy traffic to pass unmolested.
I think I can safely say that GBLX is beyond "looking at the specifics" of dropping 92-byte ICMPs, and is in fact doing it. And they have not really bothered telling their customers about it, either.

We happen to use GBLX as one of our upstreams and have a GigE pipe towards them. Since MS, in their infinite wisdom, seem to use 92-byte ICMP Echos in the Windows tracert.exe without any option to use another protocol and/or packet size, this certainly has generated several calls to OUR support desk today, by customers of ours claiming "your routing is broken, traceroutes aren't getting anywhere!".

Although I obviously understand the reasons, it WOULD be nice if a supplier would at least take the trouble to inform us when they start applying filters to customer traffic, so our helpdesk would be prepared to answer questions about it. We are not a peer, but a paying customer, after all.

Oh, and it is not rate-limiting causing this; it is most definitely 92-byte filters. "traceroute -P icmp www.gblx.net 92" from a decent OS will drop, while any other packet size works like a charm.

/leg
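For reference, the 92 bytes being filtered break down as 20 bytes of IP header + 8 bytes of ICMP header + a 64-byte echo payload, which is what Windows tracert emits (and what the worm-scanning pings discussed upthread happened to match). A sketch that assembles such an echo request with Python's struct module, just to show where the number comes from; the exact fill pattern is an assumption, only the sizes matter:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def echo_request(payload: bytes, ident: int = 1, seq: int = 1) -> bytes:
    """ICMP echo request: type 8, code 0, checksum over header + payload."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# 64-byte payload (a repeating a..w pattern; treat the exact bytes as an
# assumption -- only the length determines the 92-byte wire size).
payload = bytes(ord("a") + i % 23 for i in range(64))
pkt = echo_request(payload)
print(20 + len(pkt))  # 20-byte IP header + 8-byte ICMP header + 64 payload = 92
```

This is also why "traceroute -P icmp www.gblx.net 92" from a BSD-style traceroute reproduces the drop: the 92 there sets the same total packet length.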
Level(3) is generally very good. Great engineering team and very reliable. I'm not sure if their pricing will maintain their business model in the long run, but I certainly hope so. - Daniel Golding
Agreed. I know nothing about the pricing but last time I had a problem with BGP, it only took a few minutes to get someone with enable and clue, calling their general support number posted on their website. The problem was on their end and it was fixed while I was on the phone. Arguably one of the fastest response times I've ever had with a vendor. -Paul On Sun, 2003-08-31 at 20:40, Daniel Golding wrote:
Level(3) is generally very good. Great engineering team and very reliable. I'm not sure if their pricing will maintain their business model in the long run, but I certainly hope so.
- Daniel Golding
-- Paul Timmins <paul@timmins.net>
participants (23)
- Alex Rubenstein
- Christopher L. Morrow
- Dan Hollis
- Daniel Golding
- Jared Mauch
- jlewis@lewis.org
- John Palmer
- Larry Rosenman
- Lars Erik Gullerud
- Mike Tancsa
- Omachonu Ogali
- Paul Timmins
- Paul Vixie
- Petri Helenius
- Rachael Treu
- Robert Boyle
- Sean Crandall
- Sean Donelan
- Steve Carter
- Vadim Antonov
- variable@ednet.co.uk
- Wayne E. Bouchard
- william+nanog@hq.dreamhost.com