Future attacks will be stronger and more organized. So how do we protect the root servers from future attack?
protecting the servers is not the *critical* point. protecting the service is. don't get obsessed with silly boxes. of course, box/link protection is *one* aspect of protecting the service.

randy
protecting the servers is not the *critical* point. protecting the service is. don't get obsessed with silly boxes.
You're right. It comes down to risk mitigation, not risk elimination.

I'd posit it's impossible to PREVENT a DDoS attack. As such, as we did when they first manifested themselves in 1999, we need to develop response plans capable of meeting the onslaught and mitigating its impact so that things continue to function, even if they're degraded somewhat.

It's like airport security - total security is a fantasy, but we have to raise the bar to make it more difficult for an attacker, and couple that with effective plans to respond when things occur, thus ensuring both an acceptable level of service during the incident and a smooth recovery/investigation afterward.

Of course, in the airport security case, the bar's still lying on the ground..... :(

Rick
Infowarrior.org
On Thu, 24 Oct 2002, Richard Forno wrote:
I'd posit it's impossible to PREVENT a DDoS attack. As such, as we did when they first manifested themselves in 1999, we need to develop response plans capable of meeting the onslaught and mitigating its impact so that things continue to function, even if they're degraded somewhat.
1999?! Doesn't anybody remember the massive SYN attack against Panix in 1995? Or that tfreak released smurf.c in July of 1997? (And was it fraggle or papasmurf that came the summer of the following year? Whichever one it was, the other came out within six months after that.) And those are just the ones I remember since I moved away from Rutgers and started working in the BBN NOC - I'm sure there were others even before that. (Not counting accidental operational incidents like the AS 7007 routing chaos in 1997 or the identical AS 8584 issue a year later.)

1999 was just when Distributed DoS started getting a little airplay. We'd already had four fruitless years of dealing with DoS attacks by the time that happened.

What would be wonderful is a radical change in the way we think about DoS attacks. It would be fabulous for someone (or a group of someones) to come up with a completely different way to approach the problem. I wish that I could be the person who does that, who sparks that change, but in the seven years I've been thinking about it, nothing's come to mind.

So, seven years of hardening hosts against SYN attacks. Five years of trying to get people to turn off the forwarding of broadcast packets. Three years of botnets generating meg upon meg of crap-bandwidth.

Where are the suuuuuper-geniuses?

Kelly J.

--
Kelly J. Cooper - Security Engineer, CISSP
GENUITY - Main # - 800-632-7638
Woburn, MA 01801 - http://www.genuity.net
On Thu, 24 Oct 2002 18:01:44 -0000, "Kelly J. Cooper" <kcooper@genuity.net> said:
So, seven years of hardening hosts against SYN attacks. Five years of trying to get people to turn off the forwarding of broadcast packets. Three years of botnets generating meg upon meg of crap-bandwidth.
Where are the suuuuuper-geniuses?
You know, most bars have bouncers at the door that check IDs. Sure, they're not perfect, but the bartender can usually be pretty sure the guy ordering a beer is over 21. The average bar isn't run by a soooper-genius.

But it's still considered fashionable to let packets roam your network without an ID check at the door. soooper-genius solutions aren't going to help any when there's a lot of address space that's managed by Homer Simpson....

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
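[The "ID check at the door" here is RFC 2827 (BCP 38) ingress filtering at the customer edge. A minimal sketch in IOS-style config, assuming a hypothetical single-homed customer holding 192.0.2.0/24 behind Serial0/0 (the interface name and prefix are illustrative only):

    ! Permit only the customer's own prefix as a source; log anything spoofed
    access-list 110 permit ip 192.0.2.0 0.0.0.255 any
    access-list 110 deny   ip any any log
    !
    interface Serial0/0
     description customer edge - RFC 2827 ingress filter ("bouncer")
     ip access-group 110 in

Where the customer is single-homed, the same check can be done with strict unicast RPF instead, which avoids maintaining per-customer ACLs.]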
On Thu, 24 Oct 2002 Valdis.Kletnieks@vt.edu wrote:
On Thu, 24 Oct 2002 18:01:44 -0000, "Kelly J. Cooper" <kcooper@genuity.net> said:
So, seven years of hardening hosts against SYN attacks. Five years of trying to get people to turn off the forwarding of broadcast packets. Three years of botnets generating meg upon meg of crap-bandwidth.
Where are the suuuuuper-geniuses?
You know, most bars have bouncers at the door that check IDs. Sure, they're not perfect, but the bartender can usually be pretty sure the guy ordering a beer is over 21. The average bar isn't run by a soooper-genius. But it's still considered fashionable to let packets roam your network without an ID check at the door.
Yeah and how's that working so far?
soooper-genius solutions aren't going to help any when there's a lot of address space that's managed by Homer Simpson....
But there will always be address space managed by Homer Simpson. And that's part of my point - we can't fix everybody's networks. There will always be broken/misconfigured networks run by the willfully ignorant.

We've been in an arms race for years. They come up with something, we come up with a response, they come up with something else, we scramble to find router OS code that doesn't crash, etc. It's just back and forth, back and forth.

All I'm advocating is breaking out of that pattern.

Kelly J.

--
Kelly J. Cooper - Security Engineer, CISSP
GENUITY - Main # - 800-632-7638
Woburn, MA 01801 - http://www.genuity.net
On Thu, 24 Oct 2002 18:59:46 -0000, "Kelly J. Cooper" <kcooper@genuity.net> said:
You know, most bars have bouncers at the door that check IDs. Sure, they're not perfect, but the bartender can usually be pretty sure the guy ordering a beer is over 21. The average bar isn't run by a soooper-genius. But it's still considered fashionable to let packets roam your network without an ID check at the door.
Yeah and how's that working so far?
Works a lot better than making an overworked bartender do it. And yes, that's an intentional dig at the "but you can't filter at the core" crowd, and the "but you can't backtrack spoofed traffic easily" crowd... How well does it work? Well enough that you can drive by a bar and just *know* that it's a dead night because there's no bouncer. And it's never a dead night on the Internet.
soooper-genius solutions aren't going to help any when there's a lot of address space that's managed by Homer Simpson....
But there will always be address space managed by Homer Simpson.
Why? I'm asking a serious question here - why is it considered acceptable?
All I'm advocating is breaking out of that pattern.
I bet a few good lawsuits alleging civil liability for contributory negligence for allowing spoofed packets would do wonders for that problem.

I posit that there won't be any "sooper genius" solution that will actually work as long as the prevailing model is small islands of clue awash in a sea of Homer Simpsons.

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
Something I'd love to see is a blue-ribbon commission (meaning, made up of people with real clue) whose job it was to come up with a bird's-eye view of what the internet would look like if it were designed from scratch today. Maybe this is some of what Internet-II is supposed to be doing, but I think it's more focused on very high bandwidth gated-community stuff.

In theory the internet could be radically redesigned, at least on paper, and still deliver just about the same function as far as end-users are concerned; surfing, email, file transfer, routing, naming, etc. Task one would be "what must be preserved -- what can be tossed?" So, e.g., web browsing/serving must be preserved, but all of IP per se maybe is up for grabs for redesign, etc.

The point being maybe we all spend so much time backpatching etc and assuming that the technology can't be shifted much due to backwards compatibility that, truth be told, we don't really know what that shift we're avoiding might be if it were feasible. Can't really know how hard it is to build the bridge if you don't know how wide the river is.

And now a song for anyone who read this far:

Deep in the Heart of Internet
(tune: Deep in the Heart of Texas)

The web at night - is big and bright,
Deep in the heart of Internet.
The smurfers' eyes - are on that pie,
Deep in the heart of Internet.

The roots do loom - just like perfume,
Deep in the heart of Internet.
Reminds smurfs of - why they get no love.
Deep in the heart of Internet.

The admins cry - eat 'wall and die,
Deep in the heart of Internet.
The smurfers rush - to send their gush,
Deep in the heart of Internet.

The reporters wail - hot on the trail,
Deep in the heart of Internet.
And the spammers spam - and spam and spam,
DEEP IN THE HEART OF INTERNET!

Lyrics written anonymously by Barry Shein

--
-Barry Shein

Software Tool & Die    | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD
The World              | Public Access Internet     | Since 1989     *oo*
On Thu, 24 Oct 2002, Barry Shein wrote:
Something I'd love to see is a blue-ribbon commission (meaning, made up of people with real clue) whose job it was to come up with a bird's-eye view of what the internet would look like if it were designed from scratch today.
How about a council?

http://www.eweek.com/article2/0,3959,642876,00.asp

October 21, 2002
Network Council to Urge New Practices
By Caron Carlson

"A council of the largest telephone carriers and ISPs, charged by the federal government with preventing disruptions to the nation's telecommunications system, is preparing a checklist of procedures to protect networks from terrorism and natural disasters."
That sounds to me more like considering the use of sonic repellants rather than rat poison to keep the vermin out of the relays and providing latex gloves for removing the dead rats, rather than designing out the relays the rodents get into entirely.

On October 24, 2002 at 17:29 sean@donelan.com (Sean Donelan) wrote:
On Thu, 24 Oct 2002, Barry Shein wrote:
Something I'd love to see is a blue-ribbon commission (meaning, made up of people with real clue) whose job it was to come up with a bird's-eye view of what the internet would look like if it were designed from scratch today.
How about a council?
http://www.eweek.com/article2/0,3959,642876,00.asp

October 21, 2002
Network Council to Urge New Practices
By Caron Carlson
"A council of the largest telephone carriers and ISPs, charged by the federal government with preventing disruptions to the nation's telecommunications system, is preparing a checklist of procedures to protect networks from terrorism and natural disasters."
--
-Barry Shein

Software Tool & Die    | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD
The World              | Public Access Internet     | Since 1989     *oo*
This is the perfect council to totally immobilize the Internet into a Bell-head pattern. The problem is that we no longer have any big ISPs that aren't telco-driven. PSI and UUNET and the descendants of BBN are all but extinct.

Peter
At 02:44 PM 10/24/2002, Barry Shein wrote:
That sounds to me more like considering the use of sonic repellants rather than rat poison to keep the vermin out of the relays and providing latex gloves for removing the dead rats, rather than designing out the relays the rodents get into entirely.
Given time, rats can chew through concrete. They are smart enough to trip traps before eating the cheese, or to lick the cheese off triggers rather than pulling or chewing, so as not to cross the alarm threshold. They breed faster than you can keep up with them, which not only ensures a generous supply of them but also ensures that they adapt to new environments quickly. They have been known to become resistant to poisons that killed rats a few years before.

In short, your rats-versus-script-kiddies analogy is perfect, but I think you are forgetting that we still have rats everywhere.

~Ben (who speaks for himself alone here)

---
Ben Browning <benb@theriver.com>
The River Internet Access Co.
Network Operations
1-877-88-RIVER
http://www.theriver.com
Assuming no time, money, people, etc resource constraints; securing the Internet is pretty simple.

1. Require all providers to install and manage firewalls on all subscriber connections, enforcing source address validation.

2. Prohibit subscribers from running services on their own machines. Only approved, provider-managed servers should provide services to users.

3. Prohibit direct subscriber-to-subscriber communication, except through approved NSP protocol gateways. Only approved NSP-to-NSP proxied traffic should be exchanged between network providers.

Are there some down-sides? Sure. But who really needs the end-to-end principle or uncontrolled innovation.

"No, the electric telegraph is not a sound invention. It will always be at the mercy of the slightest disruption, wild youths, drunkards, bums, etc.... The electric telegraph meets those destructive elements with only a few meters of wire over which supervision is impossible. A single man could, without being seen, cut the telegraph wires leading to Paris, and in twenty-four hours cut in ten different places the wires of the same line, without being arrested."
   - Dr. Barbay, Paris, France, 1846
At 13:14 -0400 10/25/02, Sean Donelan wrote:
Are there some down-sides? Sure. But who really needs the end-to-end principle or uncontrolled innovation.
The context of the above is, of course, sarcastic. But it reminded me of a quote that once appeared on a mailing list that is germane to this.

The quote was uttered in 1824 or so, by the inventor of the telegraph. The quote lamented that the funding needed to deploy an innovative concept was held by the folks who were the most threatened by innovation - i.e., they made money without the latest newfangled thing, so whatever the newfangled thing did, it was sure to be a threat to their current income stream.

Does anyone know this quote?

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis                                          +1-703-227-9854
ARIN Research Engineer
Assuming no time, money, people, etc resource constraints; securing the Internet is pretty simple.
1. Require all providers to install and manage firewalls on all subscriber connections, enforcing source address validation.
2. Prohibit subscribers from running services on their own machines. Only approved, provider-managed servers should provide services to users.
3. Prohibit direct subscriber-to-subscriber communication, except through approved NSP protocol gateways. Only approved NSP-to-NSP proxied traffic should be exchanged between network providers.
Are there some down-sides? Sure. But who really needs the end-to-end principle or uncontrolled innovation.
i can see how the end to end principle applies in cases 2 and 3, but not 1.

--
Paul Vixie
On 25 Oct 2002, Paul Vixie wrote:
1. Require all providers to install and manage firewalls on all subscriber connections, enforcing source address validation.
i can see how the end to end principle applies in cases 2 and 3, but not 1.
I didn't make any of these up. They've all been proposed by serious, well-meaning people.

If you have 2 and 3, why do you need to waste global addresses on 1? So the NSP-managed "firewall" device is really a super-NAT device; some well-meaning people believe NAT improves security because users won't be able to set the outbound addresses themselves. The firewall will rewrite the user's hidden internal address with the firewall's registered address.

It's a misunderstanding of what source address validation is. Some folks think it should work like ANI, where the telephone company writes the "correct" number on the call at the switch.
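[Source address validation as usually deployed is a check, not a rewrite: the router drops packets whose source address could not legitimately have arrived on that interface. A hedged sketch using strict-mode unicast RPF in IOS-style config, assuming a single-homed customer on Serial0/0 (the interface name is illustrative, and strict mode is only safe where routing is symmetric):

    interface Serial0/0
     description customer edge - drop sources we would not route back out this interface
     ip verify unicast reverse-path

Unlike the super-NAT idea, nothing is rewritten; spoofed packets simply never enter the provider's network.]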
On Fri, 25 Oct 2002, Sean Donelan wrote:

:Assuming no time, money, people, etc resource constraints; securing the
:Internet is pretty simple.

Assuming you are referring to "securing" as the balance of the holy triumvirate of Confidentiality, Integrity and Availability, there are other options than the modest proposals you made.

The ISP doesn't have to manage the firewall, but like I said earlier, if they provided a configurable filter in the form of a web interface for altering access-lists applied to the customer's connection, this would solve most problems.

It's not so much a question of what needs to be done; the technical solutions are always the easy part. It is a question of who needs to do it.

- If OS vendors didn't ship their products with all those services open, we wouldn't need to protect users with default firewall policies.

- If all users suddenly had an epiphany and could go to M$.com and click one link to lock down their home machines, M$ could keep shipping their consumer-grade hacker-bait to soccer moms and children. Maybe they can use their monopoly for something constructive for a change.

- If the government said that a cyberattack was imminent and launched a WWII-style propaganda campaign along the lines of "loose lips sink ships," maybe people might catch on. This might sound silly, but it worked for Y2K.

So, modest proposals for draconian feature enhancements and arbitrary consumer and provider classes of users are, thankfully, still funny.

--
batz
On Thu, 24 Oct 2002, Barry Shein wrote:
Something I'd love to see is a blue-ribbon commission (meaning, made up of people with real clue) whose job it was to come up with a bird's-eye view of what the internet would look like if it were designed from scratch today.
There's a workshop on just this kind of subject, from an even higher bird's-eye view (namely, how should we think about architecting networks), to be held at SIGCOMM 2003. Personally, I'm looking forward to it, and not just because SIGCOMM 2003 is in Germany.

http://www.acm.org/sigcomm/ccr/cfp/CCR-arch-call.html

Craig
On Thu, Oct 24, 2002 at 06:01:44PM +0000, Kelly J. Cooper wrote:
1999?! Doesn't anybody remember the massive SYN attack against Panix in 1995? Or that tfreak released smurf.c in July of 1997? (And was it fraggle or papasmurf that came the summer of the following year? Whichever one it was, the other came out within six months after that.)
Not to mention that the tools being publicly available is much different from their being known to a certain community (covert IRC blackhat communities differ slightly from EFnet, which differs even more so from CERT, etc.). I recall working at GoodNet, and smurf attacks affecting customer networks the first week of May, 1997.

There is speculation that the root name server attacks came from a modified version of a current well-known tool. How does that fit into the equation?

Information needs to pass quickly and correctly. BUGTRAQ has typically been the best forum for this, and NANOG as well. However, Internet operators will continue to lag behind the times even if we have a more intelligent infrastructure capable of handling these problems. I see this being done on an organization-by-organization basis, but there is no real consistent community.

The correct plan is to have one person dedicated to packet capture infrastructure, another person dedicated to packet-to-tool identification and reverse engineering, and finally a large group dedicated to filtering/moving the traffic with open or proprietary (including home-grown) solutions, proactively and upon peer/customer notification, e.g.:

  rfc2827, rfc3013 (ingress and egress filtering)
  rfc1750, rfc1858, rfc1948, rfc2196, rfc3128, rfc3365
  rfc2142 (and draft-vixie-ops-stdaddr-01.txt), rfc1173, rfc1746, rfc2350
  draft-ymbk-obscurity-00.txt, draft-ietf-grip-isp-expectations-05.txt
  draft-moriarty-ddos-rid-01.txt, draft-jones-netsec-reqs-00.txt
  draft-turk-bgp-dos-01.txt, draft-dattathrani-tcp-ip-security-00.txt

  http://www.secsup.org/Tracking/
  http://www.secsup.org/CustomerBlackHole/
  http://www.cymru.com/
  http://www.tcpdump.org

All of this information needs to be in one place, and organizations need to understand that working together on these problems is the only way to fix them (this goes doubly for hardware/software vendors). I'm sure I left out a ton of information, and the list could become exhaustive very quickly and easily. The ideas and the strategies all stay the same, and the end result is hopefully a more secure, resilient infrastructure.

In some ways, you and your organization either get it or you don't. And there is no way to force people into understanding the concept - let alone the importance of these issues. How do you solve that problem?

-dre
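[In the spirit of the tracking and blackhole references above, here is a hedged IOS-style sketch of tracing a flood toward a hypothetical victim at 192.0.2.10 and then dropping it; the ACL number, interface, and attack signature (ICMP echo-reply, as in a smurf) are assumptions for the example:

    ! Identify where the flood enters (applied hop by hop toward the upstream)
    access-list 151 permit icmp any host 192.0.2.10 echo-reply log-input
    access-list 151 permit ip any any
    !
    interface POS1/0
     ip access-group 151 in

    ! Once traced (or while tracing), sacrifice the victim address to save the pipe
    ip route 192.0.2.10 255.255.255.255 Null0

"show access-lists 151" and "show logging" then point at the ingress interface and apparent sources, which is the information needed before contacting the peer or upstream on the other side.]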
On Thu, Oct 24, 2002 at 06:01:44PM +0000, Kelly J. Cooper wrote:
What would be wonderful is a radical change in the way we think about DoS attacks. It would be fabulous for someone (or a group of someones) to come up with a completely different way to approach the problem. I wish that I could be the person who does that, who sparks that change, but in the seven years I've been thinking about it, nothing's come to mind.
So, seven years of hardening hosts against SYN attacks. Five years of trying to get people to turn off the forwarding of broadcast packets. Three years of botnets generating meg upon meg of crap-bandwidth.
We have hosts that can take 100Mbit worth of SYN attacks out of the box, instead of the dialup's worth that crippled PANIX. We have a smurf attack against the root servers which was so small it was trivially filtered, compared to the gigabits of broadcasts which used to be open. Heck, I got a bigger smurf the last time I made fun of Ralph Doncaster's "IGP-less network" on this list.

Yes, smurf is not so completely dead that you can only find it in laboratories like smallpox, but the once seemingly endless supply of broadcasts has been closed down to the point where it is now more difficult for attackers to find them than it is worth in damage when they use them. It's not "dead", but it's so effectively close that for most of us it might as well be.

We're still working on the distributed attacks, but eventually we'll come up with something just as effective. If it was as easy to scan for networks who don't spoof filter as it is to scan for networks with open broadcasts, I think we'd have had that problem licked too.

It's the nature of people to invent new ways to accomplish their goals, both the attackers and the people running the networks. If we hadn't plugged the PANIX-style attacks, do you think anyone would have bothered writing smurf, when they already had a tool which worked? So the question is, do you think we're better off because we've created better TCP/IP stacks and better routers, or worse off because we've created better attackers with better tools we currently don't have much defense against?

--
Richard A Steenbergen <ras@e-gerbil.net>       http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)
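[The "better routers" half of that trade-off can be made concrete. One knob that appeared in response to the PANIX-era SYN floods was TCP Intercept, which watches (or proxies) TCP connection attempts toward protected servers. A hedged IOS-style sketch, assuming the servers live in a hypothetical 192.0.2.0/24 (addresses and mode are illustrative; most modern hosts also carry SYN-cookie-style protection of their own):

    ! Watch half-open connections to the server block and reset ones that never complete
    access-list 101 permit tcp any 192.0.2.0 0.0.0.255
    ip tcp intercept list 101
    ip tcp intercept mode watch
]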
On Thu, Oct 24, 2002 at 04:07:18PM -0400, Richard A Steenbergen mooed:
We're still working on the distributed attacks, but eventually we'll come up with something just as effective. If it was as easy to scan for networks who don't spoof filter as it is to scan for networks with open broadcasts, I think we'd have had that problem licked too.
Are you sure?

* A smurf attack hurts the open broadcast network as much as (or more than) it does the victim. A DDoS attack from a large number of sites need not be all that harmful to any one traffic source.

* Compare 'no ip directed-broadcast', which is becoming the default behavior for many routers and end-systems, with maintaining something like:

    access-list 150 deny ip ... any
    access-list 150 deny ip ... any
    ...
    access-list 150 permit ip any any

  (ignoring RPF, which doesn't work for everyone).

Until the default behavior of most systems is to block spoofed packets, it's going to remain a problem.

-Dave, whose glass is half-empty this week. :)

--
work: dga@lcs.mit.edu                          me:  dga@pobox.com
      MIT Laboratory for Computer Science           http://www.angio.net/
      I do not accept unsolicited commercial email.  Do not spam me.
--On Thursday, October 24, 2002 04:30:20 PM -0400 "David G. Andersen" <dga@lcs.mit.edu> wrote:
Until the default behavior of most systems is to block spoofed packets, it's going to remain a problem.
I assert this is not the case. A significant percentage of DDoS attacks use legitimate source IP addresses. When there are thousands of throw-away hosts in the attack network, the difficulty of traceback and elimination remains, and so does the problem.

Yes, blocking spoofed packets helps. But it is not an end-game.

Kevin
Hi, NANOGers.

] I assert this is not the case. A significant percentage of DDoS attacks use
] legitimate source IP addresses. When there are thousands of throw-away hosts

I track several botnets per week, and a large amount of DDoS per week. Only around 20% (or a bit less) of all the attacks I log use spoofed source addresses.

Does anti-spoofing help? Yes. It is but one of many mitigation strategies.

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
On Thu, Oct 24, 2002 at 04:02:09PM -0500, Rob Thomas wrote:
Hi, NANOGers.
] I assert this is not the case. A significant percentage of DDoS attacks use ] legitimate source IP addresses. When there are thousands of throw-away hosts
I track several botnets per week, and a large amount of DDoS per week. Only around 20% (or a bit less) of all the attacks I log use spoofed source addresses.
Does anti-spoofing help? Yes. It is but one of many mitigation strategies.
I don't know what botnets you look at, but I wouldn't go that far. Of course stopping spoofing will not solve everything, but it does and will make a huge impact on DoS mitigation and tracing.

The problem now is that no one "knows" for certain whether the attack they're tracing is spoofed or not. With a random-source SYN flood, you know it's spoofed. With an attack which is spoofing a legit-looking address that is completely unrelated to the attacker, you don't. Most people who report DoS (including myself) have been so burned by finding out that the legitimate-looking source address on an attack is in fact spoofed (or worse yet, that an innocent party gets blamed by incompetent admins), they see a DDoS and don't even bother.

Attackers with DDoS networks use this to their advantage, by mixing spoofed attacks (where they can) with unspoofed attacks (where they can't, such as Windows machines, boxes where they haven't compromised root such as Apache worms and the like, and even in rare cases where the network is doing its job and ingress filtering), to make it effectively impossible to know which hosts to go after. Tracking down dumb drones with non-spoofed addresses is a LOT easier than tracking spoofed packets through the network (or worse, explaining to other networks how to do it).

Of course, as more and more ingress filtering is implemented, the attacks will move to "one-off" spoofing, where they spoof their neighbor's address but are still close enough to get through filters, and incompetent admins go chasing after ghosts. But we'll deal with that situation when we come to it. :)

--
Richard A Steenbergen <ras@e-gerbil.net>       http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)
At 04:51 PM 10/24/2002, Kevin Houle wrote:
--On Thursday, October 24, 2002 04:30:20 PM -0400 "David G. Andersen" <dga@lcs.mit.edu> wrote:
Until the default behavior of most systems is to block spoofed packets, it's going to remain a problem.
I assert this is not the case. A significant percentage of DDoS attacks use legitimate source IP addresses. When there are thousands of throw-away hosts in the attack network, the difficulty of traceback and elimination remains, and so does the problem.
Yes, blocking spoofed packets helps. But it is not an end-game.
It provides the identity of the party to sue for negligence, should the damage elsewhere be severe. In large networks, it would behoove administrators to establish ingress filters on the routers connecting subnets, so that they can further limit spoofing or help trace the party involved.
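[The same filter pattern scales down to those internal subnet boundaries. A hedged sketch, assuming an internal router port serving a hypothetical 10.1.20.0/24 user LAN (interface and addressing are illustrative):

    ! Only this subnet's own addresses may appear as sources on its uplink
    access-list 120 permit ip 10.1.20.0 0.0.0.255 any
    access-list 120 deny   ip any any log
    !
    interface FastEthernet2/0
     description user LAN 10.1.20.0/24 - per-subnet ingress filter
     ip access-group 120 in

Even where such a filter is too coarse to deploy everywhere, the logged denies narrow a spoofed-traffic trace to a single subnet instead of a whole campus.]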
Yes, blocking spoofed packets helps. But it is not an end-game.
it's not even middle-game
It provides the identity of the party to sue for negligence, should the damage elsewhere be severe.
and lawsuits have always been such a major contributor to internet advances in the past. makes me think of suing the cemeteries and coffin manufacturers in "night of the living dead." does not scale.

you might remember or look up smb's presentation on 'pushback' at some nanog or another (those anonymous hotel rooms kinda blur, especially before first coffee). that's not perfect, but it scales. and it's an engineering approach to a technology problem, always a good sign.

randy
At 08:30 AM 10/25/2002, Randy Bush wrote:
Yes, blocking spoofed packets helps. But it is not an end-game.
it's not even middle-game
It provides the identity of the party to sue for negligence, should the damage elsewhere be severe.
and lawsuits have always been such a major contributor to internet advances in the past. makes me think of suing the cemeteries and coffin manufacturers in "night of the living dead." does not scale.
As with the spam problem, the underlying issue is a social issue as well as a technological one. However we proceed on a technological basis, there will continue to be arms races in the DoS world. Lawsuits are an inefficient way to advance Internet technology, but they may help on the social side of things. We needn't be binary in our choice of paths to pursue.
you might remember or look up smb's presentation on 'pushback' at some nanog or another (those anonymous hotel rooms kinda blur, especially before first coffee). that's not perfect, but it scales. and it's an engineering approach to a technology problem, always a good sign.
I agree it is something that should be pursued, but disagree that the problem is entirely of a technological nature.
participants (18)
- Barry Shein
- batz
- Ben Browning
- Craig Partridge
- Daniel Senie
- David G. Andersen
- dre
- Edward Lewis
- Kelly J. Cooper
- Kevin Houle
- Paul Vixie
- Peter Salus
- Randy Bush
- Richard A Steenbergen
- Richard Forno
- Rob Thomas
- Sean Donelan
- Valdis.Kletnieks@vt.edu