---------- Forwarded message ----------
From: andrew.wallace <andrew.wallace@rocketmail.com>
Date: Wed, Jul 29, 2009 at 6:22 PM
Subject: Real Black Hats Hack Security Experts on Eve of Conference
To: Information Security Mailing List <n3td3v@googlegroups.com>

LAS VEGAS — Two noted security professionals were targeted this week by hackers who broke into their web pages, stole personal data and posted it online on the eve of the Black Hat security conference.

Security researcher Dan Kaminsky and former hacker Kevin Mitnick were targeted because of their high profiles, and because the intruders consider the two notables to be posers who hype themselves and do little to increase security, according to a note the hackers posted in a file left on Kaminsky's site.

The files taken from Kaminsky's server included private e-mails between Kaminsky and other security researchers, highly personal chat logs, and a list of files he has purportedly downloaded that pertain to dating and other topics.

The hacks also targeted other security professionals, and were apparently timed to coincide with the Black Hat and DefCon security conferences in Las Vegas this week, where Kaminsky is unveiling new research on digital certificates and hash collisions. The hackers criticized Mitnick and Kaminsky for using insecure blogging and hosting services to publish their sites, which allowed the hackers to gain easy access to their data.

http://www.wired.com/threatlevel/2009/07/kaminsky-hacked/
http://www.leetupload.com/zf05.txt
--- On Wed, 7/29/09, Scott Weeks <surfer@mauigateway.com> wrote:
From: Scott Weeks <surfer@mauigateway.com>
Subject: Re: Fwd: Dan Kaminsky
To: "andrew.wallace" <andrew.wallace@rocketmail.com>
Date: Wednesday, July 29, 2009, 10:10 PM
--- andrew.wallace@rocketmail.com wrote:
http://www.leetupload.com/zf05.txt ------------------------------------------
This one is off line:
Site Temporarily Unavailable We apologize for the inconvenience. Please contact the webmaster/ tech support immediately to have them rectify this.
error id: "bad_httpd_conf"
scott
Dan Kaminsky mirrors: http://r00tsecurity.org/files/zf05.txt http://antilimit.net/zf05.txt Much thanks, Andrew
Randy Bush wrote:
LAS VEGAS — Two noted security professionals were targeted this week by hackers who broke into their web pages, stole personal data and posted it online on the eve of the Black Hat security conference.
boooooring.
randy
Two noted security professionals, and Kevin Mitnick, whom no one gives a damn about, were targeted... FTFY Andrew D Kirch
LAS VEGAS — Two noted security professionals were targeted this week by hackers who broke into their web pages, stole personal data and posted it online on the eve of the Black Hat security conference. boooooring. Two noted security professionals, and Kevin Mitnick, whom no one gives a damn about, were targeted...
Ettore Bugatti, maker of the finest cars of his day, was once asked why his cars had less than perfect brakes. He replied something like, "Any fool can make a car stop. It takes a genius to make a car go." so i am not particularly impressed by news of children making a car stop. randy
On 29-Jul-09, at 9:23 PM, Randy Bush wrote:
LAS VEGAS — Two noted security professionals were targeted this week by hackers who broke into their web pages, stole personal data and posted it online on the eve of the Black Hat security conference. boooooring. Two noted security professionals, and Kevin Mitnick, whom no one gives a damn about, were targeted...
Ettore Bugatti, maker of the finest cars of his day, was once asked why his cars had less than perfect brakes. He replied something like, "Any fool can make a car stop. It takes a genius to make a car go."
so i am not particularly impressed by news of children making a car stop.
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
On Thu, Jul 30, 2009 at 03:48:18PM -0700, Dragos Ruiu wrote:
On 29-Jul-09, at 9:23 PM, Randy Bush wrote:
Ettore Bugatti, maker of the finest cars of his day, was once asked why his cars had less than perfect brakes. He replied something like, "Any fool can make a car stop. It takes a genius to make a car go."
so i am not particularly impressed by news of children making a car stop.
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
At the risk of adding to the metadiscussion, I've never seen anyone die from having a car that accelerated too slowly. Unfortunately I think encouraging Randy to drive cars with bad brakes would be against the NANOG charter. :) -- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
On Thu, Jul 30, 2009 at 11:48 PM, Dragos Ruiu<dr@kyx.net> wrote:
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
What does this have to do with NANOG? The guy found a critical security bug in DNS last year. There is no slander here; I put his name in the subject header to draw attention to the relevance of posting it to NANOG. I copied & pasted a news article caption, which also doesn't slander Dan Kaminsky but reports on the actions of other people, true to the facts. Any further slander allegations, please point them at Wired's legal team. Andrew
I don't see a video attached or an audio recording. Thus no slander. Libel on the other hand is a different matter. On Aug 1, 2009, at 8:10 AM, andrew.wallace wrote:
On Thu, Jul 30, 2009 at 11:48 PM, Dragos Ruiu<dr@kyx.net> wrote:
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
What does this have to do with NANOG? The guy found a critical security bug in DNS last year.
There is no slander here; I put his name in the subject header to draw attention to the relevance of posting it to NANOG.
I copy & pasted a news article caption, which also doesn't slander Dan Kaminsky but reports on the actions of other people true to the facts.
Any further slander allegations, please point them at Wired's legal team.
Andrew
On Sat, Aug 01, 2009 at 01:11:17PM -0700, Cord MacLeod wrote:
I don't see a video attached or an audio recording. Thus no slander.
Libel on the other hand is a different matter.
You have those backwards. Slander is transitory (i.e. spoken) defamation, libel is written/recorded/etc non-transitory defamation. This seems like a group that could benefit from knowing those two words. :) -- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
Read my post one more time... The standards you described are what I described. No video, no audio = no speech = no slander. The article was written, hence libel. On Aug 3, 2009, at 6:02 PM, Richard A Steenbergen wrote:
On Sat, Aug 01, 2009 at 01:11:17PM -0700, Cord MacLeod wrote:
I don't see a video attached or an audio recording. Thus no slander.
Libel on the other hand is a different matter.
You have those backwards. Slander is transitory (i.e. spoken) defamation, libel is written/recorded/etc non-transitory defamation. This seems like a group that could benefit from knowing those two words. :)
-- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
Hi, Read my post one more time and think it through: only "zf0" are legally in the shit. The guy "Dragos Ruiu" has absolutely no case against me. Copy & paste doesn't count as defamation; speak to Wired's legal team if you have an issue. Cheers, Andrew On Tue, Aug 4, 2009 at 2:02 AM, Richard A Steenbergen<ras@e-gerbil.net> wrote:
On Sat, Aug 01, 2009 at 01:11:17PM -0700, Cord MacLeod wrote:
I don't see a video attached or an audio recording. Thus no slander.
Libel on the other hand is a different matter.
You have those backwards. Slander is transitory (i.e. spoken) defamation, libel is written/recorded/etc non-transitory defamation. This seems like a group that could benefit from knowing those two words. :)
-- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
On 3-Aug-09, at 9:43 PM, andrew.wallace wrote:
Hi,
Read my post one more time and think it through: only "zf0" are legally in the shit.
The guy "Dragos Ruiu" has absolutely no case against me.
Copy & paste doesn't count as defamation, speak to Wired's legal team if you have an issue.
Cheers,
Andrew
Whoa. Feeling a tad defensive? ;-P I used slander specifically. Any defamation from referenced emails is short-lived. ;-) cheers, --dr
On Tue, Aug 4, 2009 at 2:02 AM, Richard A Steenbergen<ras@e-gerbil.net> wrote:
On Sat, Aug 01, 2009 at 01:11:17PM -0700, Cord MacLeod wrote:
I don't see a video attached or an audio recording. Thus no slander.
Libel on the other hand is a different matter.
You have those backwards. Slander is transitory (i.e. spoken) defamation, libel is written/recorded/etc non-transitory defamation. This seems like a group that could benefit from knowing those two words. :)
-- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
andrew.wallace wrote:
On Thu, Jul 30, 2009 at 11:48 PM, Dragos Ruiu<dr@kyx.net> wrote:
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
What does this have to do with NANOG? The guy found a critical security bug in DNS last year.
He didn't find it. He only publicized it. The guy who wrote djbdns found it years ago. PowerDNS was patched for the flaw a year and a half before Kaminsky published his article.

http://blog.netherlabs.nl/articles/2008/07/09/some-thoughts-on-the-recent-dn...

"However - the parties involved aren't to be lauded for their current fix. Far from it. It has been known since 1999 that all nameserver implementations were vulnerable for issues like the one we are facing now. In 1999, Dan J. Bernstein <http://cr.yp.to/djb.html> released his nameserver (djbdns <http://cr.yp.to/djbdns.html>), which already contained the countermeasures being rushed into service now. Let me repeat this. Wise people already saw this one coming 9 years ago, and had a fix in place."

--Curtis
On Tue, 04 Aug 2009 13:32:42 EDT, Curtis Maurand said:
What does this have to do with Nanog, the guy found a critical security bug on DNS last year.
He didn't find it. He only publicized it. The guy who wrote djbdns found it years ago. PowerDNS was patched for the flaw a year and a half before Kaminsky published his article.
Yeah, and Robert Morris Sr wrote about a mostly-theoretical issue with TCP sequence numbers back in 1985. Then a decade later, some dude named Mitnick whacked the workstation of this whitehat Shimomura, and the industry collectively went "Oh ****, it isn't just theoretical" and Steve Bellovin got to write RFC1948. (Mitnick was the first *well known* attack using it that I know of - anybody got a citation for an earlier usage, either well-known or 0-day?)
"Wise people already saw this one coming 9 years ago, and had a fix in place."
Yes, but a wise man without a PR agent doesn't do the *rest* of the community much good. A Morris or Bernstein may *see* the problem a decade before, but it may take a Mitnick or Kaminsky to make the *rest* of us able to see it...
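The sequence-number episode above has a concrete fix worth sketching. RFC 1948's countermeasure is to derive each connection's initial sequence number from a clock component plus a keyed hash of the connection 4-tuple, so an off-path attacker cannot predict the ISN for a connection it is not party to. A minimal sketch follows; the choice of SHA-256, the 16-byte secret, and the 4-microsecond clock rate are illustrative assumptions (the RFC leaves the hash and secret open), not the exact scheme any particular stack uses.

```python
import hashlib
import os
import time

# Per-boot secret; RFC 1948 leaves the choice of secret and hash function open.
SECRET = os.urandom(16)

def initial_sequence_number(src_ip, src_port, dst_ip, dst_port):
    """RFC 1948-style ISN: a clock term plus a keyed hash of the 4-tuple.

    The clock keeps ISNs monotonic per connection (avoiding old-segment
    confusion); the hash offset makes them unpredictable across connections.
    """
    # 32-bit clock ticking every 4 microseconds, as in the classic BSD scheme.
    m = int(time.time() * 250_000) & 0xFFFFFFFF
    h = hashlib.sha256()
    h.update(f"{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode())
    h.update(SECRET)
    f = int.from_bytes(h.digest()[:4], "big")
    return (m + f) & 0xFFFFFFFF

isn = initial_sequence_number("192.0.2.1", 12345, "198.51.100.2", 80)
print(f"{isn:#010x}")
```

Note that two connections differing only in source port get independent-looking ISNs, which is exactly what defeats the Mitnick-style blind spoof.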
On Tue, 4 Aug 2009, Valdis.Kletnieks@vt.edu wrote:
Yes, but a wise man without a PR agent doesn't do the *rest* of the community much good. A Morris or Bernstein may *see* the problem a decade before, but it may take a Mitnick or Kaminsky to make the *rest* of us able to see it...
Same thing to get the industry to scramble to get rid of MD5 in SSL-certs, known for a long time, when it was shown to be practical it didn't take that long to get rid of. People want proof, not theory. -- Mikael Abrahamsson email: swmike@swm.pp.se
Date: Tue, 04 Aug 2009 13:32:42 -0400 From: Curtis Maurand <cmaurand@xyonet.com>
andrew.wallace wrote:
On Thu, Jul 30, 2009 at 11:48 PM, Dragos Ruiu<dr@kyx.net> wrote:
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
What does this have to do with NANOG? The guy found a critical security bug in DNS last year.
He didn't find it. He only publicized it. The guy who wrote djbdns found it years ago. PowerDNS was patched for the flaw a year and a half before Kaminsky published his article.
http://blog.netherlabs.nl/articles/2008/07/09/some-thoughts-on-the-recent-dn...
"However - the parties involved aren't to be lauded for their current fix. Far from it. It has been known since 1999 that all nameserver implementations were vulnerable for issues like the one we are facing now. In 1999, Dan J. Bernstein <http://cr.yp.to/djb.html> released his nameserver (djbdns <http://cr.yp.to/djbdns.html>), which already contained the countermeasures being rushed into service now. Let me repeat this. Wise people already saw this one coming 9 years ago, and had a fix in place."
Dan K. has never claimed to have "discovered" the vulnerability. As the article says, it's been known for years, and djb did suggest a means to MINIMIZE this vulnerability.

There is NO fix. There never will be, as the problem is architectural to the most fundamental operation of DNS. Other than replacing DNS (not feasible), the only way to prevent this form of attack is DNSSEC. The "fix" only makes it much harder to exploit.

What Dan K. did was to discover a very clever way to exploit the design flaw in DNS that allowed the attack. What had been a known problem that was not believed to be generally exploitable became a real threat to the Internet. Suddenly people realized that an attack of this sort was not only possible, but quick and easy (relatively). Dan K. did what a security professional should do: he talked to the folks who were responsible for most DNS implementations that did caching, and a work-around was developed before the attack mechanism was made public.

He was given credit for finding the attack method, but the press seemed to get it wrong (as they often do) and lots of stories credited him with finding the vulnerability.

By the way, I know that Paul Vixie noted this vulnerability quite some years ago, but I don't know if his report was before or after djb's.

Now, rather than argue about the history of this problem (non-operational), can we stick to operational issues like implementing DNSSEC to really fix it (operational)? Is your DNS data signed? (No, mine is not and probably won't be for another week or two.)
--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
There is NO fix. There never will be as the problem is architectural to the most fundamental operation of DNS. Other than replacing DNS (not feasible), the only way to prevent this form of attack is DNSSEC. The "fix" only makes it much harder to exploit.
Randomizing source ports and QIDs simply increases entropy, making it harder to spoof an answer. If this is not a "fix", then DNSSEC is not a fix either, as it only increases entropy as well. Admitted, DNSSEC increases it a great deal more, but by your definition, it is not a "fix". -- TTFN, patrick On Aug 4, 2009, at 2:32 PM, Kevin Oberman wrote:
Date: Tue, 04 Aug 2009 13:32:42 -0400 From: Curtis Maurand <cmaurand@xyonet.com>
andrew.wallace wrote:
On Thu, Jul 30, 2009 at 11:48 PM, Dragos Ruiu<dr@kyx.net> wrote:
at the risk of adding to the metadiscussion. what does any of this have to do with nanog? (sorry I'm kinda irritable about character slander being spammed out unnecessarily to unrelated public lists lately ;-P )
What does this have to do with NANOG? The guy found a critical security bug in DNS last year.
He didn't find it. He only publicized it. The guy who wrote djbdns found it years ago. PowerDNS was patched for the flaw a year and a half before Kaminsky published his article.
http://blog.netherlabs.nl/articles/2008/07/09/some-thoughts-on-the-recent-dn...
"However - the parties involved aren't to be lauded for their current fix. Far from it. It has been known since 1999 that all nameserver implementations were vulnerable for issues like the one we are facing now. In 1999, Dan J. Bernstein <http://cr.yp.to/djb.html> released his nameserver (djbdns <http://cr.yp.to/djbdns.html>), which already contained the countermeasures being rushed into service now. Let me repeat this. Wise people already saw this one coming 9 years ago, and had a fix in place."
Dan K. has never claimed to have "discovered" the vulnerability. As the article says, it's been known for years, and djb did suggest a means to MINIMIZE this vulnerability.
There is NO fix. There never will be as the problem is architectural to the most fundamental operation of DNS. Other than replacing DNS (not feasible), the only way to prevent this form of attack is DNSSEC. The "fix" only makes it much harder to exploit.
What Dan K. did was to discover a very clever way to exploit the design flaw in DNS that allowed the attack. What had been a known problem that was not believed to be generally exploitable became a real threat to the Internet. Suddenly people realized that an attack of this sort was not only possible, but quick and easy (relatively). Dan K. did what a security professional should do...he talked to the folks who were responsible for most DNS implementations that did caching and a work-around was developed before the attack mechanism was made public.
He was given credit for finding the attack method, but the press seemed to get it wrong (as they often do) and lots of stories credited him with finding the vulnerability.
By the way, I know that Paul Vixie noted this vulnerability quite some years ago, but I don't know if his report was before or after djb's.
Now, rather than argue about the history of this problem (non-operational), can we stick to operational issues like implementing DNSSEC to really fix it (operational)? Is your DNS data signed? (No, mine is not and probably won't be for another week or two.)
--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
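The entropy argument in this exchange can be put in rough numbers. A resolver that fixes its source port leaves an off-path attacker guessing only the 16-bit query ID; randomizing the source port as well multiplies the search space by the ephemeral-port range. A back-of-envelope sketch (the packet count of 10,000 and the ~63,000-port ephemeral range are illustrative assumptions, not measurements of any particular resolver):

```python
import math

QID_SPACE = 2**16     # 16-bit DNS query ID
PORT_SPACE = 64512    # roughly the ephemeral ports 1024-65535

def spoof_success_probability(forged_packets, search_space):
    """Chance that at least one of `forged_packets` blind guesses hits
    the (QID, port) pair the resolver is actually waiting on."""
    return 1 - (1 - 1 / search_space) ** forged_packets

# Fixed source port: only the QID must be guessed.
p_qid_only = spoof_success_probability(10_000, QID_SPACE)
# Randomized source port: QID and port must both be guessed.
p_qid_port = spoof_success_probability(10_000, QID_SPACE * PORT_SPACE)

print(f"~{math.log2(QID_SPACE):.0f} bits vs "
      f"~{math.log2(QID_SPACE * PORT_SPACE):.0f} bits of entropy")
print(f"10k forged packets: {p_qid_only:.3f} vs {p_qid_port:.8f}")
```

The jump from ~16 to ~32 bits turns a burst of forged packets from a real chance of success into a negligible one per query window, which is Patrick's point: port randomization and DNSSEC both raise the attacker's cost, just by very different amounts and with very different guarantees.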
In a message written on Tue, Aug 04, 2009 at 11:32:46AM -0700, Kevin Oberman wrote:
There is NO fix. There never will be as the problem is architectural to the most fundamental operation of DNS. Other than replacing DNS (not feasible), the only way to prevent this form of attack is DNSSEC. The "fix" only makes it much harder to exploit.
I don't understand why replacing DNS is "not feasible". -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
* Leo Bicknell:
In a message written on Tue, Aug 04, 2009 at 11:32:46AM -0700, Kevin Oberman wrote:
There is NO fix. There never will be as the problem is architectural to the most fundamental operation of DNS. Other than replacing DNS (not feasible), the only way to prevent this form of attack is DNSSEC. The "fix" only makes it much harder to exploit.
I don't understand why replacing DNS is "not feasible".
Replacing the namespace is not feasible because any newcomer will lack the liability shield ICANN, root operators, TLD registries, and registrars have established for the Internet DNS root, so it will never get beyond the stage of hashing out the legal issues. We might have an alternative one day, but it's going to happen by accident, through generalization of an internal naming service employed by a widely-used application. There are several successful application-specific naming services which are independent of DNS, but all the attempts at replacing DNS as a general-purpose naming service have failed. The transport protocol is a separate issue. It is feasible to change it, but the IETF has a special working group which is currently tasked to prevent any such changes. -- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
On Aug 5, 2009, at 9:32 PM, Florian Weimer wrote:
We might have an alternative one day, but it's going to happen by accident, through generalization of an internal naming service employed by a widely-used application.
Or even more likely, IMHO, that more and more applications will have their own naming services which will gradually reduce the perceived need for a general-purpose system - i.e., the centrality of DNS won't be subsumed into any single system (remember X.500?), but, rather, by a multiplicity of systems. [Note that I'm not advocating this particular approach; I just think it's the most likely scenario.] Compression/conflation of the transport stack will likely be both a driver and an effect of this trend, over time. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Unfortunately, inefficiency scales really well. -- Kevin Lawton
In message <825C8AC7-C01E-4934-92FD-E7B9E8091A3A@arbor.net>, Roland Dobbins writes:
On Aug 5, 2009, at 9:32 PM, Florian Weimer wrote:
We might have an alternative one day, but it's going to happen by accident, through generalization of an internal naming service employed by a widely-used application.
Or even more likely, IMHO, that more and more applications will have their own naming services which will gradually reduce the perceived need for a general-purpose system - i.e., the centrality of DNS won't be subsumed into any single system (remember X.500?), but, rather, by a multiplicity of systems.
Been there, done that, doesn't work well. For all its shortcomings, the DNS and the single namespace it brings is much better than having a multitude of namespaces. Yes, I've had to work with a multitude of namespaces and had to map between them. Ugly.
[Note that I'm not advocating this particular approach; I just think it's the most likely scenario.]
Compression/conflation of the transport stack will likely be both a driver and an effect of this trend, over time.
----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
Unfortunately, inefficiency scales really well.
-- Kevin Lawton
-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Aug 5, 2009, at 10:11 PM, Mark Andrews wrote:
For all its shortcomings, the DNS and the single namespace it brings is much better than having a multitude of namespaces.
I agree with you, but I don't think this approach is going to persist as the standard model. Increasingly, transport and what we now call layer-7 are going to become conflated (we already see all these Rube Goldberg-type mechanisms to try and accomplish this OOB now, with predictable results), and that's going to lead to APIs/data types embedding this information, IMHO. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Unfortunately, inefficiency scales really well. -- Kevin Lawton
Multiple systems end up with problems. Even standard DNS blows up when some company (Apple) decides that an extension (.local) should not be forwarded to the DNS servers on some device (iPhone) because their service (Bonjour) uses it. Thanks, Erik -----Original Message----- From: Roland Dobbins [mailto:rdobbins@arbor.net] Sent: Wednesday, August 05, 2009 10:44 AM To: NANOG list Subject: DNS alternatives (was Re: Dan Kaminsky) On Aug 5, 2009, at 9:32 PM, Florian Weimer wrote:
We might have an alternative one day, but it's going to happen by accident, through generalization of an internal naming service employed by a widely-used application.
Or even more likely, IMHO, that more and more applications will have their own naming services which will gradually reduce the perceived need for a general-purpose system - i.e., the centrality of DNS won't be subsumed into any single system (remember X.500?), but, rather, by a multiplicity of systems. [Note that I'm not advocating this particular approach; I just think it's the most likely scenario.] Compression/conflation of the transport stack will likely be both a driver and an effect of this trend, over time. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Unfortunately, inefficiency scales really well. -- Kevin Lawton
On Aug 5, 2009, at 10:20 PM, Erik Soosalu wrote:
Multiple systems end up with problems.
Yes, and again, I'm not advocating this approach. I just think it's most likely where we're going to end up, long-term. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Unfortunately, inefficiency scales really well. -- Kevin Lawton
In a message written on Wed, Aug 05, 2009 at 02:32:27PM +0000, Florian Weimer wrote:
The transport protocol is a separate issue. It is feasible to change it, but the IETF has a special working group which is currently tasked to prevent any such changes.
My interest was in replacing the protocol. I've grown fond of the name space, for all of its warts. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
My interest was in replacing the protocol. I've grown fond of the name space, for all of its warts.
As we evolved from circuit switching to packet switching, which many at the time said it would never work, and from the HOSTS.TXT to DNS, sooner or later the “naming scheme” for resources on the net will imho in the future evolve to something better and different from DNS. No doubt for more than 25 years DNS has provided a great service, and it had many challenges and will continue for some time to do so. But DNS from being a simple way to provide name resolution evolved to something more complex, and also degenerated into a protocol/service that created a new industry when a monetary value was stuck to particular sequences of characters that require to be globally unique and the base to construct a URL. At some time in the future and when a new paradigm for the user interface is conceived, we may not longer have the end user “typing” a URL, the DNS or something similar will still be in the background providing name to address mapping but there will be no more monetary value associated with it or that value will be transferred to something else. It may sound too futuristic and inspired from science fiction, but I never saw Captain Piccard typing a URL on the Enterprise. Sooner or later, we or the new generation of ietfers and nanogers, will need to start thinking about a new naming paradigm and design the services and protocols associated with it. The key question is, when we start? Meanwhile we have to live with what we have and try to improve it as much as we can. My .02
Jorge Amodio (jmamodio) writes:
It may sound too futuristic and inspired by science fiction, but I never saw Captain Picard typing a URL on the Enterprise.
That's ok, I've never seen the Enterprise at the airport.
Sooner or later, we or the new generation of ietfers and nanogers, will need to start thinking about a new naming paradigm and design the services and protocols associated with it.
The key question is, when we start?
Let's see how far the SMTP replacement has come, and get some inspiration. Heck, it's an application that only _uses_ the DNS, should be easy. -- "Hey kid, go scan a /48"
Once upon a time, Phil Regnauld <regnauld@catpipe.net> said:
Jorge Amodio (jmamodio) writes:
It may sound too futuristic and inspired by science fiction, but I never saw Captain Picard typing a URL on the Enterprise.
That's ok, I've never seen the Enterprise at the airport.
I have, but not that Enterprise (I saw the space shuttle orbiter Enterprise on a 747 land here).
Let's see how far the SMTP replacement has come, and get some inspiration. Heck, it's an application that only _uses_ the DNS, should be easy.
There's always somebody looking to re-invent the wheel, but usually they are startups looking to make a quick buck by patenting and licensing their technology that "will be the savior of the Internet" (and so they don't get far). -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
It may sound too futuristic and inspired by science fiction, but I never saw Captain Picard typing a URL on the Enterprise.
That's ok, I've never seen the Enterprise at the airport.
Don't confuse sight with vision.
Sooner or later, we or the new generation of ietfers and nanogers, will need to start thinking about a new naming paradigm and design the services and protocols associated with it.
The key question is, when we start?
Let's see how far the SMTP replacement has come, and get some inspiration. Heck, it's an application that only _uses_ the DNS, should be easy.
It won't be quick, it won't be easy, and you will have to deal with the establishment that at all cost will keep trying to squeeze money out of the current system. Cheers
On Aug 5, 2009, at 1:30 PM, Jorge Amodio wrote:
It may sound too futuristic and inspired by science fiction, but I never saw Captain Picard typing a URL on the Enterprise.
That's ok, I've never seen the Enterprise at the airport.
Go to Dulles Airport. She used to sit out by the runway for a long time; now she is at the Udvar-Hazy Center there. Don't know what this has to do with URIs, though. Marshall
Don't confuse sight with vision.
Sooner or later, we or the new generation of ietfers and nanogers, will need to start thinking about a new naming paradigm and design the services and protocols associated with it.
The key question is, when we start?
Let's see how far the SMTP replacement has come, and get some inspiration. Heck, it's an application that only _uses_ the DNS, should be easy.
It won't be quick, it won't be easy, and you will have to deal with the establishment that at all cost will keep trying to squeeze money out of the current system.
Cheers
On Wed, Aug 5, 2009 at 12:49 PM, Jorge Amodio <jmamodio@gmail.com> wrote:
At some time in the future, when a new paradigm for the user interface is conceived, we may no longer have the end user "typing" a URL; the DNS or something similar will still be in the background providing name-to-address mapping, but there will be no more monetary value associated with it, or that value will be transferred to something else.
We're already there. It's called "Google". In the vast majority of cases I have seen, people don't type domain names; they search the web. When they do type a domain name, they usually type it into the Google search box. (Alternatively, they type everything into the browser's "address bar", which is really a "search-the-web bar" in most browsers.) (Replace "Google" with search engine of your choice.) -- Ben
On Aug 5, 2009, at 6:26 PM, Ben Scott wrote:
On Wed, Aug 5, 2009 at 12:49 PM, Jorge Amodio <jmamodio@gmail.com> wrote:
At some time in the future, when a new paradigm for the user interface is conceived, we may no longer have the end user “typing” a URL. The DNS, or something similar, will still be in the background providing name-to-address mapping, but there will be no more monetary value associated with it, or that value will be transferred to something else.
We're already there. It's called "Google".
In the vast majority of cases I have seen, people don't type domain names, they search the web. When they do type a domain name, they usually type it into the Google search box.
Partially true for web access, very rarely true for email. I type in email domains much more often than I do web domains. And now email addresses are becoming URIs for logins, SIP calling, video conferencing, etc. It's also interesting how in some ways Twitter and its relatives have been taking URLs backwards. If you type in http://www.americafree.tv you may have some idea what you are getting, but if you type in http://bit.ly/w5aM4 you have none. (These two URLs go, or at least they should go, to the same place. Who knows if that will be true in a year, or 5, or 10.) Here is a place, IMO, where a better UI and URI philosophy would really help. Regards Marshall
(Alternatively, they type everything into the browser's "address bar", which is really a "search-the-web bar" in most browsers.)
(Replace "Google" with search engine of your choice.)
-- Ben
Once upon a time, Ben Scott <mailvortex@gmail.com> said:
In the vast majority of cases I have seen, people don't type domain names, they search the web. When they do type a domain name, they usually type it into the Google search box.
Web != Internet. DNS is used for much more than web sites, and many of those things are not in a public index. For example, most people type in their friends' email addresses (at least into an address book). -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
On Wed, Aug 5, 2009 at 6:37 PM, Chris Adams<cmadams@hiwaay.net> wrote:
... we may no longer have the end user “typing” a URL, the DNS or something similar will still be in the background providing name to address mapping ...
In the vast majority of cases I have seen, people don't type domain names, they search the web. When they do type a domain name, they usually type it into the Google search box.
Web != Internet.
(Web != Internet) != the_point

Most people don't type email addresses, either. They pick from their address book. Their address book knows the address because it auto-learned it from a previously received email. If their email program doesn't do that, they find an old email and hit "Reply". (You laugh, but even in my small experience, I've seen plenty of clusers who rarely originate an email. They reply to *everything*. You have to email them once for them to email you. It's always neat to get a message in my inbox that's a reply to a message from three years ago. But I digress.)

User IDs on Facebook, Twitter, et al., aren't email addresses, they're user IDs. They just happen to look just like email addresses, because nobody's come up with a better system yet. The main reason those services ask for the user's email address for an ID is that it makes the "I forgot my user ID" support cases easier. (Note that it doesn't eliminate them. Some people still don't know their Facebook user ID until you tell them it's their email address. Then they ask what their email address is...)

Web browsers already automatically fill in one's email address if you let them. One of these days Microsoft or Mozilla or whoever will come up with a method to make the automation more seamless, and people will probably stop knowing their own email address. To do the initial exchange for a new person, they'll use Facebook. Or whatever.

Paper advertisements: What's easier? (1) Publishing a URL in a print ad, and expecting people to remember it and type it correctly. (2) Saying "type our name into $SERVICE", where $SERVICE is some popular website that most people trust (like Facebook or whatever), and has come up with a workable system for disambiguation.

You get the picture. Follow the trend. The systems aren't done evolving into being yet, but the avalanche has definitely started. It's too late for the pebbles to vote.
As the person I was replying to said, DNS is unlikely to go away, but I'll lay good money that some day most people won't even know domain names exist, any more than they know IP addresses do. -- Ben <google!gmail!mailvortex>
(2) Saying "type our name into $SERVICE", where $SERVICE is some popular website that most people trust (like Facebook or whatever), and has come up with a workable system for disambiguation.
You might want to talk to AOL about that. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
(2) Saying "type our name into $SERVICE", where $SERVICE is some popular website that most people trust (like Facebook or whatever), and has come up with a workable system for disambiguation.
I can only hope that those who believe in "disambiguation" of mailing addresses, electronic or otherwise, will be sorely disappointed. Return to sender is the only safe course. [No, I am not an architect, regardless of what Google says.] And, I really want whitehouse.gov, not whitehouse.com, even though you could only guess my reason. The design intent of the DNS namespace and structure is to provide unambiguous location of domain name related data. The implementation of that structure and lookup mechanisms may be flawed, but we should fix the flaws, not discard the namespace. This discussion has wandered widely. Perhaps we should revert to discussing how to operate our networks to provide secure connectivity, including secure determination of connection targets. James R. Cutler james.cutler@consultant.com
On Wed, Aug 5, 2009 at 8:40 PM, James R. Cutler<james.cutler@consultant.com> wrote:
(2) Saying "type our name into $SERVICE", where $SERVICE is some popular website that most people trust (like Facebook or whatever), and has come up with a workable system for disambiguation.
I can only hope that those who believe in "disambiguation" of mailing addresses, electronic or otherwise, will be sorely disappointed.
I'm not suggesting the elimination of email addresses any more than I'm suggesting the elimination of DNS. I'm asserting that people find it easier to use methods other than transcription to share email addresses, and that society gravitates towards such methods.
... And, I really want whitehouse.gov, not whitehouse.com, even though you could only guess my reason.
Thanks for proving my point for me. :)

Why do you think <whitehouse.com> exists and is what it is? Why is it such a well-known example? I'm guessing it's because people frequently end up at the wrong website.

Computers like using domain names, and are good at it. People don't, and aren't. It seems reasonable that computers will continue using domain names, while people continue to migrate to layered front-ends.

I'm honestly stumped why people are having such a hard time speaking to this. Instead I keep getting told that DNS is more precise than Google. (Or email addresses are more precise than human names. Whatever.) Have I ever said otherwise? Indeed, my whole premise *depends* on the fact that DNS is absolutely precise while people are generally rather imprecise in their communications.

The one counter-argument that actually speaks to my point -- that I can think of -- would be that computers are really lousy at deciphering human ambiguity. Which is true enough. But I think Google, Facebook, et al., have demonstrated that it's not impossible to program a computer to deal with human ambiguity, provided you have enough computrons and limit the scope.

Okay, one more counter-argument: To be useful, such services generally have to be popular. To become and remain popular, such services generally have to be widely available. Widely available services tend to get abused by spammers. Restricting service to block spammers is generally antithetical to making it widely available. Effective technological solutions are hard to find; political/economic solutions are expensive.

-- Ben
In message <59f980d60908051602y1fe364devfb5f590a8c7959dc@mail.gmail.com>, Ben Scott writes:
On Wed, Aug 5, 2009 at 6:37 PM, Chris Adams<cmadams@hiwaay.net> wrote:
... we may no longer have the end user “typing” a URL, the DNS or something similar will still be in the background providing name to address mapping ...
In the vast majority of cases I have seen, people don't type domain names, they search the web. When they do type a domain name, they usually type it into the Google search box.
Web != Internet.
(Web != Internet) != the_point
Most people don't type email addresses, either. They pick from their address book. Their address book knows the address because it auto-learned it from a previously received email. If their email program doesn't do that, they find an old email and hit "Reply". (You laugh, but even in my small experience, I've seen plenty of clusers who rarely originate an email. They reply to *everything*. You have to email them once for them to email you. It's always neat to get a message in my inbox that's a reply to a message from three years ago. But I digress.)
Which requires that people type addresses in to begin with. It's like those anti-spam procedures that require you to respond to a message saying you sent the email in order to let it through. It doesn't work if everyone, or even most, do it.
User IDs on Facebook, Twitter, et al., aren't email addresses, they're user IDs. They just happen to look just like email addresses, because nobody's come up with a better system yet. The main reason those services ask for the user's email address for an ID is that it makes the "I forgot my user ID" support cases easier. (Note that it doesn't eliminate them. Some people still don't know their Facebook user ID until you tell them it's their email address. Then they ask what their email address is...)
No, they make finding a unique ID easy by leveraging an existing globally unique system.
Web browsers already automatically fill-in one's email address if you let them.
Which you have typed into the web browser in the first place.
One of these days Microsoft or Mozilla or whoever will come up with a method to make the automation more seamless, and people will probably stop knowing their own email address. To do the initial exchange for a new person, they'll use Facebook. Or whatever.
Paper advertisements: What's easier? (1) Publishing a URL in a print ad, and expecting people to remember it and type it correctly. (2) Saying "type our name into $SERVICE", where $SERVICE is some popular website that most people trust (like Facebook or whatever), and has come up with a workable system for disambiguation.
(1), if you actually want people to get to you and not your competitor. There is a reason people put phone numbers in advertisements rather than say "look us up in the yellow/white pages".
You get the picture. Follow the trend. The systems aren't done evolving into being yet, but the avalanche has definitely started. It's too late for the pebbles to vote.
There is a difference between looking for a service and looking for a specific vendor of a service.
As the person I was replying to said, DNS is unlikely to go away, but I'll lay good money that some day most people won't even know domain names exist, any more than they know IP addresses do.
People may not know what a domain name is, but they will use them all the time even if they are not aware of it. Google, Twitter, Facebook, etc., all depend on a working DNS whether they make its use visible to the user or not. Mark
-- Ben <google!gmail!mailvortex> -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Wed, Aug 5, 2009 at 7:30 PM, Mark Andrews <marka@isc.org> wrote:
Which requires that people type addresses in in the first place.
As I wrote, we're already part of the way towards people not having to do even that.
No they make finding a unique id easy by leveraging a existing globally unique system.
That too. But if Facebook *becomes* the globally unique system...
Web browsers already automatically fill-in one's email address if you let them.
Which you have typed into the web browser in the first place.
Web browsers can get the user's email address from the OS/email program in many cases. The cases where that isn't working yet (e.g., Yahoo) are problems easily solved by technology. Sites and browsers already have a protocol for changing one's home page. "Would you like your email to be at 'Google'? [Yes] [No]"
1 if you actually want people to get to you and not your competitor.
And when people can't remember or mis-type the URL, you think they get the "right" site all the time?
There is a reason people put phone numbers in advertisments rather than say "look us up in the yellow/white pages".
If there were a better system, would they still print their phone numbers in advertisements? Of your associates, how many of their phone numbers do you know? How many does your phone dial for you? How often do you find yourself glad someone called you first, saving you the trouble of entering their phone number into your contacts manually? Now get the phone talking to PhoneFaceBook or whatever, so the "first call" problem is solved. Do you get to Google by typing "google.com" or "64.233.161.104"? If only the latter mechanism existed, would you be averse to adopting a better one?
There is a difference between looking for a service and looking for a specific vendor of a service.
Sure. There's a difference between looking for me and looking for all the other people named "Ben Scott", too. Yet Facebook has resulted in people I haven't talked to in 15 years finding me. Facebook solved the personal-name ambiguity problem for them. It seems reasonable to suppose that other ambiguity problems are solvable as well.

People used to copy HOSTS.TXT around until someone came up with a better scheme. /etc/hosts still exists and still comes in handy for some things.

People used to put great effort into maintaining giant bookmark files in web browsers. Sharing bookmark entries was a great way to improve one's web experience. These days, bookmark files are still used and still useful, but their necessity is very much diminished by improved web search engines and browser history.

Look for the trend in all the things I'm writing about. It's not about getting *rid* of domain names, or URLs, or email addresses, or IP addresses, or phone numbers. It's about people finding ways of *using* all those things without *knowing* them. Extrapolate from that trend.
As the person I was replying to said, DNS is unlikely to go away, but I'll lay good money that some day most people won't even know domain names exist, any more than they know IP addresses do.
People may not know what a domain name is but they will use them all the time even if they are not aware of it.
Isn't that what I *just* *wrote*? :-) Please try to understand my point, rather than setting out to defend the usefulness of DNS. I still run ISC BIND on all my servers if that makes you feel less defensive. ;-) -- Ben @ 209.85.221.52
-- Ben @ 209.85.221.52
Really?

farside.isc.org:marka {2} % telnet 209.85.221.52 25
Trying 209.85.221.52...
Connected to mail-qy0-f52.google.com.
Escape character is '^]'.
220 mx.google.com ESMTP 26si8920387qyk.119
helo farside.isc.org
250 mx.google.com at your service
mail from: <marka@isc.org>
250 2.1.0 OK 26si8920387qyk.119
rcpt to: <Ben@[209.85.221.52]>
550-5.1.1 The email account that you tried to reach does not exist. Please try
550-5.1.1 double-checking the recipient's email address for typos or
550-5.1.1 unnecessary spaces. Learn more at
550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 26si8920387qyk.119
quit
221 2.0.0 closing connection 26si8920387qyk.119
Connection closed by foreign host.
farside.isc.org:marka {3} %

-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
At some time in the future, when a new paradigm for the user interface is conceived, we may no longer have the end user “typing” a URL. The DNS, or something similar, will still be in the background providing name-to-address mapping, but there will be no more monetary value associated with it, or that value will be transferred to something else.
We're already there. It's called "Google".
Almost there. No wonder Microsoft does not want to brand "bing" as yet another "search engine" and is absorbing Yahoo's brainpower and employees, while Google is "waving" into something.

Talking about the subject with a friend during the past few days, most of the conversation ended up revolving around the user interface. When grandma wants to do on-line banking, she does not care about a cryptic URL, or whether it has a .bank gTLD, or whether by the magic of IDN she is able to write it in Polish; she just wants to get to her bank account. A UI that is application/context/content/user aware will help her when she just says "get me to my bank": grandma will spit on the spit pad for DNA authentication and the UI will magically get her there.

And if Mr. Dan wonders about hacking the system, I already have several vials of grandma spit in frozen storage :-) But, you get the point. Cheers
On Wed, Aug 5, 2009 at 7:06 PM, Jorge Amodio<jmamodio@gmail.com> wrote:
Talking about the subject with a friend during the past few days, most of the conversation ended being around the User Interface.
A popular idiom is "where the rubber meets the road". It comes from cars, of course. The contact patch between tire and surface. If those 100 square inches or so don't provide what's needed, nobody cares how elegant the rest of the car is. In information systems, the UI is where the rubber meets the road.
When grandma wants to do on-line banking she does not care about a cryptic URL, or if it has a .bank gTLD or if by the magic of IDN she is able to write it in polish, she just wants to get to her bank account ...
Exactly. I think it would be nice if we had some nicely designed, elegant, centralized protocol to do all this, but I suspect that won't happen. Instead I think we'll have a bunch of ad hoc solutions, and then ad hoc solutions that attempt to meta the ad hoc solutions. Someone's already suggested this in another thread. So someone will log into GMyLinkedBook or whatever, which then makes use of Facebook to find a friend, which then talks to AIM to contact them on their iPhone via some other damn thing. Yes, it'll be a mess. So's most of the rest of the world. Keeps us IT guys in work, I guess. -- Ben
I think it would be nice if we had some nicely designed, elegant, centralized protocol to do all this, but I suspect that won't happen.
s/centralized/distributed/
them on their iPhone via some other damn thing. Yes, it'll be a mess.
Have you seen the iPhone decoding bar codes into URLs?
Keeps us IT guys in work, I guess.
and the domainers/registr*s making money. Cheers
Have you seen the iPhone decoding bar codes into URLs?
doesn't the iphone have an app to decode qr-codes, similar to the one built into almost all keitai here in japan? http://en.wikipedia.org/wiki/QR_Code randy
doesn't the iphone have an app to decode qr-codes, similar to the one built into almost all keitai here in japan?
Yep. Called iMatrix. (There are probably others too)
doesn't the iphone have an app to decode qr-codes, similar to the one built into almost all keitai here in japan?
Yep. Called iMatrix. (There are probably others too)
Yes, that's one of the apps. Anyway, as you can see, this is just one example that a URL may not look prima facie like a construct based on an FQDN.

One issue on the UI side is that even with all the progress in graphics and visual representation, to enter a URL we are still using a TTY interface. There are folks (MSFT is investing a bunch of money in it) doing R&D on touch-sensitive data-entry devices/UIs that are context aware. You now have a little taste of that technology with the iPhone and all the new smartphones and other gadgets that implement the same idea. How many URLs do you type to get to YouTube on the iPhone/iPod touch? None.

The DNS (or whatever name we call it in the future) is not going away; it will go back to being what it was intended to be, and not just a giant global billboard where folks fight for space, and where some other folks -- even if they don't need it -- buy space hoping that someday somebody will want their space at any cost, making them rich. Cheers Jorge
On 7-Aug-09, at 8:01 PM, Randy Bush wrote:
Have you seen the iPhone decoding bar codes into URLs?
doesn't the iphone have an app to decode qr-codes, similar to the one built into almost all keitai here in japan?
There are multiple (5+ at last count) iPhone apps for QR codes, incl. NTT and KDDI/au variants. There are also similar apps for Android, and Symbian ships with one (though not field aware like NTT and KDDI/au variants) cheers, --dr -- World Security Pros. Cutting Edge Training, Tools, and Techniques Tokyo, Japan November 4/5 2009 http://pacsec.jp Vancouver, Canada March 22-26 http://cansecwest.com pgpkey http://dragos.com/ kyxpgp
Have you seen the iPhone decoding bar codes into URLs?
doesn't the iphone have an app to decode qr-codes, similar to the one built into almost all keitai here in japan?
Yes, it's not really new, but it can decode QR, DataMatrix (the same as used for postage), ShotCode, and bar codes. With the new camera some applications got much better. Regards Jorge
On 05/08/2009 15:18, Leo Bicknell wrote:
I don't understand why replacing DNS is "not feasible".
I'd be happy to think about replacing the DNS as soon as we've finished off migrating to an IPv6-only Internet in a year or two. Shall we set up a committee to try to make it happen faster? Nick
Curtis Maurand <cmaurand@xyonet.com> writes:
What does this have to do with NANOG? The guy found a critical security bug in DNS last year.
He didn't find it; he only publicized it. The guy who wrote djbdns found it years ago.
first blood on both the DNS TXID attack, and on what we now call the Kashpureff attack, goes to chris schuba who published in 1993: http://ftp.cerias.purdue.edu/pub/papers/christoph-schuba/schuba-DNS-msthesis...

i didn't pay any special heed to it since there was no way to get enough bites at the apple due to negative caching. when i saw djb's announcement (i think in 1999 or 2000, so, seven years after schuba's paper came out) i said, geez, that's a lot of code complexity and kernel overhead for a problem that can occur at most once per DNS TTL. and sure enough when we did finally put source port randomization into BIND it crashed a bunch of kernels and firewalls and NATs, and is still paying painful dividends for large ISPs who are now forced to implement it.

why forced? what was it about kaminsky's announcement that changed this from a once-per-TTL problem that didn't deserve this complex/costly solution into a once-per-packet problem that made the world sit up and care? if you don't know the answer off the top of your head, then maybe do some reading or ask somebody privately, rather than continuing to announce in public that bernstein's problem statement was the same as kaminsky's problem statement. and, always give credit to chris schuba, who got there first.
Powerdns was patched for the flaw a year and a half before Kaminsky published his article.
nevertheless bert was told about the problem and was given a lengthy window in which to test or improve his solutions for it. and i think openbsd may have had source port randomization first, since they do it in their kernel when you try to bind(2) to port 0. most kernels are still very predictable when they're assigning a UDP port to an outbound socket. -- Paul Vixie KI6YSY
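The once-per-TTL versus once-per-packet distinction above comes down to simple entropy arithmetic. A back-of-the-envelope sketch (the ~16 bits credited to source port randomization is a rough approximation of the usable ephemeral port range, not a measured figure):

```python
def expected_forgeries(entropy_bits: int) -> int:
    # Each off-path forged response matches the outstanding query with
    # probability 2**-entropy_bits, so on the order of 2**entropy_bits
    # packets are needed before one is accepted.
    return 2 ** entropy_bits

# 16-bit transaction ID alone: trivially brute-forceable. Pre-Kaminsky,
# an attacker got one race per TTL expiry; by querying random nonexistent
# names (e.g. abc123.example.com) the attacker races on every query,
# turning this into a once-per-packet problem.
txid_only = expected_forgeries(16)

# Adding ~16 bits of source port randomization pushes the expected work
# into the billions of packets, which is why it became mandatory despite
# the kernel/firewall/NAT pain described above.
txid_plus_port = expected_forgeries(16 + 16)

print(txid_only)       # 65536
print(txid_plus_port)  # 4294967296
```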
On Tue, Aug 4, 2009 at 9:25 PM, Paul Vixie<vixie@isc.org> wrote:
i didn't pay any special heed to it since there was no way to get enough bites at the apple due to negative caching. when i saw djb's announcement (i think in 1999 or 2000, so, seven years after schuba's paper came out) i said, geez, that's a lot of code complexity and kernel overhead for a problem that can occur at most once per DNS TTL. and sure enough when we
Even then it was worth it, and it was silly that the DNS community ignored him. Note that work on RFC 5452 started two years before Kaminsky's announcement.
Powerdns was patched for the flaw a year and a half before Kaminsky published his article.
nevertheless bert was told about the problem and was given a lengthy window in which to test or improve his solutions for it. and i think openbsd may
You told me about the problem so I would not accidentally reveal it in the process of working on and discussing my draft. You also told me you'd block progress of the draft until after the Kaminsky announcement. And given the tactics you employ on the IETF DNSEXT mailing list, I knew you'd succeed. Recall that the draft contained 'MUST' wording that would've made it embarrassing for BIND *not* to implement source port randomization.

I didn't have to make any changes to PowerDNS, as I was already aware of the danger of using a single source port.

In addition, remember the one famous successful attempt to spoof a source-port-randomizing nameserver, which took 10 hours and gigabit speeds? The same guy attempted this attack against PowerDNS, and failed for a simple (and accidental) reason. It turns out that PowerDNS query throttling and PowerDNS timeout caching make it very hard to find the sweet spot between generating enough queries to spoof a domain in a timely manner and not overloading the server or the network to the point that timeouts are generated, which leads to PowerDNS no longer sending out queries.

That does not mean that I think the DNS is 'safe' now. My other attempt to increase DNS security in a simple way ('EDNS PING') was blocked as effectively as the RFC 5452 drafts were, and I've given up on that route. See http://www.ops.ietf.org/lists/namedroppers/namedroppers.2009/msg00760.html

I'll be at HAR2009 next week, and I understand both Kaminsky and EDNS-PING co-author David Ulevitch will be there, which should be fun. I'll also be presenting on DNS security risks, which will cover the subjects above as well. Bert
Other than DNSSEC, I'm aware of these relatively simple hacks to add entropy to DNS queries:

1) Random query ID
2) Random source port
3) Random case in queries, e.g. GooGLe.CoM
4) Ask twice (with different values for the first three hacks) and compare the answers

I presume everyone is doing the first two. Any experience with the other two to report? R's, John
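Hack 3 above (random query-name case, often called "0x20 encoding") can be sketched in a few lines. This is a minimal illustration of the idea, not any particular resolver's implementation; note that it contributes one bit of entropy per ASCII letter, and therefore nothing for all-numeric names:

```python
import random

def encode_0x20(qname: str, rng: random.Random) -> str:
    # Randomize the case of each letter in the query name. A resolver
    # then checks that the answer echoes back the exact mixed-case name
    # it sent, since well-behaved servers preserve query-name case.
    return "".join(
        c.upper() if c.isalpha() and rng.random() < 0.5 else c.lower()
        for c in qname
    )

def extra_bits(qname: str) -> int:
    # Added entropy is simply the count of case-foldable letters.
    return sum(c.isalpha() for c in qname)

rng = random.Random()
print(encode_0x20("google.com", rng))  # e.g. gOoGLe.cOm (depends on RNG)
print(extra_bits("google.com"))        # 9 letters -> 9 extra bits
print(extra_bits("123456789."))        # 0 -- no help for numeric nonce names
```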
On Wed, Aug 5, 2009 at 6:48 PM, John Levine<johnl@iecc.com> wrote:
3) Random case in queries, e.g. GooGLe.CoM 4) Ask twice (with different values for the first three hacks) and compare the answers
I presume everyone is doing the first two. Any experience with the other two to report?
3 works, but offers zero protection against 'Kaminsky spoofing the root', since you can't fold the case of "123456789.". And the root is the goal.

4 breaks on Akamai and many other CDNs. Even 'ask thrice, and take the majority answer' doesn't work there.

5 is 'edns ping', but it was effectively blocked because people thought DNSSEC would be easier to do, or demanded that EDNS PING (http://edns-ping.org) would offer everything that DNSSEC offered. Bert
bert hubert (bert.hubert) writes:
5 is 'edns ping', but it was effectively blocked because people thought DNSSEC would be easier to do, or demanded that EDNS PING (http://edns-ping.org) would offer everything that DNSSEC offered.
I'm surprised you failed to mention http://dnscurve.org/crypto.html, which is always brought up, but never seems to solve the problems mentioned.
5 is 'edns ping', but it was effectively blocked because people thought DNSSEC would be easier to do, or demanded that EDNS PING (http://edns-ping.org) would offer everything that DNSSEC offered.
I'm surprised you failed to mention http://dnscurve.org/crypto.html, which is always brought up, but never seems to solve the problems mentioned.
dnscurve looks like a swell idea, but I wouldn't put it in the category of a hack as straightforward as the ones I listed. Also, at this point there appears to be neither code nor an implementable spec available since Dan is still fiddling with it. Regards, John Levine, johnl@iecc.com, Primary Perpetrator of "The Internet for Dummies", Information Superhighwayman wanna-be, http://www.johnlevine.com, ex-Mayor "More Wiener schnitzel, please", said Tom, revealingly.
On Wed, 5 Aug 2009 15:07:30 -0400 (EDT) "John R. Levine" <johnl@iecc.com> wrote:
5 is 'edns ping', but it was effectively blocked because people thought DNSSEC would be easier to do, or demanded that EDNS PING (http://edns-ping.org) would offer everything that DNSSEC offered.
I'm surprised you failed to mention http://dnscurve.org/crypto.html, which is always brought up, but never seems to solve the problems mentioned.
dnscurve looks like a swell idea, but I wouldn't put it in the category of a hack as straightforward as the ones I listed. Also, at this point there appears to be neither code nor an implementable spec available since Dan is still fiddling with it.
As I understand it, dnscurve protects transmissions, not objects. That's not the way DNS operates today, what with N levels of cache. It may or may not be better, but it's a much bigger delta to today's systems and practices than DNSSEC is. --Steve Bellovin, http://www.cs.columbia.edu/~smb
http://dnscurve.org/crypto.html, which is always brought up, but never seems to solve the problems mentioned.
As I understand it, dnscurve protects transmissions, not objects. That's not the way DNS operates today, what with N levels of cache. It may or may not be better, but it's a much bigger delta to today's systems and practices than DNSSEC is.
I took a closer look, and I have to say he did a really good job of integrating dnscurve into the way DNS works. Each request and response is protected by PKI, sort of like per-message TLS. The public key for a server is encoded into the server's name, and the one for a client is passed in the packet. The name server name trick gives much of the zone chaining you get from DNSSEC.

He doesn't say anything explicit about chained caches, but it seems pretty clear that you'd have to tell the client cache about the server cache's key at the same time you tell it the server cache's IP address.

It seems to me that the situation is no worse than DNSSEC, since in both cases the software at each hop needs to be aware of the security stuff, or you fall back to plain unsigned DNS. R's, John
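The key-in-the-name trick can be sketched roughly as follows. This is an illustration only: standard RFC 4648 base32 stands in for DNSCurve's own alphabet, and the `uz5` magic prefix and label layout are approximations of the published scheme, not an authoritative rendering:

```python
import base64
import os

def dnscurve_style_ns_name(pubkey: bytes, suffix: str) -> str:
    # A DNSCurve-aware client recognizes a magic prefix on the NS
    # hostname and treats the rest of the first label as the server's
    # Curve25519 public key. Unpadded lowercase RFC 4648 base32 is a
    # stand-in here for DNSCurve's own base32 variant.
    if len(pubkey) != 32:
        raise ValueError("Curve25519 public keys are 32 bytes")
    label = "uz5" + base64.b32encode(pubkey).decode().rstrip("=").lower()
    if len(label) > 63:
        raise ValueError("DNS labels are limited to 63 octets")
    return label + "." + suffix

name = dnscurve_style_ns_name(os.urandom(32), "ns.example.com")
print(name)  # e.g. uz5mfrgg...ns.example.com
```

The point of the encoding is that the delegation itself (the NS record) carries the key, so no separate key-distribution channel is needed for iterative resolution.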
In message <alpine.BSF.2.00.0908051952480.3301@simone.lan>, "John R. Levine" writes:
http://dnscurve.org/crypto.html, which is always brought up, but never seems to solve the problems mentioned.
As I understand it, dnscurve protects transmissions, not objects. That's not the way DNS operates today, what with N levels of cache. It may or may not be better, but it's a much bigger delta to today's systems and practices than DNSSEC is.
I took a closer look, and I have to say he did a really good job of integrating dnscurve into the way DNS works. Each request and response is protected by PKI, sort of like per message TLS. The public key for a server is encoded into the server's name, and the one for a client is passed in the packet. The name server name trick gives much of the zone chaining you get from DNSSEC.
He doesn't say anything explicit about chained caches, but it seems pretty clear that you'd have to tell the client cache about the server cache's key at the same time you tell it the server cache's IP address.
It seems to me that the situation is no worse than DNSSEC, since in both cases the software at each hop needs to be aware of the security stuff, or you fall back to plain unsigned DNS.
R's, John
The DNSCurve concept is fine; the implementation, however, is really poorly done. It also only works well for iterative resolvers. It doesn't work well for stub resolvers, nameservers that forward, etc., as one now has a key distribution problem. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Wed, Aug 05, 2009 at 09:17:01PM -0400, John R. Levine wrote:
...
It seems to me that the situation is no worse than DNSSEC, since in both cases the software at each hop needs to be aware of the security stuff, or you fall back to plain unsigned DNS.
I might misunderstand how dnscurve works, but it appears that dnscurve is far easier to deploy and get running. The issue is merely coverage: how much of DNS do you want to protect? This is analogous to SMTP security: the more MTAs that support TLS, the more the security of the system as a whole increases proportionally. Dnscurve appears to be another form of opportunistic encryption; the more servers that deploy dnscurve, the greater the accretion in security of DNS as a whole.
Of course, as long as an adversary in your packet path can force a seamless downgrade (e.g. to plain DNS or plain non-TLS SMTP), the hard security benefit is nowhere near as great as it's sometimes purported to be. And this is a problem that we'll be stuck living with for a very long time as far as I can tell.

- S

-----Original Message----- From: Naveen Nathan [mailto:naveen@calpop.com] Sent: Wednesday, August 05, 2009 7:05 PM To: John R. Levine Cc: nanog@nanog.org Subject: Re: dnscurve and DNS hardening, was Re: Dan Kaminsky
On Wed, Aug 5, 2009 at 10:05 PM, Naveen Nathan<naveen@calpop.com> wrote:
I might misunderstand how dnscurve works, but it appears that dnscurve is far easier to deploy and get running.
My understanding: They really do different things. They also have different behaviors.

DNSCurve aims to secure the transaction between resolver (client) and the nameserver it queries. It encrypts both question and answer. It authenticates that the answer came from the nameserver queried. DNSSEC authenticates resource records as coming from the "official" delegated zone.

Both allow the client to detect a forged answer, and allow the implementation to keep waiting for a non-forged answer. (DJB keeps saying that DNSSEC doesn't allow that, but I don't see why. An implementation of either could respond by giving up on the first invalid packet. Don't do that, then. Keep waiting, like DJB's DNSCurve implementation does.)

DNSCurve keeps a sniffer from monitoring DNS queries. DNSSEC leaves those open to any sniffer.

DNSSEC authenticates answers to the "zone owner". DNSCurve only protects the "local loop". If a cache is compromised and forged data inserted, a DNSSEC client can detect that; a DNSCurve client cannot. With DNSCurve, to protect against upstream attacks, every upstream cache must implement DNSCurve *and* be trusted by the end-user person. DNSSEC will work even if intermediate caches do not support DNSSEC.

Neither is an "easy" implementation; both require changes to DNS infrastructure at both client and server ends to be effective. DNSCurve requires more CPU power on nameservers (for the more extensive crypto); DNSSEC requires more memory (for the additional DNSSEC payload). DNSCurve requires nameservers to have a particular name. DNSSEC does not.

I'm told DNSCurve doesn't need any new record types, while DNSSEC does, and that can be a problem for firewalls and intermediate caches which assumed no new record types would ever be defined. There's lots of crappy implementations deployed in the world, so I have no idea how big a problem that might be.

I think that's most of it.
From a security perspective, I see DNSSEC as having the advantage if you're more worried about someone forging responses (since it authenticates the zone, and not the transaction), and DNSCurve as having the advantage if you're more worried about someone sniffing your DNS traffic.

From my chair: I really don't care if someone knows what DNS records I'm looking up. Almost certainly I'll only be looking up records to connect to the associated server. The sniffer can then just look at the IP address. I do care quite a bit if an Internet provider running the local cache starts lying to me about what domain names are what. Say, to redirect things to their "sponsored" sites instead.

YMMV. I reserve the right to be wrong.

-- Ben
Ben, Thanks for the cogent comparison between the two security systems for DNS.
DNSCurve requires more CPU power on nameservers (for the more extensive crypto); DNSSEC requires more memory (for the additional DNSSEC payload).
This is only true for the initial (Elliptic Curve) Diffie-Hellman exchange. A long-term secret key is computed, but I assume the lifetime is dependent on configuration or implementation. It seems DJB is not only advocating his elliptic curve cryptosystem, but also his own home-rolled symmetric cipher Salsa20, which is meant to be computationally cheaper than AES, in conjunction with Poly1305 for integrity/MAC. I'll assume the cipher used for the lasting secret keys is interchangeable. So after initial communication between two servers that can speak DNSCurve, future communication should be computationally cheaper by using long-term keys. - Naveen
* Naveen Nathan:
I'll assume the cipher used for the lasting secret keys is interchangeable.
Last time I checked, even the current cryptographic algorithms weren't specified. It's unlikely that there is an upgrade path (other than stuffing yet another magic label into your name server names). -- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
There are really two security problems here, which implies that two different methods might be necessary:

1) Authenticate the nameserver to the client (and so on up the chain to the root) in order to defeat the Kaminsky attack, man in the middle, IP-layer interference. (Are you who you say you are?)

2) Validate the information in the nameserver. (OK, so you're the nameserver; but who says www.google.com is 1.2.3.4?)

1) is the transport-layer problem; 2) is the dnssec/zone-signing problem.
On Thu, Aug 6, 2009 at 6:06 AM, Alexander Harrowell <a.harrowell@gmail.com> wrote:
1) Authenticate the nameserver to the client (and so on up the chain to the root) in order to defeat the Kaminsky attack, man in the middle, IP-layer interference. (Are you who you say you are?)
DNSSEC fans will be quick to point out that if everyone used DNSSEC, there would be no need to worry about Kaminsky attacks, etc. Nobody would bother with them since nobody would be vulnerable to them.

Of course, expecting universal deployment of *anything* is a bit silly, so I think worrying about the transport might have been a good idea, too. But then, the standard was written 15 or so years ago, when CPU power was more expensive. Plus there's generally not a lot of trust between DNS client and server anyway, so I'm not really sure it matters. (It's not like most ISPs issue PKI certificates to their customers.)

Something DNSSEC *can't* defend against is a simple DoS flood of bogus questions/answers. Of course, I don't really think DNSCurve can, either. Sure, it discards bogus packets, but it burns up a lot of CPU time doing so, so you're that much more vulnerable to a DoS flood. But then, given sufficient resources on the part of the attacker, there's really nothing anyone can do *locally* to defend against a DoS flood. Stuff enough data into *any* tube and it will clog.

-- Ben
On Wed, 5 Aug 2009, Naveen Nathan wrote:
I might misunderstand how dnscurve works, but it appears that dnscurve is far easier to deploy and get running.
Not really. There are multiple competing mature implementations of DNSSEC and you won't be in a network of 1 if you deploy it. Tony. -- f.anthony.n.finch <dot@dotat.at> http://dotat.at/ GERMAN BIGHT HUMBER: SOUTHWEST 5 TO 7. MODERATE OR ROUGH. SQUALLY SHOWERS. MODERATE OR GOOD.
On 8/5/09 7:05 PM, Naveen Nathan wrote:
On Wed, Aug 05, 2009 at 09:17:01PM -0400, John R. Levine wrote:
...
It seems to me that the situation is no worse than DNSSEC, since in both cases the software at each hop needs to be aware of the security stuff, or you fall back to plain unsigned DNS.
I might misunderstand how dnscurve works, but it appears that dnscurve is far easier to deploy and get running. The issue is merely coverage.
There might be issues related to intellectual property use. :^( -Doug
3 works, but offers zero protection against 'kaminsky spoofing the root' since you can't fold the case of "123456789.". And the root is the goal.
Good point. 5) Download your own copy of the root zone every few days from http://www.internic.net/domain/, check the signature if you can find the signing key for 289FE7AD, and use that rather than the public roots. 6) EDNS0 PING, if you think anyone else will implement it R's, John
On 8/5/09 9:48 AM, John Levine wrote:
Other than DNSSEC, I'm aware of these relatively simple hacks to add entropy to DNS queries.
1) Random query ID
2) Random source port
3) Random case in queries, e.g. GooGLe.CoM
4) Ask twice (with different values for the first three hacks) and compare the answers
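Hack 3, the random-case ("0x20") trick, can be sketched as follows. This is a minimal illustration, not taken from any particular resolver; the function names are made up.

```python
# A minimal sketch of hack 3: randomize the case of each letter in the
# query name, and accept an answer only if it echoes the exact same
# casing.  An off-path forger who can't see the query must guess one
# bit per letter.
import random

def randomize_case(name: str, rng: random.Random) -> str:
    """Flip each letter independently to upper or lower case."""
    return "".join(c.upper() if rng.getrandbits(1) else c.lower()
                   for c in name)

def answer_acceptable(query_name: str, echoed_name: str) -> bool:
    """Accept an answer only if it echoes the query's exact casing."""
    return query_name == echoed_name

rng = random.SystemRandom()
q = randomize_case("google.com", rng)   # e.g. "GooGLe.CoM"
assert answer_acceptable(q, q)          # honest server echoes our casing
```

Since legitimate servers echo the question section byte-for-byte, this costs nothing on the wire; it only adds entropy when the name actually contains letters.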
DNSSEC introduces vulnerabilities, such as reflected attacks and fragmentation related exploits that might poison glue, where perhaps asking twice might still be needed. Modern implementations use random 16 bit transaction IDs. Interposed NATs may impair effectiveness of random source ports. Use of random query cases may not offer an entropy increase in some instances. Asking twice, although doubling resource consumption and latency, offers an increase in entropy that works best when queried serially.

Establishing SCTP as a preferred DNS transport offers a safe harbor for major ISPs. SCTP protects against both spoofed and reflected attack. Use of persistent SCTP associations can provide lower latency than that found using TCP fallback, TCP only, or repeated queries. SCTP also better deals with attack related congestion.

Once UDP is impaired by EDNS0 response sizes that exceed reassembly resources, or are preemptively dropped as a result, TCP must then dramatically scale up to offer the resilience achieved by UDP anycast. In this scenario, SCTP offers several benefits. SCTP retains initialization state within cryptographically secured cookies, which provides significant protection against spoofed source resource exhaustion. By first exchanging cookies, the network extends server state storage. SCTP also better ensures against cache poisoning whether DNSSEC is used or not.

Having major providers support the SCTP option will mitigate disruptions caused by DNS DDoS attacks using less resources. SCTP will also encourage use of IPv6, and improve proper SOHO router support. When SCTP becomes used by HTTP, this further enhances DDoS resistance for even critical web related services as well.

-Doug
On Aug 6, 2009, at 1:12 AM, Douglas Otis wrote:
Having major providers support the SCTP option will mitigate disruptions caused by DNS DDoS attacks using less resources.
Can you elaborate on this (or are you referring to removing the spoofing vector?)? ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Unfortunately, inefficiency scales really well. -- Kevin Lawton
On 8/5/09 11:31 AM, Roland Dobbins wrote:
On Aug 6, 2009, at 1:12 AM, Douglas Otis wrote:
Having major providers support the SCTP option will mitigate disruptions caused by DNS DDoS attacks using less resources.
Can you elaborate on this (or are you referring to removing the spoofing vector?)?
SCTP is able to simultaneously exchange chunks (DNS messages) over an association. Initialization of associations can offer alternative servers for immediate fail-over, which might be seen as a means to arrange anycast-style redundancy. Unlike TCP, resource commitments are only retained within the cookies exchanged. This avoids consumption of resources for tracking transaction commitments for what might be spoofed sources. Confirmation of the small cookie also offers protection against reflected attacks by spoofed sources. In addition to source validation, the 32-bit verification tag and TSN would add a significant amount of entropy to the DNS transaction ID.

The SCTP stack is able to perform the housekeeping needed to allow associations to persist beyond a single transaction, and there would be no need to push partial packets, as there is with TCP.

-Doug
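The extra entropy Doug cites is easy to put rough numbers on. A back-of-the-envelope sketch, using the field widths named in the message; the TSN is left out of the total since it is only partly unpredictable:

```python
# Back-of-the-envelope: bits an off-path spoofer must guess per forged
# packet.  Field widths per the message above; the TSN would add more,
# but is only partly unpredictable, so it is omitted from the total.
TXID_BITS = 16                       # DNS transaction ID
VTAG_BITS = 32                       # SCTP verification tag
total_bits = TXID_BITS + VTAG_BITS   # 48 bits to guess blind
guess_probability = 2.0 ** -total_bits
```

Compare the roughly 30 bits (transaction ID plus randomized source port) that a hardened UDP resolver gets today.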
On Wed, Aug 5, 2009 at 5:24 PM, Douglas Otis<dotis@mail-abuse.org> wrote:
On 8/5/09 11:31 AM, Roland Dobbins wrote:
On Aug 6, 2009, at 1:12 AM, Douglas Otis wrote:
Having major providers support the SCTP option will mitigate disruptions caused by DNS DDoS attacks using less resources.
Can you elaborate on this (or are you referring to removing the spoofing vector?)?
SCTP is able to simultaneously exchange chunks (DNS messages) over an association. Initialization of associations can offer alternative servers for immediate fail-over, which might be seen as a means to arrange anycast-style redundancy. Unlike TCP, resource commitments are only retained within the cookies exchanged. This avoids consumption of resources for tracking transaction commitments for what might be spoofed sources. Confirmation of the small cookie also offers protection against reflected attacks by spoofed sources. In addition to source validation, the 32-bit verification tag and TSN would add a significant amount of entropy to the DNS transaction ID.
The SCTP stack is able to perform the housekeeping needed to allow associations to persist beyond a single transaction, and there would be no need to push partial packets, as there is with TCP.
and state-management seems like it won't be too much of a problem on that dns server... wait, yes it will.
On 8/5/09 2:49 PM, Christopher Morrow wrote:
and state-management seems like it won't be too much of a problem on that dns server... wait, yes it will.
DNSSEC UDP will likely become problematic. This might be due to reflected attacks, fragmentation related congestion, or packet loss. When it does, TCP fallback will be tried. TCP must retain state for every attempt to connect, and will require significantly greater resources for comparable levels of resilience. SCTP instead uses cryptographic cookies and the client to retain this state information. SCTP can bundle several transactions into a common association, which reduces overhead and latency compared against TCP. SCTP ensures against source spoofed reflected attacks or related resource exhaustion. TCP or UDP does not. Under load, SCTP can redirect services without using anycast. TCP cannot. -Doug
On Wed, Aug 5, 2009 at 6:53 PM, Douglas Otis<dotis@mail-abuse.org> wrote:
On 8/5/09 2:49 PM, Christopher Morrow wrote:
and state-management seems like it won't be too much of a problem on that dns server... wait, yes it will.
DNSSEC UDP will likely become problematic. This might be due to reflected attacks, fragmentation related congestion, or packet loss. When it does, TCP
because all of these problems aren't already problems today? how is dnssec adding to this? or is your premise that dnssec adds to it because it requires edns0 and larger responses?
fallback will be tried. TCP must retain state for every attempt to connect,
ask worldnic how well that works... edns0 exists (for at least) the sidestep of truncate and use tcp.
and will require significantly greater resources for comparable levels of resilience.
Do you really think that dns in the future is going to move to mostly TCP based transport? do you know what added latency that will be for all clients which switch? What about handling more stateful requests on what today are stateless systems? (f-root-style anycasted pods of authoritative resolvers)
SCTP instead uses cryptographic cookies and the client to retain this state information. SCTP can bundle several transactions into a common association, which reduces overhead and latency compared against TCP. SCTP
great... which internet scale applications use SCTP today? Which loadbalancers are prepared to deal with this 'new' requirement?
ensures against source spoofed reflected attacks or related resource exhaustion. TCP or UDP does not. Under load, SCTP can redirect services
how does SCTP ensure against spoofed or reflected attacks?
without using anycast. TCP cannot.
explain your assertions please... these seem like overly broad marketing slides which may be truthful in a corner-case but under wide deployment aren't going to work in this manner. -Chris
Christopher Morrow <morrowc.lists@gmail.com> writes:
how does SCTP ensure against spoofed or reflected attacks?
there is no server side protocol control block required in SCTP. someone sends you a "create association" request, you send back a "ok, here's your cookie" and you're done until/unless they come back and say "ok, here's my cookie, and here's my DNS request." so a spoofer doesn't get a cookie and a reflector doesn't burden a server any more than a ddos would do. because of the extra round trips nec'y to create an SCTP "association" (for which you can think, lightweight TCP-like session-like), it's going to be nec'y to leave associations in place between iterative caches and authority servers, and in place between stubs and iterative caches. however, because the state is mostly on the client side, a server with associations open to millions of clients at the same time is actually no big deal. -- Paul Vixie KI6YSY
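The stateless cookie exchange Paul describes can be sketched with a keyed MAC. A minimal illustration only: real SCTP cookies also carry the proposed association parameters, and the names here (make_cookie/check_cookie) are made up.

```python
# Sketch of a stateless cookie exchange: the server keeps no per-client
# state until the client proves it can receive at its source address by
# echoing back an HMAC-protected cookie.
import hmac, hashlib, os, struct, time

SECRET = os.urandom(32)    # server-side secret, rotated periodically
LIFETIME = 60              # seconds a cookie stays valid

def make_cookie(client_addr: str) -> bytes:
    """INIT-ACK step: bind the client's address and a timestamp, sign,
    send.  Nothing is stored server-side."""
    ts = struct.pack("!d", time.time())
    mac = hmac.new(SECRET, ts + client_addr.encode(), hashlib.sha256).digest()
    return ts + mac

def check_cookie(client_addr: str, cookie: bytes) -> bool:
    """COOKIE-ECHO step: recompute the MAC; only on success does the
    server allocate association state."""
    ts, mac = cookie[:8], cookie[8:]
    expect = hmac.new(SECRET, ts + client_addr.encode(), hashlib.sha256).digest()
    fresh = time.time() - struct.unpack("!d", ts)[0] < LIFETIME
    return fresh and hmac.compare_digest(mac, expect)

c = make_cookie("192.0.2.1")
assert check_cookie("192.0.2.1", c)         # real client echoes its cookie
assert not check_cookie("198.51.100.7", c)  # a spoofer can't reuse it
```

This is why a spoofer never gets past the cookie: the MAC is bound to the source address it was sent to, so an address you can't receive at yields a cookie you can never echo.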
* Paul Vixie:
there is no server side protocol control block required in SCTP.
SCTP needs per-peer state for congestion control and retransmission.
someone sends you a "create association" request, you send back a "ok, here's your cookie" and you're done until/unless they come back and say "ok, here's my cookie, and here's my DNS request." so a spoofer doesn't get a cookie and a reflector doesn't burden a server any more than a ddos would do.
This is a red herring. The TCP state issues are deeper and haven't got much to do with source address validation. The issues are mostly caused by how the BSD sockets API is designed. SCTP uses the same API model, and suffers from similar problems.
because of the extra round trips nec'y to create an SCTP "association" (for which you can think, lightweight TCP-like session-like), it's going to be nec'y to leave associations in place between iterative caches and authority servers, and in place between stubs and iterative caches.
This doesn't seem possible with current SCTP because the heartbeat rate quickly adds up and overloads servers further upstream. It also does not work on UNIX-like system where processes are short-lived and get a fresh stub resolver each time they are restarted. -- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
On Thu, 6 Aug 2009, Florian Weimer wrote:
This doesn't seem possible with current SCTP because the heartbeat rate quickly adds up and overloads servers further upstream. It also does not work on UNIX-like system where processes are short-lived and get a fresh stub resolver each time they are restarted.
Stubs on Unix systems can have long-lived processes that handle the actual lookups, the stub component in the process that calls into the resolver then accesses it via IPC. I.e. the NSCD style approach. regards, -- Paul Jakma paul@jakma.org Key ID: 64A2FF6A Fortune: As Zeus said to Narcissus, "Watch yourself."
On Thu, Aug 6, 2009 at 2:51 AM, Paul Vixie<vixie@isc.org> wrote:
Christopher Morrow <morrowc.lists@gmail.com> writes:
how does SCTP ensure against spoofed or reflected attacks?
there is no server side protocol control block required in SCTP. someone sends you a "create association" request, you send back a "ok, here's your cookie" and you're done until/unless they come back and say "ok, here's my cookie, and here's my DNS request." so a spoofer doesn't get a cookie and a reflector doesn't burden a server any more than a ddos would do.
awesome, how does that work with devices in the f-root-anycast design? (both local hosts in the rack and if I flip from rack to rack) If I send along a request to a host which I do not have an association created do I get a failure and then re-setup? (inducing further latency)
because of the extra round trips nec'y to create an SCTP "association" (for which you can think, lightweight TCP-like session-like), it's going to be nec'y to leave associations in place between iterative caches and authority servers, and in place between stubs and iterative caches. however, because the state is mostly on the client side, a server with associations open to millions of clients at the same time is actually no big deal.
See question above, as well as: "Do loadbalancers, or loadbalanced deployments, deal with this properly?" (loadbalancers like F5, citrix, radware, cisco, etc...) -Chris
note, i went off-topic in my previous note, and i'll be answering florian on namedroppers@ since it's not operational. chris's note was operational:
Date: Thu, 6 Aug 2009 10:18:11 -0400 From: Christopher Morrow <morrowc.lists@gmail.com>
awesome, how does that work with devices in the f-root-anycast design? (both local hosts in the rack and if I flip from rack to rack) If I send along a request to a host which I do not have an association created do I get a failure and then re-setup? (inducing further latency)
yes. so, association setup cost will occur once per route-change event. note that the f-root-anycast design already hashes by flow within a rack to keep TCP from failing, so the only route-change events of interest to this point are in wide area BGP.
...: "Do loadbalancers, or loadbalanced deployments, deal with this properly?" (loadbalancers like F5, citrix, radware, cisco, etc...)
as far as i know, no loadbalancer understands SCTP today. if they can be made to pass SCTP through unmodified and only do their enhanced L4 on UDP and TCP as they do now, all will be well. if not then a loadbalancer upgrade or removal will be nec'y for anyone who wants to deploy SCTP. it's interesting to me that existing deployments of L4-aware packet level devices can form a barrier to new kinds of L4. it's as if the internet is really just the web, and our networks are TCP/UDP networks not IP networks.
On Thu, Aug 06, 2009 at 03:16:25PM +0000, Paul Vixie wrote:
...: "Do loadbalancers, or loadbalanced deployments, deal with this properly?" (loadbalancers like F5, citrix, radware, cisco, etc...)
as far as i know, no loadbalancer understands SCTP today. if they can be made to pass SCTP through unmodified and only do their enhanced L4 on UDP and TCP as they do now, all will be well. if not then a loadbalancer upgrade or removal will be nec'y for anyone who wants to deploy SCTP.
F5 BIG-IP 10.0 has support for load balancing SCTP. I have not tested or implemented it. I do not know what feature parity exists with other protocols. But at least it's documented and supported. -- Ross Vandegrift ross@kallisti.us "If the fight gets hot, the songs get hotter. If the going gets tough, the songs get tougher." --Woody Guthrie
On Thu, Aug 6, 2009 at 11:16 AM, Paul Vixie<vixie@isc.org> wrote:
note, i went off-topic in my previous note, and i'll be answering florian on namedroppers@ since it's not operational. chris's note was operational:
Date: Thu, 6 Aug 2009 10:18:11 -0400 From: Christopher Morrow <morrowc.lists@gmail.com>
awesome, how does that work with devices in the f-root-anycast design? (both local hosts in the rack and if I flip from rack to rack) If I send along a request to a host which I do not have an association created do I get a failure and then re-setup? (inducing further latency)
yes. so, association setup cost will occur once per route-change event. note that the f-root-anycast design already hashes by flow within a rack
pulling something I didn't previously understand from an ongoing discussion on the LISP/v6ops mailing lists... most routers today only hash on tcp/udp, so sctp isn't going to hash in the same 'deterministic' manner; someone should probably test whether that is the case.
to keep TCP from failing, so the only route-change events of interest to this point are in wide area BGP.
right, and the (I think) K-root folks had a study showing <1% of sessions seemed to be failing in this manner? (nanog in Toronto I think?)
...: "Do loadbalancers, or loadbalanced deployments, deal with this properly?" (loadbalancers like F5, citrix, radware, cisco, etc...)
as far as i know, no loadbalancer understands SCTP today. if they can be made to pass SCTP through unmodified and only do their enhanced L4 on UDP and TCP as they do now, all will be well. if not then a loadbalancer upgrade or removal will be nec'y for anyone who wants to deploy SCTP.
it's interesting to me that existing deployments of L4-aware packet level devices can form a barrier to new kinds of L4. it's as if the internet is really just the web, and our networks are TCP/UDP networks not IP networks.
sadly, people have (and continue) to make simplifying assumptions while designing/deploying equipment. -Chris
On Thu, 06 Aug 2009 06:51:24 +0000 Paul Vixie <vixie@isc.org> wrote:
Christopher Morrow <morrowc.lists@gmail.com> writes:
how does SCTP ensure against spoofed or reflected attacks?
there is no server side protocol control block required in SCTP. someone sends you a "create association" request, you send back a "ok, here's your cookie" and you're done until/unless they come back and say "ok, here's my cookie, and here's my DNS request." so a spoofer doesn't get a cookie and a reflector doesn't burden a server any more than a ddos would do.
because of the extra round trips nec'y to create an SCTP "association" (for which you can think, lightweight TCP-like session-like), it's going to be nec'y to leave associations in place between iterative caches and authority servers, and in place between stubs and iterative caches. however, because the state is mostly on the client side, a server with associations open to millions of clients at the same time is actually no big deal.
Am I missing something? The SCTP cookie guards against the equivalent of SYN-flooding attacks. The problem with SCTP is normal operations. A UDP DNS query today takes a message and a reply, with no (kernel) state created on the server end. For SCTP, it takes two round trips to set up the connection -- which includes kernel state -- followed by the query and reply, and tear-down. I confess that I don't remember the SCTP state diagram; in TCP, the side that closes first can end up in FIN-WAIT2 state, which is stable. That is, suppose the server -- the loaded party -- tries to shed kernel state by closing its DNS socket first. If the client goes away or is otherwise uncooperative, the server will then end up in FIN-WAIT2, in which case kernel memory is consumed "forever" by conforming server TCPs. Does SCTP have that problem? --Steve Bellovin, http://www.cs.columbia.edu/~smb
This was responded to on the DNSEXT mailing list. Sorry, but your question was accidentally attributed to Paul who forwarded the message. DNSEXT Archive: http://ops.ietf.org/lists/namedroppers/ -Doug
* Douglas Otis:
DNSSEC UDP will likely become problematic. This might be due to reflected attacks,
SCTP does not stop reflective attacks at the DNS level. To deal with this issue, you need DNSSEC's denial of existence. The DNSSEC specs currently don't allow you to stop these attacks dead in your resolver, but the data is already there. -- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
At 15:53 -0700 8/5/09, Douglas Otis wrote:
DNSSEC UDP will likely become problematic.
dotORG (.org) is DNSSEC signed now. nanog.org is DNSSEC signed now. Still getting mail on the list saying "DNSSEC UDP will be a problem"... (from some commercial's punch line) ...priceless Continuing,
This might be due to reflected attacks, fragmentation related congestion, or packet loss.
The same issues (related to the size of DNSSEC answers) are also true for the size of IPv6 answers (AAAA RR) and the size of ENUM (NAPTR RR) answers. I.e., the perceived issues related to stuffing data into larger (than 512B) datagrams aren't unique to DNSSEC. So, if you are paranoid about DNSSEC now, don't worry, there's more to be paranoid about around the corner. -- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Edward Lewis NeuStar You can leave a voice message at +1-571-434-5468 As with IPv6, the problem with the deployment of frictionless surfaces is that they're not getting traction.
* Douglas Otis:
Establishing SCTP as a preferred DNS transport offers a safe harbor for major ISPs.
SCTP is not a suitable transport for DNS, for several reasons:

Existing SCTP stacks are not particularly robust (far less so than TCP); the number of bugs still found in them is rather large. Only very few stacks (if any) implement operation without kernel buffers; the remaining ones are subject to the same state-exhaustion attacks as TCP stacks are.

At least some parts of SCTP and the SCTP API were designed for a cooperative environment. The SCTP API specification is very ambiguous, which is quite strange for such a young protocol. For instance, it is not clear whether head-of-line blocking can occur when a single socket is used to communicate with multiple peers.

The protocol has insufficient signalling to ensure that implementations turn off features which are harmful on a global scale. For instance, persistent authoritative <-> resolver connections only work if you switch off heartbeat, but the protocol cannot do this, and it is likely that many peers won't do it.

SCTP proposers generally counter these observations by referring to extensions and protocols which are not yet standardized, not implemented, or both, constantly moving the goalposts.

-- Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
* John Levine:
3) Random case in queries, e.g. GooGLe.CoM
This does not work well without additional changes because google.com can be spoofed with responses to 123352123.com (or even 123352123.). Unbound strives to implement the necessary changes, some of which are also required if you want to use DNSSEC to compensate for lack of channel security. As far as I know (and Paul will certainly correct me), the necessary changes are not present in current BIND releases.
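Florian's point can be quantified: the 0x20 hack adds one bit per letter, so an all-digit name contributes nothing. An illustrative helper, not from any actual resolver:

```python
# 0x20 entropy is one bit per *letter*; digits and dots contribute
# nothing, which is why a name like "123352123." defeats the hack
# entirely.
def extra_entropy_bits(name: str) -> int:
    return sum(1 for c in name if c.isalpha())

assert extra_entropy_bits("google.com") == 9   # g,o,o,g,l,e,c,o,m
assert extra_entropy_bits("123352123.") == 0   # nothing to randomize
```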
4) Ask twice (with different values for the first three hacks) and compare the answers
There is a protocol proposal to cope with fluctuating data, but I'm not aware that anyone has expressed interest in implementing it. Basically, the idea is to reduce caching for such data, so that successful spoofing attacks have less amplification effect.
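The ask-twice idea can be sketched as follows (the `resolve` callable is hypothetical, standing in for a real lookup; this is not any shipping resolver's code): issue the query several times and trust the answer only if every response agrees.

```python
def ask_twice(resolve, qname, attempts=2):
    """Query several times and require all answer sets to agree.

    `resolve` is a hypothetical callable mapping a name to an iterable
    of records; a real resolver would also vary the query ID, source
    port, and 0x20 case pattern between attempts.
    """
    answers = [frozenset(resolve(qname)) for _ in range(attempts)]
    if all(a == answers[0] for a in answers):
        return set(answers[0])
    raise RuntimeError("inconsistent answers for %s; possible spoofing" % qname)

# Stub resolver standing in for a real DNS lookup:
def stub_resolve(qname):
    return ["192.0.2.1", "192.0.2.2"]

assert ask_twice(stub_resolve, "example.com") == {"192.0.2.1", "192.0.2.2"}
```

This also makes the fluctuating-data problem concrete: a name served round-robin or geo-targeted legitimately returns different answers on consecutive queries and would trip the inconsistency check, which is what the caching-reduction proposal tries to accommodate.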
I presume everyone is doing the first two. Any experience with the other two to report?
0x20 has alleged interoperability issues. It's also not such a simple upgrade as was initially thought, so the trade-off is rather poor for existing resolver code bases.

-- 
Florian Weimer <fweimer@bfk.de>
BFK edv-consulting GmbH
http://www.bfk.de/
Kriegsstraße 100, D-76133 Karlsruhe
tel: +49-721-96201-1, fax: +49-721-96201-99
On Wed, 29 Jul 2009 22:53:39 BST, "andrew.wallace" said:
The hackers criticized Mitnick and Kaminsky for using insecure blogging and hosting services to publish their sites, which allowed the hackers to gain easy access to their data.
*yawn*. kiddies whack low-value sites, death of Internet predicted. Film at 11. What Mitnick and Kaminsky realize, and most NANOGers hopefully do too, is that security comes with costs, and a cost-benefit analysis is in order. Mitnick came out and *said* that he knew the site was insecure, but since no sensitive data was on there, it didn't matter. Presumably the site's monthly cost, convenience, user-interface, and so on, outweigh the effort of occasionally having to recover after some idiot whizzes all over the site. Now, if they had managed to whack a site that Mitnick and Kaminsky *cared* about, it would be a different story...
Valdis.Kletnieks@vt.edu wrote:
... Mitnick came out and *said* that he knew the site was insecure, but since no sensitive data was on there, it didn't matter. Presumably the site's monthly cost, convenience, user-interface, and so on, outweigh the effort of occasionally having to recover after some idiot whizzes all over the site.
Now, if they had managed to whack a site that Mitnick and Kaminsky *cared* about, it would be a different story...
Remembering those ancient days, it always seemed to me that this was Mitnick's usual series of excuses (as in: he was a scapegoat, nobody was physically hurt, their cleanup cost estimates were inflated, et cetera ad nauseam). This just seems like more of the same.

I'm not a big fan of throw-them-in-prison-and-throw-away-the-key, but the fact that his prison sentences (plural) and restitution were so lenient is certainly a factor in the difficulty of convincing LE to take investigation and prosecution seriously. Security consultants who don't practice secure computing on their own sites aren't much more than flacks for hire.

http://antilimit.net/zf05.txt

Anyway, most of the reading was pretty boring and badly formatted, but it still put a bit of a knot in my intestines.... Are we paying enough attention to securing our systems?
William Allen Simpson <william.allen.simpson@gmail.com> writes:
Are we paying enough attention to securing our systems?
almost certainly not. skimming RFC 2196 again just now i find three things:

1. it's out of date and needs a refresh -- yo barb!
2. i'm not doing about half of what it recommends
3. my users complain bitterly about the other half

in terms of cost:benefit, it's more and more the case that outsourcing looks cheaper than doing the job correctly in-house. not because outsourcing *is* more secure but because it gives the user somebody to sue rather than fire, where a lawsuit could recover some losses and firing someone usually won't.

digital security is getting a lot of investor attention right now. i wonder if this will ever consolidate or if pandora's box is just broken for all time.

-- 
Paul Vixie
KI6YSY
Paul Vixie wrote:
digital security is getting a lot of investor attention right now. i wonder if this will ever consolidate or if pandora's box is just broken for all time.
It'll consolidate to the point where probabilities and probably costs can be accurately assessed, at which point it can be insured, and that's where it'll level off.
participants (41)

- Alexander Harrowell
- Andrew D Kirch
- andrew.wallace
- Ben Scott
- bert hubert
- Buhrmaster, Gary
- Chris Adams
- Christopher Morrow
- Cord MacLeod
- Curtis Maurand
- Dave Israel
- Douglas Otis
- Dragos Ruiu
- Edward Lewis
- Erik Soosalu
- Florian Weimer
- James R. Cutler
- Joe Greco
- John Levine
- John R. Levine
- Jorge Amodio
- Kevin Oberman
- Leo Bicknell
- Mark Andrews
- Marshall Eubanks
- Mikael Abrahamsson
- Naveen Nathan
- Nick Hilliard
- Patrick W. Gilmore
- Paul Jakma
- Paul Vixie
- Phil Regnauld
- Randy Bush
- Richard A Steenbergen
- Roland Dobbins
- Ross Vandegrift
- Skywing
- Steven M. Bellovin
- Tony Finch
- Valdis.Kletnieks@vt.edu
- William Allen Simpson