re: Paul Vixie: Re: [dns-operations] DNS issue accidentally leaked?
this is for whoever said "it's just a brute force attack" and/or "it's the same attack that's been described before". maybe it goes double if that person is also the one who said "my knowledge in this area is out of date". grrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr.
downplay this all you want, we can infect a name server in 11 seconds now, which was never true before. i've been tracking this area since 1995. don't try to tell me, or anybody, that dan's work isn't absolutely groundbreaking.
i am sick and bloody tired of hearing from the people who aren't impressed.
Well, Paul, I'm not *too* impressed, and so far, I'm not seeing what is groundbreaking, except that threats discussed long ago have become more practical due to the growth of network and processing speeds, which was a hazard that ... was actually ALSO predicted. And you know what? I'll give you the *NEXT* evolutionary steps, which could make you want to cry.

If the old code could result in an infected name server in 11 seconds, this "fix" looks to me to be at best a dangerous and risky exercise at merely reducing the odds. Some criminal enterprise will figure out that you've only reduced the odds to 1/64000 of what they were, but the facts are that if you can redirect some major ISP's resolver to punt www.chase.com or www.citibank.com at your $evil server, and you have large botnets on non-BCP38 networks, you can be pumping out large numbers of answers at the ISP's server without a major commitment in bandwidth locally... and sooner or later, you'll still get a hit. You don't need to win in 11 secs, or even frequently. It can be "jackpot" monthly and you still win. Which is half the problem here.

Bandwidth is cheap, and DNS packets are essentially not getting any bigger, so back in the day, maybe this wasn't practical over a 56k or T1 line, but now it is trivial to find a colo with no BCP38 and a gigabit into the Internet. The flip side is all those nice multicore CPU's mean that systems aren't flooded by the responses, and they are instead successfully sorting through all the forged responses, which may work in the attacker's advantage (doh!)

This problem really is not practically solvable through the technique that has been applied. Give it another few years, and we'll be to the point where both the QID *and* the source port are simply flooded, and it only takes 11 seconds, thanks to the wonder of ever-faster networks and servers. Whee, ain't progress wonderful. Worse, this patch could be seen as *encouraging* the flooding of DNS servers with fake responses, and this is particularly worrying, since some servers might have trouble with this.

So, if we want to continue to ignore proper deployment of DNSSEC or equivalent, there are some things we can do:

* Detect and alarm on cache overwrite attempts (kind of a meta-RFC 2181 thing). This could be problematic for environments without consistent DNS data (and yes, I know your opinion of that).

* Detect and alarm on mismatched QID attacks (probably at some low threshold level).

But the problem is, even detected and alerted, what do you do? Alarming might be handy for the large consumer ISP's, but isn't going to be all that helpful for the universities or fortune 500's that don't have 24/7 staff sitting on top of the nameserver deployment.

So, look at other options:

* Widen the query space by using multiple IP addresses as source. This, of course, has all the problems with NAT gw's that the port solution did, except worse.

This makes using your ISP's "properly designed" resolver even more attractive, rather than running a local recurser on your company's /28 of public IP space, but has the unintended consequence of making those ISP recursers even more valuable targets.

Makes you wish for wide deployment of IPv6, eh.
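To put rough numbers on "sooner or later, you'll still get a hit", here is a minimal back-of-the-envelope sketch in Python. The forged-answers-per-race-window and races-per-second figures below are assumptions chosen only for illustration, not measurements of any real attack or resolver:

    # Rough expected time to win the spoofing race, before and after the
    # source-port patch.  ASSUMPTIONS (illustrative only): the attacker
    # lands ~100 forged answers inside each race window, and can trigger
    # ~50 race windows (queries) per second.
    forged_per_window = 100
    windows_per_second = 50

    for label, space in [("16-bit QID only", 2**16),
                         ("QID plus ~64512 randomized ports", 2**16 * 64512)]:
        p = forged_per_window / space          # chance of winning one window
        expected_seconds = (1 / p) / windows_per_second
        print("%s: ~%.0f seconds (~%.1f days)"
              % (label, expected_seconds, expected_seconds / 86400.0))

    # With these assumed rates: roughly 13 seconds unpatched, roughly 10
    # days patched -- the same order of magnitude as the figures argued
    # about in this thread.

Either way, the port patch buys a constant factor in the attacker's favor-of-time calculation, not a different kind of protection.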
every time some blogger says "this isn't new", another five universities and ten fortune 500 companies and three ISP's all decide not to patch. that means we'll have to wait for them to be actively exploited before they will understand the nature of the emergency.
While I applaud the reduction in the attack success rate that the recent patch results in, I am going to take a moment to be critical, and note that I feel you (and the other vendors) squandered a fantastic chance.

Just like the Boy Who Cried Wolf, you have a limited number of times that you can cry "vulnerability" and have people mostly all stand up and pay attention in the way that they did. Hardly the first (but possibly one of the most noteworthy), RIPE called for signing of the root zone a year ago. I note with some annoyance that this would have been a great opportunity for someone with a lot of DNS credibility to stand up and say "we need the root signed NOW," and to force the issue. This would have been a lot of work, certainly, but a lot of *worthwhile* work, at various levels. The end result would have been a secure DNS system for those who chose to upgrade and update appropriately.

Instead, it looks to me as though the opportunity is past, people are falsely led to believe that their band-aid-patched servers are now "not vulnerable" (oh, I love that term, since it isn't really true!) and the next time we need to cry "fire," fewer people will be interested in changing.

The only real fix I see is to deploy DNSSEC.

I've tried to keep this message bearably short, so please forgive me if I've glossed over anything or omitted anything.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
So, look at other options:
* Widen the query space by using multiple IP addresses as source. This, of course, has all the problems with NAT gw's that the port solution did, except worse.
This makes using your ISP's "properly designed" resolver even more attractive, rather than running a local recurser on your company's /28 of public IP space, but has the unintended consequence of making those ISP recursers even more valuable targets.
Makes you wish for wide deployment of IPv6, eh.
The only real fix I see is to deploy DNSSEC.
You seem to be saying, above, that IPv6 is also a real fix, presumably because it allows for the 64-bit host id portion of an IP address to "fast flux". Or have I misunderstood?

It would be nice for someone to explain how (or if) IPv6 changes this situation since many networks are already well into the planning stages for IPv6 deployment within the next two to three years.

--Michael Dillon
So, look at other options:
* Widen the query space by using multiple IP addresses as source. This, of course, has all the problems with NAT gw's that the port solution did, except worse.
This makes using your ISP's "properly designed" resolver even more attractive, rather than running a local recurser on your company's /28 of public IP space, but has the unintended consequence of making those ISP recursers even more valuable targets.
Makes you wish for wide deployment of IPv6, eh.
The only real fix I see is to deploy DNSSEC.
You seem to be saying, above, that IPv6 is also a real fix, presumably because it allows for the 64-bit host id portion of an IP address to "fast flux". Or have I misunderstood?
No, IPv6 is a *potential* fix for the *problem being discussed*, in a practical but not absolute sense.

Let's discard the "fast flux" concept, because that's not really right. Let's instead assume that an entire 64-bit space is available for a DNS server to send requests from, not due to rapidly changing IP addresses, but because they're all bound to loopback, and hence always-on. This means that the server can simultaneously have outstanding requests on different IP's, which is why I want to discard "fast flux" terminology.

We had a problem with DNS. The problem identified by the released exploit was the fairly obvious one that the query ID is only 16 bits. What that means is that you can readily guess a correct answer 1/65536 times. So you simply hammer the server with all 65536 answers while asking questions until you win the race between the time the server sends a query and the legitimate server responds, rendering the QID useless.

The "port fix" increases the search space significantly. Probably to the point where you can not practically send 65536 * 64512 packets (4 billion, all 65536 possible qid's to all 64512 nonpriv ports) within an appropriate timeframe. Regardless, you can keep trying a small number of bogus answers, and over time, statistical probability says that you will eventually get lucky. The sharp folks will realize that this is essentially what is happening in the first scenario anyways, just with smaller numbers.

Expanding the search space by adding 64 bits brings the potential total up to roughly 64 + 16 + 16 bits, or about 96 bits. This reduces the likelihood of success substantially, and even allowing for increased network speeds and processor speeds over time, I don't think you'd get a hit in your lifetime. ;-)

However, DNSSEC is a better solution, because it also works to guarantee the integrity of data (it will work to solve MITM, etc). So, for the vulnerability just released, yeah, IPv6 could be a solution, but it is a hacky, ugly, partial solution. It would fairly exhaustively fix the problem at hand, but not the more general problem of trust in DNS.
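For concreteness, a small sketch of the three search spaces being compared (Python, purely illustrative; the 64512 figure assumes every non-privileged port is usable, and the 10,000 guesses-per-second rate is an assumption, not a measurement):

    # Compare the blind-guessing spaces: QID only, QID + source port, and
    # QID + port + a 64-bit pool of IPv6 source addresses.
    import math

    rate = 10_000   # assumed forged guesses per second (illustrative)

    spaces = [
        ("16-bit QID only",                    2**16),
        ("QID + ~64512 source ports",          2**16 * 64512),
        ("QID + ports + 64-bit IPv6 host ids", 2**16 * 64512 * 2**64),
    ]

    for label, n in spaces:
        expected_seconds = (n / 2) / rate      # expect to cover ~half the space
        years = expected_seconds / (365.25 * 86400)
        print("%s: ~%.0f bits, ~%.3g years at %d guesses/sec"
              % (label, math.log2(n), years, rate))

Even if the per-second assumption is off by several orders of magnitude, the ~96-bit case stays comfortably outside a human lifetime, which is the point being made above.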
It would be nice for someone to explain how (or if) IPv6 changes this situation since many networks are already well into the planning stages for IPv6 deployment within the next two to three years.
Personally, I wouldn't worry too much about it. What we really need is some political backbone behind getting DNSSEC deployed, because the vulnerability that was just released is just one of many possible attacks against the DNS, and DNSSEC is a much better general fix, etc. I do not believe that you will find an IPv6 extension of this sort useful in the short term, and in the long term, I'm hoping for DNSSEC.

Now you can fully appreciate my comment: "Makes you wish for wide deployment of IPv6, eh." Because if we did have wide deployment of IPv6, adding 64 bits to the search window would certainly be a practical solution to this particular attack on the protocol.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
i am sick and bloody tired of hearing from the people who aren't impressed.
Well, Paul, I'm not *too* impressed, and so far, I'm not seeing what is groundbreaking, except that threats discussed long ago have become more practical due to the growth of network and processing speeds, which was a hazard that ... was actually ALSO predicted.
11 seconds.

and at&t refuses to patch.

and all iphones use those name servers.

your move.
On Jul 24, 2008, at 9:22 AM, Paul Vixie wrote:
11 seconds.
and at&t refuses to patch.
and all iphones use those name servers.
This caught my attention, and so I tossed the AT&T wireless card in my laptop and ran the test:

[rogue:~] steve% dig +short porttest.dns-oarc.net TXT
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"209.183.54.151 is GOOD: 26 queries in 0.8 seconds from 26 ports with std dev 22831.70"

[rogue:~] steve% host 209.183.54.151
151.54.183.209.in-addr.arpa domain name pointer bthinetdns.mycingular.net.

doxpara.com tests lock up my iPhone, or I would use that checker to verify the iPhone DNS. Anyone have a link to a decent test that I could run on the iPhone?

Thanks,
Steve
On Thu, 24 Jul 2008, Steve Tornio said:
doxpara.com tests lock up my iPhone, or I would use that checker to verify the iPhone DNS. Anyone have a link to a decent test that I could run on the iPhone?
Give this one a try: http://entropy.dns-oarc.net/test/
On Jul 24, 2008, at 12:17 PM, Duane Wessels wrote:
doxpara.com tests lock up my iPhone, or I would use that checker to verify the iPhone DNS. Anyone have a link to a decent test that I could run on the iPhone?
Give this one a try:
In this test, my iPhone reports:

209.183.33.23 Source Port Randomness: GREAT
209.183.33.23 Transaction ID Randomness: GREAT

I encourage anyone else concerned with their providers to actually test them instead of taking anyone's word for it.

Steve
Steve Tornio wrote:
On Jul 24, 2008, at 12:17 PM, Duane Wessels wrote:
doxpara.com tests lock up my iPhone, or I would use that checker to verify the iPhone DNS. Anyone have a link to a decent test that I could run on the iPhone?
Give this one a try:
In this test, my iPhone reports:
209.183.33.23 Source Port Randomness: GREAT
209.183.33.23 Transaction ID Randomness: GREAT
I encourage anyone else concerned with their providers to actually test them instead of taking anyone's word for it.
Steve
on AT&T you might want to run it more than once.. Mine shows POOR 1 out of 5 times. :-( Hope they finish patching sooooon! Ken -- Ken Anderson Pacific.Net
Is it just me or is the test page below down now? Or maybe someone poisoned the NS record for dns-oarc.net and sent it to nowhere to stop testing! (J/K since I can get to the rest of the page fine).

-Scott

-----Original Message-----
From: Ken A [mailto:ka@pacific.net]
Sent: Thursday, July 24, 2008 2:40 PM
To: Steve Tornio
Cc: nanog@merit.edu
Subject: Re: Paul Vixie: Re: [dns-operations] DNS issue accidentally leaked?

Steve Tornio wrote:
On Jul 24, 2008, at 12:17 PM, Duane Wessels wrote:
doxpara.com tests lock up my iPhone, or I would use that checker to verify the iPhone DNS. Anyone have a link to a decent test that I could run on the iPhone?
Give this one a try:
In this test, my iPhone reports:
209.183.33.23 Source Port Randomness: GREAT
209.183.33.23 Transaction ID Randomness: GREAT
I encourage anyone else concerned with their providers to actually test them instead of taking anyone's word for it.
Steve
on AT&T you might want to run it more than once.. Mine shows POOR 1 out of 5 times. :-( Hope they finish patching sooooon! Ken -- Ken Anderson Pacific.Net
On Thu, 24 Jul 2008, Duane Wessels wrote: Suggestion - add to the bottom of the results page a link to the CERT page: http://www.kb.cert.org/vuls/id/800113 -Hank
Give this one a try:
On Jul 24, 2008, at 10:17 AM, Duane Wessels wrote:
Give this one a try:
For one iPhone it reported 209.183.54.151 as having GREAT source port randomness and GREAT transaction ID randomness. However, despite the test reporting GREAT, the source ports were _definitely_ non-random. http://5d93b9656563a44e4c900ff9.et.dns-oarc.net/ -Richard
For one iPhone it reported 209.183.54.151 as having GREAT source port randomness and GREAT transaction ID randomness. However, despite the test reporting GREAT, the source ports were _definitely_ non-random.
"Proving random" is not easy. Proving random that isn't done by certain methods (i.e. certain algorithms or certain sources of entropy) is easier. Deepak
i am sick and bloody tired of hearing from the people who aren't impressed.
Well, Paul, I'm not *too* impressed, and so far, I'm not seeing what is groundbreaking, except that threats discussed long ago have become more practical due to the growth of network and processing speeds, which was a hazard that ... was actually ALSO predicted.
11 seconds.
and at&t refuses to patch.
and all iphones use those name servers.
your move.
MY move? Fine. You asked for it.

Had I your clout, I would have used this opportunity to convince all these new agencies that the security of the Internet was at risk, and that getting past the "who holds the keys" for the root zone should be dealt with at a later date. Get the root signed and secured. Get the GTLD's signed and secured. Give people the tools and techniques to sign and secure their zones. Focus on banks, ISP's, and other critical infrastructure. You don't have to do all that yourself, since we have all these wonderful new agencies charged with various aspects of keeping our nation secure, including from electronic threats, and certainly there is some real danger here.

This in no way prevents you from simultaneously releasing patches to do query source port randomization, of course, and certainly I think that a belt and suspenders solution is perfectly fine, but right now, I'm only seeing the belt... But realizing that going from 11 seconds to (11 * 64512 =) 8.21 days is not a significant jump from the PoV of an attacker would certainly have factored into my decision-making process.

But we didn't do my move. We did yours. So back to the real world.

You're still vulnerable. Your move.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On 24 Jul 2008, at 10:56, Joe Greco wrote:
MY move? Fine. You asked for it. Had I your clout, I would have used this opportunity to convince all these new agencies that the security of the Internet was at risk, and that getting past the "who holds the keys" for the root zone should be dealt with at a later date. Get the root signed and secured.
Even if that was done today, there would still be a risk of cache poisoning for months and years to come. You're confusing the short-term and the long-term measures, here.
Get the GTLD's signed and secured.
I encourage you to read some of the paper trail involved with getting ORG signed, something that the current roadmap still doesn't accommodate for the general population of child zones until 2010. It might be illuminating.

Even once everything is signed and working well to the zones that registries are publishing, we need to wait for registrars to offer DNSSEC key management to their customers.

Even once registrars are equipped, we need people who actually host customer zones to sign them, and to acquire operational competence required to do so well.

And even after all this is done, we need a noticeable proportion of the world's caching resolvers to turn on validation, and to keep validation turned on even though the helpdesk phone is ringing off the hook because the people who host the zones your customers are trying to use haven't quite got the hang of DNSSEC yet, and their signatures have all expired.

Compared with the problem of global DNSSEC deployment, getting everybody in the world to patch their resolvers looks easy.

Joe
On 24 Jul 2008, at 10:56, Joe Greco wrote:
MY move? Fine. You asked for it. Had I your clout, I would have used this opportunity to convince all these new agencies that the security of the Internet was at risk, and that getting past the "who holds the keys" for the root zone should be dealt with at a later date. Get the root signed and secured.
Even if that was done today, there would still be a risk of cache poisoning for months and years to come.
You're confusing the short-term and the long-term measures, here.
No, I'm not. I did say that the other fix could be implemented regardless. However, since it is at best only a band-aid, it should be treated and understood as such, rather than misinforming people into thinking that their nameservers are "not vulnerable" once they've applied it. So I'm not the confused party. There are certainly a lot of confused parties out there who believe they have servers that are not vulnerable.
Get the GTLD's signed and secured.
I encourage you to read some of the paper trail involved with getting ORG signed, something that the current roadmap still doesn't accommodate for the general population of child zones until 2010. It might be illuminating.
You know, I've been watching this DNSSEC thing for *years*. I don't need to read any more paper trail. There was no truly good excuse for this not to have been done years ago.
Even once everything is signed and working well to the zones that registries are publishing, we need to wait for registrars to offer DNSSEC key management to their customers.
Even once registrars are equipped, we need people who actually host customer zones to sign them, and to acquire operational competence required to do so well.
And even after all this is done, we need a noticeable proportion of the world's caching resolvers to turn on validation, and to keep validation turned on even though the helpdesk phone is ringing off the hook because the people who host the zones your customers are trying to use haven't quite got the hang of DNSSEC yet, and their signatures have all expired.
Compared with the problem of global DNSSEC deployment, getting everybody in the world to patch their resolvers looks easy.
Of course. That's why I said that deploying this patch was something that could be done *too*.

The point, however, was contained in my earlier message. You can only cry "wolf" so many times before a lot of people stop listening. Various evidence over the years leads me to believe that this is any number greater than one time.

The point is that I believe the thing to do would have been to use this as a giant push for "DNSSEC Now! No More Excuses!" As it stands, there will likely be another exploit discovered in a year, or five years, or whatever, which is intimately related to this attack, and which DNSSEC would have solved.

I don't particularly care to hear excuses about why DNSSEC is {a failure, impractical, can't be deployed, hasn't been deployed, won't be deployed, isn't a solution, isn't useful, etc} because I've probably heard them all before. We should either embrace DNSSEC, or we should simply admit that this is one of the many problems we just don't really care to fix for real.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On 24 Jul 2008, at 11:40, Joe Greco wrote:
Compared with the problem of global DNSSEC deployment, getting everybody in the world to patch their resolvers looks easy.
Of course. That's why I said that deploying this patch was something that could be done *too*.
OK, good. Sorry if I misinterpreted your earlier message. Joe
On 24 Jul 2008, at 11:40, Joe Greco wrote:
Compared with the problem of global DNSSEC deployment, getting everybody in the world to patch their resolvers looks easy.
Of course. That's why I said that deploying this patch was something that could be done *too*.
OK, good.
Yeah, I'm not arguing against mitigating the immediate problem, but rather:
Sorry if I misinterpreted your earlier message.
The problem is that we have this reactionary mindset to threats that have been known for a long time, and we're perfectly happy to issue one-off band-aid fixes, often while not fixing the underlying problem.

DNSSEC was designed to deal with just this sort of thing. In almost TWO DECADES since Bellovin's paper, which was arguably the motivation behind DNSSEC, we've ... still got an unsigned root, unsigned GTLD's, unsigned zones, and we've successfully managed to get Gates to train users to click on "OK" for any message where they don't understand what it's trying to say, so relying on security at other layers isn't particularly effective either.

Collectively, those of us reading this list are responsible for creating at least part of this mess, either through inaction or foot-dragging. Welcome to the Internet that we've created.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
... and we've successfully managed to get Gates to train users to click on "OK" for any message where they don't understand what it's trying to say, so relying on security at other layers isn't particularly effective either.
He, he, nice comment. The issue is that with today's HTML crap and embedded images in mails, a "click" is no longer required; just include a malicious tag forcing your resolver to go to the bad boy's NS to resolve the URL and you are up in biz.

/etc/hosts rulez !!! :-)

Regards
Jorge
Jorge Amodio wrote:
/etc/hosts rulez !!! :-)
Wonder if SRI still has the files..... -- Requiescas in pace o email Two identifying characteristics of System Administrators: Ex turpi causa non oritur actio Infallibility, and the ability to learn from their mistakes. Eppure si rinfresca ICBM Targeting Information: http://tinyurl.com/4sqczs
/etc/hosts rulez !!! :-)
Wonder if SRI still has the files.....
The SRI-NIC is long gone, I still remember the IP address of the ftp server 10.0.0.51 :-) There are several "historic copies" all over the net. Jorge
Jorge Amodio wrote:
/etc/hosts rulez !!! :-)
Wonder if SRI still has the files.....
Using the methods in RFC-952 and RFC-953 I wasn't able to get them. I can't find if there is an updated RFC/name to use. Tuc/TBOH ;)
Jorge Amodio wrote:
/etc/hosts rulez !!! :-)
Wonder if SRI still has the files.....
UNOFFICIAL copy from 15-Apr-94 : http://ftp.univie.ac.at/netinfo/netinfo/hosts.txt Tuc/TBOH
Here's some older ones:

http://pdp-10.trailing-edge.com/cgi-bin/searchbyname?name=hosts.txt

Prior to departing SRI last year I spent a bunch of time trying to find some of the old SRI-NIC records. It appears that they were all cleaned out once the contract was closed and the Internet was handed over to Network Solutions. I think that a lot of old records still exist in personal file cabinets and garages around Menlo Park but nothing "official" is on the campus of SRI.

Marc

-----Original Message-----
From: Tuc at T-B-O-H [mailto:ml@t-b-o-h.net]
Jorge Amodio wrote:
/etc/hosts rulez !!! :-)
Wonder if SRI still has the files.....
UNOFFICIAL copy from 15-Apr-94 : http://ftp.univie.ac.at/netinfo/netinfo/hosts.txt Tuc/TBOH
There was a discussion on the internet-history mailing list some time ago about old hosts.txt files. You might also check the Computer History Museum in Mountain View, where BTW Jake Feinler volunteers. http://www.postel.org/internet-history.htm --gregbo On Thu, Jul 24, 2008 at 03:54:23PM -0400, marcus.sachs@verizon.com wrote:
Here's some older ones:
http://pdp-10.trailing-edge.com/cgi-bin/searchbyname?name=hosts.txt
Prior to departing SRI last year I spent a bunch of time trying to find some of the old SRI-NIC records. It appears that they were all cleaned out once the contract was closed and the Internet was handed over to Network Solutions. I think that a lot of old records still exist in personal file cabinets and garages around Menlo Park but nothing "official" is on the campus of SRI.
Marc
He,he,nice comment. The issue is that with todays html crap and embedded images on mails "click" is no longer required, just include a malicious tag forcing your resolver to go to bad boy's NS to resolve the URL and you are up in biz.
Can't stop laughing ... it's a rainy boring day in south TX, just thinking that MSFT is probably working on a security patch for Vista that will ask you every few seconds "Are you sure you want to resolve this domain name?" ....

Just a bit of humor before my resolver is poisoned ...

Cheers
Jorge
On Thu, Jul 24, 2008 at 09:56:32AM -0500, Joe Greco wrote:
MY move? Fine. You asked for it. Had I your clout, I would have used this opportunity to convince all these new agencies that the security of the Internet was at risk, and that getting past the "who holds the keys" for the root zone should be dealt with at a later date. Get the root signed and secured. Get the GTLD's signed and secured. Give people the tools and techniques to sign and secure their zones. Focus on banks,
I admit readily that I am not one of the 'dns guys' around here, but I have been watching with some interest for a few years now, and have more or less become convinced that the players involved are willing to tolerate, downplay, or even flat out ignore a great deal. Except losing their own relevance. This is cherished above all. The only times I have seen these parties move is when it has been realistically threatened.

So in brandishing this world event like a holy sword of fire to smite some nefarious bureaucracy, there is no danger its strike will drain any relevance. The band-aid fix is there. Their relevance is saved along with all of our businesses. There is still plenty of time to argue about who gets the keys. Who gets nearly the entire pot of this magical relevance ambrosia?

It wouldn't work. Paul's booming voice would serve only to make him hoarse. The strike only lands for effect if you withhold the band-aid fix, which simply can not be done in this case either.

I'm only really aware of two ways to reduce the relevance of the root and its children (I did say I am not a DNS guy). You can join one of the alternate roots, which I do not recommend. Or you can sign your zones using a DLV registry. If DLV registries became 'de rigueur', it would effectively halve the root and by extension the GTLDs' relevance. I do not believe they will permit this to come to pass. Provided they did, we would win anyway, as signing zones itself would have become the norm.

--
David W. Hankins        "If you don't do it right the first time,
Software Engineer        you'll just have to do it again."
Internet Systems Consortium, Inc.        -- Jack T. Hankins
On Thu, 24 Jul 2008, Paul Vixie wrote:
11 seconds.
and at&t refuses to patch.
and all iphones use those name servers.
Has at&t told you they are refusing to patch? Or are you just spreading FUD about at&t and don't actually have any information about their plans?
On Thu, 2008-07-24 at 11:21 -0400, Sean Donelan wrote:
On Thu, 24 Jul 2008, Paul Vixie wrote:
11 seconds.
and at&t refuses to patch.
and all iphones use those name servers.
Has at&t told you they are refusing to patch? Or are you just spreading FUD about at&t and don't actually have any information about their plans?
I believe it is a hypothetical situation being presented... William
11 seconds.
and at&t refuses to patch.
and all iphones use those name servers.
Has at&t told you they are refusing to patch? Or are you just spreading FUD about at&t and don't actually have any information about their plans?
I believe it is a hypothetical situation being presented...
so, no one else has had multiple copies of the following fwd'd to them with the heading, "WTF,O?" note that it's full of factual errors but does seem to represent AT&T's position on CERT VU# 800113. that someone inside AT&T just assumed that this was the same problem as described in CERT VU#252735 and didn't bother to call CERT, or kaminsky, or me, to verify, ASTOUNDS me. (if someone from AT&T's DNS division has time to chat, my phone numbers are in `whois -h whois.arin.net pv15-arin`.)

---

"AT&T Response: US-CERT DNS Security Alert- announced July 8, 2008

On July 8, 2008, US-CERT issued a Technical Cyber Security Alert TA08-190B with the title 'Multiple DNS implementations vulnerable to cache poisoning.' This alert describes how deficiencies in the DNS protocol and common DNS implementations facilitate DNS cache poisoning attacks. This vulnerability only affects caching DNS servers, not authoritative DNS servers. This alert instructed administrators to contact their vendors for patches.

The DNS community has been aware of this vulnerability for some time. CERT technical bulletin http://www.kb.cert.org/vuls/id/252735 issued in July, 2007, identified this vulnerability but at the time no patches were available from vendors. AT&T does not disclose the name of its DNS vendors as a security measure but has implemented a preliminary patch that was available in January, 2008. The latest patch for alert TA08-190B is currently being tested and will be deployed in the network as soon as its quality has been assured.

AT&T employs best practices in the management of its DNS infrastructure. For example, the majority of AT&T's caching DNS infrastructures have load balancers. Load balancers decrease the risk significantly because hackers are unable to target specific DNS servers. As with all patches to software affecting AT&T's production networks and infrastructure, AT&T first tests the patches in the lab to ensure they work as expected and then certifies them before deploying them into our production infrastructure.

Conclusion: Security is of paramount importance to AT&T. AT&T has a comprehensive approach to the security of its networks and supporting infrastructures. AT&T is meeting or exceeding our world-class DNS network performance measures. We will continue to monitor the situation and will deploy software upgrades, as warranted, following our structured testing and certification process."

===
On Thu, 24 Jul 2008, Paul Vixie wrote:
"AT&T Response: US-CERT DNS Security Alert- announced July 8, 2008 2008. The latest patch for alert TA08-190B is currently being tested and will be deployed in the network as soon as its quality has been assured.
That doesn't sound like "refuses to patch." It sounds like at&t is testing the patch and will deploy it as soon as its testing is finished. "Refuses to patch" sounds like FUD.
"Refuses to patch" sounds likes FUD.
go ask 'em, and let us all know what they say. kaminsky tried to get everybody a month, but because of ptacek's sloppiness it ended up being 13 days. if any dns engineer at any internet carrier goes home to sleep or see their families before they patch, then they're insane.

yes, i know the dangers of rolling patches out too quickly. better than most folks, since i've been on the sending side of patches that caused problems, and i've learned caution from the pain i've inadvertently caused in that way. in spite of that caution i am telling you all, patch, and patch now. if you have firewall or NAT configs that prevent it, then redo your topology -- NOW. and make sure your NAT isn't derandomizing your port numbers on the way out.

and if you have time after that, write a letter to your congressman about the importance of DNSSEC, which sucks green weenies, and is a decade late, and which has no business model, but which the internet absolutely dearly needs.
On Thu, 24 Jul 2008, Paul Vixie wrote:
"Refuses to patch" sounds likes FUD.
go ask 'em, and let us all know what they say.
I believe at&t has already said they are testing the patch and will deploy it as soon as their testing is completed. Other than you, I have not heard anyone in at&t say they are refusing to patch. Doing my own tests, at&t appears to be testing the patch on DNS servers in Tulsa and St. Louis. They may be testing on other DNS servers in other regions too. The at&t anycast DNS ip addresses go to different servers in different locations, so you may get different results using the same IP address in your DNS client. But if you want to continue spreading the FUD that at&t is refusing to patch, I can't stop you.
go ask 'em, and let us all know what they say.
I believe at&t has already said they are testing the patch and will deploy it as soon as their testing is completed. Other than you, I have not heard anyone in at&t say they are refusing to patch.
i read at&t write that this was a rehash of a previously known issue. i heard at&t tell a customer that they were in no hurry to patch.
Doing my own tests, at&t appears to be testing the patch on DNS servers in Tulsa and St. Louis. They may be testing on other DNS servers in other regions too.
that's good news.
The at&t anycast DNS ip addresses go to different servers in different locations, so you may get different results using the same IP address in your DNS client.
But if you want to continue spreading the FUD that at&t is refusing to patch, I can't stop you.
hopefully at&t will issue a clarifying statement, indicating that they know now that this is not a rehash of last year's update, and that they will have those iphones covered by july 25 even though they were noticed before july 8. by the way we just found an abuse@ mailbox protected by challenge-response (there's a notification effort underway to let folks know which of their open resolvers are unpatched. i don't know what this means anymore.)
On Thu, 24 Jul 2008, Paul Vixie wrote:
I believe at&t has already said they are testing the patch and will deploy it as soon as their testing is completed. Other than you, I have not heard anyone in at&t say they are refusing to patch.
i read at&t write that this was a rehash of a previously known issue.
i heard at&t tell a customer that they were in no hurry to patch.
If the customer believes AT&T's reputation is more reliable than Paul Vixie's reputation (not saying they are right or wrong, just if), point out that at&t said they are testing the patch and plan to deploy it as soon as their testing is finished. If the customer believes at&t, the customer should also at least be testing the patch NOW and should deploy it WHEN they finish testing it. That is not the same as "refusing to patch." Test Patch->Deploy Patch.
Paul Vixie wrote:
"Refuses to patch" sounds likes FUD.
go ask 'em, and let us all know what they say.
AT&T dsl line.

#dig +short porttest.dns-oarc.net TXT @68.94.157.1
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"65.68.49.31 is POOR: 26 queries in 1.4 seconds from 1 ports with std dev 0.00"

Ken
--
Ken Anderson
Pacific.Net
On Thu, Jul 24, 2008 at 1:14 PM, Paul Vixie <vixie@isc.org> wrote:
in spite of that caution i am telling you all, patch, and patch now. if you have firewall or NAT configs that prevent it, then redo your topology -- NOW. and make sure your NAT isn't derandomizing your port numbers on the way out.
and if you have time after that, write a letter to your congressman about the importance of DNSSEC, which sucks green weenies, and is a decade late, and which has no business model, but which the internet absolutely dearly needs.
So is this patch a "true" fix or just a temporary fix until further work can be done on the problem? I listened to Dan's initial presentation and I've read a lot of speculation since then. I've also taken a look at the various blog entries that detail the problem. I believe I understand what the issue is and I can see how additional randomization helps. But is that truly an end-all fix, or is this just the initial cry to stop short-term hijacking?

--
Jason 'XenoPhage' Frisvold
XenoPhage0@gmail.com
http://blog.godshell.com
On Thu, Jul 24, 2008 at 1:14 PM, Paul Vixie <vixie@isc.org> wrote:
and if you have time after that, write a letter to your congressman about the importance of DNSSEC, which sucks green weenies, and is a decade late, and which has no business model, but which the internet absolutely dearly needs.
Ok. I'm just a small-edge-networks weenie; IANAI, or anything else big like all that. So I usually try to listen more than I talk.

But it seems to me that Paul, you are here espousing the opinion that there's no business value in people being able to trust that the domain name they heard on a TV ad and typed into a browser (let's ignore phishing for the moment) actually takes them to E-Trade, and not RBN.

Am I misunderstanding you? Cause I see business value in trying to perpetuate that condition. I don't say it's easy to sell to suits. But that doesn't make it unnecessary. I also don't say it's the only way to guarantee that trustability. But what else have you?

Cheers,
-- jra
--
Jay R. Ashworth  Baylink  jra@baylink.com
Designer  The Things I Think  RFC 2100
Ashworth & Associates  http://baylink.pitas.com  '87 e24
St Petersburg FL USA  http://photo.imageinc.us  +1 727 647 1274
Those who cast the vote decide nothing. Those who count the vote decide everything. -- (Josef Stalin)
jra@baylink.com ("Jay R. Ashworth") writes:
and if you have time after that, write a letter to your congressman about the importance of DNSSEC, which sucks green weenies, and is a decade late, and which has no business model, but which the internet absolutely dearly needs.
But it seems to me that Paul, you are here espousing the opinion that there's no business value in people being able to trust that the domain name they heard on a TV ad and typed into a browser (let's ignore phishing for the moment) actually takes them to E-Trade, and not RBN.
Am I misunderstanding you?
yes. and if you re-ask this on dns-operations@lists.dns-oarc.net, i'll explain.

--
Paul Vixie
On Thu, 24 Jul 2008 17:31:01 EDT, "Jay R. Ashworth" said:
But it seems to me that Paul, you are here espousing the opinion that there's no business value in people being able to trust that the domain name they heard on a TV ad and typed into a browser (let's ignore phishing for the moment) actually takes them to E-Trade, and not RBN.
The problem is that the business value, in general, accrues to the wrong people.

It's useful and valuable for the *end user* and for *E-Trade* to be able to be sure they didn't go to RBN. The problem is that Joe Sixpack points his resolver stub at "Bubba's Bait, Tackle, and Internet Emporium ISP", and it's Bubba that has to fix stuff.

And Bubba doesn't have a clear way to make money off the fixing - there's no way Bubba can explain to Joe that Bubba is more secure than the *other* bait, tackle, and DSL reseller in town, because Joe can't understand the problem....

It doesn't help that apparently there's some multi-billion-dollar Bubbas out there.
On Thu, Jul 24, 2008 at 08:37:55PM -0400, Valdis.Kletnieks@vt.edu wrote:
On Thu, 24 Jul 2008 17:31:01 EDT, "Jay R. Ashworth" said:
But it seems to me that Paul, you are here espousing the opinion that there's no business value in people being able to trust that the domain name they heard on a TV ad and typed into a browser (let's ignore phishing for the moment) actually takes them to E-Trade, and not RBN.
The problem is that the business value, in general, accrues to the wrong people.
It's useful and valuable for the *end user* and for *E-Trade* to be able to be sure they didn't go to RBN. The problem is that Joe Sixpack points his resolver stub at "Bubba's Bait, Tackle, and Internet Emporium ISP", and it's Bubba that has to fix stuff.
And Bubba doesn't have a clear way to make money off the fixing - there's no way Bubba can explain to Joe that Bubba is more secure than the *other* bait, tackle, and DSL reseller in town, because Joe can't understand the problem....
It doesn't help that apparently there's some multi-billion-dollar Bubbas out there.
I would argue most of the responsible providers took actions to prepare for such a leak two weeks ago. Some places have longer test cycles, so those fixes may be somewhere in the deployment queue. Change management policies can be a problem if you're a large telco, and I'm sympathetic.

Regarding Bubba, he won't likely move until there is a real problem, this makes it on CNN, and even then, he may not understand what is going on. That win2k server in the corner never got updated. But when he realizes his business is at risk due to the buggy software, our pal Bubba will eventually upgrade.

- Jared
--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Regarding Bubba, he won't likely move until there is a real problem, this makes it on CNN, and even then, he may not understand what is going on. That win2k server in the corner never got updated. But when he realizes his business is at risk due to the buggy software, our pal Bubba will eventually upgrade.
[sarcasm] It won't take too long until the three lettered news outfits declare the demise of the internet, Angelina already had the twins, Obama convinced the Germans to vote for him, McCain didn't win bingo night, and Dolly didn't sink the state of Texas under water, so there is no really exciting news to report, sooner or later they will fish out what is being discussed and make up something to fill the void.

I'd leave the "Bubba in chief" and his associates out of the loop, if they see the word "poisoned" they will decimate the entire DNS infrastructure and later send Paul Vixie to explain to Congress why it is a good idea to go back to the HOSTS.TXT, charge 5 bucks for each download (price dependent on the actual barrel of oil) and return to punch cards (made by Halliburton) ... [/sarcasm]

Cheers
Jorge
On Fri, Jul 25, 2008 at 09:59:35AM -0500, Jorge Amodio wrote:
Regarding Bubba, he won't likely move until there is a real problem, this makes it on CNN, and even then, he may not understand what is going on. That win2k server in the corner never got updated. But when he realizes his business is at risk due to the buggy software, our pal Bubba will eventually upgrade.
[sarcasm] It won't take too long until the three lettered news outfits declare the demise of the internet, Angelina already had the twins, Obama convinced the Germans to vote for him, McCain didn't win bingo night, and Dolly didn't sink the state of Texas under water, so there is no really exciting news to report, sooner or later they will fish out what is being discussed and make up something to fill the void.
I'd leave the "Bubba in chief" and his associates out of the loop, if they see the word "poisoned" they will decimate the entire DNS infrastructure and later send Paul Vixie to explain to Congress why is a good idea to go back to the HOSTS.TXT, charge 5 bucks for each download (price dependant on the actual barrel of oil) and return to punch cards (made by Halliburton) ... [/sarcasm]
So, you say that(sarcasm). I just got off a 45 minute call where the US Federal government is interested in how to effectively communicate to the infrastructure operators the importance and risks of not upgrading the resolvers. They wanted someone to approach those NANOG guys to see if they'll get off their butts and upgrade. Personally, I share some of their frustration in getting the reasonable people to upgrade their software, knowing that the unreasonable folks won't. The question is how can we as an interdependent industry close the gaps of the "Bubba" SPs and their software upgrade policies?

That being said, is there anyone keeping metrics of what upgrades have been done so far?

- Jared
--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
So, you say that(sarcasm). I just got off a 45 minute call where the US Federal government is interested in how to effectively communicate to the infrastructure operators the importance and risks of not upgrading the resolvers.
Just tell them to call the head of DoC and explain why it is so important and imperative to come up with an acceptable/reasonable signing authority and enable the deployment of DNSSEC. The patch is just a workaround; it does not fix the underlying problem that has been there for a very long time.

My .02
On Fri, Jul 25, 2008 at 11:04:59AM -0500, Jorge Amodio wrote:
So, you say that(sarcasm). I just got off a 45 minute call where the US Federal government is interested in how to effectively communicate to the infrastructure operators the importance and risks of not upgrading the resolvers.
Just tell them to call the head of DoC and explain why it is so important and imperative to come up with an acceptable/reasonable signing authority and enable the deployment of DNSSEC. The patch is just a workaround; it does not fix the underlying problem that has been there for a very long time.
I raised that issue earlier in the week with some parts of the US Feds already.

Personally, I see this event as a major driver for deploying dnssec.

- Jared
--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
On Fri, 25 Jul 2008 12:07:40 -0400 Jared Mauch <jared@puck.nether.net> wrote:
On Fri, Jul 25, 2008 at 11:04:59AM -0500, Jorge Amodio wrote:
So, you say that(sarcasm). I just got off a 45 minute call where the US Federal government is interested in how to effectively communicate to the infrastructure operators the importance and risks of not upgrading the resolvers.
Just tell them to call the head of DoC and explain why it is so important and imperative to come up with an acceptable/reasonable signing authority and enable the deployment of DNSSEC. The patch is just a workaround; it does not fix the underlying problem that has been there for a very long time.
I raised that issue earlier in the week with some parts of the US Feds already.
Personally, I see this event as a major driver for deploying dnssec.
I've been talking to US Gov't folks, too. They really want DNSSEC (and secure BGP...) deployed. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On Fri, Jul 25, 2008 at 12:36:57PM -0400, Steven M. Bellovin <smb@cs.columbia.edu> wrote a message of 29 lines which said:
I've been talking to US Gov't folks, too. They really want DNSSEC (and secure BGP...) deployed.
Then, why ".gov" is not signed? Talk is cheap.
On Tue, 29 Jul 2008 13:06:40 +0100 Stephane Bortzmeyer <bortzmeyer@nic.fr> wrote:
On Fri, Jul 25, 2008 at 12:36:57PM -0400, Steven M. Bellovin <smb@cs.columbia.edu> wrote a message of 29 lines which said:
I've been talking to US Gov't folks, too. They really want DNSSEC (and secure BGP...) deployed.
Then, why ".gov" is not signed?
Talk is cheap.
There is in fact movement in that direction. See http://snad.ncsl.nist.gov/dnssec/ to start.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
jared@puck.nether.net (Jared Mauch) writes:
That being said, is there anyone keeping metrics of what upgrades have been done so far?
yes. OARC is coordinating that, with data from its own test tool, and from kaminsky's test tool, and from passive DNS traces seen at ISC SIE. OARC is also coordinating a notification effort in conjunction with lawrence baldwin of MyNetWatchman. we're also sharing data with US-CERT to gauge penetration and while i don't have a pretty graph to show yet, i can say "it's ugly." -- Paul Vixie
The question is how can we as an interdependent industry close the gaps of the "Bubba" SPs and their software upgrade policies?
The depends upon your definition of a "Bubba SP" I guess. Does that mean small? If so we might qualify. Or does "Bubba" mean not listening to lists like this?
That being said, is there anyone keeping metrics of what upgrades have been done so far?
Like everyone else (I hope!) we're tracking progress in *our* network. The hard part, besides flogging reluctant server owners, is that some OS' are still lacking in "official" patches. Apple for example. Not a peep from them, and as you would expect those server owners are not the sort to install anything unless it shows up in their Software Update app. Has anyone heard any ETA on a patch from Cupertino? Short of unplugging customers there's not much we can do with those... except wait.
OARC is also coordinating a notification effort in conjunction with lawrence baldwin of MyNetWatchman.
We've seen those at our "abuse@" account, and they are helpful. Keep sending them. If we qualify as "bubba" that works.
Personally, I see this event as a major driver for deploying dnssec.
Agreed. Patching is just a band-aid and this really needs to be an amputation. --chuck "On any given day, there's always something broken somewhere. In DNS, there's always something broken everywhere." --Paul Vixie @ 4:20 PM 3/31/07, on NANOG
On Fri, 25 Jul 2008, Jared Mauch wrote:
They wanted someone to apporach those NANOG guys to see if they'll get off their butts and upgrade. Personally, I share some of their frustration in getting the reasonable people to upgrade their software, knowing that the unreasonable folks won't. The question is how can we as an interdependent industry close the gaps of the "Bubba" SPs and their software upgrade policies?
That being said, is there anyone keeping metrics of what upgrades have been done so far?
Unfortunately, several of the public "testing" sites have been generating false-positives. The ISPs have updated their DNS servers, some several weeks ago, but the testing site gets confused. Several DNS "security experts" (i.e. anyone with a blog) have also been confused about which ISPs manage which DNS servers versus other DNS servers on a network. Lots of phone calls to the wrong service providers complaining about the wrong things.

Some folks who handle lookups for lots of domains have some data, but without knowing which DNS servers are "official" ISP recursive servers and which DNS servers are just random recursive resolvers owned by end-users, breaking down the data by ISP is a bit of a challenge. If you just want data about overall DNS upgrade activity, not broken down by "official" or "unofficial" servers, that could be easier to collect.
On Jul 25, 2008, at 10:32 AM, Sean Donelan wrote:
Unfortunately, several of the public "testing" sites have been generating false-positives.
It would be good of you to list those here if you know which ones are generating false positives, so folks can avoid using them. -b
On Fri, 25 Jul 2008, brett watson wrote:
Unfortunately, several of the public "testing" sites have been generating false-positives.
It would be good of you to list those here if you know which ones are generating false positives, so folks can avoid using them.
Under the right (or wrong) conditions every public DNS testing site has generated a false positive for some complex DNS configurations which the test creators hadn't expected. The maintainers of the public testing sites have been "patching" their tests to better handle some of problems. So the testing results may change from day to day. You need to understand how both the test works and what the results mean; not only whether it says "Pass"/"Fail".
So is this patch a "true" fix or just a temporary fix until further work can be done on the problem?
the only true fix is DNSSEC. meanwhile we'll do UDP port randomization, plus we'll randomize the 0x20 bits in QNAMEs, plus we'll all do what nominum does and retry with TCP if there's a QID mismatch while waiting for a response, and we'll start thinking about using TKEY and TSIG for stub-to-RDNS relationships. but the only true long term fix for this is DNSSEC. all else is bandaids, which is a shame, since it's a sucking chest wound and bandaids are silly.
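For reference, a minimal sketch of the 0x20 idea mentioned above (illustrative only; real resolvers implement this inside their query path, not as a standalone helper like this):

    # DNS 0x20: randomize the 0x20 (case) bit of each letter in the QNAME,
    # then require the answer to echo the same mixed case.  A blind spoofer
    # must now also guess one extra bit per letter.
    import random

    def encode_0x20(qname):
        return "".join(c.upper() if random.getrandbits(1) else c.lower()
                       for c in qname)

    def matches_0x20(sent_qname, echoed_qname):
        # Servers echo the question verbatim; a forged answer built from the
        # plain lowercase name will (almost always) fail this comparison.
        return sent_qname == echoed_qname

    sent = encode_0x20("www.example.com")
    print("query sent as:", sent)
    print("legit echo accepted:", matches_0x20(sent, sent))
    print("forged lowercase rejected:", not matches_0x20(sent, "www.example.com"))

For a name like www.example.com that's 13 case-flippable letters, i.e. roughly 13 extra bits of entropy stacked on top of the QID and source port.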
But is that truly an end-all fix, or is this just the initial cry to stop short-term hijacking?
all we're trying to do is keep the 'net running long enough to develop and deploy DNSSEC, which would be much harder if updates.microsoft.com almost never points to a microsoft-owned computer.
So is this patch a "true" fix or just a temporary fix until further work can be done on the problem?
I guess you need to read some of the related papers/presentations/advisories/etc on a subject that has been under discussion for 20+ years.

Answering your questions, as said before, the patch is NO FIX to the problem; it's just a workaround that (together with an appropriate architecture and following well-known best practices for DNS deployment) *may* reduce the chances of becoming a victim of the exploit.

The solution? DNSSEC. I believe Paul is asking people interested in learning more about what needs to be done to get it done to discuss the subject in the dns-operations list.

My .02

Regards
Jorge
11 seconds.
and at&t refuses to patch.
and all iphones use those name servers.
your move.
i am just back from a week off net on the nw irish coast. stunning. the messages on this thread outnumber the nightly log reports from 20+ servers, plus the whining of a server with a hardware problem, plus ...

what i do not understand is why people think screaming to the choir will make any significant difference? if fools want to be fools, you are not going to stop them, especially not by screaming on nanog or dns operations lists.

i hope all my competitors don't patch.

randy
randy@psg.com (Randy Bush) writes:
i hope all my competitors don't patch.
i think that that statement is false. the resulting insecurity of that endpoint population will be a tsunami that will swamp people far away, it'll just be worse for those at the epicenter (meaning: who don't patch.) if your customers are exchanging data with your competitors' customers (and, face it, they are) then your customers can still lose data and money, and your support lines can still ring. -- Paul Vixie
what i do not understand is why people think screaming to the choir will make any significant difference?
Think about it. Would you rather nobody make a big deal about it and have it go unpatched in lots of places, and have nobody understand what a monumental train wreck this all is, or would it be better that people take some notice, and have resources like NANOG available to help them make the case about how this needs to be patched, and also just how much we all need DNSSEC? Sometimes the only thing you can do is scream at the choir, but if that can make even a small difference, why not? And Paul's absolutely correct, this is not something where we can afford to let that happen. You will be affected regardless, whether it is because your customers are relying on an answer provided by a nameserver somewhere else in the infrastructure that has been corrupted, or whatever. And patching does not appear to guarantee invulnerability (eek!) The Really Scary Possibilities (at least the one that really frightens me) Have Not Been Discussed On This List. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Sat, Jul 26, 2008 at 03:05:18PM -0500, Joe Greco wrote:
what i do not understand is why people think screaming to the choir will make any significant difference?
And Paul's absolutely correct, this is not something where we can afford to let that happen.
Paul is correct if you work from his point of view. there are other pov where the frantic energy expenditure might be better spent. If you -must- patch, try patching w/ code that is -not- vulnerable... unbound has been reported as being "safe" if properly configured. So that was my patch profile. Actually, i think this is a whole lot of effort for what is essentially a diversion tactic. Why, you ask?
And patching does not appear to guarantee invulnerability (eek!)
there you go. the massive effort to patch would likely have been better spent to actually -sign- the stupid zones and work out key distribution. but no... running around like the proverbial headless chicken seems to get the PR. The real value in this frantic exercise was pointed out by Roy Arends... the number of folks who now have (possibly) DNSSEC-aware code in play is much higher than last month.
The Really Scary Possibilities (at least the one that really frightens me) Have Not Been Discussed On This List.
true enough. and that is a good thing.
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
--bill
On Sat, 26 Jul 2008, bmanning@vacation.karoshi.com wrote:
there you go. the massive effort to patch would likely have been better spent to actually -sign- the stupid zones and work out key distribution. but no... running around like the proverbial headless chicken seems to get the PR.
Maybe someone could publish a blacklist of vulnerable recursive name servers, and then F-Root, the other root name servers, and other "popular" sites could start refusing to answer queries from vulnerable name servers until after the blacklist operator decides they've patched their recursive servers sufficiently? Maybe that would get their attention and encourage them to apply resources to the problem? Extreme situations justify extreme measures; or how extreme a situation do you believe justifies which measures?
On Sat, Jul 26, 2008 at 05:47:54PM -0400, Sean Donelan wrote:
On Sat, 26 Jul 2008, bmanning@vacation.karoshi.com wrote:
there you go. the massive effort to patch would likely have been better spent to actually -sign- the stupid zones and work out key distribution. but no... running around like the proverbial headless chicken seems to get the PR.
Maybe someone could publish a blacklist of vulnerable recursive name servers, and then F-Root, the other root name servers, and other "popular" sites could start refusing to answer queries from vulnerable name servers until after the blacklist operator decides they've patched their recursive servers sufficiently?
Maybe that would get their attention and encourage them to apply resources to the problem?
Extreme situations justify extreme measures; or how extreme a situation do you believe justifies which measures?
Knock yourself out Sean. --bill
How about blacklists for:
* Outdated and insecure IOS
* Outdated and insecure SSH
* Outdated and insecure Unix implementations
* Spam appliances
* Outdated OS images everywhere
* Outdated and insecure DNS
* Outdated and insecure proxies
* Outdated and insecure mysql, php, etc.
* Richard Stallman for rms/rms
One worthy example of leadership related to this current issue is RCN. They apparently scanned their customer networks for this vuln and called the vulnerable customers, advising them of the problem and politely requesting a fix. Reinforces why full disclosure is better as well. Who got the early warnings? Better yet, who didn't? Best, Marty
On 7/26/08, Sean Donelan <sean@donelan.com> wrote:
On Sat, 26 Jul 2008, bmanning@vacation.karoshi.com wrote:
there you go. the massive effort to patch would likely have been better spent to actually -sign- the stupid zones and work out key distribution. but no... running around like the proverbial headless chicken seems to get the PR.
Maybe someone could publish a blacklist of vulnerable recursive name servers, and then F-Root, the other root name servers, and other "popular" sites could start refusing to answer queries from vulnerable name servers until after the blacklist operator decides they've patched their recursive servers sufficiently?
Maybe that would get their attention and encourage them to apply resources to the problem?
Extreme situations justify extreme measures; or how extreme a situation do you believe justifies which measures?
-- Sent from Gmail for mobile | mobile.google.com
On Thu, 24 Jul 2008, Joe Greco wrote:
downplay this all you want, we can infect a name server in 11 seconds now, which was never true before. i've been tracking this area since 1995. don't try to tell me, or anybody, that dan's work isn't absolutely groundbreaking.
i am sick and bloody tired of hearing from the people who aren't impressed.
Well, Paul, I'm not *too* impressed, and so far, I'm not seeing what is groundbreaking, except that threats discussed long ago have become more practical due to the growth of network and processing speeds, which was
Joe, you should be well aware that most of what we face today and will face tomorrow is based on concepts of old, and/or has been thought of or seen before in a different form. But as you well know, the threats are still real, especially where the practical vulnerability is high. Further, this is especially true in the operator communities, where unless there is a PowerPoint presentation about it, the threat is thought to be a pipe dream.
a hazard that ... was actually ALSO predicted.
Well, it's here now, and I am happy there is something to be done about it, pro-actively. Gadi.
And you know what? I'll give you the *NEXT* evolutionary steps, which could make you want to cry.
If the old code system could result in an infected name server in 11 seconds, this "fix" looks to me to be at best a dangerous and risky exercise at merely reducing the odds. Some criminal enterprise will figure out that you've only reduced the odds to 1/64000 of what they were, but the facts are that if you can redirect some major ISP's resolver to punt www.chase.com or www.citibank.com at your $evil server, and you have large botnets on non-BCP38 networks, you can be pumping out large numbers of answers at the ISP's server without a major commitment in bandwidth locally... and sooner or later, you'll still get a hit. You don't need to win in 11 secs, or even frequently. It can be "jackpot" monthly and you still win.
Which is half the problem here. Bandwidth is cheap, and DNS packets are essentially not getting any bigger, so back in the day, maybe this wasn't practical over a 56k or T1 line, but now it is trivial to find a colo with no BCP38 and a gigabit into the Internet. The flip side is all those nice multicore CPU's mean that systems aren't flooded by the responses, and they are instead successfully sorting through all the forged responses, which may work in the attacker's advantage (doh!)
This problem really is not practically solvable through the technique that has been applied. Give it another few years, and we'll be to the point where both the QID *and* the source port are simply flooded, and it only takes 11 seconds, thanks to the wonder of ever-faster networks and servers. Whee, ain't progress wonderful.
Worse, this patch could be seen as *encouraging* the flooding of DNS servers with fake responses, and this is particularly worrying, since some servers might have trouble with this.
So, if we want to continue to ignore proper deployment of DNSSEC or equivalent, there are some things we can do:
* Detect and alarm on cache overwrite attempts (kind of a meta-RFC 2181 thing). This could be problematic for environments without consistent DNS data (and yes, I know your opinion of that).
* Detect and alarm on mismatched QID attacks (probably at some low threshold level).
But the problem is, even detected and alerted, what do you do? Alarming might be handy for the large consumer ISP's, but isn't going to be all that helpful for the universities or fortune 500's that don't have 24/7 staff sitting on top of the nameserver deployment.
So, look at other options:
* Widen the query space by using multiple IP addresses as source. This, of course, has all the problems with NAT gw's that the port solution did, except worse.
This makes using your ISP's "properly designed" resolver even more attractive, rather than running a local recurser on your company's /28 of public IP space, but has the unintended consequence of making those ISP recursers even more valuable targets.
Makes you wish for wide deployment of IPv6, eh.
every time some blogger says "this isn't new", another five universities and ten fortune 500 companies and three ISP's all decide not to patch. that means we'll have to wait for them to be actively exploited before they will understand the nature of the emergency.
While I applaud the reduction in the attack success rate that the recent patch delivers, I am going to take a moment to be critical, and note that I feel you (and the other vendors) squandered a fantastic chance. Just like the Boy Who Cried Wolf, you have a limited number of times you can cry "vulnerability" and have people mostly stand up and pay attention the way they did this time.
Hardly the first to do so (but possibly one of the most noteworthy), RIPE called for signing of the root zone a year ago.
I note with some annoyance that this would have been a great opportunity for someone with a lot of DNS credibility to stand up and say "we need the root signed NOW," and to force the issue. This would have been a lot of work, certainly, but a lot of *worthwhile* work, at various levels. The end result would have been a secure DNS system for those who chose to upgrade and update appropriately.
Instead, it looks to me as though the opportunity is past, people are falsely led to believe that their band-aid-patched servers are now "not vulnerable" (oh, I love that term, since it isn't really true!) and the next time we need to cry "fire," fewer people will be interested in changing.
The only real fix I see is to deploy DNSSEC.
I've tried to keep this message bearably short, so please forgive me if I've glossed over anything or omitted anything.
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Thu, Jul 24, 2008 at 9:35 AM, Joe Greco <jgreco@ns.sol.net> wrote:
Well, Paul, I'm not *too* impressed, and so far, I'm not seeing what is groundbreaking, except that threats discussed long ago have become more practical due to the growth of network and processing speeds, which was a hazard that ... was actually ALSO predicted.
Joe,

Early attacks were based on returning out-of-scope data in the Additional section of the response. This was an implementation error in the servers: they should never have accepted out-of-scope data.

Later attacks were based on forging responses to a query. The resolver sends a query packet and the attacker has a few tens of milliseconds in which to throw maybe a few tens of guesses about the correct ID at the resolver before the real answer arrives from the real server. These were mitigated because:

a. You had maybe a 1 in 1000 chance of guessing right during the window of opportunity.
b. If you guessed wrong, you had to wait until the TTL expired to try again, maybe as much as 24 hours later.

So, it could take months or years to poison a resolver just once, far beyond the patience threshold for your run-of-the-mill script kiddie.

What's new about this attack is that it removes mitigator B. You can guess again and again, back to back, until you hit that 1 in 1000. Paul tells us this can happen in about 11 seconds, well inside the tolerance of your normal script kiddie and long before you'll notice the log messages about invalid responses.

Anyway, it shouldn't be hard to convert this from a poisoning vulnerability to a less troubling DoS vulnerability by rejecting responses for a particular query (even if valid) when they arrive near a bad-ID response. From there it just takes some iterative improvements to mitigate the DoS. In the meantime, randomizing the query port makes the attack more than four orders of magnitude less effective and requires more than four orders of magnitude of additional resources on the attacker's part.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
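To put rough numbers on that, here is a back-of-the-envelope sketch of the economics; every figure in it is an assumption chosen to match Bill's estimates, not a measurement from anywhere:

    # Back-of-the-envelope economics of the race described above.
    # All numbers are illustrative assumptions, not measurements.

    GUESSES_PER_WINDOW = 30      # spoofed answers landing before the real one
    TXID_SPACE = 65536           # 16-bit query ID
    PORT_SPACE = 64000           # roughly, once source ports are randomized

    def expected_windows(p_success_per_window):
        """Expected number of race windows before the first successful poisoning."""
        return 1.0 / p_success_per_window

    p_per_race = GUESSES_PER_WINDOW / TXID_SPACE   # about 1 in 2000

    # Old style: one race per TTL expiry, call it one per day.
    print(f"old style: ~{expected_windows(p_per_race):.0f} days on average")

    # Kaminsky style: query made-up sub-names, so a fresh race whenever you
    # like; assume 100 races per second.
    races_per_second = 100
    print(f"new style, unpatched: "
          f"~{expected_windows(p_per_race) / races_per_second:.0f} seconds")

    # With port randomization the guess space grows by a factor of PORT_SPACE.
    p_patched = GUESSES_PER_WINDOW / (TXID_SPACE * PORT_SPACE)
    print(f"new style, patched: "
          f"~{expected_windows(p_patched) / races_per_second / 86400:.0f} days")

With those assumptions the unpatched case comes out in the tens of seconds, the same ballpark as the 11-second figure quoted upthread, while the patched case stretches to days or weeks of sustained flooding: the "four orders of magnitude" improvement, not an absolute fix.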
On Thu, Jul 24, 2008 at 8:35 AM, Joe Greco <jgreco@ns.sol.net> wrote:
If the old code system could result in an infected name server in 11 seconds, this "fix" looks to me to be at best a dangerous and risky exercise at merely reducing the odds. Some criminal enterprise will figure out that you've only reduced the odds to 1/64000 of what they were, but the facts are that if you can redirect some major ISP's resolver to punt www.chase.com or www.citibank.com at your $evil server,
Provided the attacker cannot run 'tcpdump' or similar and _see_ the target server's DNS traffic, perhaps the cache itself could become a defense mechanism... Whenever a user of a DNS server requests a lookup for a cached entry nearing expiration, give the user the cached entry, but also perform a background lookup to renew the cache: only accept the DNS response to that special query if it carries the same data as what is already cached.

Nameservers could also incorporate poison detection... Listen on 200 random fake ports (in addition to the true query ports); if a response ever arrives at a fake port, it must be an attack: read the "identified" attack packet, log the attack event, and mark the RRs mentioned in the packet as "poison being attempted" for 6 hours. For such domains, always request and collect _two_ good responses (instead of one), with a 60-second timeout, before caching a lookup. The attacker must now guess nearly 64 bits in a short amount of time to be successful. Once a good lookup is received, discard the normal TTL and hold the good answer cached and immutable for 6 hours (_then_ start decreasing the TTL normally).

When the patch also prevents caching additional sections for a DNS label previously cached, this type of sustained brute force should not be feasible, because the name targeted for poisoning must then be the name that is being looked up. That is a much larger reduction than 1:64000, since the vulnerability to poisoning then only exists when a name has not yet been cached: sustained brute force cannot occur (there is an element of waiting a long time for the proper cached entry to expire). Once a DNS request occurs for the target name and the legitimate response wins, an attacker has to wait until the proper cached entry expires before any poisoning effort could succeed.

--
-J
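The decoy-port half of that proposal is easy to picture concretely. Below is only a rough sketch: the DNS packet parsing is left to a caller-supplied function, and the 200-port and 6-hour numbers are simply the ones from the post above:

    import select
    import socket
    import time

    DECOY_PORT_COUNT = 200
    FLAG_SECONDS = 6 * 3600       # how long a name stays "poison being attempted"
    suspect_until = {}            # qname -> time the extra caution expires

    def open_decoy_sockets(count=DECOY_PORT_COUNT):
        """Bind UDP sockets on ports that no real query will ever use."""
        socks = []
        for _ in range(count):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("0.0.0.0", 0))    # kernel picks an otherwise unused port
            s.setblocking(False)
            socks.append(s)
        return socks

    def watch_decoys(socks, names_in_packet):
        """Wait for traffic on a decoy port and flag the names it mentions.

        Anything arriving here can only be spoofed, since no query was ever
        sent from these ports. names_in_packet is a caller-supplied DNS parser."""
        readable, _, _ = select.select(socks, [], [])
        for s in readable:
            packet, src = s.recvfrom(4096)
            for name in names_in_packet(packet):
                suspect_until[name] = time.time() + FLAG_SECONDS
                print(f"spoof attempt from {src}: flagging {name}")

    def needs_extra_validation(name):
        """Resolver hook: flagged names require two matching answers."""
        return suspect_until.get(name, 0) > time.time()

A resolver would consult needs_extra_validation() before caching, and for flagged names insist on collecting two independent matching answers, as described above.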
participants (34)
- bmanning@vacation.karoshi.com
- brett watson
- chuck goolsbee
- David W. Hankins
- Deepak Jain
- Duane Wessels
- Gadi Evron
- Greg Skinner
- Hank Nussbacher
- James Hess
- Jared Mauch
- Jason Frisvold
- Jay R. Ashworth
- Joe Abley
- Joe Greco
- Jorge Amodio
- Ken A
- Laurence F. Sheldon, Jr.
- marcus.sachs@verizon.com
- Martin Hannigan
- michael.dillon@bt.com
- Paul Vixie
- Randy Bush
- Richard Parker
- Scott Berkman
- Sean Donelan
- Stephane Bortzmeyer
- Steve Tornio
- Steven M. Bellovin
- Tuc at T-B-O-H
- Tuc at T-B-O-H.NET
- Valdis.Kletnieks@vt.edu
- William Herrin
- William Pitcock