DNS cache poisoning attacks -- are they real?
ISC SANS has recently disclosed yet another suspected DNS cache poisoning attack. I reach a different conclusion, based on publicly available data. Maybe there is unpublished information which suggests a different view.

Unofficial name servers which pose as authoritative for well-known zones have been around for ages. An astonishingly large number of them are officially authoritative for (at least somewhat) frequented zones, and from time to time, your resolvers receive authority sections containing leaked unofficial data. I noticed this unfortunate fact back in July 2004, when I looked more closely at DNS packet captures for debugging purposes. Even in my limited sample, the number of leaking name servers was so high that systematically contacting their operators and convincing them to change their configurations seemed unfeasible (and many of them were located in regions which are not exactly known for their cooperative spirit when it comes to such matters).

Today, I looked again at a few unofficial servers. Quite a few of them are operated by apparently respectable organizations with an AS number etc. (definitely not the backyard servers behind a cable modem I would expect in an attack). It is hard to tell whether the more shady ones legitimately redirect customer traffic and unintentionally leak these records to the general Internet, or attempt an actual attack. (I'm not sure how to tell them apart at the protocol level. Maybe I'm missing something.) Many of the unofficial records have been unchanged for quite some time (i.e. predating the current "pharming" craze). Even the DNS cache poisoning case described in the ISC diary could be the unwanted consequence of an oversimplified DNS configuration (wildcard RRs for *.com instead of a proper DNS zone).

Are any ISPs actually willing to disconnect customer name servers which serve unofficial zones? I don't believe that many ISPs would try to exercise this much control over the packets their customers send. Furthermore, there are apparently some reasons for running such servers which are generally considered legitimate.

Should we monitor for evidence of hijacks (unofficial NS and SOA records are good indicators)? Should we actively scan for authoritative name servers which return unofficial data? I don't think this makes sense, even if we could strongly discourage the practice.

Right now, I suspect that many people rediscovered the relative weakness of the domain name system and started looking for anomalies, and that's why we see an increasing number of reports -- not because of an increasing number of actual attacks.
--On 26 March 2005 23:23 +0100 Florian Weimer <fw@deneb.enyo.de> wrote:
Should we monitor for evidence of hijacks (unofficial NS and SOA records are good indicators)? Should we actively scan for authoritative name servers which return unofficial data?
And what if you find them?

I seem to remember a uu.net server (from memory ns.uu.net) many many years ago had some polluted data out there as an A record. All bright and bushy-tailed, I told the UUnet folks about this. They were resigned. Someone, somewhere, had mistyped an IP address, and it had got into everyone's glue, got republished by anyone and everyone, and in essence had no chance of going away.

Now I understand (a little) more about DNS than I did at the time, so I now (just about) know how DNS servers should avoid returning such info (where they are both caching and authoritative), but I equally know this is built upon the principle that no-one does anything actively malicious. The only way you are going to prevent packet-level (as opposed to organization-level) DNS hijack is to get DNSSEC deployed.

Your ietf list is over ------> there.

Alex
* Alex Bligh:
--On 26 March 2005 23:23 +0100 Florian Weimer <fw@deneb.enyo.de> wrote:
Should we monitor for evidence of hijacks (unofficial NS and SOA records are good indicators)? Should we actively scan for authoritative name servers which return unofficial data?
And what if you find them?
If leaking unofficial data were considered a capital offense (in Internet terms), many ISPs would take action. Apparently, it's not, so detection is pretty much pointless.
The only way you are going to prevent packet level (as opposed to organization level) DNS hijack is get DNSSEC deployed.
DNS cache poisoning (at least in the form which prompted me to start this thread) is a quality-of-implementation issue. DNSSEC will not magically increase code quality (but it will definitely increase complexity), which is why I don't share the enthusiasm of the DNSSEC crowd. 8->
You forgot the most important requirement: you have to be using insecure, unpatched DNS code (old versions of BIND, old versions of Windows, etc). If you use modern DNS code which only follows trustworthy pointers from the root down, you won't get hooked by this. A poisoned DNS cache is irrelevant if your resolver never queries servers with poisoned caches. If you do, you should fix your code.

On the other hand, there are a lot of reasons why a DNS operator may return different answers to the users of their own resolvers. Reverse proxy caching is very common. Just about all WiFi folks use cripple DNS as part of their log on. Or my favorite, quarantining infected computers to get the attention of their owners. But it shouldn't matter what other DNS operators do, as long as your DNS code doesn't use them to resolve names without a pointer from the root (although you may not be able to log on to some WiFi hotspots).

Why Microsoft didn't make "Secure cache against pollution" the default setting, I don't know.
* Sean Donelan:
You forgot the most important requirement: you have to be using insecure, unpatched DNS code (old versions of BIND, old versions of Windows, etc). If you use modern DNS code which only follows trustworthy pointers from the root down, you won't get hooked by this. A poisoned DNS cache is irrelevant if your resolver never queries servers with poisoned caches.
Yes, this is yet another reason why I'm inclined to apply Hanlon's razor here. Totally forgot to mention it, thanks.
If you do, you should fix your code.
This would defeat its purpose, at least to some extent. 8-) I'm interested in recording bogus RRs as well because I can't really be sure whether there isn't some resolver which takes them for valid.
On the other hand, there are a lot of reasons why a DNS operator may return different answers to the users of their own resolvers. Reverse proxy caching is very common. Just about all WiFi folks use cripple DNS as part of their log on. Or my favorite, quarantining infected computers to get the attention of their owners.
And sometimes such things leak to the Internet. However, most of the publicly visible bogus records seem to be caused by laziness. If you handle thousands of com. domains, it's easier to use a fake com. zone on your authoritative servers, with a few records like:

  com   172800 IN SOA ns1.example.org [...]
  *.com 172800 IN NS  ns1.example.org
  *.com 172800 IN NS  ns2.example.org
  *.com 172800 IN A   192.0.2.1

In most cases, 192.0.2.1 runs a web server that serves a "buy this domain" page. Uh-oh, this hurts. There must be a how-to somewhere which recommends this shortcut.
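For contrast, a minimal sketch of what the non-lazy setup looks like -- one real zone per customer domain instead of a fake com. zone (BIND-style named.conf; the customer names and file paths are made-up placeholders):

  // each hosted domain gets its own properly delegated zone, so
  // nothing resembling com. itself ever lives on these servers
  zone "customer-example1.com" {
          type master;
          file "masters/customer-example1.com.zone";
  };
  zone "customer-example2.com" {
          type master;
          file "masters/customer-example2.com.zone";
  };

This is more work to generate for thousands of domains, which is presumably why the wildcard shortcut is tempting.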
Why Microsoft didn't make "Secure cache against pollution" the default setting, I don't know.
Apparently, they do in recent versions. It might have been viewed as a change too risky for a service pack or regular patch (Microsoft's risk assessments are sometimes rather bizarre).
Here is a link about how Cox Cable uses DNS to block phishing and certain malicious sites. http://www.broadbandreports.com/forum/remark,12922412
Sean Donelan wrote:
Here is a link about how Cox Cable uses DNS to block phishing and certain malicious sites.
If that manipulation is done on their internal servers, it's their business; that isn't uncommon anymore, and in fact, is on the increase (mea culpa). However, if an external server is manipulated, that's a different story.

Jeff
On 26 March 2005, at 17:52, Sean Donelan wrote:
You forgot the most important requirement: you have to be using insecure, unpatched DNS code (old versions of BIND, old versions of Windows, etc). If you use modern DNS code which only follows trustworthy pointers from the root down, you won't get hooked by this.
The obvious rejoinder to this is that there are no trustworthy pointers from the root down (and no way to tell if the root you are talking to contains genuine data) unless all the zones from the root down are signed with signatures you can verify and there's a chain of trust to accompany each delegation. If you don't have cryptographic signatures in the mix somewhere, it all boils down to trusting IP addresses. Joe
On Sat, 26 Mar 2005, Joe Abley wrote:
The obvious rejoinder to this is that there are no trustworthy pointers from the root down (and no way to tell if the root you are talking to contains genuine data) unless all the zones from the root down are signed with signatures you can verify and there's a chain of trust to accompany each delegation.
If you don't have cryptographic signatures in the mix somewhere, it all boils down to trusting IP addresses.
Signatures don't create trust. A signature can only confirm an existing trust relationship. DNSSEC would have the same problem: where do you get the trustworthy signatures? By connecting to the same root you don't trust?

As a practical matter, you can stop 99% of the problems with a lot less effort. Why has SSH been so successful, and DNSSEC stumbled so badly? Always initiate the call yourself. Always check the nonce in the answer. Never accept unsolicited data. Never accept answers to questions you didn't ask.

Besides, if you don't trust IP addresses, then even if the entire DNS tree were signed by trustworthy keys, I'd just hijack the IP address in the DNS answer anyway. Quarantine NAT is very good at this.
On 26 Mar 2005, at 20:15, Sean Donelan wrote:
Signatures don't create trust. A signature can only confirm an existing trust relationship. DNSSEC would have the same problem: where do you get the trustworthy signatures? By connecting to the same root you don't trust?
No, by using a known local trust anchor for the root and following the chain of trust from there.
As a practical matter, you can stop 99% of the problems with a lot less effort. Why has SSH been so successful, and DNSSEC stumbled so badly?
For most people SSH encrypts a session, and says nothing about the identity of the remote host. Most people ignore the warnings about host keys changing, and never check an ssh fingerprint with the remote host before accepting it and caching it until next time. DNSSEC doesn't attempt to encrypt the transport; it is all about the authenticity of the data. So, they are doing different things.

SSH deployment requires no coordination between organisations really; while there are public services deployed over SSH, I would be very surprised if its main use is not intra-organisation. DNSSEC, on the other hand, requires extensive standardisation and buy-in from a huge number of different organisations before it is useful in a general sense. (You can use DNSSEC in a private, intra-organisational context, much as you might use SSH, today.)

I'm not sure what 99% of DNS authenticity problems you think you can solve without DNSSEC; perhaps it might be useful for you to enumerate them.
Always initiate the call yourself. Always check the nonce in the answer. Never accept unsolicited data. Never accept answers to questions you didn't ask.
And, according to your theory, be happy that you have no way to validate the authenticity of any answers you do get?
Besides, if you don't trust IP addresses
If? We have meandered from the topic at hand, a bit. But the general point I was trying to make was that all the robust DNS software in the world will not avoid the propagation of rogue DNS answers if there's no way for a client (or a trusted, validating resolver) to verify the authenticity of the data contained within them. Joe
* sean@donelan.com (Sean Donelan) [Sun 27 Mar 2005, 03:16 CEST]:
As a practical matter, you can stop 99% of the problems with a lot less effort. Why has SSH been so successful, and DNSSEC stumbled so badly?
Because one of these products came with "./configure; make; make install"

	-- Niels.

--
The idle mind is the devil's playground
* Sean Donelan:
Signatures don't create trust. A signature can only confirm an existing trust relationship. DNSSEC would have the same problem: where do you get the trustworthy signatures? By connecting to the same root you don't trust?
As a practical matter, you can stop 99% of the problems with a lot less effort. Why has SSH been so successful, and DNSSEC stumbled so badly?
Because SSH "signatures" do create trust. SSH uses the key continuity model, not the PKI model.
At 20:15 -0500 3/26/05, Sean Donelan wrote:
effort. Why has SSH been so successful, and DNSSEC stumbled so badly?
Short answer to that question alone. (Believe me, I've considered it too.)

SSH is an example of innovation that requires only the end points to cooperate - e.g., like TCP doing congestion control at the edges. In particular, the key exchange in SSH is simplistic... DNSSEC is a change to the operations at the mythical core of the Internet. DNSSEC won't work until third parties are involved, i.e., the parents (et al.) of the server are involved, not just the server and client. In particular, the key exchange in DNSSEC has been the sore spot.

Mythical core: in this case, the administration of the root zone, the TLDs, etc., not the routing/transit/peering core.

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis        +1-571-434-5468        NeuStar

Achieving total enlightenment has taught me that ignorance is bliss.
On Sat, 26 Mar 2005, Joe Abley wrote:
On 26 March 2005, at 17:52, Sean Donelan wrote:
You forgot the most important requirement: you have to be using insecure, unpatched DNS code (old versions of BIND, old versions of Windows, etc). If you use modern DNS code which only follows trustworthy pointers from the root down, you won't get hooked by this.
The obvious rejoinder to this is that there are no trustworthy pointers from the root down (and no way to tell if the root you are talking to contains genuine data) unless all the zones from the root down are signed with signatures you can verify and there's a chain of trust to accompany each delegation.
If you don't have cryptographic signatures in the mix somewhere, it all boils down to trusting IP addresses.
where was www.makelovenotspam.com re-pointed to and 'hacked' again?? I forget... 'trust of the ip address' :(
On Sat, 26 Mar 2005 17:52:56 -0500 (EST), Sean Donelan <sean@donelan.com> wrote:
On the other hand, there are a lot of reasons why a DNS operator may return different answers to the users of their own resolvers. Reverse proxy caching is very common. Just about all WiFi folks use cripple DNS as part of their log on. Or my favorite, quarantining infected computers to get the attention of their owners.
I hate that cripple dns stuff - they seem to add transparent proxying of dns requests to it as well, sometimes. I've seen cases where my laptop's local resolver (dnscache) suddenly starts returning weird values like 1.1.1.1, 120.120.120.120 etc for *.one-of-my-domains.com for some reason. Thank $DEITY for large ISPs running open resolvers on fat pipes .. those do come in quite handy in a resolv.conf sometimes, when I run into this sort of behavior. --srs
Suresh Ramasubramanian wrote:
On Sat, 26 Mar 2005 17:52:56 -0500 (EST), Sean Donelan <sean@donelan.com> wrote:
<snip>
Thank $DEITY for large ISPs running open resolvers on fat pipes .. those do come in quite handy in a resolv.conf sometimes, when I run into this sort of behavior.
--srs
Slightly OT to parent thread...on the subject of open dns resolvers.

Common best practices seem to suggest that doing so is a bad thing. DNS documentation and http://www.dnsreport.com appear to view this negatively. Is that the consensus among operators here?

Does anyone feel that in spite of the {negligible} risk involved, since any abuse would be local in nature (as opposed to SMTP open relay), one should be good-neighborly in this way? Or perhaps the prospect of yet another list of $IP_BLOCKS_THAT_ARE_OUR_NETWORK makes this a low priority on the TODO list of DNS operators?

Yes, if your resolvers are open to the world, cache poisoning becomes a lot easier and better targeted -- but then, if your resolvers are vulnerable to that, you would get bit by it sooner or later anyway.

Joe
i have yet to see cogent arguments, other than scaling issues, against running open recursive servers. randy
On Sun, 27 Mar 2005, Randy Bush wrote:
i have yet to see cogent arguments, other than scaling issues, against running open recursive servers.
The common argument for NOT running them is the DNS Smurf attack: forge DNS requests from your victim for some 'large' response (an MX lookup for mci.com probably works for this) and make that happen from a few hundred of your friends/bots. That MX lookup will return 497 bytes, while a query that returns "see root please" is only 236 today.

Larger providers have the problem that you can't easily filter 'customers' from 'non-customers' in a sane and scalable fashion. While they have to run the open resolvers for customer service reasons, they can't adequately protect them from abusers or attackers in all cases.

-Chris
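A back-of-envelope view of the amplification those sizes imply (the forged query size is an estimate, not a figure from the post above):

  forged query for "mci.com MX":   roughly 50-60 bytes on the wire
  reflected answer to the victim:  ~497 bytes
  amplification per resolver:      ~497 / ~55  =  roughly 9x
  497 vs. 236:                     the MX answer is about twice the size
                                   of a referral-only ("see root") reply

Multiply by a few hundred bots, each hitting many open resolvers, and the victim receives far more traffic than the attackers ever send.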
On Mar 27, 2005, at 1:25 PM, Christopher L. Morrow wrote:
Larger providers have the problem that you can't easily filter 'customers' from 'non-customers' in a sane and scalable fashion.
Hrm? Larger providers tend to have old swamp space lying around :) Throw the resolvers on a netblock that's not routed out to your border routers (transit, peering), only the customer facing ones... with a secondary address that is routed. Secondary address doesn't listen for queries, only answers.

And to Randy's point about problems with open recursive nameservers... abusers have been known to cache "hijack". Register a domain, configure an authority with very large TTLs, seed it onto known open recursive nameservers, update domain record to point to the open recursive servers rather than their own. Wammo, "bullet proof" dns hosting. (Yeah, it'd be nice if people didn't listen to non-AA answers to their queries, but they do).
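Going back to the split-address layout in the first paragraph, a minimal sketch of how it might look in BIND-style named.conf (both addresses are made-up placeholders):

  options {
          // address routed only toward customers, never announced to
          // transit/peering: the only address that accepts queries
          listen-on { 192.0.2.53; };

          // globally routed secondary address: used as the source of
          // outbound queries so authoritative servers can reply to us,
          // but never listened on for incoming queries
          query-source address 198.51.100.53;
  };

Off-net users can't reach the listener at all, while the resolver can still query the rest of the DNS and receive the answers.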
And to Randy's point about problems with open recursive nameservers... abusers have been known to cache "hijack". Register a domain, configure an authority with very large TTLs, seed it onto known open recursive nameservers, update domain record to point to the open recursive servers rather than their own. Wammo, "bullet proof" dns hosting.
as has been said here repeatedly, you should not be running servers, recursive or not, on old broken and vulnerable software. randy
On Mar 28, 2005, at 1:11 AM, Randy Bush wrote:
And to Randy's point about problems with open recursive nameservers... abusers have been known to cache "hijack". Register a domain, configure an authority with very large TTLs, seed it onto known open recursive nameservers, update domain record to point to the open recursive servers rather than their own. Wammo, "bullet proof" dns hosting.
as has been said here repeatedly, you should not be running servers, recursive or not, on old broken and vulnerable software.
Huh? I think you do not understand. Do not mistake "cache hijack" for "cache poison". This is _nothing_ to do with what you're running on the recursive nameserver. It is doing _exactly_ what it is supposed to do. Get answers, store in cache, respond to queries from cache if TTL isn't expired.
On Monday 28 Mar 2005 4:54 pm, John Payne wrote:
This is _nothing_ to do with what you're running on the recursive nameserver. It is doing _exactly_ what it is supposed to do. Get answers, store in cache, respond to queries from cache if TTL isn't expired.
The answers from a recursive server won't be marked authoritative (AA bit not set), and so correct behaviour is to discard these records (BIND will log a lame server message as well by default). If your recursive resolver doesn't discard these records, suggest you get one that works ;)

I assumed the reason open recursive servers are a "bad idea" is that you can guess to within a second when they will rerequest a record from the authoritative servers, so you know when to try and send a spoofed answer for a domain you are trying to poison. Thus it makes brute force poisoning attacks less obvious, because you don't have to send thousands of packets till you hit the right time and client id; you just have to guess the right client id, as you can guess the "right time" (for busy domains at least) by asking when the record will expire.

I've never seen a malicious attack of this type in anger, but it is theoretically possible (although much harder against DJB's dnscache because it opens a new port per request), and well documented as a vulnerability of the DNS protocol. For large ISPs I would have thought this was a legitimate concern, but being able to poison one cache, at one small ISP, one time in so many thousand, is a limited result for a lot of effort. Still, if you have a "botnet" spare and no spam runs to process, I guess the only effort is writing the software to try.
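A rough sketch of the odds behind that timing trick (the fixed-source-port assumption is mine; as noted above, dnscache's per-query ports make this much harder):

  query ID space:       16 bits, i.e. 65,536 possible IDs
  blind attack:         spray forged answers continuously and hope one
                        lands in the short window right after a query
  known expiry time:    query the open resolver at the moment the old
                        record expires, then send a burst of N forged
                        answers with distinct IDs; the chance of a hit
                        per expiry is roughly N / 65,536
  example, N = 650:     about a 1% chance per TTL expiry, repeatable
                        every time the record times out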
* Simon Waters:
This is _nothing_ to do with what you're running on the recursive nameserver. It is doing _exactly_ what it is supposed to do. Get answers, store in cache, respond to queries from cache if TTL isn't expired.
The answers from a recursive servers won't be marked authoritative (AA bit not set), and so correct behaviour is to discard (BIND will log a lame server message as well by default) these records.
Unfortunately, this is not quite true. Brad and Chris are right. I couldn't believe it either, but after a long stare at BIND's is_lame function, I have to agree with them. BIND accepts non-authoritative answers if their additional section looks a bit like a referral. I don't think that this check is deliberately lax; stricter checks are simply harder to do on this particular code path.
If your recursive resolver doesn't discard these records, suggest you get one that works ;)
Which one would? Keep in mind that referrals do not have the AA bit set, so a simple filter wouldn't work.
On Tue, 2005-03-29 at 05:37, Simon Waters wrote:
The answers from a recursive servers won't be marked authoritative (AA bit not set), and so correct behaviour is to discard (BIND will log a lame server message as well by default) these records.
If your recursive resolver doesn't discard these records, suggest you get one that works ;)
In a perfect world, this might be a viable solution. The problem is there are far too many legitimate but "broken" name servers out there. On an average day I log well over 100 lame servers. If I broke this functionality, my helpdesk would get flooded pretty quickly with angry users. HTH, Chris
* Chris Brenton:
In a perfect world, this might be a viable solution. The problem is there are far too many legitimate but "broken" name servers out there. On an average day I log well over 100 lame servers. If I broke this functionality, my helpdesk would get flooded pretty quickly with angry users.
Assuming BIND 9:

	/*
	 * Is the server lame?
	 */
	if (fctx->res->lame_ttl != 0 && !ISFORWARDER(query->addrinfo) &&
	    is_lame(fctx)) {
		log_lame(fctx, query->addrinfo);
		result = dns_adb_marklame(fctx->adb, query->addrinfo,
					  &fctx->domain,
					  now + fctx->res->lame_ttl);
		if (result != ISC_R_SUCCESS)
			isc_log_write(dns_lctx, DNS_LOGCATEGORY_RESOLVER,
				      DNS_LOGMODULE_RESOLVER, ISC_LOG_ERROR,
				      "could not mark server as lame: %s",
				      isc_result_totext(result));
		broken_server = DNS_R_LAME;
		keep_trying = ISC_TRUE;
		goto done;
	}

So if you see something in the logs, it is already broken. 8-) The discussion in this part of the thread focuses on flagging more servers as lame (which are currently not detected by BIND or even logged).
On Mar 29, 2005, at 5:37 AM, Simon Waters wrote:
The answers from a recursive servers won't be marked authoritative (AA bit not set), and so correct behaviour is to discard (BIND will log a lame server message as well by default) these records.
As others have pointed out, BZZZZT
If your recursive resolver doesn't discard these records, suggest you get one that works ;)
Yeah, problem is, it ain't my recursive resolver that's the problem... I don't actually follow links in spam (shock, horror), just pointing out the problem.
On Mon, 2005-03-28 at 01:04, John Payne wrote:
And to Randy's point about problems with open recursive nameservers... abusers have been known to cache "hijack". Register a domain, configure an authority with very large TTLs, seed it onto known open recursive nameservers, update domain record to point to the open recursive servers rather than their own. Wammo, "bullet proof" dns hosting.
I posted a note to Bugtraq on this process about a year and a half ago, as at the time I noticed a few spammers using this technique. Seems they were doing this to protect their NS from retaliatory attacks.

http://cert.uni-stuttgart.de/archive/bugtraq/2003/09/msg00164.html

Large TTLs only get you so far. It all depends on the default setting of max-cache-ttl. For Bind this is 7 days. MS DNS is 24 hours. Obviously spammers can do a lot of damage in 7 days. :(

HTH,
Chris
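For reference, the cap mentioned above is tunable; a minimal BIND-style sketch (the one-day value is only an example, not a recommendation from this thread):

  options {
          // upper bound on how long any record may sit in the cache,
          // no matter what TTL the authoritative server advertises
          max-cache-ttl 86400;    // seconds; Bind's default is 604800 (7 days)
  };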
Chris Brenton wrote:
On Mon, 2005-03-28 at 01:04, John Payne wrote:
And to Randy's point about problems with open recursive nameservers... abusers have been known to cache "hijack". Register a domain, configure an authority with very large TTLs, seed it onto known open recursive nameservers, update domain record to point to the open recursive servers rather than their own. Wammo, "bullet proof" dns hosting.
I posted a note to Bugtraq on this process about a year and a half ago as at the time I noticed a few spammers using this technique. Seems they were doing this to protect their NS from retaliatory attacks. http://cert.uni-stuttgart.de/archive/bugtraq/2003/09/msg00164.html
Large TTLs only get you so far. All depends on the default setting of max-cache-ttl. For Bind this is 7 days. MS DNS is 24 hours. Obviously spammers can do a lot of damage in 7 days. :(
HTH, Chris
TIC: Apparently DNS was designed to be TOO reliable and failure resistant.

As I understand from reading the referenced cert thread, there is the workaround, which is disabling open recursion, and then there are the potential fixes.

1) Registrars being required to verify authority on the delegated-to nameservers (will this break any appreciated valid models?) before activating/changing delegation for a zone. <REAL FIX> If this is all there is to it, then I see no reason why registrar laziness and desire for profit$ should take precedence over ops changes across the board. Is it possible/practical to perpetrate this kind of hijack without registrar cooperation, by first seeding resolvers' caches and then changing NS on the authoritative servers so that future caches will resolve from the seeded resolvers? Is it possible to not even need to change the zone's served NS/SOA and to use the hijacking values from the get-go?

2) Stricter settings as regards all lame delegations -- SERVFAIL by default without recursion/caching attempts?

3) Paranoid checking for situations such as these, by having recursing nameservers periodically check for suspicious NS and glue from the parent zone's POV and compare it to cache, trashing cached records if they do not like the result.

4) Rate limiting?

Since at this point I am out of my depth, I think I'll stop here after a simple question. Are all the local limitations on TTL values a good thing?
On Tue, 2005-03-29 at 08:49, Joe Maimon wrote:
TIC: Apparently DNS was designed to be TOO reliable and failure resistant.
Ya, sometimes security and functionality don't mix all that well. ;-)
As I understand from reading the referenced cert thread, there is the workaround which is disabling open recursion and then there are the potential fixes.
From an admin perspective, this is the way to go. This is a real easy fix with Bind via "allow-recursion". I don't play with MS DNS that often, but the last time I looked recursion was an on/off switch. So if the MS DNS box is Internet accessible, you are kind of hosed.
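A minimal sketch of that Bind fix (the address range is a placeholder for your own customer space):

  options {
          // answer recursive queries only from our own address space;
          // everyone else can still fetch authoritative data, but the
          // server will not recurse on their behalf
          allow-recursion { 127.0.0.1; 192.0.2.0/24; };
  };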
1) Registrars being required to verify Authority in delegated to nameservers (will this break any appreciated valid models?) before activating/changing delegation for zone.<REAL FIX>
Back in the InterNIC days this was SOP. This security check got lost when things went commercial. Not sure if it would be possible to get it back at this point. Too many registrars out there to try and enforce it.

IMHO lack of verification is only part of the problem (that has been going on for years). What has made this more of an issue is registrars that offer immediate response time to changes. This makes it far easier for spammers to move to other stolen resources as required.
Is it possible/practical to perpertrate this kind of hijak without registrar cooperation by first seeding resolver's caches and then changing NS on authoritative so that future caches will resolve from seeded resolvers? Is it possible to not even need to change the zone served NS/SOA and to use the hijaking values from the get-go?
Possibly. I ran into a bug/feature with Bind back in the 8.x days which causes the resolver to go back to the last known authoritative server when a TTL expires. On the plus side, this helps to reduce traffic on the root name servers. On the down side, if the remote name server still claims authority, you will never find the new resource.

I ran into the problem moving a client from one ISP to another while the old ISP was acting vindictive and refused to remove the old records. This of course caused problems for their clients, because when the TTLs expired they kept going back to the old resource. The only way to clear it is a name server restart at every domain looking up your info.

When I reported this, the bug/feature was changed, but I noticed a while back (late 8.x, maybe 9.0) that it is back. So if the purp can get you to the wrong server only once, it may be possible to keep you there.
2) Stricter settings as regards to all lame delegations -- SERVFAIL by default without recursion/caching attempts?
See my last post. IMHO there are too many broken but legitimate name servers out there for this to be functional for most environments.
Are all the local limitations on TTL values a good thing?
In this case, absolutely! With the default Bind setting, a TTL of 3600000 will get quietly truncated to a week. This means a trashed cache will fix itself in one week rather than six. HTH, Chris
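The arithmetic behind "one week rather than six" (assuming the TTL is expressed in seconds, as in the Bugtraq example):

  advertised TTL:  3,600,000 s / 86,400 s per day  =  ~41.7 days  =  ~6 weeks
  Bind's cap:        604,800 s                     =       7 days =   1 week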
When I reported this the bug/feature was changed but I noticed a while back (late 8.x maybe 9.0) that it is back. So if the purp can get you to the wrong server only once it may be possible to keep you there.
It was actually fixed in 9.2.3rc1:

  1429. [bug] Prevent the cache getting locked to old servers.

See this thread: http://marc.theaimsgroup.com/?t=111057230600004&r=1&w=4

Of course I still don't think it's a bug, and it forced people to remember to actually finish the job when they moved their DNS around. But whatever, it's easier than doing an "rndc flushname name" (which finally got put in).

sam
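For reference, a usage sketch of that command (the domain is a placeholder; assumes a BIND version recent enough to ship it):

  # drop the cached records for one name from this resolver's cache
  rndc flushname example.com

  # or, more bluntly, empty the whole cache
  rndc flush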
On Sun, Mar 27, 2005 at 11:36:26AM -0500, Joe Maimon wrote:
Suresh Ramasubramanian wrote:
On Sat, 26 Mar 2005 17:52:56 -0500 (EST), Sean Donelan <sean@donelan.com> wrote:
<snip>
Thank $DEITY for large ISPs running open resolvers on fat pipes .. those do come in quite handy in a resolv.conf sometimes, when I run into this sort of behavior.
--srs
Slightly OT to parent thread...on the subject of open dns resolvers.
Common best practices seem to suggest that doing so is a bad thing. DNS documentation and http://www.dnsreport.com appear to view this negatively.
er... common best practice for YOU... perhaps. dnsreport.com is apparently someone who agrees w/ you. and i know why some COMMERCIAL operators want to squeeze every last lira from the services they offer... but IMRs w/ unrestricted access are a good and valuable tool for the Internet community at large.

IMR? - you know, an Iterative Mode Resolver aka caching server.
Joe
--bill
bmanning@vacation.karoshi.com wrote:
On Sun, Mar 27, 2005 at 11:36:26AM -0500, Joe Maimon wrote:
<snip>
er... common best practice for YOU... perhaps. dnsreport.com is apparently someone who agrees w/ you. and i know why some COMMERCIAL operators want to squeeze every last lira from the services they offer... but IMRs w/ unrestricted access are a good and valuable tool for the Internet community at large.
IMR? - you know, an Iterative Mode Resolver aka caching server.
Joe
--bill
Thanks for the feedback, bill and all else who have responded. Just want to clarify -- that's NOT my position; any resolvers (not like that's a great many big important ones, like others here can attest to) I have run were not purposefully closed off from anyone (who was not being abusive).

Security is critical, but I am from the school that advocates leaving open that which

* may be useful to others
* does not cost me {much} - cost is in terms of {money | cpu | ram | bw | mgmt | what have you}
* takes extra effort to close off
* has no recent history of badness (insert your definition for "recent")
* is easily verifiable (you should know real quick if your DNS cache is poisoned)
* avoids issues on how to make things work now that you have screwed it all up by denying resolving to all [insert all corner cases here] (simply as an example)

Easy to make a road, hard to make a prison.
* Joe Maimon:
Slightly OT to parent thread...on the subject of open dns resolvers.
Common best practices seem to suggest that doing so is a bad thing.
There was some malware which contained hard-coded IP addresses of a few open DNS resolvers (probably in an attempt to escape from DNS-based walled gardens). If one of your DNS resolvers was among them, I'm sure you'd have closed it to the general public, too -- and made sure that your others were closed as well, just in case.
On the other hand, there are a lot of reasons why a DNS operator may return different answers to their own users of their resolvers. Reverse proxy caching is very common. Just about all WiFi folks use cripple DNS as part of their log on. Or my favorite, quarantining infected computers to get the attention of their owners.
sean, solving a layer two problem (mac address) at layer four will bite you in the long run.
Thank $DEITY for large ISPs running open resolvers on fat pipes .. those do come in quite handy in a resolv.conf sometimes, when I run into this sort of behavior.
problem is many walled garden providers, e.g. t-mo, block 53. randy
On Sun, 27 Mar 2005, Randy Bush wrote:
Thank $DEITY for large ISPs running open resolvers on fat pipes .. those do come in quite handy in a resolv.conf sometimes, when I run into this sort of behavior.
problem is many walled garden providers, e.g. t-mo, block 53.
The world could be a better place if there were fewer people who stole service, or if the technologists could come up with more secure systems. In the mean time, we make do with the tools we have available.
problem is many walled garden providers, e.g. t-mo, block 53. The world could be a better place if there were fewer people who stole service, or if the technologists could come up with more secure systems.
ok, tell me. how does allowing my laptop in the united red rug to access the global dns threaten the t-mo hotspot, united, ...?

oh, and then there was the appended glorious one.

randy

---

From: Randy Bush <randy@psg.com>
Date: Mon, 31 Jan 2005 18:47:29 -0800
To: global services cust support <xxx@ual.com>
Subject: wireless in narita red carpet

[ please pass to whoever does tech support for the internet service you provide in the narita red carpet lounge ]

i came through narita on monday 2005.01.31, changing from bangkok to get to seatac. i am an internet engineer since the arpanet, and heavily into network security. i had a very scary and useless experience in the red carpet at narita.

you provide free wireless, but ports other than 25, 80, 110, 443, ... are blocked. so no ssh or other vpns. i.e. YOU FORCE WIRELESS USERS TO BE INSECURE. so, if i was so inclined, i could sit there and tap everyone's email etc.

this is very un-good.

randy
25, 80, 110, 443, ... are blocked. so no ssh or other vpns. i.e. YOU FORCE WIRELESS USERS TO BE INSECURE. so, if i was so inclined, i could sit there and tap everyone's email etc.
I thought everyone ran an ssh server on port 443 by now. It's the easiest way to get through these overbearing firewalls.

--
Regards,
John R. Levine, IECC, POB 727, Trumansburg NY 14886  +1 607 330 5711
johnl@iecc.com, Mayor, http://johnlevine.com,
Member, Provisional board, Coalition Against Unsolicited Commercial E-mail
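A minimal sketch of running ssh on 443 (the hostname is a placeholder; assumes OpenSSH on both ends, and that the box isn't already serving HTTPS on that port):

  # on the server, in sshd_config: listen on 443 in addition to 22
  Port 22
  Port 443

  # from behind the restrictive hotspot firewall
  ssh -p 443 user@shell.example.com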
John Levine wrote:
I thought everyone ran an ssh server on port 443 by now. It's the easiest way to get through these overbearing firewalls.
Inbound:
--------
Agreed. As we all know, applications running on web servers are the easiest way to get into an organization. Run as many routers and firewalls as you like, people will just cut through them.

Some easy questions are:

- How easy is it to break in, applicatively? [secure code & architecture, pen-test, etc. and not just when the site goes live]
- What do you do to protect the application? [application filtering on some level - not many good solutions, sniffer/resets, inline/drop, reverse proxies, etc.]
- Once through the application, what do you do to protect the server? [hardening, ports, services, FW]
- DB security? What's that?
- Once on the server, what do you do to make sure the machine cannot get to the rest of your network? Is your solution local or network based? [PFW? VLAN?] That's an ancient, beaten-to-death issue that people just piss all over.

Web applications today are simply the door into your organization and your network. This is all costly, but you could do some of these things without any additional cost beyond an hour or two of your time. I state the obvious again: protect your web servers!

Outbound:
---------
Try and make sure only HTTP/SSL communication goes through ports 80/443, respectively. Most worthwhile corporate firewalls today support this type of application filtering. It won't help you with spyware like (imo) Kazaa (or legit software) that goes over HTTP, but you get my point.

Aside from being a nice way to circumvent firewalls to go and IRC or use private mail servers, we also lately see many botnet C&Cs using these ports.

It may only be half relevant to nanog, and for that I apologize, but I take the chance to remind people of how important this all is at *ANY* opportunity.

Gadi.
participants (18)

- Alex Bligh
- bmanning@vacation.karoshi.com
- Chris Brenton
- Christopher L. Morrow
- Edward Lewis
- Florian Weimer
- Gadi Evron
- Jeff Kell
- Joe Abley
- Joe Maimon
- John Levine
- John Payne
- Niels Bakker
- Randy Bush
- Sam Hayes Merritt, III
- Sean Donelan
- Simon Waters
- Suresh Ramasubramanian