Hey, SiteFinder is back, again...
www.consumeraffairs.com/news04/2007/11/verizon_search.html

November 3, 2007

Subscribers to Verizon's high-powered fiber-optic Internet service (FiOS) are reporting that when they mistype a Web site address, they get redirected to Verizon's own search engine page -- even if they don't have Verizon's search page set as their default.

[...]

You can guess most of the rest. I guess we didn't get that wooden stake in deep enough last Tuesday...

-- 
A host is a host from coast to coast.................wb8foz@nrk.com
& no one will talk to a host that's close........[v].(301) 56-LINUX
Unless the host (that isn't close).........................pob 1433
is busy, hung or dead....................................20915-1433
I know this is just anecdotal, but I have Verizon FIOS in Northern Virginia and I have not seen sitefinder pop up. I just verified with a few sites to make sure. allan On Nov 3, 2007, at 11:40 PM, David Lesher wrote:
www.consumeraffairs.com/news04/2007/11/verizon_search.html
November 3, 2007
Subscribers to Verizon's high-powered fiber-optic Internet service (FiOS) are reporting that when they mistype a Web site address, they get redirected to Verizon's own search engine page -- even if they don't have Verizon's search page set as their default.
On 11/3/07, Allan Liska <allan@allan.org> wrote:
I know this is just anecdotal, but I have Verizon FIOS in Northern Virginia and I have not seen sitefinder pop up. I just verified with a few sites to make sure.
http://www.irbs.net/internet/nanog/0607/0139.html

oops, I was right (kinda).

-chris
On Nov 4, 2007, at 1:52 AM, Christopher Morrow wrote:
On 11/3/07, Allan Liska <allan@allan.org> wrote:
I know this is just anecdotal, but I have Verizon FIOS in Northern Virginia and I have not seen sitefinder pop up. I just verified with a few sites to make sure.
http://www.irbs.net/internet/nanog/0607/0139.html
oops, I was right (kinda).
Verizon != VeriSign, despite what people think.

A single provider doing this is not equivalent to the root servers doing it. You can change providers, you can't change "." in DNS.

Plus lots of providers do this, in the US and abroad. Lastly, it's trivial to get around, unless your provider is transparently intercepting / redirecting port 53.

-- 
TTFN,
patrick
Patrick W. Gilmore wrote:
Verizon != VeriSign, despite what people think.
A single provider doing this is not equivalent to the root servers doing it. You can change providers, you can't change "." in DNS.
Charter has been doing this for quite some time. If you have security/network/diagnostic tools where you need a DNS failure to get a valid result, you're out of luck: they resolve everything, and you only know something happened if you were doing HTTP to start with. Mistype a telnet, ssh, ftp, mail, etc. hostname and you're in for some considerable confusion until you figure out what's going on...

Jeff
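Jeff's complaint suggests a crude self-test. A minimal sketch (all names here are hypothetical, and `resolve` stands in for whatever lookup routine your tooling uses): probe a few random, almost-certainly-unregistered labels and see whether they all "resolve" to the same answer instead of failing.

```python
# Sketch of a rewrite detector: if several random labels under a zone all
# resolve to one identical answer set rather than returning NXDOMAIN, the
# resolver in use is probably synthesizing answers.
import random
import string


def random_label(length=12):
    """Return a random lowercase label, vanishingly unlikely to be registered."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))


def looks_rewritten(resolve, zone="com", probes=3):
    """`resolve(name)` should return a list of addresses, or raise on NXDOMAIN.
    Returns True when every probe comes back with the same answer set."""
    answers = []
    for _ in range(probes):
        name = "%s.%s" % (random_label(), zone)
        try:
            answers.append(tuple(sorted(resolve(name))))
        except Exception:
            return False  # a real NXDOMAIN (lookup failure) made it through
    return len(set(answers)) == 1
```

This only detects the NXDOMAIN-rewriting case; it says nothing about squatted typo domains, which resolve "legitimately".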
On Sun, Nov 04, 2007 at 08:32:25AM -0500, Patrick W. Gilmore wrote:
A single provider doing this is not equivalent to the root servers doing it. You can change providers, you can't change "." in DNS.
This is true, but Verisign wasn't doing it on root servers, IIRC, but on the .com and .net TLD servers. Not that that's any better.

The last time I heard a discussion of this topic, though, I heard someone make the point that there's a big difference between authority servers and recursing resolvers, which is the same sort of point as above. That is, if you do this in the authority servers for _any_ domain (., .com, .info, or .my.example.org for that matter), it's automatically evil, because of the meaning of "authority". One could argue that it is less evil to do this at recursive servers, because people could choose not to use that service by installing their own full resolvers or whatever. I don't know that I accept the argument, but let's be clear at least in the difference between doing this on authority servers and recursing resolvers.

A

-- 
Andrew Sullivan                         204-4141 Yonge Street
Afilias Canada                          Toronto, Ontario Canada
<andrew@ca.afilias.info>                M2P 2A8
                                        +1 416 646 3304 x4110
Andrew Sullivan (andrew) writes:
The last time I heard a discussion of this topic, though, I heard someone make the point that there's a big difference between authority servers and recursing resolvers, which is the same sort of point as above. That is, if you do this in the authority servers for _any_ domain (., .com, .info, or .my.example.org for that matter), it's automatically evil, because of the meaning of "authority". One could argue that it is less evil to do this at recursive servers, because people could choose not to use that service by installing their own full resolvers or whatever. I don't know that I accept the argument, but let's be clear at least in the difference between doing this on authority servers and recursing resolvers.
Fully agreed. In some ways, if your ISP is stupid enough to want to do this, and they don't ever want to return NXDOMAIN to their customers' resolvers, and their ToS specify that they do it, well, they're welcome. But the moment you start to mess around with the authority that is being delegated to you, then it's Evil.

I think ICANN should probably come out and specify that doing wildcard matching on TLD delegations is Not A Good Thing.

Phil
What effect will Allegedly Secure DNS have on such provider hijackings, both of DNS and crammed-in content?

[Assuming we ever get to such; I know ASD is in line to deploy just after perpetual motion and honest politicians..]

-- 
A host is a host from coast to coast.................wb8foz@nrk.com
& no one will talk to a host that's close........[v].(301) 56-LINUX
Unless the host (that isn't close).........................pob 1433
is busy, hung or dead....................................20915-1433
On Nov 5, 2007, at 8:23 AM, David Lesher wrote:
What effect will Allegedly Secure DNS have on such provider hijackings, both of DNS and crammed-in content?
If what Verizon is doing is rewriting NXDOMAIN at their caching servers, DNSSEC will _not_ help. Caching servers do the validation and the insertion of the search engine IP addresses in the response would occur after the validation. Regards, -drc
On Mon, 5 Nov 2007 11:17:29 -0800 David Conrad <drc@virtualized.org> wrote:
On Nov 5, 2007, at 8:23 AM, David Lesher wrote:
What effect will Allegedly Secure DNS have on such provider hijackings, both of DNS and crammed-in content?
If what Verizon is doing is rewriting NXDOMAIN at their caching servers, DNSSEC will _not_ help. Caching servers do the validation and the insertion of the search engine IP addresses in the response would occur after the validation.
Depends on whether or not the endpoints delegate DNSSEC validation to Verizon. They don't have to. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On Nov 5, 2007, at 11:54 AM, Steven M. Bellovin wrote:
On Nov 5, 2007, at 8:23 AM, David Lesher wrote:
What effect will Allegedly Secure DNS have on such provider hijackings, both of DNS and crammed-in content?
If what Verizon is doing is rewriting NXDOMAIN at their caching servers, DNSSEC will _not_ help. Caching servers do the validation and the insertion of the search engine IP addresses in the response would occur after the validation.
Depends on whether or not the endpoints delegate DNSSEC validation to Verizon. They don't have to.
Right. People can run their own caching servers and can set up those servers to do DNSSEC validation after setting up (and maintaining) trust anchors for any DNSSEC signed zone they might want to validate. Of course, if they do this, the NXDOMAIN redirection won't be an issue, since the customer will be bypassing the caching server that is doing the redirection...

As an aside, I note that Verizon is squatting on address space allocated to APNIC. From the self-help web page offered to opt out of this "service" (specific to the particular hardware customers might be using, e.g., http://netservices.verizon.net/portal/link/help/item?case=c32535), they state:

  "5. Change the last octet of the Primary & Secondary DNS Server
   addresses to 14.
   Example: You look up the DNS information and the server numbers are:
     123.123.123.12 Primary DNS
     123.123.123.12 Secondary DNS
   You would change the addresses to the following when statically
   assigning them to the computer or modem/router.
     123.123.123.14 Primary DNS
     123.123.123.14 Secondary DNS
   Note that the .14 is the special set of servers that will opt you out
   of the DSN Assistance program."

123.0.0.0/8 is delegated to APNIC, who have allocated it to CNC Group in China:

  % whois -h whois.apnic.net 123.123.123.0
  % [whois.apnic.net node-1]
  % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
  inetnum:      123.112.0.0 - 123.127.255.255
  netname:      CNCGROUP-BJ
  descr:        CNCGROUP Beijing province network
  descr:        China Network Communications Group Corporation
  descr:        No.156,Fu-Xing-Men-Nei Street,
  descr:        Beijing 100031
  country:      CN
  ...

Regards,
-drc
David Conrad wrote:
As an aside, I note that Verizon is squatting on address space allocated to APNIC. From the self-help web page offered to opt out of this "service" (specific to the particular hardware customers might be using, e.g., http://netservices.verizon.net/portal/link/help/item?case=c32535), they state:
[ snip example ]

I believe you're reading into that too much -- they're not squatting on anything in 123/8; they're just using "123.123.123" as example first three octets to demonstrate the concept of changing the last octet. I.e., their "standard" DNS servers always live at x.y.z.12 and their "opt-out" DNS servers for the same customers live at x.y.z.14.

I suspect that the example is confusing enough that plenty of customers are making the same mistake you did in reading it, and entering "123.123.123.14", but I don't believe they're actually USING those IPs, as it is (if not clearly enough) labeled as "Example".

Regards,
Tim Wilde

-- 
Tim Wilde
twilde@cymru.com
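Tim's reading of Verizon's instructions reduces to a one-line substitution. A sketch, purely for illustration (the function name is made up, and "123.123.123" remains just the example prefix from the quoted page, not a real server):

```python
def opt_out_resolver(addr, last_octet="14"):
    """Per the quoted instructions: keep the first three octets of the
    assigned resolver address and replace the last octet (.12 -> .14)."""
    octets = addr.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(octets[:3] + [last_octet])
```

So `opt_out_resolver("123.123.123.12")` gives "123.123.123.14" -- but as Tim notes, the real input must be whatever x.y.z.12 address the customer was actually assigned.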
Do common endpoints (Windows Vista/XP, MacOS X 10.4/5) support DNSSEC Validation? If not, then do people have a choice?

Regards

Bora

On 11/5/07 11:54 AM, "Steven M. Bellovin" <smb@cs.columbia.edu> wrote:
On Mon, 5 Nov 2007 11:17:29 -0800 David Conrad <drc@virtualized.org> wrote:
On Nov 5, 2007, at 8:23 AM, David Lesher wrote:
What effect will Allegedly Secure DNS have on such provider hijackings, both of DNS and crammed-in content?
If what Verizon is doing is rewriting NXDOMAIN at their caching servers, DNSSEC will _not_ help. Caching servers do the validation and the insertion of the search engine IP addresses in the response would occur after the validation.
Depends on whether or not the endpoints delegate DNSSEC validation to Verizon. They don't have to.
On Nov 5, 2007, at 2:13 PM, Bora Akyol wrote:
Do common endpoints (Windows Vista/XP, MacOS X 10.4/5) support DNSSEC Validation? If not, then do people have a choice?
Yes and no. If you run your own caching server and that caching server supports DNSSEC and you enable DNSSEC and set up/maintain the trust anchors, then yes. So yes, pedantically speaking, there is a choice. Pragmatically speaking, I doubt this is really an option for any but the geekiest and/or terminally paranoid. Even the first bit of the previous "if" statement is probably beyond most... Regards, -drc P.S. From experience, running your own caching server can result in problems when connecting via T-Mobile hotspot and some hotel authentication abominations... (sigh).
David Conrad wrote:
On Nov 5, 2007, at 2:13 PM, Bora Akyol wrote:
Do common endpoints (Windows Vista/XP, MacOS X 10.4/5) support DNSSEC Validation? If not, then do people have a choice?
Yes and no.
Of course, nobody supports the "Evil bit" today, so some change would be necessary one way or the other to deal with this. One wonders whether Verizon's behavior is enough to cause Microsoft to turn on a caching resolver. One issue Dave didn't raise is that firewalls often block DNS requests from OTHER than caching resolvers.

Cough. So, how much is that NXDOMAIN worth to you?

Eliot
On 11/5/07, Eliot Lear <lear@cisco.com> wrote:
Cough. So, how much is that NXDOMAIN worth to you?
So, here's the problem really... NXDOMAIN is being judged as a 'problem'. It's really only a 'problem' for a small number of APPLICATIONS on the Internet. One could even argue that in a web-browser the 'is nxdomain a problem' is still up to the browser to decide how best to answer the USER of that browser/application. Many, many applications expect dns to be the honest broker, to let them know if something exists or not and they make their minds up for the upper layer protocols accordingly.

DNS is fundamentally a basic plumbing bit of the Internet. There are things built around it operating sanely and according to generally accepted standards. Switching a behavior because you believe it to be 'better' for a large and non-coherent population is guaranteed to raise at least your support costs, if not your customer-base's ire. Assuming that all the world is a web-browser is at the very least naive and at worst wantonly/knowingly destructive/malfeasant.

MarkA and others have stated: "Just run a cache-resolver on your local LAN/HOST/NET", except that's not within the means of joe-random-sixpack, nor is it within the abilities of many enterprise/SMB folks, talking from experience chatting up misbehaving enterprise/banking/SMB customers first hand. What's to keep the ISP from answering: provider-server.com when they ask for Yahoo.com or Google.com or akamai-deployed-server.com aside from (perhaps) a threat of lawyers calling?

Anyway, hopefully someone gets their head on straight about this before other problems arise.

-Chris
On Mon, 5 Nov 2007 23:46:08 -0800 "Christopher Morrow" <christopher.morrow@gmail.com> wrote:
On 11/5/07, Eliot Lear <lear@cisco.com> wrote:
Cough. So, how much is that NXDOMAIN worth to you?
So, here's the problem really... NXDOMAIN is being judged as a 'problem'. It's really only a 'problem' for a small number of APPLICATIONS on the Internet. One could even argue that in a web-browser the 'is nxdomain a problem' is still up to the browser to decide how best to answer the USER of that browser/application. Many, many applications expect dns to be the honest broker, to let them know if something exists or not and they make their minds up for the upper layer protocols accordingly.
DNS is fundamentally a basic plumbing bit of the Internet. There are things built around it operating sanely and according to generally accepted standards. Switching a behavior because you believe it to be 'better' for a large and non-coherent population is guaranteed to raise at least your support costs, if not your customer-base's ire. Assuming that all the world is a web-browser is at the very least naive and at worst wantonly/knowingly destructive/malfeasant.
MarkA and others have stated: "Just run a cache-resolver on your local LAN/HOST/NET", except that's not within the means of joe-random-sixpack, nor is it within the abilities of many enterprise/SMB folks, talking from experience chatting up misbehaving enterprise/banking/SMB customers first hand. What's to keep the ISP from answering: provider-server.com when they ask for Yahoo.com or Google.com or akamai-deployed-server.com aside from (perhaps) a threat of lawyers calling?
Hey -- I can so run a cache/resolver... More seriously: you're right; most people can't and won't. But a majority of customers in that space are using small NATs. Those certainly can; in fact, they often do. It's just that today, they simply talk to their upstreams, rather than starting from the root and going down. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Since this is Verizon, one wonders why this has never been tried on wrong, non-working phone numbers? "Visit your local Chevy dealer, no interest for 12 months! ... We're sorry, the number you have reached..." Is it illegal?

How long before they'll just make you sit thru a few seconds of pitch before connecting any call? Or any website? How hard is it to stick up a quick bit of flash (e.g.) and then fade to the page you requested?

I don't think this is quite slippery-slopism. If you've been in this business 20+ years, a long time, you remember having computers you owned that weren't designed to efficiently flash ads at you -- no "Free Trial of" this and "would you like to upgrade now?" that, etc. It's as if there's a magical constant at work in personal computing: the number of minutes per hour of productive work is constant, despite technological improvements. For many years it was limited by the number of reboots; now, as systems have become more reliable, it's become limited by the number of ads and similar distractions you have to wade through to get anything done.

It really all comes down to the same problem, a flat-rate pricing model, and marketeers realizing they can exploit this mercilessly at no incremental cost (spam, "site finder", whatever.) Without any pricing feedback in the loop, all you can really do is try to implement more and more somewhat arbitrary rules (and ways of enforcing them) to try to control behavior -- and by whose say-so? One is basically forced into a role analogous to the neighborhood association or zoning board, perhaps telling people what they can and cannot do with their property (granted, the latter seems to work in a similarly charged environment.)

This message brought to you by...

-- 
        -Barry Shein

The World              | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD        | Login: Nationwide
Software Tool & Die    | Public Access Internet     | SINCE 1989     *oo*
In article <E64EBBA5-3520-4E6A-9F00-6A884C383FE7@virtualized.org> you write:
On Nov 5, 2007, at 8:23 AM, David Lesher wrote:
What effect will Allegedly Secure DNS have on such provider hijackings, both of DNS and crammed-in content?
If what Verizon is doing is rewriting NXDOMAIN at their caching servers, DNSSEC will _not_ help. Caching servers do the validation and the insertion of the search engine IP addresses in the response would occur after the validation.
Regards, -drc
All you have to do is move the validation to a machine you control to detect this garbage.

	dnssec-enable yes;
	dnssec-validation yes;
	forward only;
	forwarders { <Verizon's caching servers>; };
	dnssec-lookaside . trust-anchor <dlv registry>;

All lookups which Verizon has interfered with from signed zones will fail.

	Mark
Mark, On Nov 5, 2007, at 5:31 PM, Mark Andrews wrote:
All you have to do is move the validation to a machine you control to detect this garbage.
You probably don't need to bother with DNSSEC validation to stop the Verizon redirection. All you need do is run a caching server.
	dnssec-enable yes;
	dnssec-validation yes;
	forward only;
	forwarders { <Verizon's caching servers>; };
Why bother forwarding?
dnssec-lookaside . trust-anchor <dlv registry>;
You forgot the bit where everybody you want to do a DNS lookup on signs (and maintains) their zones and trusts and registers with <dlv registry> (of which there is exactly one that I know of, and that one has 17 entries in it the last I looked). You also didn't mention that everyone doing this will reference the DLV registry on every non-cached lookup. Puts a _lot_ of trust (both security wise and operationally) in <dlv registry>...
All lookups which Verizon has interfered with from signed zones will fail.
Yeah, and Verizon customers would get a timeout (after how long?) instead of a more quickly returned A (or maybe a AAAA) RR to a Verizon controlled search engine. Not really sure the cure is better than the disease.

Also not sure what the point is -- most common typos are already squatted upon and validly registered to an AdSense pay-per-click web page, typically a search engine (e.g., www.baknofamerica.com). Seems to me the slimeballs have won yet again...

Regards,
-drc
Mark,
On Nov 5, 2007, at 5:31 PM, Mark Andrews wrote:
All you have to do is move the validation to a machine you control to detect this garbage.
You probably don't need to bother with DNSSEC validation to stop the Verizon redirection. All you need do is run a caching server.
Yep.
	dnssec-enable yes;
	dnssec-validation yes;
	forward only;
	forwarders { <Verizon's caching servers>; };
Why bother forwarding?
It was just to prove that you could detect this coming out of a ISP's servers.
dnssec-lookaside . trust-anchor <dlv registry>;
You forgot the bit where everybody you want to do a DNS lookup on signs (and maintains) their zones and trusts and registers with <dlv registry> (of which there is exactly one that I know of, and that one has 17 entries in it the last I looked). You also didn't mention that everyone doing this will reference the DLV registry on every non-cached lookup. Puts a _lot_ of trust (both security wise and operationally) in <dlv registry>...
There are also other lists of trust anchors. With 17 entries, there aren't a lot of queries that need to be made to have the entire name space covered by cached NSEC records, which DLV will use.
All lookups which Verizon has interfered with from signed zones will fail.
Yeah, and Verizon customers would get a timeout (after how long?) instead of a more quickly returned A (or maybe a AAAA) RR to a Verizon controlled search engine. Not really sure the cure is better than the disease.
But then you can log a complaint that DNSSEC doesn't work using their caching resolvers. Or this just gives you the heads up to find the web form to change the servers returned by DHCP (there is contributed code to do this linkage for BIND), or to manually update the forwarders. I.e., it's useful for those who use ISPs that haven't yet gone over to the dark side. :-)
Also not sure what the point is -- most common typos are already squatted upon and validly registered to an AdSense pay-per-click web page, typically a search engine (e.g., www.baknofamerica.com). Seems to me the slimeballs have won yet again...
That's a different issue on a different battle front. Mark
Regards,
-drc

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: Mark_Andrews@isc.org
I think ICANN should probably come out and specify that doing wildcard matching on TLD delegations is Not A Good Thing.
You mean like http://www.icann.org/committees/security/sac015.htm ? Regards, -drc
On Mon, Nov 05, 2007 at 10:54:05AM -0500, Andrew Sullivan <andrew@ca.afilias.info> wrote a message of 29 lines which said:
One could argue that it is less evil to do this at recursive servers, because people could choose not to use that service by installing their own full resolvers or whatever.
It depends. There are three possible ways for an access provider to do it, in order of ascending nastiness:

1) Provide, by default, DNS recursors which do the mangling but also provide another set of recursors which do the right thing (and the user can choose, for instance via a dedicated Web interface for his account).

2) Provide DNS recursors which do the mangling. Power users can still install BIND on their laptop and talk directly to the root name servers, thereby wasting resources. (Variant: they can add an ORNS in their resolving configuration file.)

3) Provide DNS recursors which do the mangling *and* block users, either by filtering out port 53 or by giving them a RFC 1918 address with no NAT for this port.

I've seen 1) and 2) in the wild and I am certain I will see 3) one day or the other.
On Mon, 5 Nov 2007 17:16:11 +0100 Stephane Bortzmeyer <bortzmeyer@nic.fr> wrote:
On Mon, Nov 05, 2007 at 10:54:05AM -0500, Andrew Sullivan <andrew@ca.afilias.info> wrote a message of 29 lines which said:
One could argue that it is less evil to do this at recursive servers, because people could choose not to use that service by installing their own full resolvers or whatever.
It depends.
There are three possible ways for an access provider to do it, in order of ascending nastiness:
Perhaps it is time for resolver libraries to have the ability to equate certain IP addresses with NXDOMAIN. At least that way we can recognize that it is happening and fix our own servers on an individual basis. Sort of a DNS blacklist.

-- 
D'Arcy J.M. Cain <darcy@druid.net>    |  Democracy is three wolves
http://www.druid.net/darcy/           |  and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP)     |  what's for dinner.
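D'Arcy's blacklist idea could be sketched as a thin filter in front of the stub resolver. A minimal illustration (the API is invented; the address shown is the one historically associated with SiteFinder, included only as an example entry):

```python
# Sketch of a stub-resolver filter that maps known redirect addresses back
# to NXDOMAIN. Names and the example entry are illustrative, not a real API.
KNOWN_REDIRECT_IPS = {"64.94.110.11"}  # e.g., SiteFinder's old address


class NxDomain(Exception):
    """Signal that an answer should be treated as 'no such domain'."""


def filter_answers(addresses, blacklist=KNOWN_REDIRECT_IPS):
    """Drop blacklisted addresses from an answer; if nothing survives,
    raise NxDomain so the caller sees the failure the provider hid."""
    cleaned = [a for a in addresses if a not in blacklist]
    if not cleaned:
        raise NxDomain("answer contained only known redirect addresses")
    return cleaned
```

The hard part, of course, is maintaining the blacklist -- exactly the "DNS blacklist" curation problem D'Arcy alludes to.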
On 05.11.2007, at 17:16, Stephane Bortzmeyer wrote:
3) Provide DNS recursors which do the mangling *and* block users, either by filtering out port 53 or by giving them a RFC 1918 address with no NAT for this port.
I've seen 1) and 2) in the wild and I am certain I will see 3) one day or the other.
Just recently in NYC, the hotel "internet" connection intercepted all UDP traffic to *:53, redirecting it to their resolver. It not only served their own A records for names that should have returned NXDOMAIN, but also returned "better" answers than you normally would get (requesting pages from www.weather.com delivered pages from www.accuweather.com). Of course, it even did that after I had paid and clicked through their walled garden site.

Stefan

-- 
Stefan Bethke <stb@lassitu.de>
Fon +49 170 346 0140
I believe it's been said here many times before, but when in public venues, the only way to be sure about anything in regards to traffic filtering and manipulation is to VPN into your corporate network and bypass all that. Unfortunately, it makes streaming the latest episode of Heroes a little jerky.

Frank

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Stefan Bethke
Sent: Monday, November 05, 2007 11:38 PM
To: Stephane Bortzmeyer
Cc: nanog@merit.edu
Subject: Re: Hey, SiteFinder is back, again...

On 05.11.2007, at 17:16, Stephane Bortzmeyer wrote:
3) Provide DNS recursors which do the mangling *and* block users, either by filtering out port 53 or by giving them a RFC 1918 address with no NAT for this port.
I've seen 1) and 2) in the wild and I am certain I will see 3) one day or the other.
Just recently in NYC, the hotel "internet" connection intercepted all UDP traffic to *:53, redirecting it to their resolver. It not only served their own A records for names that should have returned NXDOMAIN, but also returned "better" answers than you normally would get (requesting pages from www.weather.com delivered pages from www.accuweather.com). Of course, it even did that after I had paid and clicked through their walled garden site.

Stefan

-- 
Stefan Bethke <stb@lassitu.de>
Fon +49 170 346 0140
On Nov 5, 2007, at 10:54 AM, Andrew Sullivan wrote:
On Sun, Nov 04, 2007 at 08:32:25AM -0500, Patrick W. Gilmore wrote:
A single provider doing this is not equivalent to the root servers doing it. You can change providers, you can't change "." in DNS.
This is true, but Verisign wasn't doing it on root servers, IIRC, but on the .com and .net TLD servers. Not that that's any better.
Touché. Guess I wasn't awake when I wrote that. But .com/.net is still bad (as you say).
The last time I heard a discussion of this topic, though, I heard someone make the point that there's a big difference between authority servers and recursing resolvers, which is the same sort of point as above. That is, if you do this in the authority servers for _any_ domain (., .com, .info, or .my.example.org for that matter), it's automatically evil, because of the meaning of "authority". One could argue that it is less evil to do this at recursive servers, because people could choose not to use that service by installing their own full resolvers or whatever. I don't know that I accept the argument, but let's be clear at least in the difference between doing this on authority servers and recursing resolvers.
I would argue against such a blanket statement.

Doing this in an authority for a TLD is bad, because most people don't have a choice of TLD. (Or at least think they don't.) But if I want to put in a wildcard for *.ianai.net, then there is nothing evil about that. In fact, I've been doing so for years (just 'cause I'm lazy), and no one has even noticed. It is my domain, I should be allowed to do whatever I want with it as long as I pay my $10/year and don't use it to abuse someone else.

Hijacking user requests on caching name servers is very, very bad, because 1) the user probably doesn't know they are being hijacked, and 2) even if the user did, most wouldn't know how to get around it. So you're back to the TLD authority problem, there is no choice in the matter.

-- 
TTFN,
patrick
When Verisign hijacked the wildcard DNS space for .com/.net, they effectively encoded the Evil Bit in the response by returning Sitefinder's IP address as the answer. In theory you could interpret that as damage and route around it, or at least build ACLs to block any traffic to that IP address except for TCP/80 and TCP/UDP/53. But if random ISPs are going to do that at random locations in their IP address space, and possibly serve their advertising from servers that also host useful information, it's really difficult to block.

Does anybody know _which_ protocols Verizon's web-hijacker servers are supporting? Do they at least reject ports 443, 22, 23, etc.?

In contrast, Microsoft's IE browser responds to DNS no-domain responses by pointing to a search engine, and I think the last time I used IE it let you pick your own search engine or turn it off if you didn't like MS's default. That's reasonable behaviour for an application, though it's a bit obsequious for my taste.
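The ACL idea above can be reduced to a simple forwarding predicate. A sketch only (the redirect address is the one historically associated with SiteFinder, and the allowed port set just mirrors the "TCP/80 and TCP/UDP/53" exception described above):

```python
# Sketch of the described ACL: permit web and DNS toward the known redirect
# address, drop everything else destined for it. Values are illustrative.
REDIRECT_IP = "64.94.110.11"
ALLOWED_TO_REDIRECT = {("tcp", 80), ("tcp", 53), ("udp", 53)}


def permit(dst_ip, proto, dport):
    """Return True when a packet should be forwarded."""
    if dst_ip != REDIRECT_IP:
        return True  # traffic to everywhere else is unaffected
    return (proto, dport) in ALLOWED_TO_REDIRECT
```

As the paragraph notes, this only works while the redirect target is one well-known address; per-ISP redirects to arbitrary addresses defeat it.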
On Mon, Nov 05, 2007 at 11:52:02AM -0500, Patrick W. Gilmore wrote:
authority for a TLD is bad, because most people don't have a choice of TLD. (Or at least think they don't.)
I don't think that's the reason; I think the reason is that someone who needs to rely on Name Error can't do it, if the authority server is set up in such a way as to hand out falsehoods.
But if I want to put in a wildcard for *.ianai.net, then there is nothing evil about that. In fact, I've been doing so for years (just 'cause I'm lazy), and no one has even noticed. It is my domain, I should be allowed to do whatever I want with it as long as I pay my $10/year and don't use it to abuse someone else.
I'm not sure I agree. I think that it's probably true that, if you have a wildcard that actually resolves so that everyone can use the services they thought they were trying to talk to, there's no basis for complaint (to the extent one thinks wildcards are a good idea). But if you're doing wildcarding so that people get all manner of strange results if they happen not to be arriving on port 80, then I think it's evil in any case.

I _also_ think it's evil to serve wildcards on authority servers for largeish (100s, anyway) zones, in almost every case. If the domain gets big enough that you have that many hosts, then others' ability to diagnose surprises depends partly on their ability to get meaningful answers about what things are and are not out there on the net. For very small domains, perhaps there is some argument that the user community is so small that the benefit outweighs the costs. But in truth, if I had my 'druthers, I'd go back in time and eliminate the wildcard feature from the outset, at least for the public Internet. (I can see an argument in split-view contexts, note.)

And no, it isn't "your domain". This is one of the pervasive myths of the namespace -- one that has been expanding as privatisation of the DNS has become the norm. The truth is that namespaces are rented, and are subject to all manner of terms and conditions. If you don't believe me, read your contract with your registrar. There are current conditions about labels' relations to other labels, for example, in all gTLDs (these are the UDRP policies). There are rules about what you may and may not register in .aero or .pro, and what you must and must not do with the resulting domain once you've been approved. Many country codes have rules about residency, and if you move you will find you lose your domain as well. Policy -- or, I suppose, politics -- is what constrains TLDs from enforcing more stringent additional rules.
I can't make up my mind whether a "no wildcard, ever" policy would in fact be a good one to have. But it is surely an open question, and something that could be imposed on gTLD registrations with sufficient support inside ICANN. (There are some rather tricky regulations in this area, though.)
Hijacking user requests on caching name servers is very, very bad, because 1) the user probably doesn't know they are being hijacked, and 2) even if the user did, most wouldn't know how to get around it. So you're back to the TLD authority problem: there is no choice in the matter.
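For what it's worth, this kind of rewriting is at least detectable from the client side. A minimal sketch of the idea (the probe labels are random and the resolver is injected as a plain function so nothing here depends on any particular ISP's behaviour): ask for a few nonsense names that cannot legitimately exist; if they all resolve, and all to the same address, the upstream resolver is almost certainly synthesizing answers.

```python
import random
import string

def random_probe_names(n=3, tld="com"):
    """Generate labels that are vanishingly unlikely to be registered."""
    return [
        "".join(random.choices(string.ascii_lowercase, k=24)) + "." + tld
        for _ in range(n)
    ]

def looks_like_nxdomain_rewriting(resolve, probes):
    """resolve(name) returns an IP string, or None on an honest NXDOMAIN.

    If every nonsense probe resolves -- and to the same address --
    the upstream resolver is almost certainly rewriting NXDOMAIN.
    """
    answers = [resolve(name) for name in probes]
    if any(a is None for a in answers):
        return False               # at least one honest NXDOMAIN
    return len(set(answers)) == 1  # all redirected to one landing page

# Fake resolvers standing in for real lookups:
hijacked = lambda name: "198.51.100.7"   # hypothetical landing-page IP
honest = lambda name: None               # honest NXDOMAIN

probes = random_probe_names()
print(looks_like_nxdomain_rewriting(hijacked, probes))  # True
print(looks_like_nxdomain_rewriting(honest, probes))    # False
```

Of course, a user who could run this probe is also a user who could switch resolvers; the point in the email above stands for everyone else.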
This is the response I expected, but I have to say that I'm frustrated by the answer, even during the alternate hours when I agree with it.

What we're really saying in this case (and I mean "we", because I say similar things often enough) is that consumer choice is an uninteresting lever, because most consumers are mindless sinks who'll take whatever's given to them. If that's the case, why is everyone furious when various kinds of heavy regulations are proposed? We can't have libertarian paradise and guaranteed correct behaviour simultaneously. Libertarians claimed historically that this dilemma could be solved by market mechanisms. If the market mechanism won't actually work, though, what alternative correction do you have to propose beyond "some government sets the rules, and enforces them"? Isn't that regulation?

A

--
Andrew Sullivan                         204-4141 Yonge Street
Afilias Canada                          Toronto, Ontario Canada
<andrew@ca.afilias.info>                M2P 2A8
                                        +1 416 646 3304 x4110
On Sat, 3 Nov 2007, Christopher Morrow wrote:
http://www.irbs.net/internet/nanog/0607/0139.html
oops, I was right (kinda).
I don't think we're going to put the genie back in the bottle, despite the best efforts of some IETFers. I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these "address correction" and "domain parking" and "domain tasting" people to use for their keen "Web 2.0" ideas. And all the other non-Web protocols which get confused can treat those artificially generated crap/answers like NXDOMAIN. Yes, I know it sounds like the evil bit; but if these folks are so convinced people really want this crap/address correction...
On Sun, 4 Nov 2007 11:52:11 -0500 (EST) Sean Donelan <sean@donelan.com> wrote:
And all the other non-Web protocols which get confused can treat those artificially generated crap/answers like NXDOMAIN. Yes, I know it sounds like the evil bit; but if these folks are so convinced people really want this crap/address correction...
They're not convinced their customers want it; they simply say that publicly. They're convinced that (a) they're going to make money from it from equally sleazy advertisers/marketdroids, and (b) their customers will be either too clueless or too sheeplike to do anything. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Sean Donelan wrote:
I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these "address correction" and "domain parking" and "domain tasting" people to use for their keen "Web 2.0" ideas.
Yes, it sounds like the evil bit. Why would anyone bother to set it? Eliot
On Sun, 4 Nov 2007, Eliot Lear wrote:
Sean Donelan wrote:
I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these "address correction" and "domain parking" and "domain tasting" people to use for their keen "Web 2.0" ideas.
Yes, it sounds like the evil bit. Why would anyone bother to set it?
Two reasons:

1) By standardizing the process, it removes the excuse for using various hacks and duct tape.

2) Because the villains in Bond movies don't view themselves as evil. Google is happy to pre-check the box to install their Toolbar, OpenDNS is proud they redirect phishing sites with DNS lookups, Earthlink says it improves the customer experience, and so on.

If they don't set the bit using the standardized process, then they lose their fig leaf.
Sean,
Yes, it sounds like the evil bit. Why would anyone bother to set it?
Two reasons
1) By standardizing the process, it removes the excuse for using various hacks and duct tape.
2) Because the villains in Bond movies don't view themselves as evil. Google is happy to pre-check the box to install their Toolbar, OpenDNS is proud they redirect phishing sites with DNS lookups, Earthlink says it improves the customer experience, and so on.
Forgive my skepticism, but what I would envision happening is resolver stacks adding a switch that would be on by default, and would translate the response back to NXDOMAIN. At that point we would be right back where we started, only after a lengthy debate, an RFC, a bunch of code, numerous bugs, and a bunch of "I told you sos". Or put another way: what is a client resolver supposed to do in the face of this bit? Eliot
Sean,
Yes, it sounds like the evil bit. Why would anyone bother to set it?
Two reasons
1) By standardizing the process, it removes the excuse for using various hacks and duct tape.
2) Because the villains in Bond movies don't view themselves as evil. Google is happy to pre-check the box to install their Toolbar, OpenDNS is proud they redirect phishing sites with DNS lookups, Earthlink says it improves the customer experience, and so on.
Forgive my skepticism, but what I would envision happening is resolver stacks adding a switch that would be on by default, and would translate the response back to NXDOMAIN. At that point we would be right back where we started, only after a lengthy debate, an RFC, a bunch of code, numerous bugs, and a bunch of "I told you sos".
The other half of this is that it probably isn't *appropriate* to encourage abuse of the DNS in this manner, and if you actually add a framework to do this sort of thing, it amounts to tacit (or explicit) approval, which will lead to even more sites doing it.

Consider where it could lead. Pick something that's already sketchy, such as hotel networks. Creating the perfect excuse for them to map every domain name to 10.0.0.1, force it through a web proxy, and then have their tech support people tell you that "if you're having problems, make sure you set the browser-uses-evilbit-dns". And that RFC mandate to not do things like this? Ignored. It's already annoying to try to determine what a hotel means if they say they have "Internet access."

Reinventing the DNS protocol in order to intercept odd stuff on the Web seems to me to be overkill and bad policy. Could someone kindly explain to me why the proxy configuration support in browsers could not be used for this, to limit the scope of damage to the web browsing side of things? I realize that the current implementations may not be quite ideal for this, but wouldn't it be much less of a technical challenge to develop a PAC or PAC-like framework to do this in an idealized fashion, and then actually do so?

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then
I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
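The attraction of the PAC approach above is that the fixup only ever reaches the browser. Real proxy auto-config files are JavaScript functions named FindProxyForURL(url, host); the sketch below just mirrors the decision logic in Python, with the resolver injected as a predicate and a purely hypothetical proxy address:

```python
def find_proxy_for_url(host, resolves):
    """Python mirror of the decision a PAC-like hook would make.

    `resolves` is a predicate: does `host` resolve in honest DNS?
    The proxy address below is hypothetical, not any real service.
    """
    if resolves(host):
        return "DIRECT"                    # real site: no interference
    return "PROXY fixup.isp.example:8080"  # typo: hand off to a fixup proxy

# Only the browser consults this logic; SMTP, SSH, and everything
# else still sees an honest NXDOMAIN from the resolver.
print(find_proxy_for_url("example.com", lambda h: True))   # DIRECT
print(find_proxy_for_url("exmaple.cmo", lambda h: False))  # PROXY fixup.isp.example:8080
```

The scope of damage is then exactly what Joe asks for: opt-in, browser-only, and invisible to every other protocol.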
On Nov 5, 2007, at 7:40 AM, Joe Greco wrote:
Reinventing the DNS protocol in order to intercept odd stuff on the Web seems to me to be overkill and bad policy. Could someone kindly explain to me why the proxy configuration support in browsers could not be used for this, to limit the scope of damage to the web browsing side of things? I realize that the current implementations may not be quite ideal for this, but wouldn't it be much less of a technical challenge to develop a PAC or PAC-like framework to do this in an idealized fashion, and then actually do so?
Because that would require user intervention. Even with a willing userbase, you will never get 100% adoption, and that will affect your revenues. IOW: Because it won't make as much $$. In general, I don't think "make more money" is a bad motivation. (Hell, it's one of my main motivations.) But it has to be tempered by the "greater good", or we end up with an unworkable system, and then everyone makes less money in the long run. IMHO, of course. -- TTFN, patrick
* Sean Donelan:
I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these "address correction" and "domain parking" and "domain tasting" people to use for their keen "Web 2.0" ideas.
And for all the other non-Web protocols which get confused, can treat that artificially generated crap/answers like NXDOMAIN.
It's not that simple, you need three states: original, rewritten NODATA, and rewritten NXDOMAIN. Perhaps a general "original RCODE" is necessary; I haven't checked this.
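Florian's three-state point can be made concrete with a sketch. All names here are invented for illustration; no such DNS option exists. The idea is that a single "synthetic" bit cannot tell a rewritten NODATA apart from a rewritten NXDOMAIN, so the original RCODE would have to ride along with the synthesized answer:

```python
from dataclasses import dataclass, field
from enum import Enum

class Rcode(Enum):
    NOERROR = 0    # name exists (NODATA if the answer section is empty)
    NXDOMAIN = 3   # name does not exist

@dataclass
class Answer:
    addresses: list
    synthetic: bool = False                  # hypothetical "artificial answer" bit
    original_rcode: Rcode = Rcode.NOERROR    # what the authority really said

def restore(answer):
    """Undo rewriting on the client side. A lone bit loses information;
    carrying the original RCODE lets us reconstruct the honest response."""
    if not answer.synthetic:
        return answer.original_rcode, answer.addresses
    return answer.original_rcode, []  # drop the synthesized addresses

# An ISP rewrote an NXDOMAIN into a pointer at its search page:
rewritten = Answer(["10.1.2.3"], synthetic=True, original_rcode=Rcode.NXDOMAIN)
print(restore(rewritten))
```

With only the bit, `restore` could strip the fake addresses but would have to guess between NODATA and NXDOMAIN, which is exactly the gap Florian identifies.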
On Sun, 4 Nov 2007 11:52:11 -0500 (EST) Sean Donelan <sean@donelan.com> wrote:
I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these "address correction" and "domain parking" and "domain tasting" people to use for their keen "Web 2.0" ideas.
Yes, let's let the IETF go off for 7 years to debate and try to put into an RFC something else that won't actually be used. Sorry Sean, you've lost me on this one. :-) John
In article <200711051726.lA5HQpft019903@larry.centergate.com> you write:
On Sun, 4 Nov 2007 11:52:11 -0500 (EST) Sean Donelan <sean@donelan.com> wrote:
I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these "address correction" and "domain parking" and "domain tasting" people to use for their keen "Web 2.0" ideas.
Yes, let's let the IETF go off for 7 years to debate and try to put into an RFC something else that won't actually be used. Sorry Sean, you've lost me on this one. :-)
John
You already have the bits for SE (and other signed infrastructure zones) that allow you to detect when this sort of garbage is pulled. All you have to do is deploy a DNSSEC aware resolver. Mark
Hi,

Based on the procedures they document for opting out, this doesn't look like SiteFinder-style authoritative wildcarding. It looks more like caching-server NXDOMAIN rewriting. If so, it's easy to get around: just run your own caching server. It also means you couldn't defeat this using DNSSEC (if it were actually deployed).

Regards,

-drc

On Nov 3, 2007, at 8:40 PM, David Lesher wrote:
www.consumeraffairs.com/news04/2007/11/verizon_search.html
November 3, 2007
Subscribers to Verizon's high-powered fiber-optic Internet service (FiOS) are reporting that when they mistype a Web site address, they get redirected to Verizon's own search engine page -- even if they don't have Verizon's search page set as their default.
,,,,,
You can guess most of the rest.
I guess we didn't get that wooden stake in deep enough last Tuesday...
-- A host is a host from coast to coast.................wb8foz@nrk.com & no one will talk to a host that's close........[v].(301) 56-LINUX Unless the host (that isn't close).........................pob 1433 is busy, hung or dead....................................20915-1433
participants (23)

- Allan Liska
- Andrew Sullivan
- Barry Shein
- Bill Stewart
- Bora Akyol
- Christopher Morrow
- D'Arcy J.M. Cain
- David Conrad
- David Lesher
- Eliot Lear
- Florian Weimer
- Frank Bulk - iNAME
- Jeff Kell
- Joe Greco
- John Kristoff
- Mark Andrews
- Patrick W. Gilmore
- Phil Regnauld
- Sean Donelan
- Stefan Bethke
- Stephane Bortzmeyer
- Steven M. Bellovin
- Tim Wilde