VeriSign's rapid DNS updates in .com/.net
VeriSign Naming and Directory Services (VNDS) currently generates new versions of the .com/.net zone files twice per day. On September 8, 2004, VNDS is scheduled to deploy a new feature that will enable VNDS to update the .com/.net zones more frequently, reflecting the registration activity of the .com/.net registrars in near real time. After the rapid DNS update is implemented, the elapsed time from registrars' add or change operations to the visibility of those adds or changes in all 13 .com/.net authoritative name servers is expected to average less than five minutes.

The rapid update process will batch domain name adds and domain name changes every few seconds. The serial number in the .com/.net zones' SOA records will increase with each batch of changes applied. As described in a message to the NANOG list in January [1], these serial numbers are now based on UTC time encoded as the number of seconds since the UNIX epoch (00:00:00 GMT, 1 January 1970).

VNDS will continue to publish .com/.net zone files twice per day as part of the TLD Zone File Access Program. [2] These zone files will continue to reflect the state of the .com/.net registry database at the moment zone generation begins.

VNDS does not anticipate any negative consequences from deployment of rapid updates to the .com/.net zones. However, as a courtesy we are providing the Internet community with 60 days' advance notice of the change to the update process.

Some questions and answers about rapid updates for .com/.net are available at http://www.verisign.com/nds/naming/rapid_update/faq.html.

Matt

--
Matt Larson <mlarson@verisign.com>
VeriSign Naming and Directory Services

[1] http://www.merit.edu/mail.archives/nanog/2004-01/msg00115.html
[2] http://www.verisign.com/nds/naming/tld/
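Since the announcement says the new serials are UTC time as seconds since the UNIX epoch, a serial observed in an SOA record can be decoded directly into a timestamp. A minimal sketch (the example serial value is illustrative, not an actual observed serial):

```python
from datetime import datetime, timezone

def serial_to_utc(serial: int) -> datetime:
    """Decode a .com/.net SOA serial as seconds since the UNIX epoch (UTC)."""
    return datetime.fromtimestamp(serial, tz=timezone.utc)

print(serial_to_utc(0))           # 1970-01-01 00:00:00+00:00, the epoch itself
print(serial_to_utc(1089400000))  # a moment in mid-2004
```

A side effect of this scheme is that comparing two serials directly tells you how many seconds apart the corresponding zone generations were.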
At 03:20 PM 7/9/2004, you wrote:
time. After the rapid DNS update is implemented, the elapsed time from registrars' add or change operations to the visibility of those adds or changes in all 13 .com/.net authoritative name servers is expected to average less than five minutes.
Very cool! Kudos! This is good news from Verisign on NANOG for a change. :)

Does this also apply to domains with other registrars? From the wording of your message above, it appears that it does, which is great news. Does this apply to authoritative name server changes as well?

Also, does this apply to customers who have had their domains suspended due to non-payment? It is always tough for our support desk to tell a customer they need to pay their bill to registrar X and then wait 24-48 hours. If this will end that mess too, that's even better.

-Robert

Tellurian Networks - The Ultimate Internet Connection
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
"Good will, like a good name, is got by many actions, and lost by one." - Francis Jeffrey
Very cool! Kudos! This is good news from Verisign on NANOG for a change. :) Does this also apply to domains with other registrars? From your message wording above, it appears that is the case which is great news. Does this apply to authoritative name server changes as well? Also, does this apply to customers who have had their domains suspended due to non-payment? That is always tough for our support desk to tell a customer they need to pay their bill to registrar X then wait 24-48 hours. If this will end that mess too, that's even better.
Seconded. This is very cool and something I think everyone has wanted for a long time. [Devil's Advocate Hat On] So domain hijacking can now take place in seconds in the middle of the night? [Devil's Advocate Hat Off] And you can fix hijacked domains in seconds!! DJ
On Fri, 9 Jul 2004 Valdis.Kletnieks@vt.edu wrote:
On Fri, 09 Jul 2004 16:00:30 EDT, Deepak Jain said:
And you can fix hijacked domains in seconds!!
<Devil's Advocate Hat On>
Or social-engineer somebody to "fix" a "hijacked" domain in seconds.. :)
<Hat Off>
all still dependent on the 'its hijackable' to begin with, right? So what changed really?
On Fri, 9 Jul 2004, Deepak Jain wrote:
all still dependent on the 'its hijackable' to begin with, right? So what changed really?
The window to be notified and respond probably just shrunk by an enormous factor. Everything is hijackable.
I wasn't aware you got a notification upon hijack...
The window to be notified and respond probably just shrunk by an enormous factor. Everything is hijackable.
I wasn't aware you got a notification upon hijack...
You may... you may not. If you don't, it's definitely a hijack. If you did and you were able to prevent it, it's not a hijack. It really depends on the registrar, I think.

As far as cancelling domains purchased with jacked credit cards goes... Verisign doesn't get a refund from ICANN or whoever if the domain is cancelled after the first two weeks or something... so why should Verisign cancel the domain, when it helps their total-domains-registered rankings and THEY had to pay for it?

DJ
On Fri, 09 Jul 2004 20:37:18 -0000, "Christopher L. Morrow" said:
all still dependent on the 'its hijackable' to begin with, right? So what changed really?
"Hmm... that phone call 2 hours ago sounded fishy... I'd better re-double-check." A working scam for 1 hour 50 minutes with 5-minute updates; a good chance of being stopped before deployment with 12-hour updates. Yes, on the flip side, the hijacking is *stopped* sooner - but for many classes of attacks that involve control of a nameserver, a few minutes can be enough....
It is cool, but where is the value in this (I mean 5-minute) rapid update for .com and the other base domains? I would want rapid DNS when running an enterprise zone (with dynamic updates) or when running a dynamic-DNS service (for those who use dynamic IPs); but for .com and .net it is just a public-relations feature - the registration term is 1 year, so 5 minutes vs. half a day makes no difference. (I am not saying that it is bad; I just do not see any reason for the celebration - the DNS system has caches and delays measured in hours anyway, and when you register xxxx.com you are doing it for a few years.)

----- Original Message -----
From: "Matt Larson" <mlarson@verisign.com>
To: <nanog@merit.edu>
Sent: Friday, July 09, 2004 12:20 PM
Subject: VeriSign's rapid DNS updates in .com/.net
On Jul 10, 2004, at 1:19 PM, Alexei Roudnev wrote:
It is cool, but where is the value in this (I mean 5-minute) rapid update for .com and the other base domains? I would want rapid DNS when running an enterprise zone (with dynamic updates) or when running a dynamic-DNS service (for those who use dynamic IPs); but for .com and .net it is just a public-relations feature - the registration term is 1 year, so 5 minutes vs. half a day makes no difference.
It makes a big difference to people who sell web/mail/etc. services that include the domain name. It means that someone who pays for a new website through an automated system doesn't have to wait 12-24 hours for it to be live, just a few minutes. It also means that changes can be made to host records quickly, which is important for people who don't plan well or have unexpected changes that they want propagated.

I'm appreciative of this change -- but FYI, they aren't the only TLD operators doing this; there are quite a few doing near-instant changes to their respective zones.

The only thing I would still want is the ability to create multiple host records of the same name but with various values. At least the opposite, multiple host names pointing to the same value, is now allowed. That's good enough for me. :)

-davidu
Hmm... maybe you are correct - if you sell service to 'consumers' (inexperienced customers), they do not expect any delays between 'payment completed' and 'I can see my brand new domain WWW.HOW-COOL-I-AM.COM'. And TTLs/caches do not get in your way here, because you had not requested this domain before. This is still just public relations, but very useful, I agree.

PS. I know about the other operators; I just wanted to verify who appreciates this improvement. I agree that it is good for the average consumer market (I want to show my new web site to my friends NOW, while I spend the weekend downloading my photos, and I do not want to know about 24h, IP hops, DNS clients, TTLs and so on...). One more step in making the Internet as 'simple to use' a reality as houses, cars, TV....
David A.Ulevitch wrote:
I'm appreciative of this change -- but fyi, they aren't the only TLD operators doing this, there are quite a few doing near-instant changes to their respective zones.
I just registered a new .org and it had visibility from external NS not more than 15 minutes later (I would have paid closer attention to just how long it took, but didn't even think to check on it until reading this thread). Maybe I just got lucky and hit their update window (I registered ~ 3:15AM UTC on 11-July-2004 fwiw). Anyone know the status of .org updates?
On Jul 10, 2004, at 7:35 PM, Mike Lewinski wrote:
David A.Ulevitch wrote:
I'm appreciative of this change -- but fyi, they aren't the only TLD operators doing this, there are quite a few doing near-instant changes to their respective zones.
I just registered a new .org and it had visibility from external NS not more than 15 minutes later (I would have paid closer attention to just how long it took, but didn't even think to check on it until reading this thread).
Maybe I just got lucky and hit their update window (I registered ~ 3:15AM UTC on 11-July-2004 fwiw). Anyone know the status of .org updates?
Nope, .org is run this way also (since the handover to udns, if I remember right). I don't know of a comprehensive list of TLDs in this setup, but I would say that the list is only growing... I learn of TLDs running in this fashion every once in a while. [1]

-davidu

1: not to imply any connection between when I notice it and when it is actually implemented. ;)
On Sat, 10 Jul 2004, David A.Ulevitch wrote:
It also means that changes can be made to host records quickly which is important for people who don't plan well or have unexpected changes that they want propagated.
I'm appreciative of this change -- but fyi, they aren't the only TLD operators doing this, there are quite a few doing near-instant changes to their respective zones.
.biz, .info, etc. do this as well. It is an excellent policy, and it is convenient not to have to wait several hours for your new .com domain to appear online. The disadvantage, of course, is that the abusers who register domains at a rapid clip in those TLDs, setting sub-1-minute TTLs on them and pointing the domain names at IPs that are basically compromised or virus-infected boxes, will now also start using .com/.net.

There should be some way of fixing this, like requiring registrars to do more due diligence when registering domains, and some better/faster procedures to take down [say] phisher domains with fake contact info. Well yes, there is already a process, but it could sure do with more streamlining.

regards
--srs
On Fri, 9 Jul 2004, Matt Larson wrote:
VeriSign Naming and Directory Services (VNDS) currently generates new versions of the .com/.net zones files twice per day. VNDS is scheduled to deploy on September 8, 2004 a new feature that will enable VNDS to update the .com/.net zones more frequently to reflect the registration activity of the .com/.net registrars in near real time. After the rapid DNS update is implemented, the elapsed time from registrars' add or change operations to the visibility of those adds or changes in all 13 .com/.net authoritative name servers is expected to average less than five minutes.
Questions/comments:

1. Currently, SLD delegation info for the .com/.net TLDs seems to be updated about twice a day, and the new entire TLD DNS zone is published as one bulk operation. These changes seem to be synced pretty well to changes in the whois database as seen at whois.crsnic.net, so the listing of nameservers in whois is almost always correct. Is my understanding correct that after this change, SLD DNS delegation will no longer be synced to the nameserver listing in whois?

2. Is it only changes to SLD delegation (the listing of nameservers or the IPs of nameservers) that will be affected? Does that mean that other changes to a domain, such as moving it from one registrar to another or deleting it, will still be done once per day? Related - what about status codes as submitted by the registrar? In particular, would a change of status that causes a domain to temporarily or permanently not be delegated (but keeps the listing of nameservers in whois) also be processed immediately?
VNDS will continue to publish .com/.net zone files twice per day as part of the TLD Zone File Access Program. [2] These zone files will continue to reflect the state of the .com/.net registry database at the moment zone generation begins.
3. Is my understanding correct that with this change, those who participate in the bulk whois program will no longer be able to see the entire history of DNS delegation changes for a domain? In that case, you remove the value of participation in bulk TLD zone downloads for certain kinds of research activity, and in addition may actually be breaking the service agreement for providing this kind of data. To cover that "hole" you would need to provide a way to download not only the entire TLD zone but also the changes made to domains since the last time the entire TLD zone file was published (to give an example of what I'm asking for: the ability to download "UPDATES" as in the routeviews directories, rather than the entire BGP dump from the "RIBS" directory).

Please note that being able to find the entire history of a domain's delegation changes is important in quite a number of cases: for example, when you need to show that either your DNS registrar or ISP screwed up (and then corrected itself but does not want to admit it, because that might require paying compensation per the SLA); to show improper unauthorized use of the domain when it is suspected the domain may have been hijacked (but the DNS was changed for half an hour and then returned back); or when you're tracking domains used by spammers that change their info from one zombie computer to another every 10-30 minutes (you want to be able to create the entire list of zombies associated with such a domain and report them to ISPs, not just the one or two seen once or twice per day, because otherwise the spammers would just register a different domain when the reported one is deactivated while still keeping use of the same zombies).
VNDS does not anticipate any negative consequences of deployment of rapid updates to the .com/.net zones. However, as a courtesy we are providing the Internet community with 60 days advance notice of the change to the update process.
4. My last comment: I believe that such public announcements of changes should go to other mailing lists, not just NANOG, which covers primarily those concerned with network routing in the US and Canada, and not necessarily those handling DNS operations at your ISP. I'm subscribed to at least three DNS-specific mailing lists and have not seen anything there. The ones I remember by name are isp-dns.com, bind-users, and, I think, the DNS list at RIPE. I'm not suggesting you make the announcement on exactly those lists (or only on those lists plus NANOG), but if Verisign is trying to have better involvement with the community and to give viable prior notice worldwide of changes it is making to the DNS system, some investigation should be done into where it is best to make such notices so that they reach the largest number of people concerned with DNS technical support worldwide.
Some questions and answers about rapid updates for .com/.net are available at http://www.verisign.com/nds/naming/rapid_update/faq.html.
[1] http://www.merit.edu/mail.archives/nanog/2004-01/msg00115.html
Additionally, I notice that the page you included as a reference to TLD zone file information on the Verisign website (link [2] above) does not seem to contain any reference to this upcoming change (or a link to your own FAQ - the other link above) or any ability for the public to comment on such things.

--
William Leibzon
Elan Networks
william@elan.net
Matt, others,

I am quite concerned about these zone update speed improvements because they are likely to result in considerable pressure to reduce TTLs **throughout the DNS** for little to no good reason.

It will not be long before the marketeers discover that they do not deliver what they (implicitly) promise to customers in the case of **changes and removals**, rather than just additions to a zone. Reducing TTLs across the board will be the obvious *solution*. Yet the DNS architecture is built around effective caching!

Are we sure that the DNS as a whole will remain operational when (not if) this happens in a significant way? Can we still mitigate that trend by education of marketeers and users?

Daniel
Good point! You can reduce TTLs to such a point that the servers will become preoccupied with doing something other than providing answers. Ray
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Daniel Karrenberg Sent: Thursday, July 22, 2004 3:12 AM To: Matt Larson Cc: nanog@merit.edu Subject: Re: VeriSign's rapid DNS updates in .com/.net
Matt, others,
I am quite concerned about these zone update speed improvements because they are likely to result in considerable pressure to reduce TTLs **throughout the DNS** for little to no good reason.
It will not be long before the marketeers will discover that they do not deliver what they (implicitly) promise to customers in case of **changes and removals** rather than just additions to a zone.
Reducing TTLs across the board will be the obvious *solution*.
Yet, the DNS architecture is built around effective caching!
Are we sure that the DNS as a whole will remain operational when (not if) this happens in a significant way?
Can we still mitigate that trend by education of marketeers and users?
Daniel
Well, a naive calculation, based on reducing the TTL from 24 hours to 15 minutes to match Verisign's new update times, would suggest that the number of queries would increase by (24 * 60) / 15 = 96 times (or twice that if you factor in the Nyquist interval).

Are there any resources out there that have information on global DNS statistics? i.e. the average TTL currently in use.

But I guess it remains to be seen if this will have a knock-on effect like that described below. Verisign is only doing this for the nameserver records at the present time - it just depends on whether the expectation of such rapid changes gets pushed on down.

Sam

On Thu, 22 Jul 2004, Ray Plzak wrote:
Good point! You can reduce TTLs to such a point that the servers will become preoccupied with doing something other than providing answers.
Ray
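Sam's back-of-the-envelope figure above generalizes: if each busy cache re-fetches a record once per TTL window, the steady-state upstream query rate scales inversely with the TTL. A throwaway sketch of that same naive estimate (the multiplier is the arithmetic from the message, not a measurement):

```python
def query_multiplier(old_ttl_s: int, new_ttl_s: int) -> float:
    """Naive ratio of upstream queries per busy cache when the TTL shrinks:
    each cache re-asks once per TTL window, so load scales as old/new."""
    return old_ttl_s / new_ttl_s

# 24 hours down to 15 minutes, as in the calculation above:
print(query_multiplier(24 * 3600, 15 * 60))  # 96.0
```

This assumes every cache stays busy enough to refresh each record every TTL; lightly used caches would see a smaller increase.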
Hang fire... I don't see any reference to adjusting the TTL in the Verisign announcement. They say they will update the zones every 5 minutes from the registry data. These are not the same thing (or did I miss that bit?)

Also, isn't a lot of this dependent on the NS records in the second-level gTLD domains, which are hosted by the ISPs... so this part doesn't change?

Steve

On Thu, 22 Jul 2004, Sam Stickland wrote:
Well, a naive calculation, based on reducing the TTL to 15 mins from 24 hours to match Verisign's new update times, would suggest that the number of queries would increase by (24 * 60) / 15 = 96 times? (or twice that if you factor in for the Nyquist interval).
Are there any resources out there that have information on global DNS statistics? i.e. the average TTL currently in use.
But I guess it remains to be seen if this will have a knock-on effect like that described below. Verisign is only doing this for the nameserver records at the present time - it just depends on whether the expectation of such rapid changes gets pushed on down.
Sam
On 22.07 12:26, Stephen J. Wilcox wrote:
I dont see any reference to adjusting the TTL in the verisign announcement.
Correct.
They say they will update the zones every 5 minutes from the registry data.
These are not the same things (or did I miss that bit?)
Correct.
Also, isnt a lot of this dependent on the NS records in the second level gtlds which is hosted by the ISPs.. so this part doesnt change?
Correct.

What I am concerned about is the pressure to lower TTLs across the board if the increase in zone update speed creates expectations that it alone cannot fulfill. I observe this being sold as "instantaneous updates" instead of "instantaneous additions". When this becomes clear, the pressure will be to deliver what the salespeople promised. This will result in the obvious "solution": lower TTLs everywhere. I am not sure the DNS will remain stable if TTLs are lowered to a couple of seconds throughout.

I am suggesting clearer marketing: "Quick additions: yes. Quick changes/deletions: no."

Note that I am not concerned about *judicious* lowering of TTLs in preparation for changes, or to provide services such as Akamai. It is more a general trend of many independent actors serving no real purpose that worries me. Caveat emptor.

Daniel
On Thu, 22 Jul 2004, Daniel Karrenberg wrote:
What I am concerned about is the pressure to lower TTLs across the board if the increase in zone update speed creates expectations that it alone cannot fulfill.
I observe this being sold as "instantaneous updates" instead of "instantaneous additions". When this becomes clear, the pressure will be to deliver what the salespeople promised. This will result in the obvious "solution": lower TTLs everywhere.
What you're suggesting is that once the Verisign marketing force moves in, this will cause pressure ("imitating Verisign" - who really does that?) on the marketing departments of ISPs and DNS hosting providers to offer the same level of service? I agree with you there, but I think most of what you will see is marketing by companies offering outsourced DNS service, such as Enom or Zoneedit. In those cases, however, most such DNS service providers already offer their customers lower-than-normal TTLs. And the great majority of domains are not hosted with such providers but with an ISP that takes care of the customer's entire DNS configuration and does not need to change IPs that often. I don't think this puts pressure on such providers to change TTLs. So while some decrease in general TTLs may happen, I don't think it will be so overwhelming as to cause serious alarm.

--
William Leibzon
Elan Networks
william@elan.net
Before a big panic starts: they can restore things to the way they were if there is an event of such proportion as to totally hose the entire network or any major portion of it, until they fix any major issue with these changes....

-Henry

--- Sam Stickland <sam_ml@spacething.org> wrote:
Well, a naive calculation, based on reducing the TTL to 15 mins from 24 hours to match Verisign's new update times, would suggest that the number of queries would increase by (24 * 60) / 15 = 96 times? (or twice that if you factor in for the Nyquist interval).
Are there any resources out there that have information on global DNS statistics? i.e. the average TTL currently in use.
But I guess it remains to be seen if this will have a knock-on effect like that described below. Verisign is only doing this for the nameserver records at the present time - it just depends on whether the expectation of such rapid changes gets pushed on down.
Sam
I think I ought to qualify my earlier email - I certainly didn't mean to suggest that this would happen. I meant merely to comment on what the expected increase in load might be if we did see a trend towards lower TTLs. Any trend towards lower TTLs would be outside of Verisign's control anyhow, and if it did happen, it would no doubt be a gradual effect.

Which brings me back to my original question - does anyone know of any statistics for TTL values?

Sam

On Thu, 22 Jul 2004, Henry Linneweh wrote:
Before a big panic starts: they can restore things to the way they were if there is an event of such proportion as to totally hose the entire network or any major portion of it, until they fix any major issue with these changes....
-Henry
I got forwarded this URL from Patrick McManus. I haven't had a chance to read the paper myself yet so I won't comment on it. I've included the link and the abstract below. A choice quote is "these results suggest that the performance of DNS is not as dependent on aggressive caching as is commonly believed, and that the widespread use of dynamic, low-TTL A-record bindings should not degrade DNS performance."

http://nms.lcs.mit.edu/papers/dns-imw2001.html

Abstract: This paper presents a detailed analysis of traces of DNS and associated TCP traffic collected on the Internet links of the MIT Laboratory for Computer Science and the Korea Advanced Institute of Science and Technology (KAIST). The first part of the analysis details how clients at these institutions interact with the wide-area DNS system, focusing on performance and prevalence of failures. The second part evaluates the effectiveness of DNS caching. In the most recent MIT trace, 23% of lookups receive no answer; these lookups account for more than half of all traced DNS packets since they are retransmitted multiple times. About 13% of all lookups result in an answer that indicates a failure. Many of these failures appear to be caused by missing inverse (IP-to-name) mappings or NS records that point to non-existent or inappropriate hosts. 27% of the queries sent to the root name servers result in such failures. The paper presents trace-driven simulations that explore the effect of varying TTLs and varying degrees of cache sharing on DNS cache hit rates. The results show that reducing the TTLs of address (A) records to as low as a few hundred seconds has little adverse effect on hit rates, and that little benefit is obtained from sharing a forwarding DNS cache among more than 10 or 20 clients. These results suggest that the performance of DNS is not as dependent on aggressive caching as is commonly believed, and that the widespread use of dynamic, low-TTL A-record bindings should not degrade DNS performance.
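The paper's trace-driven methodology can be illustrated with a toy simulation (a hypothetical sketch, not the authors' code): replay a sequence of (timestamp, name) lookups through a single cache and measure the hit rate at different TTLs.

```python
# Toy trace-driven cache simulation in the style of the MIT study
# (illustrative only; names and trace are invented).
def hit_rate(trace, ttl):
    """Fraction of lookups served from cache, given (time, name)
    lookups in time order and a uniform TTL in seconds."""
    expires = {}  # name -> time at which the cached answer expires
    hits = 0
    for t, name in trace:
        if expires.get(name, -1) > t:
            hits += 1
        else:
            expires[name] = t + ttl  # miss: fetch and cache
    return hits / len(trace)

# Toy trace: two names each re-resolved every 100 s for an hour.
trace = [(t, n) for t in range(0, 3600, 100)
         for n in ("a.example", "b.example")]
print(hit_rate(trace, 300))   # ~0.67 with a 300 s TTL
print(hit_rate(trace, 3600))  # ~0.97 with a 1-hour TTL
```

Even this toy model shows the paper's qualitative point: once the TTL exceeds the typical inter-lookup gap, further increases buy relatively little additional hit rate.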
Sam
duane wessels' presentation at the last eugene nanog meeting distinguished between two kinds of traffic received at f-root during his sampling work: crap: 97.9%; non-crap: 2.1%. the "crap" category includes requestors who do not seem to cache the responses they hear, thus rendering the actual TTL moot. therefore if there were a drop in TTL for root-zone data, it would only be a multiplier against 2.1% of f-root's present volume.

but i agree with daniel. the reason verisign is doing this has got to be because ultradns does it, and .ORG therefore has marketing hoopla that .COM/.NET lacks, and parity was needed. the primary beneficiaries of this new functionality are spammers and other malfeasants, and the impact of having it in many TLD's will be to put downward pressure on TTL's. this all needs to be looked at very carefully.

-- Paul Vixie
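Vixie's point about the multiplier applying only to the caching slice can be made concrete with a hedged sketch (the function and the 100k qps figure are invented for illustration; only the 2.1% comes from his message):

```python
# Illustrative sketch: a TTL-driven query multiplier only affects the
# fraction of traffic from resolvers that actually honor TTLs
# (Vixie's "non-crap" 2.1%); the rest already ignores caching.
def new_load(total_qps: float, caching_fraction: float,
             multiplier: float) -> float:
    caching = total_qps * caching_fraction
    non_caching = total_qps - caching
    return non_caching + caching * multiplier

# Hypothetical 100k qps server, 2.1% caching traffic, TTL cut 4x:
print(new_load(100_000, 0.021, 4.0))  # only ~6% more total load
```

So even a 4x refresh-rate increase barely moves the total at a server whose traffic is dominated by non-caching clients, which is why Daniel's concern in the next message centers on servers with very different traffic mixes.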
On 22.07 17:08, Paul Vixie wrote:
.... therefore if there were a drop in TTL for root-zone data, it would only be a multiplier against 2.1% of f-root's present volume.
I am not worried so much about the root servers here because of the reasons you cite. The root server system is engineered to cope with hugely excessive loads already. I am worried about all the other DNS servers that have to deal with much lesser query loads and might feel the impact of lowered TTLs much more.
... and the impact of having it in many TLD's will be to put downward pressure on TTL's. this all needs to be looked at very carefully.
Yes, we need to keep an eye on this and argue against lowering TTLs across the board for little good reason.
----- Original Message ----- From: "Daniel Karrenberg" <daniel.karrenberg@ripe.net> To: "Paul Vixie" <vixie@vix.com> Cc: <nanog@merit.edu> Sent: Thursday, July 22, 2004 3:05 PM Subject: Re: VeriSign's rapid DNS updates in .com/.net
Infospace / Authorize.Net and their successors have their TTLs set for 10 minutes and that's just plain goofy. Plus, TTLs at 600 or below have always been the calling card for a spammer . . . er, not that I am accusing them of spamming, rather they are just straining DNS queries. Peter
On Thu, 22 Jul 2004, Paul Vixie wrote:
the primary beneficiaries of this new functionality are spammers and other malfeasants
It appears your glass is half empty rather than half full. The primary beneficiaries are all current and future .com/.net domain holders: timely and predictable zone updates from one's parent are a good and useful feature. Mistakes can be fixed more rapidly and zone administrators who know what they are doing can effect changes quickly.

On Fri, 23 Jul 2004, Paul Vixie wrote:
but when someone says, later, that the .COM zone generator ought to use a ttl template of 300 rather than 86400 in order that changes and deletions can get the same speedy service as additions, i hope that icann will say "no."
Paul, as you know, the TTL of parent-side NS RRsets when the data sought is in the immediate child zone is largely irrelevant because of credibility, which I described in http://www.merit.edu/mail.archives/nanog/2004-07/msg00255.html. I also stated in that message that VeriSign has no intention of changing the current 48-hour TTL on delegation NS RRsets in .com/.net.

On Thu, 22 Jul 2004, Daniel Karrenberg wrote:
I am not worried so much about the root servers here because of the reasons you cite. The root server system is engineered to cope with hugely excessive loads already. I am worried about all the other DNS servers that have to deal with much lesser query loads and might feel the impact of lowered TTLs much more.
If a zone owner lowers a TTL and causes an increase in load, most of the foot being shot off is his or her own: the zone's own name servers will bear the brunt of the increased query load. I agree with Daniel's earlier statement that this is an education issue. Does anyone want to co-author an Internet-Draft on the topic of choosing appropriate TTLs?

Matt

--
Matt Larson <mlarson@verisign.com>
VeriSign Naming and Directory Services
If a zone owner lowers a TTL and causes an increase in load, most of the foot being shot off is his or her own: the zone's own name servers will bear the brunt of the increased query load.
Maybe, but don't forget that when BIND9 and DJBDNS caches find expired nameserver address (A) records they don't trust any cached data and start them back at the roots. And in the case of BIND9, it sends both A and A6 queries for each nameserver in the list. For example, microsoft.com's five nameservers have A records with TTL of one hour. Worst case we might expect every BIND9 cache to send 10 queries to the roots (then the TLDs) every hour, just for these nameserver addresses. Duane W.
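Duane's worst-case arithmetic can be sketched as follows (a hedged illustration; the function name is invented, and the numbers are the ones in his message):

```python
# Back-of-envelope for Duane's worst case: a BIND9-style cache that,
# when the nameserver address records expire, re-queries every
# nameserver for both A and A6 records, starting back at the roots.
def worst_case_queries_per_hour(nameservers: int, ttl_s: int,
                                record_types: int = 2) -> float:
    """Upper bound on refresh queries per hour for one busy cache."""
    refreshes_per_hour = 3600 / ttl_s
    return nameservers * record_types * refreshes_per_hour

# microsoft.com: 5 nameservers, 1-hour A-record TTL, A + A6 lookups
print(worst_case_queries_per_hour(5, 3600))  # 10.0 queries/hour
```

This is per cache, which is Duane's point: the load lands on the roots and TLDs multiplied across every resolver on the Internet, not just on the zone owner's own servers.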
On Fri, 23 Jul 2004, Duane Wessels wrote:
Maybe, but don't forget that when BIND9 and DJBDNS caches find expired nameserver address (A) records they don't trust any cached data and start them back at the roots. And in the case of BIND9, it sends both A and A6 queries for each nameserver in the list.
Do they really send A6 queries? Haven't we decided to go back to AAAA now? -- William Leibzon Elan Networks william@elan.net
the primary beneficiaries of this new functionality are spammers and other malfeasants,
I think this is a true statement. I think it is important to keep in mind that registry operators "compete" for TLD franchises, and where those "competitions" occur, this statement is not believed to be true. Eric
On Thu, Jul 22, 2004 at 08:27:45PM +0000, Eric Brunner-Williams in Portland Maine wrote:
the primary beneficiaries of this new functionality are spammers and other malfeasants,
I think this is a true statement.
Has anyone done any studies to prove this conjecture? If this was true, maybe those registries who do perform this particular service today ought to slow down their update frequency. Mark -- Mark Kosters markk@verisignlabs.com Verisign Applied Research
On Thu, 22 Jul 2004 17:04:24 EDT, Mark Kosters said:
Has anyone done any studies to prove this conjecture? If this was true, maybe those registries who do perform this particular service today ought to slow down their update frequency.
And lose share to the one who doesn't slow down? I seem to remember the biggest reason for the flood away from the monopoly registrar when *that* floodgate opened was that the other registrars promised updates "this day rather than this month". (And yes, the whole .com/.net/.org/.biz landscape is enough of a mess that the comment applies to "registries" as well as "registrars" - a local radio station has 'wrov.cc' because 'wrov.com' is a domain in Korea)...
Mark,

I've been looking at spam in blogs, that is, paxil et al domain names that are POSTed into blogs as comments. An example (from http://wampum.wabanaki.net/archives/000794.html, a post on this very subject) follows this reply to you.

Some number of URLs are presented to engines that index this blog, and as long as the data generated from those indexings (rankings) has value, or the GET captured pages are cached by the indexing engines, value is transferred from the host blog to the producers of ratings, or the producers of means to obtain an increase in ratings, or the rated domain name. One example I used earlier was a domain name owned by a major pharmaceutical company, and inserted in as many blogs as I cared to look at.

For want of a better term, I feel like I'm looking at an ad network (zombie writer population) that performs ad placements (from xdsl puddles in Italy or elsewhere) for buyers. It isn't banner ads that are being placed, but a latent index ranking that will be harvested within some few number of days after placement. Here is one viewed from an apache logfile:

customer72-236.mni.ne.jp - - [22/Jul/2004:13:31:53 -0400] "POST /cgi-bin/mt-comments.cgi?entry_id=339 HTTP/1.0" 200 1713

Entry 393 was posted on July 15, 2003, a little over a year ago. The attempted POST is meant not to be detected by any means other than exhaustive indexing of some weblog. I think I'm looking at a click-through model that is defined by a theft of advertising value, whether banners for eyeballs, or tags for ranking. I'm getting redundant, but I've got two early readers pulling my fingers off the keyboard and onto their texts.

As long as the names are either indexed, or resolve, the covert ad works. Thinking about reducing the persistence of resolution of covert placed names has caused me to think about spam and agility. For my part, it is, as you pointed out, conjecture. I'm too busy trying to get my little registrar business off the deck to perform "studies".
But as I look at the example (below), it seems interesting to think about the resolution of the names and the delivery of the names (in spam) as potentially a synchronous event. That's why "instant ad" seems abuse prone to me, and "instant mod" even more so. There appear to be 15 URLs embedded in the comment below, which I selected simply for having "levitra" in it. As always, YMMV, and yes, I worked for an ad network (Engage/Flycast/CMGI), and there is no 1x1 tracking gif anywhere in this message. Eric --- begin --- COMMENT: AUTHOR: http://www.fabuloussextoys.com EMAIL: dafdsfa@hotmail.com IP: 81.152.188.36 URL: http://www.fabuloussextoys.com DATE: 06/08/2004 09:16:22 AM The actor who plays http://www.888.com Connor in Angel will not bereturning for the http://www.mobilesandringtones.com fifth season of Angel. The actor will guest star in one http://www.celebtastic.com episode at the start of the http://www.ringtonespy.com season. The producers decided not to http://www.levitra-express.com pick up the actor's contract http://www.williamhill.co.uk for another season, as the character didn't have a http://www.cialis-express.com place to fit into the new story arc. Vincent is the second actor to http://www.adultfriendfinder.com leave the show, as producers also http://www.unbeatablemobiles.co.uk dropped Charisma Carpenter http://www.mobilequicksale.com from the cast. It is widely believed these two http://www.unbeatablecellphones.com actors have been dropped to make http://www.adultfriendfinder.com way for the two additions to Angel's http://www.lookforukhotels.com cast next season. James http://www.dating999.com Marsters is to join the cast ht! tp://www.adultfriendfinder.com of Angel next season, --- end ---
because i have sometimes been accused of being unfair to markk, i checked. markk@verisignlabs.com (Mark Kosters) writes:
the primary beneficiaries of this new functionality are spammers and other malfeasants,
I think this is a true statement.
Has anyone done any studies to prove this conjecture?
at dictionary.reference.com we see the following:

| con·jec·ture
| n.
|
| 1. Inference or judgment based on inconclusive or incomplete evidence;
| guesswork.
|
| 2. A statement, opinion, or conclusion based on guesswork: The commentators
| made various conjectures about the outcome of the next election.

as the author of the statement in question, and based on the definition shown, it's just not conjecture.
If this was true, maybe those registries who do perform this particular service today ought to slow down their update frequency.
as others have pointed out, spammers will always find a way to spam, and while the number of cases where the beneficiary is not a spammer is small, it's not zero. so we have to do it.

but when someone says, later, that the .COM zone generator ought to use a ttl template of 300 rather than 86400 in order that changes and deletions can get the same speedy service as additions, i hope that icann will say "no."

wrt the mit paper on why small ttl's are harmless, i recommend that y'all actually read it, the whole thing, plus some of the references, rather than assuming that the abstract is well supported by the body.

-- Paul Vixie
On Thu, 22 Jul 2004, Eric Brunner-Williams in Portland Maine wrote:
the primary beneficiaries of this new functionality are spammers and other malfeasants,
I think this is a true statement. I think it is important to keep in mind that registry operators "compete" for TLD franchises, and where those "competitions" occur, this statement is not belived to be true.
In other words, Verisign is unhappy that spammers are now registering primarily .biz domains and Verisign is no longer getting a share of their business? -- William Leibzon Elan Networks william@elan.net
In other words, Verisign is unhappy that spammers are now registering primarily .biz domains and Verisign is no longer getting getting share of their business?
Do you want me to answer that wearing my hired-by-NeuStar-to-write-.biz hat or my fired-by-NeuStar-for-trying-to-policy-.biz hat? Or my almost-anybody-but-NSI/VGRS hat? ;-)
participants (20)

- Alexei Roudnev
- Christopher L. Morrow
- Daniel Karrenberg
- David A. Ulevitch
- Deepak Jain
- Duane Wessels
- Eric Brunner-Williams in Portland Maine
- Henry Linneweh
- Mark Kosters
- Matt Larson
- Mike Lewinski
- Paul Vixie
- Pete Schroebel
- Ray Plzak
- Robert Boyle
- Sam Stickland
- Stephen J. Wilcox
- Suresh Ramasubramanian
- Valdis.Kletnieks@vt.edu
- william(at)elan.net