History of 4.2.2.2. What's the story?
I've wondered about this for years, but only this evening did I start searching for details. And I really couldn't find any.

Can anyone point me at distant history about how 4.2.2.2 came to be, in my estimation, the most famous DNS server on the planet?

I know that it was originally at BBN; what I'm looking for is things like:

How the IP was picked. (I'd guess it was one of the early DNS servers, and the people behind it realized that if there was one IP address that really needed to be easy to remember, it was the DNS server, for obvious reasons). Was it always meant to be a public resolver? How it continued to remain an open resolver, even in the face of amplifier attacks using DNS resolvers. Perhaps it has had rate-limiting on it for a long time. There's a lot of conjecture about it using anycast; anyone know anything about its current configuration?

So, if anyone has any stories about 4.2.2.2, I'd love to hear them.

Thanks, Sean

--
"Microsoft treats objects like women, man..." -- Kevin Fenzi, paraphrasing the Dude, 1998
Sean Reifschneider, Member of Technical Staff <jafo@tummy.com>
tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
I think around 10 years ago Slashdot had a few stories (and still do, actually) about how great these resolvers were. I think that propelled quite a bit of their growth and popularity. On 2/14/2010 1:16 AM, Sean Reifschneider wrote:
I've wondered about this for years, but only this evening did I start searching for details. And I really couldn't find any.
Can anyone point me at distant history about how 4.2.2.2 came to be, in my estimation, the most famous DNS server on the planet?
I know that it was originally at BBN, what I'm looking for is things like:
How the IP was picked. (I'd guess it was one of the early DNS servers, and the people behind it realized that if there was one IP address that really needed to be easy to remember, it was the DNS server, for obvious reasons). Was it always meant to be a public resolver? How it continued to remain an open resolver, even in the face of amplifier attacks using DNS resolvers. Perhaps it has had rate-limiting on it for a long time. There's a lot of conjecture about it using anycast, anyone know anything about its current configuration?
So, if anyone has any stories about 4.2.2.2, I'd love to hear them.
Thanks, Sean
4.2.2.2 is stunted just like any other resolver that uses only the USG root. A more useful resolver is ASLAN [199.5.157.128], which is an inclusive namespace resolver which shows users a complete map of the internet, not just what ICANN wants them to see.

----- Original Message -----
From: "Steve Ryan" <auser@mind.net>
To: <nanog@nanog.org>
Sent: Sunday, February 14, 2010 6:43 AM
Subject: Re: History of 4.2.2.2. What's the story?
I think around 10 years ago Slashdot had a few stories (and still do, actually) about how great these resolvers were. I think that propelled quite a bit of their growth and popularity.
On 2/14/2010 1:16 AM, Sean Reifschneider wrote:
I've wondered about this for years, but only this evening did I start searching for details. And I really couldn't find any.
Can anyone point me at distant history about how 4.2.2.2 came to be, in my estimation, the most famous DNS server on the planet?
I know that it was originally at BBN, what I'm looking for is things like:
How the IP was picked. (I'd guess it was one of the early DNS servers, and the people behind it realized that if there was one IP address that really needed to be easy to remember, it was the DNS server, for obvious reasons). Was it always meant to be a public resolver? How it continued to remain an open resolver, even in the face of amplifier attacks using DNS resolvers. Perhaps it has had rate-limiting on it for a long time. There's a lot of conjecture about it using anycast, anyone know anything about its current configuration?
So, if anyone has any stories about 4.2.2.2, I'd love to hear them.
Thanks, Sean
On 14. feb. 2010, at 19.43, John Palmer (NANOG Acct) wrote:
4.2.2.2 is stunted just like any other resolver that uses only the USG root. A more useful resolver is ASLAN [199.5.157.128] which is an inclusive namespace resolver which shows users a complete map of the internet, not just what ICANN wants them to see.
So you don't think that 4.2.2.2, being easier than 199.5.157.128 to remember, has something to do with that?

--
Joachim Tingvold
joachim@tingvold.com
On 2/14/10 11:43 AM, John Palmer (NANOG Acct) wrote:
4.2.2.2 is stunted just like any other resolver that uses only the USG root. A more useful resolver is ASLAN [199.5.157.128] which is an inclusive namespace resolver which shows users a complete map of the internet, not just what ICANN wants them to see.
I feel a headache coming on... Is this more of the fun from years ago where everyone thought it would be great to create a bunch of custom TLDs and then try to convince everyone to use their name servers to 'enable' these (for lack of a better word) site-local domains? I tried the OpenDNS koolaid, and well, was horribly disappointed.

--
Brielle Bruns
The Summit Open Source Development Group
http://www.sosdg.org / http://www.ahbl.org
On Sun, Feb 14, 2010 at 12:43:12PM -0600, John Palmer (NANOG Acct) <nanog2@adns.net> wrote a message of 42 lines which said:
A more useful resolver is ASLAN [199.5.157.128] which is an inclusive namespace resolver which shows users a complete map of the internet,
There are many crooks which sell dummy TLDs. At least, they make an effort to have more than two name servers for the root. But 199.5.157.128 is better, it does not just add dummy TLDs, it adds every possible TLD:

% dig @199.5.157.128 A www.TJTYRMYYT67DFR453.FFDD5GCXXFFRA8O

; <<>> DiG 9.5.1-P3 <<>> @199.5.157.128 A www.TJTYRMYYT67DFR453.FFDD5GCXXFFRA8O
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53344
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.TJTYRMYYT67DFR453.FFDD5GCXXFFRA8O. IN A

;; ANSWER SECTION:
www.TJTYRMYYT67DFR453.FFDD5GCXXFFRA8O. 7195 IN A 199.5.157.33

;; AUTHORITY SECTION:
.                       87988   IN      NS      b.worldroot.net.
.                       87988   IN      NS      a.worldroot.net.

;; Query time: 146 msec
;; SERVER: 199.5.157.128#53(199.5.157.128)
;; WHEN: Sun Feb 14 21:28:54 2010
;; MSG SIZE  rcvd: 125
Since I'm watching B5 again on DVD.... I was there at the dawning of the age of 4.2.2.1 :)

We did it, and by "we" I mean Brett McCoy and myself. But most of the credit/blame goes to Brett... I helped him, but at the time I was mostly working on getting our mail relays working right. This was about 12 years ago, about 1998; I left Genuity in 2000, and am back at BBN/Raytheon now. I remember we did most of the work after we moved out of Cambridge and into Burlington.

Genuity/GTEI/Planet/BBN owned 4/8. Brett went looking for an IP that was simple to remember; I think 4.4.4.4 was in use by neteng already. But it was picked to be easy to remember. I think jhawk had put a hold on the 4.2.2.0/24 block, and we got/grabbed three addresses, 4.2.2.1, 4.2.2.2, and 4.2.2.3, so people had three addresses to go to. At the time people had issues with just using a single resolver. We also had issues with both users and registrars, since clearly they aren't geographically diverse; trying to explain routing tricks to people who KNOW all IPs come in and are routed as Class A/B/C blocks is hard.

NIC.Near.Net was our primary DNS server for years before I transferred to Planet from BBN. It wasn't even in 4/8; I think it was in 128.89 (BBN Corp space), but I'm not sure. BBN didn't start to use 4/8 till the Planet build out, and NIC.near.net predates that by at least 10 years.

I still have the power cord from NIC.near.net in my basement. That machine grew organically, with every service known to mankind running on it, and special one-off things for customers on it. It took us literally YEARS to get that machine turned off. When we finally got it off I took the power cord so no one would help us by turning it back on. I gave the cord to Chris Yetman, who was the director of operations, and told him that if a customer screams he has the power to turn it back on. A year or so later, he gave the cord back to me.

Yes, we set up 4.2.2.1 as a public resolver. We figured trying to filter it was a larger headache than just making it public.

It was always pretty robust due to the BIND code, thanks to ISC, and the fact it was always IPV4 AnyCast. I don't know about now, but originally it was IPV4 AnyCast. Each server advertised routes for 4.2.2.1, .2, and .3 at different costs and the routers would listen to the routes. Originally the startup code was, basically (see the sketch after this message):

  advertise routes to 4.2.2.1, 4.2.2.2, and 4.2.2.3
  run bind in foreground mode
  drop routes to 4.2.2.1, 4.2.2.2, and 4.2.2.3

Then we had a Tivoli process that tried to restart bind, but rate limited the restarts. That way, if bind died, the routes would drop.

johno

On Feb 14, 2010, at 4:16 AM, Sean Reifschneider wrote:
I've wondered about this for years, but only this evening did I start searching for details. And I really couldn't find any.
Can anyone point me at distant history about how 4.2.2.2 came to be, in my estimation, the most famous DNS server on the planet?
I know that it was originally at BBN, what I'm looking for is things like:
How the IP was picked. (I'd guess it was one of the early DNS servers, and the people behind it realized that if there was one IP address that really needed to be easy to remember, it was the DNS server, for obvious reasons). Was it always meant to be a public resolver? How it continued to remain an open resolver, even in the face of amplifier attacks using DNS resolvers. Perhaps it has had rate-limiting on it for a long time. There's a lot of conjecture about it using anycast, anyone know anything about its current configuration?
So, if anyone has any stories about 4.2.2.2, I'd love to hear them.
Thanks, Sean -- Microsoft treats objects like women, man... -- Kevin Fenzi, paraphrasing the Dude, 1998 Sean Reifschneider, Member of Technical Staff <jafo@tummy.com> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
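A minimal sketch of the health-coupled startup loop John describes above, assuming a present-day Linux resolver where the anycast service addresses sit on the loopback and a local routing daemon redistributes connected /32s into the IGP; the commands, the 30-second back-off, and that redistribution mechanism are illustrative assumptions, not the original Genuity setup (which used Tivoli and its own route injection):

  #!/bin/sh
  # Announce the resolver addresses only while named is actually running,
  # so a dead daemon automatically stops attracting traffic to this node.

  ANYCAST_IPS="4.2.2.1 4.2.2.2 4.2.2.3"

  announce() {
      for addr in $ANYCAST_IPS; do
          # Assumption: an IGP/BGP speaker on this box redistributes
          # connected /32s, so adding the address announces the route.
          ip addr add "$addr/32" dev lo
      done
  }

  withdraw() {
      for addr in $ANYCAST_IPS; do
          ip addr del "$addr/32" dev lo
      done
  }

  while true; do
      announce
      named -f          # run BIND in the foreground; blocks until it exits
      withdraw          # named exited: drop the routes for this node
      sleep 30          # crude restart rate limit (Tivoli handled this originally)
  done

The point of the structure is the same as in 1998: couple the route announcement to the life of the resolver process, so failover happens in the routing system rather than in every client's resolver list.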
* John Levine:
It was always pretty robust due to the BIND code, thanks to ISC, and the fact it was always IPV4 AnyCast.
$ asp 4.2.2.2    # look it up in routeviews
4.0.0.0/9 ASN 3356, path 3549 -> 3356
Wow, that's a heck of an anycast block.
You can do anycast with your IGP, too. 8-)
On 02/14/2010 07:16 AM, John Orthoefer wrote:
Since I'm watching B5 again on DVD....
Awesome. Thanks for taking the time to reply, I really enjoyed the story. Have fun with the B5. The only time I watched it was on a VHS borrowed from a friend. It was a 3'x3' cabinet full of them. :-) Sean -- Sean Reifschneider, Member of Technical Staff <jafo@tummy.com> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
In message <182E6E76-F12A-41D9-800A-E5E40F3C3B7D@direwolf.com>, John Orthoefer writes:
Genuity/GTEI/Planet/BBN owned 4/8. Brett went looking for an IP that was simple to remember; I think 4.4.4.4 was in use by neteng already. But it was picked to be easy to remember. I think jhawk had put a hold on the 4.2.2.0/24 block, and we got/grabbed three addresses, 4.2.2.1, 4.2.2.2, and 4.2.2.3, so people had three addresses to go to. At the time people had issues with just using a single resolver. We also had issues with both users and registrars, since clearly they aren't geographically diverse; trying to explain routing tricks to people who KNOW all IPs come in and are routed as Class A/B/C blocks is hard.
I don't care what internal routing tricks are used, they are still under the *one* external route and as such subject to single points of failure and as such don't have enough independence. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Feb 14, 2010, at 5:17 PM, Mark Andrews wrote:
In message <182E6E76-F12A-41D9-800A-E5E40F3C3B7D@direwolf.com>, John Orthoefer writes:
Genuity/GTEI/Planet/BBN owned 4/8. Brett went looking for an IP that was simple to remember; I think 4.4.4.4 was in use by neteng already. But it was picked to be easy to remember. I think jhawk had put a hold on the 4.2.2.0/24 block, and we got/grabbed three addresses, 4.2.2.1, 4.2.2.2, and 4.2.2.3, so people had three addresses to go to. At the time people had issues with just using a single resolver. We also had issues with both users and registrars, since clearly they aren't geographically diverse; trying to explain routing tricks to people who KNOW all IPs come in and are routed as Class A/B/C blocks is hard.
I don't care what internal routing tricks are used, they are still under the *one* external route and as such subject to single points of failure and as such don't have enough independence.
It's an open recursive name server, it is free, has no SLA, and is not critical infrastructure. Besides, it is quicker / better to use your local ISP's RNS. If something goes wrong, you can fall back to OpenDNS or L3, and, of course, yell at the _company_you_are_paying_ when their stuff doesn't work. :) -- TTFN, patrick
On Sun, 2010-02-14 at 17:20 -0500, Patrick W. Gilmore wrote:
Besides, it is quicker / better to use your local ISP's RNS. If something goes wrong, you can fall back to OpenDNS or L3, and, of course, yell at the _company_you_are_paying_ when their stuff doesn't work. :)
The best advice I have read all day. I have recently been on a few networks that will not allow 4.2.2.2 to resolve for the clients. Cisco tech support tells their customers (us) to use it when testing. Perhaps this is not such a good practice. Patrick is correct. Use your own stuff and yell when it does not work.
On Sun, Feb 14, 2010 at 2:37 PM, Richard Golodner <rgolodner@infratection.com> wrote:
Cisco tech support tells their customers (us) to use it when testing. Perhaps this is not such a good practice.
No doubt because they are easier to remember than Cisco's own two "public" DNS resolvers:

64.102.255.44
128.107.241.185

Scott.
At the time I was involved it did have an SLA, and was considered critical infrastructure for Genuity customers. Once we started to deploy 4.2.2.1, we gave customers time to swap over, but we started turning off our existing DNS servers. One reason we did it was that we kept having to deploy more servers, and getting customers to swing their hosts over to the new machines was all but impossible. With NetNews and SMTP we used a Cisco Distributed Director. But we needed another solution for DNS. johno On Feb 14, 2010, at 5:20 PM, Patrick W. Gilmore wrote:
It's an open recursive name server, it is free, has no SLA, and is not critical infrastructure.
On Feb 14, 2010, at 6:55 PM, John Orthoefer wrote:
At the time I was involved it did have an SLA, and was considered critical infrastructure for Genuity customers. Once we started to deploy 4.2.2.1, we gave customers time to swap over, but we started turning off our existing DNS servers.
Sorry for the confusion, I should have said "for non-customers of L3". I was responding to the statement that the name servers were controlled by "*one* external route". If you are a customer, IGP matters, not BGP, and SLAs obviously are a different situation. For people who are not customers, SLAs are unusual. -- TTFN, patrick
One reason we did it was that we kept having to deploy more servers, and getting customers to swing their hosts over to the new machines was all but impossible. With NetNews and SMTP we used a Cisco Distributed Director. But we needed another solution for DNS.
johno
On Feb 14, 2010, at 5:20 PM, Patrick W. Gilmore wrote:
It's an open recursive name server, it is free, has no SLA, and is not critical infrastructure.
On 2010-02-14, at 17:17, Mark Andrews wrote:
I don't care what internal routing tricks are used, they are still under the *one* external route and as such subject to single points of failure and as such don't have enough independence.
Are you asserting architectural control over what Level3 decide to do with their own servers, Mark? :-) If their goal is to distribute a service for the benefit of their own customers, then keeping all anycast nodes associated with that service on-net seems entirely sensible. Joe
In message <10BE7B64-46FF-46D8-A428-268897413EB4@hopcount.ca>, Joe Abley writes:
On 2010-02-14, at 17:17, Mark Andrews wrote:
I don't care what internal routing tricks are used, they are still under the *one* external route and as such subject to single points of failure and as such don't have enough independence.
Are you asserting architectural control over what Level3 decide to do with their own servers, Mark? :-)
No. The reason for multiple nameservers is to remove single points of failure. Using three consecutive addresses doesn't remove single points of failure in the routing system.
If their goal is to distribute a service for the benefit of their own customers, then keeping all anycast nodes associated with that service on-net seems entirely sensible.
Which only helps if *all* customers of those servers are also on net.
Joe -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Feb 14, 2010, at 5:43 PM, Mark Andrews wrote:
In message <10BE7B64-46FF-46D8-A428-268897413EB4@hopcount.ca>, Joe Abley writes :
On 2010-02-14, at 17:17, Mark Andrews wrote:
I don't care what internal routing tricks are used, they are still under the *one* external route and as such subject to single points of failure and as such don't have enough independence.
Are you asserting architectural control over what Level3 decide to do with their own servers, Mark? :-)
No. The reason for multiple nameservers is to remove single points of failure. Using three consecutive addresses doesn't remove single points of failure in the routing system.
If their goal is to distribute a service for the benefit of their own customers, then keeping all anycast nodes associated with that service on-net seems entirely sensible.
Which only helps if *all* customers of those servers are also on net.
All _customers_ are. People using a service which was not announced or supported are not customers. -- TTFN, patrick
On 2010-02-14, at 17:43, Mark Andrews <marka@isc.org> wrote:
Using three consecutive addresses doesn't remove single points of failure in the routing system.
That depends on how the routes for those destinations are chosen, and what routing system you're talking about. For distribution of a service using anycast inside a single AS, and with one route per service, it makes no difference whether the addresses are adjacent. Two /24 routes are no more stable than two /32 routes within an IGP. There's no prefix filtering convention to accommodate, here.
If their goal is to distribute a service for the benefit of their own customers, then keeping all anycast nodes associated with that service on-net seems entirely sensible.
Which only helps if *all* customers of those servers are also on net.
Whether it helps depends on what Level3's goals are. This is not public infrastructure; this is a service operated by a commercial company. For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more. Joe
We do. It's at our upstream provider, just in case we had an upstream connectivity issue or some internal meltdown that prevented those in the outside world from hitting our (authoritative) DNS servers. Of course, that's most helpful for DNS records that resolve to IPs *outside* our network.

Frank

=== <snip>
For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more.

Joe
On Feb 16, 2010, at 10:24 PM, Frank Bulk wrote:
We do. It's at our upstream provider, just in case we had an upstream connectivity issue or some internal meltdown that prevented those in the outside world from hitting our (authoritative) DNS servers. Of course, that's most helpful for DNS records that resolve to IPs *outside* our network.
What you describe - authorities used by people off your network to resolve A records with IP addresses outside your network - is not what Joe was describing. Was the recursive name server your end users queried to resolve names, the IP address in their desktop's control panel, outside your network? I can see a small ISP using its upstream's recursive name server. But to the rest of the world, most small ISPs look like a part of their upstream's network. -- TTFN, patrick
=== <snip>
For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more.
Joe
Our nameservers handle both the authoritative and recursive traffic, but we use ACLs to restrict recursive queries to just our users. If I understand your second sentence correctly, then yes, our DHCP server hands out the DNS servers, of which one of the three is outside our own network.

Frank

-----Original Message-----
From: Patrick W. Gilmore [mailto:patrick@ianai.net]
Sent: Tuesday, February 16, 2010 9:33 PM
To: NANOG list
Subject: Re: History of 4.2.2.2. What's the story?

On Feb 16, 2010, at 10:24 PM, Frank Bulk wrote:
We do. It's at our upstream provider, just in case we had an upstream connectivity issue or some internal meltdown that prevented those in the outside world from hitting our (authoritative) DNS servers. Of course, that's most helpful for DNS records that resolve to IPs *outside* our network.
What you describe - authorities used by people off your network to resolve A records with IP addresses outside your network - is not what Joe was describing. Was the recursive name server your end users queried to resolve names, the IP address in their desktop's control panel, outside your network? I can see a small ISP using its upstream's recursive name server. But to the rest of the world, most small ISPs look like a part of their upstream's network. -- TTFN, patrick
=== <snip>
For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more.
Joe
On 2010-02-16, at 20:35, Frank Bulk wrote:
Our nameservers handle both the authoritative and recursive traffic,
As general advice, and as an aside, don't do that. (Aside from anything else, you're inserting authoritative answers in a lookup path that might not be found by a parent delegation, e.g. if you host a domain for someone who subsequently takes it elsewhere without telling you.)
If I understand your second sentence correctly, then yes, our DHCP server hands out the DNS servers, of which one of the three is outside our own network.
Thanks for the counter-example. Joe
On Feb 16, 2010, at 11:35 PM, Frank Bulk wrote:
Our nameservers handle both the authoritative and recursive traffic, but we use ACLs to restrict recursive queries to just our users.
Speaking strictly about the recursive servers (others have covered the auth + recursive on one box thing), thank you for the ACLs. Open RNSes are difficult to secure against being used as an amplification attack vector.
If I understand your second sentence correctly, then yes, our DHCP server hands out the DNS servers, of which one of the three is outside our own network.
While I am all for redundancy, and believe having authorities off-net is useful and good, I am not sure the same holds for RNSes.

I like putting authoritative servers on multiple ASes because if my AS[*] dies, I may have good reason to want the hostnames to still resolve. They could very well have significance even when the AS is down (e.g. A records pointing to addresses outside my AS, backup MX records, etc.). But if my AS is down, my users cannot get to anything, so what use is having a server happily working where they cannot reach it? Especially one firewalled so only they can use it?

I cannot come up with a realistic failure mode where the user has good connectivity to the "outside world", but multiple, geographically & topologically disparate servers inside the AS are all unreachable. On the other hand, I can easily come up with several failure modes where the external RNSes are b0rk'ed, causing either your users or the rest of the Internet harm.

In summary, could someone educate me on the benefits of having RNSes outside your network?

-- TTFN, patrick

[*] Since I Am Not An ISP, this is the hypothetical or general "my AS", not my actual AS.
-----Original Message-----
From: Patrick W. Gilmore [mailto:patrick@ianai.net]
Sent: Tuesday, February 16, 2010 9:33 PM
To: NANOG list
Subject: Re: History of 4.2.2.2. What's the story?
On Feb 16, 2010, at 10:24 PM, Frank Bulk wrote:
We do. It's at our upstream provider, just in case we had an upstream connectivity issue or some internal meltdown that prevented those in the outside world from hitting our (authoritative) DNS servers. Of course, that's most helpful for DNS records that resolve to IPs *outside* our network.
What you describe - authorities used by people off your network to resolve A records with IP addresses outside your network - is not what Joe was describing. Was the recursive name server your end users queried to resolve names, the IP address in their desktop's control panel, outside your network?
I can see a small ISP using its upstream's recursive name server. But to the rest of the world, most small ISPs look like a part of their upstream's network.
-- TTFN, patrick
=== <snip>
For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more.
Joe
On 17/02/2010 20:51, Tomas L. Byrnes wrote:
[Tomas L. Byrnes] We were a small regional ISP with only one main POP at the time.
off-net resolvers means that your continued customer satisfaction (and therefore your continued reliable cash-flow) is completely dependent on maintaining a good working relationship between your company and the company which operates the resolvers. If - for whatever reason - they decide to shut off services to your customers, your business will take a serious impact. Nick
-----Original Message-----
From: Nick Hilliard [mailto:nick@foobar.org]
Sent: Wednesday, February 17, 2010 12:56 PM
To: Tomas L. Byrnes
Cc: NANOG list
Subject: Re: History of 4.2.2.2. What's the story?
On 17/02/2010 20:51, Tomas L. Byrnes wrote:
[Tomas L. Byrnes] We were a small regional ISP with only one main POP at the time.
off-net resolvers means that your continued customer satisfaction (and therefore your continued reliable cash-flow) is completely dependent on maintaining a good working relationship between your company and the company which operates the resolvers. If - for whatever reason - they decide to shut off services to your customers, your business will take a serious impact.
Nick
[Tomas L. Byrnes] We had, and maintained, a good relationship, and the services were reciprocal. The resolvers were only tertiary anyway. YMMV, it worked for us. Different times.
On Feb 17, 2010, at 3:51 PM, Tomas L. Byrnes wrote:
In summary, could someone educate me on the benefits of having RNSes outside your network?
[Tomas L. Byrnes] We were a small regional ISP with only one main POP at the time.
If you are single homed, you -are- your upstream's network, er, AS. I was careful to use "AS" and not "network" or "ISP" in my post - except the last line. :) -- TTFN, patrick
One main POP does not mean single homed. We had multiple upstreams, entrance facilities, and peers. We just had one facility where it all was, and our remote users were often dialing into third party banks based on reciprocity agreements when they were out of area. It was 12 years ago. Consolidation has rendered a lot of the collaboration from those days moot.
-----Original Message-----
From: Patrick W. Gilmore [mailto:patrick@ianai.net]
Sent: Wednesday, February 17, 2010 1:11 PM
To: NANOG list
Subject: Re: History of 4.2.2.2. What's the story?
On Feb 17, 2010, at 3:51 PM, Tomas L. Byrnes wrote:
In summary, could someone educate me on the benefits of having RNSes outside your network?
[Tomas L. Byrnes] We were a small regional ISP with only one main POP at the time.
If you are single homed, you -are- your upstream's network, er, AS. I was careful to use "AS" and not "network" or "ISP" in my post - except the last line. :)
-- TTFN, patrick
We actively sought reciprocal secondaries, and offered and received reciprocal query hosts, from other regional ISPs when I was CTO @ ADN. We saw it as "strengthening the regional Internet". So our users used CTSnet as their tertiary NS, and CTSNet used ours, FE. Of course, now CTS/CARI and ADN are all AIS, so the point is moot.
-----Original Message-----
From: Frank Bulk [mailto:frnkblk@iname.com]
Sent: Tuesday, February 16, 2010 7:25 PM
To: 'Joe Abley'
Cc: nanog@nanog.org
Subject: RE: History of 4.2.2.2. What's the story?
We do. It's at our upstream provider, just in case we had an upstream connectivity issue or some internal meltdown that prevented those in the outside world from hitting our (authoritative) DNS servers. Of course, that's most helpful for DNS records that resolve to IPs *outside* our network.
Frank
=== <snip>
For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more.
Joe
Our upstream ISP has such a reciprocal secondary, too.

Frank

-----Original Message-----
From: Tomas L. Byrnes [mailto:tomb@byrneit.net]
Sent: Tuesday, February 16, 2010 10:26 PM
To: frnkblk@iname.com; Joe Abley
Cc: nanog@nanog.org
Subject: RE: History of 4.2.2.2. What's the story?

We actively sought reciprocal secondaries, and offered and received reciprocal query hosts, from other regional ISPs when I was CTO @ ADN. We saw it as "strengthening the regional Internet". So our users used CTSnet as their tertiary NS, and CTSNet used ours, FE. Of course, now CTS/CARI and ADN are all AIS, so the point is moot.
-----Original Message-----
From: Frank Bulk [mailto:frnkblk@iname.com]
Sent: Tuesday, February 16, 2010 7:25 PM
To: 'Joe Abley'
Cc: nanog@nanog.org
Subject: RE: History of 4.2.2.2. What's the story?
We do. It's at our upstream provider, just in case we had an upstream connectivity issue or some internal meltdown that prevented those in the outside world from hitting our (authoritative) DNS servers. Of course, that's most helpful for DNS records that resolve to IPs *outside* our network.
Frank
=== <snip>
For what it's worth, I have never heard of an ISP, big or small, deciding to place resolvers used by their customers in someone else's network. Perhaps I just need to get out more.
Joe
On Sun, Feb 14, 2010 at 2:17 PM, Mark Andrews <marka@isc.org> wrote:
I don't care what internal routing tricks are used, they are still under the *one* external route and as such subject to single points of failure and as such don't have enough independence.
Where has Level 3 ever claimed that these servers were for *external* use? As a Level 3 customer who uses these servers, I'm seeing multiple *internal* routes to these servers. Of course, if 4/8 disappears from the global routing tables then Level 3 has a bit bigger problem than their DNS resolvers not being accessible from non-customers. I'd also be interested in knowing where you consider the "single points of failure" for their announcement of 4/8 to be, but that's probably for another thread... Scott.
In message <f1dedf9c1002141446p892aeacy74273f94d6e2a097@mail.gmail.com>, Scott Howard writes:
I'd also be interested in knowing where you consider the "single points of failure" for their announcement of 4/8 is, but that's probably for another thread...
You mean you have never seen traffic following a route announcement go into a black hole. :-)
Scott. -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
I'll add to what Johno writes. I worked on the anycast routing side, as opposed to the server side which he describes.

The 4.2.0.0/16 prefix was set aside by John Hawkinson in our reservation system under the label "Numerology", since he had the wisdom to see that the numbers in themselves could be valuable. He really wanted 4.4.4.4. Unfortunately someone had already taken 4.4.0.0/16 for some of our first DSL assignments (when it still seemed surprising that anyone would need tens of thousands of IP addresses at a shot). The first two /16s in 4/8 were already used for infrastructure.

I don't necessarily recall the service being intended for non-customers (hence no care about seeing multiple paths outside the AS which originates it). The real gains were:

- More graceful failover
- Shorter trips to resolvers (quicker lookups)
- Ability to split load w/o re-configuring clients

That's the story. Others did it before and since, but jhawk really deserves the credit for squatting on super-easy to type and remember addresses. I use it to this day as a quick thing to ping when I need to test connectivity.

Cheers,
Tony

On Sun, Feb 14, 2010 at 09:16:13AM -0500, John Orthoefer wrote:
Since I'm watching B5 again on DVD....
I was there at the dawning of the age of 4.2.2.1 :)
We did it, and by "we" I mean Brett McCoy and myself. But most of the credit/blame goes to Brett... I helped him, but at the time I was mostly working on getting our mail relays working right. This was about 12 years ago, about 1998; I left Genuity in 2000, and am back at BBN/Raytheon now. I remember we did most of the work after we moved out of Cambridge and into Burlington.
Genuity/GTEI/Planet/BBN owned 4/8. Brett went looking for an IP that was simple to remember; I think 4.4.4.4 was in use by neteng already. But it was picked to be easy to remember. I think jhawk had put a hold on the 4.2.2.0/24 block, and we got/grabbed three addresses, 4.2.2.1, 4.2.2.2, and 4.2.2.3, so people had three addresses to go to. At the time people had issues with just using a single resolver. We also had issues with both users and registrars, since clearly they aren't geographically diverse; trying to explain routing tricks to people who KNOW all IPs come in and are routed as Class A/B/C blocks is hard.
NIC.Near.Net was our primary DNS server for years before I transferred to Planet from BBN. It wasn't even in 4/8; I think it was in 128.89 (BBN Corp space), but I'm not sure. BBN didn't start to use 4/8 till the Planet build out, and NIC.near.net predates that by at least 10 years.
I still have the power cord from NIC.near.net in my basement. That machine grew organically, with every service known to mankind running on it, and special one-off things for customers on it. It took us literally YEARS to get that machine turned off. When we finally got it off I took the power cord so no one would help us by turning it back on. I gave the cord to Chris Yetman, who was the director of operations, and told him that if a customer screams he has the power to turn it back on. A year or so later, he gave the cord back to me.
Yes, we set up 4.2.2.1 as a public resolver. We figured trying to filter it was a larger headache than just making it public.
It was always pretty robust due to the BIND code, thanks to ISC, and the fact it was always IPV4 AnyCast.
I don't know about now, but originally it was IPV4 AnyCast. Each server advertised routes for 4.2.2.1, .2, and .3 at different costs and the routers would listen to the routes. Originally the startup code was, basically:

  advertise routes to 4.2.2.1, 4.2.2.2, and 4.2.2.3
  run bind in foreground mode
  drop routes to 4.2.2.1, 4.2.2.2, and 4.2.2.3
Then we had a Tivoli process that tried to restart bind, but rate limited the restarts. That way, if bind died, the routes would drop.
johno
On Feb 14, 2010, at 4:16 AM, Sean Reifschneider wrote:
I've wondered about this for years, but only this evening did I start searching for details. And I really couldn't find any.
Can anyone point me at distant history about how 4.2.2.2 came to be, in my estimation, the most famous DNS server on the planet?
I know that it was originally at BBN, what I'm looking for is things like:
How the IP was picked. (I'd guess it was one of the early DNS servers, and the people behind it realized that if there was one IP address that really needed to be easy to remember, it was the DNS server, for obvious reasons). Was it always meant to be a public resolver? How it continued to remain an open resolver, even in the face of amplifier attacks using DNS resolvers. Perhaps it has had rate-limiting on it for a long time. There's a lot of conjecture about it using anycast, anyone know anything about its current configuration?
So, if anyone has any stories about 4.2.2.2, I'd love to hear them.
Thanks, Sean -- Microsoft treats objects like women, man... -- Kevin Fenzi, paraphrasing the Dude, 1998 Sean Reifschneider, Member of Technical Staff <jafo@tummy.com> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
On Sun, Feb 14, 2010 at 02:16:30AM -0700, Sean Reifschneider wrote:
I've wondered about this for years, but only this evening did I start searching for details. And I really couldn't find any.
Can anyone point me at distant history about how 4.2.2.2 came to be, in my estimation, the most famous DNS server on the planet?
I don't think anyone else can help you determine your estimation...
I know that it was originally at BBN, what I'm looking for is things like:
4/8 was originally BBN. Anycasted DNS resolvers came to many networks sometime in 98-00 [I can't be more precise as my archive of 1994-2007 work and events is naturally out of my reach, being that employer's data]. But I seem to recall that was Rodney's baby from the Genuity days.
How the IP was picked. (I'd guess it was one of the early DNS servers, and the people behind it realized that if there was one IP address that really needed to be easy to remember, it was the DNS server, for obvious reasons). Was it always meant to be a public resolver? How it continued to remain an open resolver, even in the face of amplifier attacks using DNS resolvers. Perhaps it has had rate-limiting on it for a long time.
That is a question for folks at L3. Any publicly-sharable data might be interesting presentation-fodder.
There's a lot of conjecture about it using anycast, anyone know anything about its current configuration?
Why "conjecture"? Examining the /32s from inside and outside of 3356 clearly shows the whole set still is, and those who have been customers or worked with the 3356 folks over the years know it has historically been as well. Cheers, Joe -- RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
On 02/14/2010 07:41 AM, Joe Provo wrote:
I don't think anyone else can help you determine your estimation...
Sorry, I was being kind of flippant and paying homage to the "Peggy Hill" character in _King_of_the_Hill_.
That is a question for folks at L3. Any publicly-sharable data might be interesting presentation-fodder.
Good idea, I'll have to see if I have any links into L3 that can help.
Why "conjecture"? Examining the /32s from inside and outside of 3356
I said conjecture because every person I found in my searches said things like "I think it might be anycasted" or "they could be using anycast". Until this thread, I didn't see any that spoke with authority on the subject. Thanks for the reply. Sean -- Sean Reifschneider, Member of Technical Staff <jafo@tummy.com> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
On Sun, Feb 14, 2010 at 1:19 PM, Sean Reifschneider <jafo@tummy.com> wrote:
Why "conjecture"? Examining the /32s from inside and outside of 3356
I said conjecture because every person I found in my searches said things like "I think it might be anycasted" or "they could be using anycast". Until this thread, I didn't see any that spoke with authority on the subject.
http://www.traceroute.org (and/or http://lg.level3.net, etc) will pretty readily confirm that it's anycast. They will also show that in some parts of the world the various 4.2.2.1-6 addresses go to different locations. E.g., from Level 3 in London I'm seeing 4.2.2.1, .3 and .5 going to London, but .2, .4 and .6 all go to Frankfurt. Personally I've moved away from using 4.2.2.1 and .2 after we had a few issues with them, especially in Europe. 4.2.2.5 and .6 seem to be far more stable, although obviously that might vary depending on region. Scott.
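For anyone wanting to repeat this check from their own vantage point, and assuming the operator exposes the usual CHAOS-class identifiers (they are under no obligation to, so an empty answer proves nothing), something like the following works with stock dig and traceroute:

  dig @4.2.2.2 hostname.bind txt ch +short
  dig @4.2.2.2 id.server txt ch +short

  traceroute 4.2.2.1
  traceroute 4.2.2.2

Different identifiers, or trailing hops that diverge between addresses or between vantage points (as Scott saw between London and Frankfurt), are the usual sign that a single address is being served from multiple anycast nodes.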
participants (19)
- Brielle Bruns
- Florian Weimer
- Frank Bulk
- Joachim Tingvold
- Joe Abley
- Joe Provo
- John Levine
- John Orthoefer
- John Palmer (NANOG Acct)
- Mark Andrews
- Nick Hilliard
- Patrick W. Gilmore
- Richard Golodner
- Scott Howard
- Sean Reifschneider
- Stephane Bortzmeyer
- Steve Ryan
- Tomas L. Byrnes
- Tony Tauber