Hi! We currently live in times where it is actually fun to go IPv6-only. In my case, as in: running a FreeBSD kernel compiled without the IPv4 stack. A few years back doing such a thing was mostly disappointing, but nowadays it is actually quite doable and entertaining. So, the other day I decided to take this experiment to the next level by disconnecting my local resolver from IPv4 as well. Then things started to break. LinkedIn, Bing, OpenStreetMap... Although they all work great on IPv6-only, now they no longer did. It turns out that the underlying CDNs, with domain names such as ‘l-msedge.net’ and ‘trafficmanager.net’ (Microsoft) or 'fastly.net', reside on authoritative name servers that *only* have an IPv4 address. I guess my question is simple: Why? Are there good architectural reasons for this? Or is it just something that is overlooked and forgotten about? I would love to find out! Thank you.

-- Marco

This is also fun, by the way. Look at that nice banner on https://clintonwhitehouse2.archives.gov/ :-)
Marco Davids via NANOG <nanog@nanog.org> writes:
It turns out that the underlying CDNs, with domain names such as ‘l-msedge.net’ and ‘trafficmanager.net’ (Microsoft) or 'fastly.net', reside on authoritative name servers that *only* have an IPv4 address.
Fastly does have IPv6-enabled authoritative DNS servers, but it looks like it's not the default. I ran into this some time ago with deb.debian.org on an IPv6-only Debian VM with a locally installed resolver. I opened a ticket, which was closed in record time: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=961296 After some ranting and shouting it now works, but a couple of days ago I ran into the same problem while trying to install something via pip. files.pythonhosted.org also uses Fastly.
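For anyone who wants to see the chain themselves, a quick sketch with dig (the CNAME target is the one from the bug report, and may of course change over time):

$ dig +short CNAME deb.debian.org
$ dig +short AAAA debian.map.fastly.net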
I guess my question is simple: Why?
I'm asking myself the same question.
Are there good architectural reasons for this? Or is it just something that is overlooked and forgotten about?
I don't think it was overlooked or forgotten. More along the lines of "We have always done it this way", "We had problems enabling IPv6 (ages ago)", or something else you can find on https://ipv6excuses.com/.

Jens

-- 
| Delbrueckstr. 41 | 12051 Berlin, Germany | +49-151-18721264 |
| http://blog.quux.de | jabber: jenslink@quux.de |
Hi Jens, On 22-10-21 at 14:03, Jens Link wrote:
I ran into this some time ago with deb.debian.org on an IPv6 only Debian VM with a locally installed resolver. I opened a ticket which was closed in record time: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=961296
Just for the record: your issue is slightly different. You wrote: "deb.debian.org is a CNAME for debian.map.fastly.net. There are no AAAA records for fastly.net so any DNS queries from an IPv6-only resolver will not work." At the moment debian.map.fastly.net has an AAAA record, though. The thing is: the authoritative name servers of fastly.net are only willing to hand out that AAAA record via IPv4. So it still doesn't work with the (locally installed) IPv6-only resolver ;-) Cheers, -- Marco
On second thoughts... I seem to have been confused by the 'no AAAA records for fastly.net' (as a DNS-purist: that should have said "ns[1234].fastly.net" instead, to make it relevant). ;-)
I ran into this some time ago with deb.debian.org
Right. So please ignore:
Just for the record; your issue is slightly different:
You wrote:
"deb.debian.org is a CNAME for debian.map.fastly.net. There are no AAAA records for fastly.net so any DNS querys from an IPv6 only resolver will not work."
-- Marco
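PS: the failure mode is easy to reproduce (a quick sketch; ns1-ns4 per the fastly.net delegation at the time of writing):

$ dig +short NS fastly.net
$ for ns in ns1 ns2 ns3 ns4; do echo "$ns: $(dig +short AAAA $ns.fastly.net)"; done

An empty answer on every line is exactly what strands an IPv6-only resolver.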
Hello, client-side IPv6-only is one thing, but IPv6-only recursive DNS resolution is probably so niche that content providers and CDNs do not particularly care about it at this point in time. On the other hand, there is probably no good reason to run authoritative DNS servers without IPv6 connectivity. Lukas
On 10/22/21 14:03, Jens Link wrote:
I don't think it was overlooked or forgotten. More along the lines of
"We have always done it this way", "We had problems enabling IPv6 (ages ago)" or something else you can find on https://ipv6excuses.com/.
I think it's a combination of both... they tried back in the day, it broke, and they "parked" it for later. When Marketing and The World were happy to see that www.insert-favorite-content-url-here.com had AAAA and IPv6 PTR records, who cared whether boring, little-known FQDNs were remembered or not? And then all the engineers moved on to some other gig, where they did better because, why not? Mark.
On Fri, 22 Oct 2021, 13:03 Jens Link, <lists@quux.de> wrote:
I ran into this some time ago with deb.debian.org on an IPv6 only Debian VM with a locally installed resolver. I opened a ticket which was closed in record time: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=961296
After some ranting and shouting it now works
I'm going to post the relevant message here:
Sometimes I wonder why I report bugs..
But your answer was the answer I was expecting. Thanks for noting.
So I can summarize this as "The Debian Project doesn't care if IPv6 is working"?
Jens, you went into that ticket looking for a fight, in a place staffed by largely unpaid volunteers, choosing to belittle their efforts and then attempting to shame them into action. You even chose to mark the bug severity higher than the default, despite you having chosen that mirror for your install. https://www.debian.org/Bugs/Developer#severities Marco explained to you that the mirror network has plenty of selections to make, but you chose to make a fuss about one supported option not working to a standard the rest of the community pretty much agrees is nowhere near attainable at this time. Using netselect https://packages.debian.org/stable/net/netselect to choose a reachable mirror with the lowest latency would have easily mitigated this issue for you. DNS64 and NAT64 are going to be with us for a very long time, and if you refuse to support IPv4 even through a translation layer then it is clear you are acting against the interests of further IPv6 adoption, by associating IPv6 issues with zealotry. The apathy sometimes associated with IPv6 support today is because of this perceived high-effort, low-reward nature of confrontation. I would strongly advise you to apologise to Marco for your grandstanding, and adopt a more constructive way of furthering your ideology. The NANOG code of conduct clearly states: https://www.nanog.org/about/code-conduct/
In the spirit of mutual respect and collaboration, NANOG does not tolerate any unwelcome behavior, including but not limited to: * Aggressively pushing your own services, products, or causes. [...]
Please join the rest of us in advocating for IPv6 adoption, rather than the current bullying tactics you seem to be choosing that wins the battle and loses the war. We are all friends* here. You can be a great asset in this effort we should all seek. M *FSVO friends, obvs.
Hi everyone, good afternoon Marco! On Fri, Oct 22, 2021 at 01:40:42PM +0200, Marco Davids via NANOG wrote:
We currently live in times where it is actually fun to go IPv6-only. In my case, as in: running a FreeBSD kernel compiled without the IPv4 stack.
Indeed, this is fun experimentation. Shaking the (source code) trees through exercises like these is a valuable way to identify gaps.
It turns out that the underlying CDNs, with domain names such as ‘l-msedge.net’ and ‘trafficmanager.net’ (Microsoft) or 'fastly.net', reside on authoritative name servers that *only* have an IPv4 address.
As some observant readers noticed (hint: https://ip6.nl/#!deb.debian.org), Fastly is working hard with select customers and friends to support IPv6 for everyone.
I guess my question is simple: Why?
Are there good architectural reasons for this? Or is it just something that is overlooked and forgotten about?
The universal deployment of IPv6 appears to be a multi-decennial, multigenerational project. Allow me to shed some light on various aspects.

One of the challenges faced by those wishing to deploy IPv6 (compared to IPv4) is that, from a BGP Default-Free Zone perspective, IPv4 and IPv6 are not alike at all! The Internet's IPv6 routing topology is vastly different from the IPv4 topology. This phenomenon is perfectly understandable, following from the fact that IPv4 predates IPv6 - and IP networks grow as they grow.

In a perfect world the IPv6 network would grow perfectly congruent alongside the global IPv4 network. In this perfect world IPv6 could indeed "just be enabled" and used whenever available! Unfortunately the reality of the situation is far more chaotic. For example, if you look at PeeringDB's 'netixlan' table, a large discrepancy between the number of absent IPv4 entries and absent IPv6 entries is visible:

$ curl -s https://peeringdb.com/api/netixlan | jq '.' | fgrep -c '"ipaddr4": null'
1286
$ curl -s https://peeringdb.com/api/netixlan | jq '.' | fgrep -c '"ipaddr6": null'
8160
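The same counts can also be computed directly in jq, rather than grepping pretty-printed JSON - a small variant, assuming the API's top-level 'data' array:

$ curl -s https://peeringdb.com/api/netixlan | jq '[.data[] | select(.ipaddr4 == null)] | length'
$ curl -s https://peeringdb.com/api/netixlan | jq '[.data[] | select(.ipaddr6 == null)] | length'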
From the above, it's implied that the density of the 'IPv4 mesh' is much higher than the density and diversity of the 'IPv6 mesh', simply because more operators present more IPv4 traffic-exchange opportunities to other operators than IPv6 ones. This has performance implications.
Another aspect that flabbergasts me anno 2021 is that there *still* are BGP peering disputes between (more than two) major global internet service providers in which IPv6 is 'held hostage' as part of slow commercial negotiations. Surely end-to-end IPv6 connectivity should be a priority?

Anyway, back to your question: content delivery networks that leverage all possible technical knobs and buttons to increase performance (such as BGP traffic engineering) might be reluctant to offer IPv6 services "as if they are the same as IPv4". More study is required.

TL;DR - work in progress! :-)

Kind regards,

Job

PS. Have you tried running an IPv6-only RPKI validator? About 1.4% of RPKI VRPs appear to be 'missing' in IPv6-only environments :-/
On 10/22/21 11:13 AM, Job Snijders via NANOG wrote:
Another aspect that flabbergasts me anno 2021 is how there *still* are BGP peering disputes between (more than two) major global internet service providers in which IPv6 is 'held hostage' as part of slow commercial negotiations. Surely end-to-end IPv6 connectivity should be a priority?
Even the DNS root servers are not 100% reachable via IPv6. I would think IANA would have some standard about reachability for root operators.

FWIW, I was just able to change my home office internet (I reside in the most densely populated county of Florida). The new provider sold me a dual-stack connection; however, when they came to deliver it, there was no IPv6 as promised. After spending almost a week playing phone tag, I finally got someone with clue. I was told they have no support for IPv6 and no plans to ever support IPv6, as there is no way to monetize it.

This leaves me in the same position as my prior circuit via the local cable co. (no plans to offer IPv6), but at least it's faster than the 2 Mbps-up cable service. Until IPv6 provides a way for the ISP to make money, I don't see it being offered outside of the datacenter.

-- 
Bryan Fields
727-409-1194 - Voice
http://bryanfields.net
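PS: which root servers publish AAAA records at all is a one-liner to check (a quick sketch with dig; actual reachability over IPv6 is of course a separate question):

for r in a b c d e f g h i j k l m; do echo "$r.root-servers.net: $(dig +short AAAA $r.root-servers.net)"; done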
On Friday, 22 October, 2021 16:45, "Bryan Fields" <Bryan@bryanfields.net> said:
Until IPv6 provides a way to make money for the ISP, I don't see it being offered outside of the datacenter.
I don't think it'll ever make money, but I think it will reduce costs. CGNAT boxes cost money, operating them costs money, dealing with the support fallout from them costs money. Especially in the residential space, where essentially if the customer calls you, ever, you just blew years' worth of margin. My residential ISP here in the UK routes me (and every other subscriber) a /56 without being asked. (Their supplied CPE router just puts the first /64 on the LAN and refuses to process PD requests to hand out any of the other /64s, but baby steps...) Cheers, Tim.
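PS: for anyone counting, a /56 is 2^(64-56) = 256 /64s:

$ echo $(( 1 << (64 - 56) ))
256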
On 10/22/21 18:08, tim@pelican.org wrote:
I don't think it'll ever make money, but I think it will reduce costs. CGNAT boxes cost money, operating them costs money, dealing with the support fallout from them costs money. Especially in the residential space, where essentially if the customer calls you, ever, you just blew years' worth of margin.
The problem is that accurately modelling cost reduction using native IPv6 in lieu of CG-NAT is hard when the folks who need convincing are the CFOs. They are more used to "spend 1 to get 2". Convincing them to "save 2 by spending 1" - not as easy as one may think. Mark.
Implementing IPv6 reduces costs for CGNAT. You will have (half?) as much traffic flowing through the CGNAT, so cheaper hardware and less IPv4 address space. Isn't it? (A back-of-envelope sketch follows below.) On 22.10.21 20:19, Mark Tinka wrote:
On 10/22/21 18:08, tim@pelican.org wrote:
I don't think it'll ever make money, but I think it will reduce costs. CGNAT boxes cost money, operating them costs money, dealing with the support fallout from them costs money. Especially in the residential space, where essentially if the customer calls you, ever, you just blew years' worth of margin.
The problem is accurately modelling cost reduction using native IPv6 in lieu of CG-NAT is hard when the folk that need convincing are the CFO's.
They are more used to "spend 1 to get 2". Convincing them to "save 2 by spending 1" - not as easy as one may think.
Mark.
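As a back-of-envelope sketch (the 70% share of traffic that could go native IPv6 is an assumption; plug in your own measured mix):

total_gbps=100        # hypothetical CGNAT load today
v6_capable_pct=70     # assumed share of flows that can go native IPv6
echo "$(( total_gbps * (100 - v6_capable_pct) / 100 )) Gbps still needs CGNAT"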
With a kick-ass pitch.

On November 26, 2021, Mark Tinka wrote, quoting Max Tulyev (11/3/21):
Implementing IPv6 reduces costs for CGNAT. You will have (half?) as much traffic flowing through the CGNAT, so cheaper hardware and less IPv4 address space. Isn't it?
How to express that in numbers a CFO can take to the bank? Mark.
Well … YMMV. We’ve been running v6 for years, and it didn’t really make a dent in spend or boxes or rate of v4 depletion. Big part of the problem in our neck of the woods is millions of v4-only terminals … as well as large customer/gov bids requiring tons of v4 address space.
Care to explain because the alternative seems pretty self-evident.

-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com
Midwest-IX
http://www.midwest-ix.com
Here are some maths and a one-argument kick-ass pitch for CFOs who use iPhones. Apple tells app devs to use IPv6 as it's 1.4 times faster than IPv4: https://www.zdnet.com/article/apple-tells-app-devs-to-use-ipv6-as-its-1-4-ti... Build around that, maybe? Jean
On 11/26/21 1:44 PM, Jean St-Laurent via NANOG wrote:
Here are some maths and a one-argument kick-ass pitch for CFOs who use iPhones.
*Apple tells app devs to use IPv6 as it's 1.4 times faster than IPv4*
https://www.zdnet.com/article/apple-tells-app-devs-to-use-ipv6-as-its-1-4-ti...
Build around that maybe?
This really hits my BS meter big time. I can't see how NATing is going to cause a 40% performance hit during connections. The article also mentions HTTP/2 (and later v3), which definitely make big improvements, so I'm suspecting that the author is conflating them. Mike
On Fri, Nov 26, 2021 at 6:07 PM Michael Thomas <mike@mtcc.com> wrote:
This really hits my BS meter big time. I can't see how NATing is going to cause a 40% performance hit during connections. The article also mentions HTTP/2 (and later v3), which definitely make big improvements, so I'm suspecting that the author is conflating them.
Mike
Ok, take the same ipv6 is faster claim from facebook https://www.internetsociety.org/blog/2015/04/facebook-news-feeds-load-20-40-...
On 11/26/21 3:11 PM, Ca By wrote:
Ok, take the same ipv6 is faster claim from facebook
https://www.internetsociety.org/blog/2015/04/facebook-news-feeds-load-20-40-...
Still really thin with details of why. At least this says that they are NAT'ing v4 at *their* edge. But 99% of the lag of filling your newsfeed is their backend and transport, not connection times so who knows what they are actually measuring. Most NAT'ing is done at the consumer end by your home router in any case. Mike
Might also be due to the Happy Eyeballs 2 artificial IPv4 A-resolution delay: https://www.researchgate.net/publication/343467289_Reducing_User_Perceived_L... I don't know the current Apple iOS HE2 postpone delay. JC Bisecco
We now have apple and fb saying ipv6 is faster than ipv4. If we can onboard Amazon, Netflix, Google and some others, then it is a done deal that ipv6 is indeed faster than ipv4. Hence, an easy argument to tell your CFO that you need IPv6 for your CDN. Xmas is coming, and so is budget season. Who knows, you might get lucky this year.
On 11/26/21 4:15 PM, Jean St-Laurent wrote:
We now have apple and fb saying ipv6 is faster than ipv4.
If we can onboard Amazon, Netflix, Google and some others, then it is a done deal that ipv6 is indeed faster than ipv4.
Hence, an easy argument to tell your CFO that you need IPv6 for your CDN.
Netflix is already v6-ready. The biggest obstacle is probably AWS, because that's where a lot of the long tail of the internet resides. Lobbying them would get the most bang for the buck. Mike
AWS has been gradually improving support and adding features. They just announced this service, which might help with adoption: https://aws.amazon.com/about-aws/whats-new/2021/11/aws-nat64-dns64-communica...
On 11/26/21 4:30 PM, Oliver O'Boyle wrote:
AWS has been gradually improving support and adding features. They just announced this service, which might help with adoption:
https://aws.amazon.com/about-aws/whats-new/2021/11/aws-nat64-dns64-communica...
That's a start, I guess. Before all they had was some weird VPN something or other. Let me guess though: they are monetizing their market failure. Mike
But CFOs like monetization. Was that thread about IPv6 or CFO?
On 11/26/21 4:39 PM, Jean St-Laurent wrote:
But CFOs like monetization. Was that thread about IPv6 or CFO?
Amazon's in this case. They are monetizing their lack of v6 support requiring you go through all kinds of expensive hoops instead of doing the obvious and routing v6 packets. Mike
On Fri., Nov. 26, 2021, 19:41 Michael Thomas, <mike@mtcc.com> wrote:
Amazon's in this case. They are monetizing their lack of v6 support requiring you go through all kinds of expensive hoops instead of doing the obvious and routing v6 packets.
They're getting better at it, at least. They also recently added v6 support in their NLBs and you can get a /56 for every VPC for direct access. I don't think they offer BYO v6 yet, as they do for v4, but it will come.
On Fri, Nov 26, 2021 at 6:51 PM Oliver O'Boyle <oliver.oboyle@gmail.com> wrote:
They're getting better at it, at least. They also recently added v6 support in their NLBs and you can get a /56 for every VPC for direct access. I don't think they offer BYO v6 yet, as they do for v4, but it will come.
Since we are deploying BYO IPv6 in AWS, I can assure you they do offer it now. That was a blocker for us. Scott
On Sat., Nov. 27, 2021, 10:46 Scott Morizot, <tmorizot@gmail.com> wrote:
Since we are deploying BYO IPv6 in AWS, I can assure you they do offer it now. That was a blocker for us.
Wonderful! When did they start offering that?
On Sat, Nov 27, 2021 at 5:05 PM Oliver O'Boyle <oliver.oboyle@gmail.com> wrote:
Wonderful! When did they start offering that?
I believe it was announced back in the first half of 2020. As I recall it was limited to certain regions at the time of the original announcement (and being AWS it probably still has some region and/or resource specific availability limitations).
On Sat., Nov. 27, 2021, 12:59 Gary Buhrmaster, <gary.buhrmaster@gmail.com> wrote:
I believe it was announced back in the first half of 2020.
As I recall it was limited to certain regions at the time of the original announcement (and being AWS it probably still has some region and/or resource specific availability limitations).
Likely. But if it was announced in 2020 then the rollout is either complete, or mostly complete, by now.
On 11/27/21 7:46 AM, Scott Morizot wrote:
Since we are deploying BYO IPv6 in AWS, I can assure you they do offer it now. That was a blocker for us.
I thought it had to be some virtual private cloud setup? To get the long tail it needs to be a lot more simple. Like "here is the AAAA record" after autoconf. Mike
On Sat., Nov. 27, 2021, 13:34 Michael Thomas, <mike@mtcc.com> wrote:
I thought it had to be some virtual private cloud setup? To get the long tail it needs to be a lot more simple. Like "here is the AAAA record" after autoconf.
Well, VPC is the only deployment model now. EC2 Classic is long gone (though some long-time legacy customers may still have it as an option). If you create an account, you get a default VPC. You can use it or create another with a few clicks. Prefixes get assigned upon creation, but you can add more afterwards. It's actually pretty straightforward. Setting up basic DNS in Route53 is also pretty straightforward. There are no real barriers up to this point.
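For instance, attaching an Amazon-provided IPv6 block to an existing VPC is a single CLI call (a sketch; the vpc-id is a placeholder):

$ aws ec2 associate-vpc-cidr-block --vpc-id vpc-0abc1234 --amazon-provided-ipv6-cidr-block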
On 11/27/21 02:41, Michael Thomas wrote:
Amazon's in this case. They are monetizing their lack of v6 support requiring you go through all kinds of expensive hoops instead of doing the obvious and routing v6 packets.
Individual CDNs and content providers have better control over how they deploy IPv6, vs. ISPs, who have far less capital, warm bodies and innovation DNA. I'm arguing for the latter. Mark.
On Nov 27, 2021, at 06:04 , Mark Tinka <mark@tinka.africa> wrote:
On 11/27/21 02:41, Michael Thomas wrote:
Amazon's in this case. They are monetizing their lack of v6 support requiring you go through all kinds of expensive hoops instead of doing the obvious and routing v6 packets.
Individual CDN's and content providers have better control over how they deploy IPv6, vs. ISP's who have far less capital, warm bodies and innovation DNA.
I'm arguing for the latter.
Mark.
I honestly think that in Amazon’s case, it’s because they’ve cocked up v4 so badly in their attempts to squeeze every drop of life out of every v4 address they have that they have built a nightmare network that makes it utterly difficult to do the obvious and simply route v6 packets because they can’t even do that with v4 if they wanted to. Admittedly, this is based only on comments and descriptions I’ve heard from others about the Amazon network, including customers, Amazon SEs, Amazon staff, etc., but it does seem to fit the available observations rather well. Owen
On Nov 27, 2021, at 06:05 , Mark Tinka <mark@tinka.africa> wrote:
On 11/27/21 02:39, Jean St-Laurent via NANOG wrote:
But CFOs like monetization. Was that thread about IPv6 or CFO?
In 2021, what's the difference?
Mark.
Even in 2021, one improves network capabilities while the other counts beans. Which is which is left as an exercise to the reader. Owen
On 11/27/21 02:15, Jean St-Laurent via NANOG wrote:
We now have apple and fb saying ipv6 is faster than ipv4.
If we can onboard Amazon, Netflix, Google and some others, then it is a done deal that ipv6 is indeed faster than ipv4.
Hence, an easy argument to tell your CFO that you need IPv6 for your CDN.
Could work for CDNs... but what about the CFO of an ISP or MNO? Mark.
On Nov 27, 2021, at 06:05 , Mark Tinka <mark@tinka.africa> wrote:
On 11/27/21 02:15, Jean St-Laurent via NANOG wrote:
We now have apple and fb saying ipv6 is faster than ipv4.
If we can onboard Amazon, Netflix, Google and some others, then it is a done deal that ipv6 is indeed faster than ipv4.
Hence, an easy argument to tell your CFO that you need IPv6 for your CDN.
Could work for CDN's... but what about the CFO of an ISP or MNO?
Mark.
Shouldn’t the argument that we can reduce our CGN spend by 50-80% work there? Especially when you couple it with the argument that IPv6 deployment will likely reduce CGN related support calls which are one of the biggest expenses for most ISP/MNOs? Owen
On Fri, Nov 26, 2021 at 3:07 PM Michael Thomas <mike@mtcc.com> wrote:
On 11/26/21 1:44 PM, Jean St-Laurent via NANOG wrote: Here are some maths and a one-argument kick-ass pitch for CFOs who use iPhones. Apple tells app devs to use IPv6 as it's 1.4 times faster than IPv4
This really hits my bs meter big time.
If I had to guess, this is an example of correlation is not causation. Folks with IPv6 tend to be on savvier service providers who have better performance for both IPv4 and IPv6. To find out for sure, you'd have to do an experiment where same-user-same-server connections are split between IPv4 and IPv6 and then measure the performance difference. I don't know if anyone has done that but these particular articles look like someone is just looking at the high-level metrics. Those won't hold any statistical validity because they're not actually random samples. Regards, Bill Herrin -- William Herrin bill@herrin.us https://bill.herrin.us/
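A crude version of that experiment against a single dual-stacked server (a sketch; substitute any dual-stack target you like, and repeat enough times for the numbers to mean anything):

$ curl -4 -s -o /dev/null -w 'v4 connect: %{time_connect}s\n' https://www.example.com/
$ curl -6 -s -o /dev/null -w 'v6 connect: %{time_connect}s\n' https://www.example.com/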
On 11/27/21 12:16 PM, William Herrin wrote:
I agree. It's pretty suspect that they didn't give the reason it was happening. I mean, why the incuriosity? Mike
On Sat, Nov 27, 2021 at 12:18 PM William Herrin <bill@herrin.us> wrote:
If I had to guess, this is an example of correlation is not causation. Folks with IPv6 tend to be on savvier service providers who have better performance for both IPv4 and IPv6. To find out for sure, you'd have to do an experiment where same-user-same-server connections are split between IPv4 and IPv6 and then measure the performance difference. I don't know if anyone has done that but these particular articles look like someone is just looking at the high-level metrics. Those won't hold any statistical validity because they're not actually random samples.
We wrote 110+ tests for flent.org to test services under load; you can use -4 or -6, and all the plots, against all different sorts of test conditions, can be compared against each other easily. Example of use:

flent --socket-stats --step-size=.05 -l 300 -H fremont.starlink.taht.net -4 -t ipv4 rrul
flent --socket-stats --step-size=.05 -l 300 -H fremont.starlink.taht.net -6 -t ipv6 rrul
flent-gui *.flent.gz # and add other data files

There are also several tests in the flent suite (like rrul46) that try IPv4 and IPv6 at exactly the same time. You typically run into (ISP) bottleneck or (DC) host bandwidth limits first, unless you also distribute the generated load across multiple machines (I use pdsh for this; I'd like to know of tools in modern clouds to fire off a load generator like this, or of other load generators).

We have also been using the irtt tool at a very high resolution (3ms) to map networks' jitter and latency at "idle", with rather interesting results for 3/4/5G, WiFi and Starlink, but we haven't been breaking down that data between IPv4 and IPv6 as yet.

Other useful statistics to perhaps gather at scale would be TCP_INFO rtt, loss, marking, and retransmit stats from customer-facing Apache or other proxy servers, broken down between IPv4 and IPv6 and by AS.
Regards, Bill Herrin
-- William Herrin bill@herrin.us https://bill.herrin.us/
-- I tried to build a better future, a few times: https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org Dave Täht CEO, TekLibre, LLC
With that specific line directly from Apple: "And when IPv6 is in use, the median connection setup is 1.4 times faster than IPv4. This is primarily due to reduced NAT usage and improved routing." There it is: improved routing. Jean
On 11/26/21 23:47, Jean St-Laurent via NANOG wrote:
With that specific line directly from Apple:
"And when IPv6 is in use, the median connection setup is 1.4 times faster than IPv4. This is primarily due to reduced NAT usage and improved routing."
There it is, Improved routing.
Perhaps you mean "improved forwarding". In an environment that is heavily peered, what is the visual difference in experience for a customer connecting to a site at 1ms vs. 1.4ms? Across the sea, assume 140ms between Cape Town - London (Omicron, anyone?), what is the visual difference between 140ms vs. 196ms? Okay, bad example, but I can probably get an MS-MPC from Juniper that can claw back 1.3X of that 1.4X advantage :-). Besides, locally-peered traffic is likely to exceed long-haul traffic, in many markets. Mark.
On 26/11/2021 22:47, Jean St-Laurent via NANOG wrote:
"And when IPv6 is in use, the median connection setup is 1.4 times faster than IPv4. This is primarily due to reduced NAT usage and improved routing."
Oh, I believe IPv6 is faster, but for completely different reasons. Modern, faster connections are more likely to have IPv6, while old low-bandwidth circuits may provide v4 only. Some users may also use a VPN, which is almost always v4-only. Their VPN may do funny routing, hairpinning and similar things, thus impacting their performance. -- Grzegorz Janoszka
Actually, I think it’s in the fine print here… “Connection setup is 1.4 times faster”. I can believe that NAT adds almost 40% overhead to the connection setup (3-way handshake) and some of the differences in packet handling in the fast path between v4 and v6 could contribute the small remaining difference. I doubt it is due to different connections, since we’re talking about measurements against dual-stack sites reached from dual-stack end-users, very likely traversing similar paths. Owen
On 11/27/21 2:22 PM, Owen DeLong via NANOG wrote:
Actually, I think it’s in the fine print here…
“Connection setup is 1.4 times faster”. I can believe that NAT adds almost 40% overhead to the connection setup (3-way handshake) and some of the differences in packet handling in the fast path between v4 and v6 could contribute the small remaining difference.
I doubt it is due to different connections, since we’re talking about measurements against dual-stack sites reached from dual-stack end-users, very likely traversing similar paths.
40% in isolation is pretty meaningless. If it's 40% of .1% overall it's called a rounding error. Mike
Well, 1.4x faster is a bit of an odd metric. I presume that means that connection set up times measured were on average 1/1.4 times as long for IPv6 as they were for IPv4, but there are other possible interpretations. So really, that’s a convoluted way of saying it takes 29% less time to set up an IPv6 connection than an IPv4 connection on average. I can believe that is likely in a scenario where one is dealing with IPv4 NAT overhead. It’s still probably rounding error for any real world purpose, since we’re probably talking about something that normally takes between 50 and 150 ms, so if it takes 1.4 times as long in IPv4, that’d be 70-210 ms, so still mostly under 1/5th of a second, which is not below human perception, but likely below human notice. Owen
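(Checking the arithmetic with bc:)

$ echo 'scale=3; 1 - 1/1.4' | bc
.286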
On Sat, Nov 27, 2021, 17:36 Owen DeLong via NANOG <nanog@nanog.org> wrote:
Well, 1.4x faster is a bit of an odd metric. I presume that means that connection set up times measured were on average 1/1.4 times as long for IPv6 as they were for IPv4, but there are other possible interpretations.
So really, that’s a convoluted way of saying it takes 29% less time to set up an IPv6 connection than an IPv4 connection on average.
I can believe that is likely in a scenario where one is dealing with IPv4 NAT overhead.
Why isn't this just inconsistent paths between V6 and V4/nat? (Divergent topologies)
On Nov 27, 2021, at 17:21 , Christopher Morrow <morrowc.lists@gmail.com> wrote:
Why isn't this just inconsistent paths between V6 and V4/nat? (Divergent topologies)
At least in most of my real world experience, they don’t tend to diverge all that much. Further, post-initiation performance seems to be largely on par v4<->v6 and without NAT, I see faster v4 connection startup times. V6 appears to still have a slight advantage, but it’s more like 5-10% than 30%. Owen
On 11/26/21 16:16, Jose Luis Rodriguez wrote:
Well … YMMV. We’ve been running v6 for years, and it didn’t really make a dent in spend or boxes or rate of v4 depletion. Big part of the problem in our neck of the woods is millions of v4-only terminals … as well as large customer/gov bids requiring tons of v4 address space.
I can very easily see why "IPv6 saves you on CG-NAT capex" might not be entirely true in cases such as these. On paper, it all adds up. Mark.
On 11/27/21 17:07, Masataka Ohta wrote:
Because lengthy IPv6 addresses mean a lot more opex than IPv4.
I disagree - it can be more opex if you want to run both together, but less so if you choose one; largely IPv6, but also largely IPv4 if you don't intend to be in the game for the rest of your life. My point was that there might not obviously be a linear relationship between less CG-NAT and more native IPv6, that makes a material difference to the CFO's Excel spreadsheet, off the bat. Mark.
On Nov 27, 2021, at 7:39 PM, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Mark Tinka wrote:
On 11/27/21 17:07, Masataka Ohta wrote: Because lengthy IPv6 addresses mean a lot more opex than IPv4. I disagree
Try to type in raw IPv6 addresses.
People are likely to use a technology originally developed because IPv4 had the same perception problem: DNS.
Fred Baker wrote:
People are likely to use a technology originally developed because IPv4 had the same perception problem: DNS.
Here in nanog, we are talking about network operations, a considerable part of which cannot rely on DNS. Masataka Ohta
On 11/28/21 06:43, Masataka Ohta wrote:
Here in nanog, we are talking about network operations, a considerable part of which cannot rely on DNS.
And yet Facebook were unable to access their kit to fix their recent outage because of it (or, lack of it). There was a time when knowing the IP(v4) address of every interface of every router in your network was cool. I have never had to care about that in close to 15 years. Right up there with losing interest in making software modems work in Linux, when it was a thing :-). If you want to remain stuck in the past, we don't have to join you. Mark.
Mark Tinka wrote:
Here in nanog, we are talking about network operations, a considerable part of which cannot rely on DNS.
And yet Facebook were unable to access their kit to fix their recent outage because of it (or, lack of it).
Exactly. That facebook poorly managed their DNS and caused the recent disaster is important evidence supporting my point that DNS, so often, may not be helpful for network operations against disastrous failures, including, but not limited to, DNS failures.
There was a time when knowing the IP(v4) address of every interface of every router in your network was cool.
I surely acknowledge your point that it is impossible to do so with MAC-address-based IPv6 addresses, which is why IPv6 opex is so high.
I have never had to care about that in close to 15 years. Right up there with losing interest in making software modems work in Linux, when it was a thing :-).
So, you are saying you haven't faced real operational problems from loss of DNS information for these 15 years. Congratulations on your luck!
If you want to remain stuck in the past, we don't have to join you.
Surely, the recent disaster of facebook happened in the recent past. So what? Masataka Ohta
On 11/28/21 14:58, Masataka Ohta wrote:
Exactly.
That facebook poorly managed their DNS to cause the recent disaster is an important evidence to support my point that DNS, so often, may not be helpful for network operations against disastrous failures, including, but not limited to, DNS failures.
Yes, but that does not mean that DNS is not valuable, or cannot be hardened. Everything can break, even an IPv4 interface on a router port. Good practice in network operations is what keeps these kinds of problems at bay. I mean, why else do we have lists like these? I am certain Facebook have hardened their DNS infrastructure, and that particular failure scenario should not recur, given all the clever people there, and around them.
There was a time when knowing the IP(v4) address of every interface of every router in your network was cool.
I surely acknowledge your point that it is impossible to do so with MAC address based IPv6 addresses, which is why IPv6 opex is so high.
But, with manually configured IP addresses, it is trivially easy to have a rule to assign lower part of IP addresses within a subnet for hosts and upper part for routers, which is enough to troubleshoot most network failures.
That's just satisfying one's mental (or emotional) nits. Routers (and customers) don't care about how anally we assign address space. As long as it is compliant, does not conflict, and is correctly routed. That we cannot transpose our IPv4 mental/emotional habits on to IPv6 does not make IPv6 more complicated. It just makes us more stuck in our ways. After all, DHCPv6 still does not offer a default gateway.
So, you are saying you haven't faced real operational problems from loss of DNS information for these 15 years.
Congratulations on your luck!
I am sure I have had a DNS issue of some sort or other in the past 15 years. The fact that I can't remember what it was is telling.
Surely, the recent disaster of facebook happened in the recent past.
So what?
And they have learned from it, and I dare say, fixed it. Facebook will neither be disposing of DNS any time soon, nor will they be dropping IPv6. Mark.
Mark Tinka wrote:
That facebook poorly managed their DNS and caused the recent disaster is important evidence supporting my point that DNS, so often, may not be helpful for network operations against disastrous failures, including, but not limited to, DNS failures.
Yes, but that does not mean that DNS is not valuable, or cannot be hardened.
As a person who proposed anycast DNS servers, against which facebook operated their DNS, I'm so sure you are right.
I am certain Facebook have hardened their DNS infrastructure, and that particular failure scenario should not recur, given all the clever people there, and around them.
All I can see is that there were a lot of stupid people in facebook. I really hope less of them still remain there. Masataka Ohta
On 11/28/21 15:33, Masataka Ohta wrote:
As a person who proposed anycast DNS servers, against which facebook operated their DNS, I'm so sure you are right.
Facebook's mistake on this is an easily fixable one. We've all been there. Nothing groundbreaking.
All I can see is that there were a lot of stupid people in facebook.
I really hope less of them still remain there.
Can't help you there... Mark.
Mark Tinka wrote:
As a person who proposed anycast DNS servers, against which facebook operated their DNS, I'm so sure you are right.
Facebook's mistake on this is an easily fixable one.
Certainly, but merely because it is an easily avoided one.
We've all been there.
People who really understand DNS, including but not limited to anycast one, have never been there. Masataka Ohta
On 11/28/21 16:13, Masataka Ohta wrote:
Certainly, but, merely because it is an easily avoided one.
None of us came out of the womb knowing anything. We learned as we went along. And we keep learning, right until our death. To expect experience before it is experienced has always been unreasonable.
People who really understand DNS, including but not limited to anycast one, have never been there.
Anycast was the result of things that broke. And even with Anycast, lots of things break, still. Continuity of service != never having been there. Mark.
It certainly sounds like you’ve never operated a network at scale if you think knowing the IP address of something reduces Operational expense. The only way to truly reduce Opex at scale is automation. Shane
sronan@ronan-online.com wrote:
It certainly sounds like you’ve never operated a network at scale if you think knowing the IP address of something reduces Operational expense.
It's Mark, not me, who said: : There was a time when knowing the IP(v4) address of every interface : of every router in your network was cool.
The only way to truly reduce Opex at scale is automation.
Automation by what? DNS? Masataka Ohta
Mark Tinka wrote:
It's Mark, not me, who said:
: There was a time when knowing the IP(v4) address of every interface : of every router in your network was cool.
In case you missed the nuance, I haven't had to do this in over 20 years.
Say it to Shane, not me. That you two cannot communicate well is not my problem. Masataka Ohta
On Mon, 29 Nov 2021 at 02:12, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
The only way to truly reduce Opex at scale is automation.
Automation by what? DNS?
Masataka Ohta
Most of our customers are provisioned by Radius. The rest are configured by scripting using Netconf. We use DNS to document the network. If our DNS was down and I needed to connect to a router in some city, do you really expect me to remember the IP address? I would have to look it up, and our chosen database for that happens to be DNS. It has some obvious advantages. Regards Baldur
I remember when I was a junior in a major NOC, we had this management host with a local hosts file for all critical components. Probably worth reviewing some old school techniques. 😉 If you can automate your gazillion-router business, you probably can also automate a couple of hosts files. Jean
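A minimal sketch, assuming a hand-maintained "name,address" inventory (routers.csv is hypothetical):

$ awk -F, '{ printf "%s\t%s\n", $2, $1 }' routers.csv >> /etc/hosts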
On Sun, 28 Nov 2021 at 13:59, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
But, with manually configured IP addresses, it is trivially easy to have a rule to assign lower part of IP addresses within a subnet for hosts and upper part for routers, which is enough to troubleshoot most network failures.
99% if not 100% of our subnets have either only routers, or only hosts plus a gateway. So that would be a strange rule to follow. Also very expensive if we are talking public addressing. I find that 10.x.y.z is not much if you want to have a system in your subnet numbering. With IPv6 there is much more space to enable systematic numbering schemes.
Baldur Norddahl wrote:
But, with manually configured IP addresses, it is trivially easy to have a rule to assign lower part of IP addresses within a subnet for hosts and upper part for routers, which is enough to troubleshoot most network failures.
99% if not 100% of our subnets have either only routers or only hosts + a gateway.
It merely means you should not use MAC address based IP addresses for, at least, routers, which is partly why opex of IPv4 is low.
So that would be a strange rule to follow. Also very expensive if we are talking public addressing.
I find that 10.x.y.z is not much if you want to have a system in your subnet numbering.
In most cases, it is enough that addresses are unique within a certain domain, which is why many ISPs are assigning addresses, not guaranteed to be globally unique, to their internal routers.
With ipv6 there is much more space to enable systematic numbering schemes.
More space, only to encourage the stupid idea of MAC-address-based addresses in IPv6/ND, is not required for systematic numbering. Masataka Ohta
On 11/28/21 15:59, Masataka Ohta wrote:
It merely means you should not use MAC address based IP addresses for, at least, routers, which is partly why opex of IPv4 is low.
I often wonder what Internet you use :-)...
More space, only to encourage stupid idea of MAC address based addresses of IPv6/ND, is not required for systematic numbering.
I'm sure a router won't stop working because we did not assign it an IP address "systematically". Nor will the spinning of the earth. Mark.
I like to put some servers behind that scheme.

2601::443:xxxx for HTTPS servers
2601::25:xxxx for MTA servers
2601::993:xxxx for IMAP

It gives a quick hint of what that IP is, even though it's IPv6 and usually not human-readable. Not sure what kind of scheme is used by medium/big ISPs. Do you go by zip code of the area covered, or some kind of logic to help people know what is behind that IPv6 network? Jean
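A sketch of how such service-coded addresses could be generated programmatically; the 2001:db8::/64 documentation prefix and host numbers are illustrative assumptions, not Jean's actual plan. The trick is reading the decimal port number as hex digits, so the address itself "spells" the port:

import ipaddress

# Encode the service's well-known port as the second-to-last hextet,
# as in Jean's 2601::443:xxxx / 2601::25:xxxx examples.
def service_address(prefix: str, port: int, host: int) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    # Interpret the decimal port number as hex digits (443 -> 0x443),
    # so the resulting address visually reads as the port.
    label = int(str(port), 16)
    return net[(label << 16) | host]

print(service_address("2001:db8::/64", 443, 1))     # 2001:db8::443:1
print(service_address("2001:db8::/64", 25, 0x2a))   # 2001:db8::25:2a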
On 11/28/21 16:20, Jean St-Laurent via NANOG wrote:
I like to put some servers behind that scheme.
2601::443:xxxx for https servers
2601::25:xxxx for MTA servers.
2601::993:xxxx for IMAP
It gives a quick hint of what that IP is, even though it's IPv6 and usually not human-readable.

Not sure what kind of scheme is used by medium/big ISPs.

Do you go by zip code of the area covered, or some kind of logic to help people know what is behind that IPv6 network?
We really aren't clever with IPv6 address design and assignment. The most we do is assign:

- 1x /48 to Loopbacks for all routers.
- /48's per PoP for infrastructure links.
- /48's per city for /56 assignments to customers.
- /48's per city for /48 assignments to customers.

We don't try to co-ordinate them in a "meaningful" way that would offer visual identification. We feel that is just too much work, and that getting IPv4/IPv6 parity for day-to-day operations is a challenge in and of itself, without trying to be fancy. Mark.
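For anyone who wants to play with the arithmetic, a rough sketch of that carving, assuming a hypothetical /32 allocation; the prefix sizes come from Mark's list above, everything else is invented:

import ipaddress

alloc = ipaddress.IPv6Network("2001:db8::/32")  # hypothetical RIR allocation

# One /48 carries all router loopbacks; subsequent /48s serve PoPs and cities.
forty_eights = alloc.subnets(new_prefix=48)
loopbacks = next(forty_eights)
pop_infra = next(forty_eights)   # infrastructure links for one PoP
city_a    = next(forty_eights)   # /56-per-customer pool for one city

# A /48 per city yields 256 x /56 customer assignments.
customers = list(city_a.subnets(new_prefix=56))
print(len(customers), customers[0], customers[-1])
# 256 2001:db8:2::/56 2001:db8:2:ff00::/56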
On Nov 28, 2021, at 08:55 , Mark Tinka <mark@tinka.africa> wrote:
On 11/28/21 16:20, Jean St-Laurent via NANOG wrote:
I like to put some servers behind that scheme.
2601::443:xxxx for https servers 2601::25:xxxx for MTA servers. 2601::993:xxxx for IMAP
It gives a quick hint of what that IP is, even though it's IPv6 and usually not human-readable.

Not sure what kind of scheme is used by medium/big ISPs.

Do you go by zip code of the area covered, or some kind of logic to help people know what is behind that IPv6 network?
We really aren't clever with IPv6 address design and assignment. The most we do is assign:
- 1x /48 to Loopbacks for all routers.
- /48's per PoP for infrastructure links.
- /48's per city for /56 assignments to customers.
- /48's per city for /48 assignments to customers.
Why are you so stingy with customer assignments? Why not properly assign /48s to customers and /40s to cities? Owen
On 11/28/2021 9:47 AM, Owen DeLong via NANOG wrote:
Why not properly assign /48s to customers and /40s to cities? ----------------------------------------------------------------------------------
Side note: I recently tried to get /48 per customer with ARIN on repeated emails and they refused. We were already given an IPv6 block a while back. I told them I wanted to expand it so I could give out a /48 per customer and that we had more than 65535 customers, which is the block we got; 65535 /48s. I didn't even account for our needs. Without arguing the reasons, we will have to hand out /56s, rather than /48s because of this. So, it's not all /48-unicorns, puppies and rainbows. scott
On 29 Nov 2021, at 09:41, scott <surfer@mauigateway.com> wrote:
On 11/28/2021 9:47 AM, Owen DeLong via NANOG wrote:
Why not properly assign /48s to customers and /40s to cities? ----------------------------------------------------------------------------------
Side note: I recently tried to get /48 per customer with ARIN on repeated emails and they refused. We were already given an IPv6 block a while back. I told them I wanted to expand it so I could give out a /48 per customer and that we had more than 65535 customers, which is the block we got; 65535 /48s. I didn't even account for our needs.
Without arguing the reasons, we will have to hand out /56s, rather than /48s because of this. So, it's not all /48-unicorns, puppies and rainbows.
scott
Looks like a policy omission. You should be able to grow the per customer allocation up to /48 per customer. One shouldn’t be stuck with /56 because one made a bad choice of prefix size initially. -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Nov 28, 2021, at 15:51 , Mark Andrews <marka@isc.org> wrote:
On 29 Nov 2021, at 09:41, scott <surfer@mauigateway.com> wrote:
On 11/28/2021 9:47 AM, Owen DeLong via NANOG wrote:
Why not properly assign /48s to customers and /40s to cities? ----------------------------------------------------------------------------------
Side note: I recently tried to get /48 per customer with ARIN on repeated emails and they refused. We were already given an IPv6 block a while back. I told them I wanted to expand it so I could give out a /48 per customer and that we had more than 65535 customers, which is the block we got; 65535 /48s. I didn't even account for our needs.
Without arguing the reasons, we will have to hand out /56s, rather than /48s because of this. So, it's not all /48-unicorns, puppies and rainbows.
scott
Looks like a policy omission. You should be able to grow the per customer allocation up to /48 per customer. One shouldn’t be stuck with /56 because one made a bad choice of prefix size initially.
There is definitely something wrong here… Policy clearly states that you should be able to obtain an allocation large enough to provide /48s to all your customers if you so choose. In fact, it is generally quite generous beyond that point. Owen
On 11/29/21 00:41, scott wrote:
Side note: I recently tried to get /48 per customer with ARIN on repeated emails and they refused. We were already given an IPv6 block a while back. I told them I wanted to expand it so I could give out a /48 per customer and that we had more than 65535 customers, which is the block we got; 65535 /48s. I didn't even account for our needs.
Without arguing the reasons, we will have to hand out /56s, rather than /48s because of this. So, it's not all /48-unicorns, puppies and rainbows.
We have two types of customers - those that get assigned a /48, and those that get assigned a /56. Mark.
On Nov 28, 2021, at 23:19 , Mark Tinka <mark@tinka.africa> wrote:
On 11/29/21 00:41, scott wrote:
Side note: I recently tried to get /48 per customer with ARIN on repeated emails and they refused. We were already given an IPv6 block a while back. I told them I wanted to expand it so I could give out a /48 per customer and that we had more than 65535 customers, which is the block we got; 65535 /48s. I didn't even account for our needs.
Without arguing the reasons, we will have to hand out /56s, rather than /48s because of this. So, it's not all /48-unicorns, puppies and rainbows.
We have two types of customers - those that get assigned a /48, and those that get assigned a /56.
Mark.
So why be stingy to the second class? Why not just assign everyone a /48? Owen
On Nov 28, 2021, at 04:58 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Mark Tinka wrote:
Here in NANOG, we are talking about network operations, a considerable part of which cannot rely on DNS. And yet Facebook were unable to access their kit to fix their recent outage because of it (or, the lack of it).
Exactly.
That Facebook poorly managed their DNS and caused the recent disaster is important evidence supporting my point that DNS, so often, may not be helpful for network operations against disastrous failures, including, but not limited to, DNS failures.
I’d argue that the true failure was failure to document the system adequately to provide for prompt resolution of the DNS problems in the absence of DNS and failure to properly distribute that knowledge to those that would need it in such a circumstance.
There was a time when knowing the IP(v4) address of every interface of every router in your network was cool.
I surely acknowledge your point that it is impossible to do so with MAC address based IPv6 addresses, which is why IPv6 opex is so high.
Hardly anyone uses MAC address based IPv6 addresses for anything, so your claim here is specious. Statically or manually configured IPv6 addresses are no more difficult than the equivalent in IPv4, just slightly longer. Consider, for example, 192.159.10.7 vs. 2620:0:930::400:7 The 400 could have been omitted leaving 2620:0:930::7, but I chose to have additional flexibility that is available from IPv6 in classifying addresses for particular purposes and decided to use the next order quartet for that purpose.
But, with manually configured IP addresses, it is trivially easy to have a rule to assign lower part of IP addresses within a subnet for hosts and upper part for routers, which is enough to troubleshoot most network failures.
It’s equally easy to do that in IPv6 as well (while I prefer to use the reverse strategy, using low order addresses for routers and higher numbers for non-forwarding hosts).
I have never had to care about that in close to 15 years. Right up there with losing interest in making software modems work in Linux, when it was a thing :-).
So, you are saying you haven't faced real operational problems from loss of DNS information for these 15 years.
Congratulations for your luck!
In reality, DNS properly implemented is extraordinarily reliable these days. It’s not completely failure-proof, as Facebook recently demonstrated so thoroughly, but consider how extraordinary that failure was in order to garner so much attention. Gone are the days when DNS failures were a part of daily operational life that we simply accepted.
If you want to remain stuck in the past, we don't have to join you.
Surely, the recent disaster at Facebook happened in the recent past.
Yes, but if you really look at it, that was a failure of preparedness much more than a failure of DNS.
So what?
So, the world has moved on and modern networks are built using tools you appear to prefer not to depend on… I remember when an RS-232 port was the gold standard of being able to recover your router. Today, we accept at least USB and in most cases even Ethernet as a viable alternative. Things get better, but that comes with higher-level dependencies on underlying infrastructure. Fortunately, the underlying infrastructure gets better as well, making that possible and even reasonable. The fact that you prefer not to recognize this is a sign that you are failing to adapt to the present in favor of living in the past as Mark stated. Owen
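As an aside, the "classification quartet" Owen mentions above (the 400 in 2620:0:930::400:7) is trivially machine-readable too. A small sketch using his example address; the purpose labels are invented for illustration:

import ipaddress

PURPOSE = {0x400: "servers"}  # invented labels; Owen's real scheme isn't public

def classify(addr: str) -> str:
    hextets = ipaddress.IPv6Address(addr).exploded.split(":")
    field = int(hextets[6], 16)  # the next-to-last quartet, per Owen's layout
    return PURPOSE.get(field, f"unknown ({field:#x})")

print(classify("2620:0:930::400:7"))  # -> servers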
On Sun, 28 Nov 2021 at 13:00, Masataka Ohta < mohta@necom830.hpcl.titech.ac.jp> wrote:
That Facebook poorly managed their DNS and caused the recent disaster is important evidence supporting my point that DNS, so often, may not be helpful for network operations against disastrous failures, including, but not limited to, DNS failures.
I don't want to wade into the middle of this argument, but has there been more information about the recent Facebook outage released that I missed? All I've read seems to say that the loss of connectivity to their DNS servers was a symptom, rather than the cause of the outage. Dave
On Sun, Nov 28, 2021 at 1:28 PM Dave Bell <me@geordish.org> wrote:
On Sun, 28 Nov 2021 at 13:00, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:

That Facebook poorly managed their DNS and caused the recent disaster…

I don't want to wade into the middle of this argument, but has there been more information about the recent Facebook outage released that I missed? All I've read seems to say that the loss of connectivity to their DNS servers was a symptom, rather than the cause of the outage.
Hi Dave, You haven't missed anything. Facebook broke its backbone by mistake. Before they could restore it, the DNS records went stale and the now-stale servers withdrew themselves from the network as Facebook designed them to do when the records go stale. Unfortunately for Facebook, the internal servers did the same thing as the external ones. This broke the authentication system which in turn broke everything else, complicating their efforts to access the various systems including the ones they could have copied and pasted IP addresses from. But, to hear Masataka tell it, copy and paste hasn't been invented yet so we all type IP addresses by hand on our vt100 CRT terminals. Regards, Bill Herrin -- William Herrin bill@herrin.us https://bill.herrin.us/
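The mechanism Bill describes is, at heart, a health check that withdraws an anycast announcement when a node judges its own data stale. A pseudo-sketch of that control loop, with the route-manipulation helpers left as stub callables since the real machinery is BGP, not Python:

import time

STALE_AFTER = 300   # seconds; hypothetical staleness threshold
CHECK_EVERY = 1

def control_loop(last_sync_time, announce, withdraw, iterations=3):
    """Withdraw the anycast service prefix when local data looks stale.

    Mirrors the behaviour in Facebook's write-up: each DNS node
    self-withdraws when it judges its own data stale. The outage lesson:
    if *internal* resolvers follow the same rule, losing the backbone
    also takes out the tooling needed to fix the backbone.
    """
    for _ in range(iterations):           # bounded so the sketch terminates
        if time.time() - last_sync_time() > STALE_AFTER:
            withdraw()                    # stop attracting queries we can't serve
        else:
            announce()
        time.sleep(CHECK_EVERY)

# Demo with stub callables standing in for real BGP manipulation:
if __name__ == "__main__":
    stale_since = time.time() - 600       # pretend the last sync was 10 min ago
    control_loop(lambda: stale_since,
                 announce=lambda: print("announcing service prefix"),
                 withdraw=lambda: print("withdrawing service prefix"))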
William Herrin wrote:
But, to hear Masataka tell it, copy and paste hasn't been invented yet so we all type IP addresses by hand on our vt100 CRT terminals.
You should be using such advanced technologies to input ASCII text as touch and swipe, which are very slow, even slower than cut and paste. But you should remember that using an ASCII keyboard (on a vt100 or whatever) is the fastest way to input IPv4 addresses. Or, maybe, you can't touch type. Masataka Ohta
Dave Bell wrote:
That Facebook poorly managed their DNS and caused the recent disaster is important evidence supporting my point that DNS, so often, may not be helpful for network operations against disastrous failures, including, but not limited to, DNS failures.

I don't want to wade into the middle of this argument, but has there been more information about the recent Facebook outage released that I missed?
You must have missed: https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/ The end result was that our DNS servers became unreachable even though they were still operational. This made it impossible for the rest of the internet to find our servers.
All I've read seems to say that the loss of connectivity to their DNS servers was a symptom, rather than the cause of the outage.
See above. Another part of the release is also interesting: and second, the total loss of DNS broke many of the internal tools we'd normally use to investigate and resolve outages like this. That is evidence for my statement above "that DNS, so often, may not be helpful for network operations against disastrous failures". You can't rely on automatic tools over DNS when DNS is failing. Masataka Ohta
On 11/29/21 03:33, Masataka Ohta wrote:
The end result was that our DNS servers became unreachable even though they were still operational. This made it impossible for the rest of the internet to find our servers.
So your suggestion to map machine addresses to human-readable names is... what? Or should we all just get bigger brains and remember machine addresses by heart :-)?
That is evidence for my statement above "that DNS, so often, may not be helpful for network operations against disastrous failures".
Don't be drawn into Facebook's size as being what all operators on the Internet do. If DNS, for all operators, died in the same way it did for Facebook, I'm certain we'd all be too busy to answer each other on this thread. Operations significantly smaller than Facebook have had well-architected DNS deployment for yonks. Don't let Facebook's scale leave you with the assumption that if they cocked it up, the rest of us don't have a chance in hell. Mark.
On Nov 28, 2021, at 23:25 , Mark Tinka <mark@tinka.africa> wrote:
On 11/29/21 03:33, Masataka Ohta wrote:
The end result was that our DNS servers became unreachable even though they were still operational. This made it impossible for the rest of the internet to find our servers.
So your suggestion to map machine addresses to human-readable names is... what?
If you can limit the names to the characters a-f, i, o, s, and z, then it's possible to do so with IPv6 addresses natively (which you can't do in IPv4; o=0, i=1, s=5, and z=2). I'm not saying this is a good idea, but it is possible. There are a large number of English words that can be spelled with just those 10 characters. Owen
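A quick sketch of Owen's substitution; the letter-to-digit mapping is his, while the example words and the documentation prefix are illustrative:

SUBST = str.maketrans({"o": "0", "i": "1", "z": "2", "s": "5"})

def hexspeak(word: str) -> str:
    """Spell a word as an IPv6 hextet using the a-f/i/o/s/z alphabet."""
    encoded = word.lower().translate(SUBST)
    if not all(c in "0123456789abcdef" for c in encoded) or len(encoded) > 4:
        raise ValueError(f"{word!r} does not fit in a hextet")
    return encoded

# e.g. documentation-prefix addresses that "read" as words:
print(f"2001:db8::{hexspeak('dead')}:{hexspeak('beef')}")  # 2001:db8::dead:beef
print(f"2001:db8::{hexspeak('face')}:{hexspeak('b00c')}")  # 2001:db8::face:b00c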
On Nov 28, 2021, at 02:42 , Mark Tinka <mark@tinka.africa> wrote:
On 11/28/21 06:43, Masataka Ohta wrote:
Here in NANOG, we are talking about network operations, a considerable part of which cannot rely on DNS.

And yet Facebook were unable to access their kit to fix their recent outage because of it (or, the lack of it).
I’d argue that failing to put the correct documentation in place for coping with a DNS outage was the bigger issue than the DNS failure in the Facebook outage… So would a number of the engineers I know at Facebook.
There was a time when knowing the IP(v4) address of every interface of every router in your network was cool. I have never had to care about that in close to 15 years. Right up there with losing interest in making software modems work in Linux, when it was a thing :-).
There was a time when a moderately large network had fewer than 50 routers. Those days are gone. Today, it's impossible to build large-scale networks without depending on certain tools. That means that the failure of those tools can be catastrophic if one is not properly prepared. This is simply the modern reality. Proper preparation is harder than it used to be, but for any such network, there should be online and offline copies of sufficient documentation (adequately maintained) to restore service of any such underlying facility quickly in the event of a failure. Owen
IPv6 can be shorter than IPv4. Here is the proof: ping6 ::1 is shorter than ping 127.1. IPv6 addresses can be very small when done properly. Jean

On 11/28/21 05:37, Masataka Ohta wrote:
Try to type in raw IPv6 addresses.
There is DNS for that. Mark.
On Nov 27, 2021, at 19:37 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Mark Tinka wrote:
On 11/27/21 17:07, Masataka Ohta wrote:
Because lengthy IPv6 addresses mean a lot more opex than IPv4.

I disagree.
Try to type in raw IPv6 addresses.
Rarely necessary in the modern age, but really not significantly more difficult than IPv4 once you become accustomed to it. Owen
On 26/11/2021 13:52, Mark Tinka wrote:
On 11/3/21 22:13, Max Tulyev wrote:
Implementing IPv6 reduces costs for CGNAT. You will have (half?) the traffic flowing through CGNAT, so cheaper hardware and less IPv4 address space. Isn't that so?
How do you express that in numbers a CFO can take to the bank?
"want to buy 5 of those shiny new CGNAT boxes or only 2 ?" Frank
On 22/10/2021 17:08, tim@pelican.org wrote:
I don't think it'll ever make money, but I think it will reduce costs. CGNAT boxes cost money, operating them costs money, dealing with the support fallout from them costs money. Especially in the residential space, where essentially if the customer calls you, ever, you just blew years' worth of margin.
There aren't enough folk thinking along these lines, so thank you for writing it. Every flow you can route exclusively with 6, is one flow you aren't having to pay extra for so it can sit in a CGNAT state table. ... And that's before they call you, as Tim also rightly points out. -- Tom
On 10/22/21 17:45, Bryan Fields wrote:
Until IPv6 provides a way to make money for the ISP, I don't see it being offered outside of the datacenter.
It is being offered, it's just not being adopted. We deliver an IPv6 /126 p2p and /56 or /48 onward assignment to all our DIA customers. No interest. We deliver an IPv6 /125 p2p and eBGP session to all our IP Transit customers. 5/10 are interested. You can only do what you can only do. Mark.
Bryan, On Oct 22, 2021, at 11:45 AM, Bryan Fields <Bryan@bryanfields.net> wrote:
On 10/22/21 11:13 AM, Job Snijders via NANOG wrote:
Another aspect that flabbergasts me anno 2021 is how there *still* are BGP peering disputes between (more than two) major global internet service providers in which IPv6 is 'held hostage' as part of slow commercial negotiations. Surely end-to-end IPv6 connectivity should be a priority?
Even the DNS root servers are not 100% reachable via IPv6.
Excepting temporary failures, they are as far as I am aware. Why do you think they aren’t?
I would think IANA would have some standard about reachability for root operators.
I think you might misunderstand relationships here. The IANA team’s standards are what the community defines. In the case of the root operators, RFC 7720 says “root service” must be available via IPv6 and RSSAC-001 (“Service Expectations of Root Servers”, https://www.icann.org/en/system/files/files/rssac-001-root-service-expectati...) says: "[E.3.1-B] Individual Root Servers will deliver the service in conformance to IETF standards and requirements as described in RFC 7720 [4] and any other IETF standards-defined Internet Protocol as deemed appropriate." So, in theory, all the root servers should be available via IPv6 and, as far as I can tell, they are. However, the IANA team is not the enforcement arm of the Internet. If a root server operator chooses to not abide by RFC 7720, there is nothing the IANA team can do unilaterally other than make the root server operator aware of the fact.
Until IPv6 provides a way to make money for the ISP, I don't see it being offered outside of the datacenter.
Different markets, different approaches. In the areas I’ve lived in Los Angeles, commodity residential service via AT&T (1 Gbps up/down fiber) and Spectrum (varying speeds) is dual stack by default (as far as I can tell). I suspect all it would take would be one of the providers in your area to offer IPv6 and advertise the fact in their marketing to cause the others to fall into line. Regards, -drc
I think you will find that there are some places in which getting IPv6 network service has been difficult, and as a result even IPv6-capable equipment is not reachable by IPv6. Those are, however, few and far between. Sent using a machine that autocorrects in interesting ways...
On Oct 23, 2021, at 6:04 AM, David Conrad <drc@virtualized.org> wrote:
So, in theory, all the root servers should be available via IPv6 and, as far as I can tell, they are.
On Sat, Oct 23, 2021, 15:17 Fred Baker <fredbaker.ietf@gmail.com> wrote:
I think you will find that there are some places in which getting IPv6 network service has been difficult, and as a result even IPv6-capable equipment is not reachable by IPv6. Those are, however, few and far between.

Fred, do you mean places like, all of Verizon FiOS?
Sent using a machine that autocorrects in interesting ways...
On Oct 23, 2021, at 6:04 AM, David Conrad <drc@virtualized.org> wrote:
So, in theory, all the root servers should be available via IPv6 and, as far as I can tell, they are.
Sent using a machine that autocorrects in interesting ways...
On Oct 23, 2021, at 1:55 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Sat, Oct 23, 2021, 15:17 Fred Baker <fredbaker.ietf@gmail.com> wrote: I think you will find that there are some places in which getting IPv6 network service has been difficult, and as a result even IPv6-capable equipment is not reachable by IPv6. Those are, however, few and far between.

Fred, do you mean places like, all of Verizon FiOS?

That would be an example. If I traceroute to an IPv6 address, the fact that I get a response is proof that the path works. F root has some servers for which, MOU be damned, there is no working IPv6 path. Mumble…
Sent using a machine that autocorrects in interesting ways...
On Oct 23, 2021, at 6:04 AM, David Conrad <drc@virtualized.org> wrote:
So, in theory, all the root servers should be available via IPv6 and, as far as I can tell, they are.
On 10/23/21 9:03 AM, David Conrad wrote:
Bryan,
Even the DNS root servers are not 100% reachable via IPv6.
Excepting temporary failures, they are as far as I am aware. Why do you think they aren’t?
I can't reach C, 2001:500:2::c, from many places in v6 land. My home and secondary data center can't reach it, but my backup VMs at another data center can. <snip>
However, the IANA team is not the enforcement arm of the Internet. If a root server operator chooses to not abide by RFC 7720, there is nothing the IANA team can do unilaterally other than make the root server operator aware of the fact.
Surely IANA has the power to compel a root server operator to abide by policy or they lose the right to be a root server?
Until IPv6 provides a way to make money for the ISP, I don't see it being offered outside of the datacenter.
Different markets, different approaches. In the areas I’ve lived in Los Angeles, commodity residential service via AT&T (1 Gbps up/down fiber) and Spectrum (varying speeds) is dual stack by default (as far as I can tell). I suspect all it would take would be one of the providers in your area to offer IPv6 and advertise the fact in their marketing to cause the others to fall into line.
Prior ISP charged me $15/month per IPv4 address and a mandatory router rent of $10/month. New one gets $5/month per IPv4 address. The reason for this is IP scarcity. They have plenty of v4 space, so this allows them to charge for it. v6 isn't going to make them any more money as they can't charge for it. -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net
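For anyone wanting to reproduce Bryan's observation from their own vantage point, a crude probe over TCP/53; the two addresses are published root server IPv6 addresses, and a TCP connect is of course only a rough proxy for full DNS service:

import socket

ROOTS = {  # two of the published root server IPv6 addresses
    "c.root-servers.net": "2001:500:2::c",
    "f.root-servers.net": "2001:500:2f::f",
}

def reachable(addr: str, port: int = 53, timeout: float = 3.0) -> bool:
    """TCP connect to port 53 -- crude, but enough to spot a broken v6 path."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, addr in ROOTS.items():
    print(f"{name} ({addr}): {'ok' if reachable(addr) else 'UNREACHABLE'}")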
Bryan, On Oct 23, 2021, at 5:56 PM, Bryan Fields <Bryan@bryanfields.net> wrote:
Excepting temporary failures, they are as far as I am aware. Why do you think they aren’t?
I can't reach C, 2001:500:2::c, from many places in v6 land. My home and secondary data center can't reach it, but my backup VMs at another data center can.
Ah. Cogent. I suspect IPv6 peering policies. Somebody should bake a cake.
However, the IANA team is not the enforcement arm of the Internet. If a root server operator chooses to not abide by RFC 7720, there is nothing the IANA team can do unilaterally other than make the root server operator aware of the fact.
Surely IANA has the power to compel a root server operator to abide by policy or they lose the right to be a root server?
To compel? No. Not in the slightest. That is not how the root server system works. This is a (very) common misconception. There has been some effort to create a governance model for the root server system (see https://www.icann.org/en/system/files/files/rssac-037-15jun18-en.pdf) but I believe it has gotten bogged down in the question of “what do you do when a root server operator isn’t doing the job ‘right’ (whatever that means and after figuring out who decides) but doesn’t want to give up being a root server operator?”. It’s a hard question, but it isn’t the folks at IANA who answer it. Regards, -drc
On 10/26/21 12:10 PM, David Conrad wrote:
Surely IANA has the power to compel a root server operator to abide by policy or they lose the right to be a root server? To compel? No. Not in the slightest. That is not how the root server system works. This is a (very) common misconception.
Can you explain how it would work? Say you have a root server operator who starts messing up, is there any ability to remove them?
There has been some effort to create a governance model for the root server system (see https://www.icann.org/en/system/files/files/rssac-037-15jun18-en.pdf) but I believe it has gotten bogged down in the question of “what do you do when a root server operator isn’t doing the job ‘right’ (whatever that means and after figuring out who decides) but doesn’t want to give up being a root server operator?”.
Seems like a good policy, 6.3 seems to cover how to fix technical issues with a root operator.
It’s a hard question, but it isn't the folks at IANA who answer it.
Who does? Doesn't IANA designate root servers and the . zone? -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net
Bryan - One of the things that was clarified with the IANA Stewardship Transition is that ICANN has (at least) two distinct roles contained within it: one is coordination of the domain name community to develop domain name policy, and the other is the IANA / Public Technical Identifiers (PTI) role serving as operator of the IANA functions (i.e. performing the administration of the various DNS, protocol, and Internet number registries).

The IANA doesn't set policy, but rather takes policy for each set of identifiers from the respective community: a) the ICANN DNS community for the DNS root zone, b) the IETF for the protocol parameter registries, and c) the RIRs for the unicast IPv4, unicast IPv6, and ASN registries listed in IETF RFC 7249.

David is definitely correct to say that determining what (if any) governance model should be utilized for the root server operators is a question outside the scope of the administrative/technical operations performed by the IANA/PTI, and rather a question that the various DNS stakeholders (DNS community, ICANN, IETF, and the Root Server Operators) must ponder. /John
It appears that Bryan Fields <Bryan@bryanfields.net> said:
Can you explain how it would work? Say you have a root server operator who starts messing up, is there any ability to remove them?
Nope. We are fortunate that for over 30 years the root servers have all been competent and reliable.
It’s a hard question, but it isn't the folks at IANA who answer it.
Who does? Doesn't IANA designate root servers and the . zone?
The root server operators are basically the people who have always run the root servers, give or take a few changes due to mergers and a few additions over a decade ago to get better geographic diversity. R's, John
On Oct 26, 2021, at 9:25 AM, Bryan Fields <Bryan@bryanfields.net> wrote:
Can you explain how it would work? Say you have a root server operator who starts messing up, is there any ability to remove them?
You might look at https://www.icann.org/en/system/files/files/rssac-037-15jun18-en.pdf. Yes, there is a proposed way to remove an operator that is not working out.
On Tue, 26 Oct 2021, David Conrad wrote:
Ah. Cogent. I suspect IPv6 peering policies. Somebody should bake a cake.
According to https://twitter.com/Benjojo12/status/1452673637606166536 Cogent<->Google IPv6 now works. A cake is in order, but perhaps a celebratory one!? -- Mikael Abrahamsson email: swmike@swm.pp.se
On Oct 26, 2021, at 9:11 AM, David Conrad <drc@virtualized.org> wrote:
There has been some effort to create a governance model for the root server system (see https://www.icann.org/en/system/files/files/rssac-037-15jun18-en.pdf) but I believe it has gotten bogged down in the question of “what do you do when a root server operator isn’t doing the job ‘right’ (whatever that means and after figuring out who decides) but doesn’t want to give up being a root server operator?”.
Unless you actually read the document. The process is that the fact is recognized and documented, the Designation and Removal function advises the ICANN board, they adopt a resolution and instruct the IANA to remove the addresses from the relevant files, and from that point on nobody NEW tries to use the RSO. If someone does ask the company a question, they might or might not respond, and it might even have correct data, but then again might not. We can't control who sends us requests, and we don't have a black list.
On Fri, Oct 22, 2021 at 8:48 AM Bryan Fields <Bryan@bryanfields.net> wrote:
On 10/22/21 11:13 AM, Job Snijders via NANOG wrote:
Another aspect that flabbergasts me anno 2021 is how there *still* are BGP peering disputes between (more than two) major global internet service providers in which IPv6 is 'held hostage' as part of slow commercial negotiations. Surely end-to-end IPv6 connectivity should be a priority?
Even the DNS root servers are not 100% reachable via IPv6. I would think IANA would have some standard about reachability for root operators.
FWIW, I was just able to change my home office internet (I reside in the most densely populated county of Florida). The new provider sold me a dual-stack connection; however, when they came to deliver it, there was no IPv6 as promised. After spending almost a week playing phone tag, I finally got someone with clue. I was told they have no support for IPv6 and no plans to ever support IPv6, as there is no way to monetize it.
This leaves me in the same position as my prior circuit via the local cable co. (no plans to offer IPv6) but at least it's faster than the 2 meg up cable service.
Until IPv6 provides a way to make money for the ISP, I don't see it being offered outside of the datacenter.
87% of mobiles in the USA are IPv6: https://www.worldipv6launch.org/measurements/
-- Bryan Fields
727-409-1194 - Voice http://bryanfields.net
And IPv4 too, I presume, so it is still easier and costs less money to just go with that. From our perspective as an MSP, no customer has a requirement to be reachable via IPv6, so it's still cheaper to buy up IPv4 address space and do a lot of NAT than to convert all our services to function properly with IPv6. Sure, one could argue that they should have been made that way from the beginning, but without customer demand why would we spend the money? //Gustav
On Oct 23, 2021, at 8:30 AM, Ca By <cb.list6@gmail.com> wrote:
87% of mobiles in the USA are IPv6: https://www.worldipv6launch.org/measurements/
Agreed. When they have to connect to an IPv4 only host, they do some type of AFTR. These devices have never known a world outside of this situation. That is a major difference.
-- Bryan Fields
727-409-1194 - Voice http://bryanfields.net
So I'm curious how the mobile operators deploying IPv6 to the handsets are dealing with IPv4. The simplest would be to get the phone a routable IPv4 address, but that would seemingly exacerbate the reason they went to v6 in the first place. Are carriers NAT'ing somewhere along the line? If so, where? Like does the phone encapsulate v4 in 4-in-6? Or does the phone get a net 10 address and it gets NAT'd by the carrier? It seems also for mobile carriers there is incentive for as much transit as possible for native v6 to the servers. Or is the deployment of v6 mainly within the carrier network itself and it's NAT'd somewhere? Basically, what does a typical v6/v4 architecture look like for a mobile carrier these days? Mike

On 10/23/21 8:13 AM, Brian Johnson wrote:
On Oct 23, 2021, at 8:30 AM, Ca By <cb.list6@gmail.com> wrote:
87% of mobiles in the usa are ipv6
Agreed. When they have to connect to an IPv4 only host, they do some type of AFTR. These devices have never known a world outside of this situation. That is a major difference.
-- Bryan Fields
727-409-1194 - Voice http://bryanfields.net
On Sat, Oct 23, 2021 at 10:33 AM Michael Thomas <mike@mtcc.com> wrote:
So I'm curious how the mobile operators deploying ipv6 to the handsets are dealing with ipv4. The simplest would be to get the phone a routable ipv4 address, but that would seemingly exacerbate the reason they went to v6 in the first place.
First, consider that the 3 major cell carriers in the USA each have 100 million customers. Also, consider they all now have a home broadband angle. Where do 100 million IPv4 addresses come from? Not RFC 1918, not ARIN, … and we are just talking about customer IP addresses, not considering towers, backend systems, call centers, retail ….

So the genesis of 464XLAT / RFC 6877 is that IPv4 cannot go where we need to go; the mobile architecture must be IPv6 to comply with the e2e principle and not constrain the scaling of the customers / edge. Other cell carriers believe in operating many unique IPv4 networks … like a 10.0.0.0/8 per metro, but even that breaks down and cannot scale… and you end up with proxies / NATs / SBCs everywhere just to make internal apps like IMS work, which is a lot of state.

Are carriers NAT'ing somewhere along the line? If so, where? Like does the phone encapsulate v4 in 4-in-6? Or does the phone get a net 10 address and it gets NAT'd by the carrier?
~80% of traffic goes to fb, goog, yt, netflix, bing, o365, hbomax, apple tv, … all of which are IPv6. So, only 20% of traffic requires NAT, when you have IPv6. I am hoping TikTok and AWS move to be default-on for IPv6 soon. The NATs don't scale well and take the brunt of attacks, so services that require NAT suffer. Real shame, but they have a path to improve performance by deploying IPv6. That's why performance-driven companies use IPv6 (fb, goog, akamai, …)
It seems also for mobile carriers there is incentive for as much transit as possible for native v6 to the servers. Or is the deployment of v6 mainly within the carrier network itself and it's NAT'd somewhere?
Basically what does a typical v6/v4 architecture look like for a mobile carrier these days?
Mike
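For the fraction of traffic that still needs IPv4: under 464XLAT, the handset's CLAT and the carrier-side NAT64 translate by embedding the IPv4 address in the low 32 bits of an IPv6 prefix (RFC 6052). A sketch of that embedding with the well-known 64:ff9b::/96 prefix; real deployments may use a network-specific prefix instead:

import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def synthesize(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    return NAT64_PREFIX[int(ipaddress.IPv4Address(v4))]

def extract(v6: str) -> ipaddress.IPv4Address:
    """Recover the original IPv4 address from a synthesized one."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFF_FFFF)

print(synthesize("192.0.2.1"))        # 64:ff9b::c000:201
print(extract("64:ff9b::c000:201"))   # 192.0.2.1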
On 10/23/21 11:52 AM, Ca By wrote:
On Sat, Oct 23, 2021 at 10:33 AM Michael Thomas <mike@mtcc.com> wrote:
So I'm curious how the mobile operators deploying ipv6 to the handsets are dealing with ipv4. The simplest would be to get the phone a routable ipv4 address, but that would seemingly exacerbate the reason they went to v6 in the first place.
First, consider that the 3 major cell carriers in the usa each have 100 million customers. Also, consider they all now have a home broadband angle. Where do 100 million ipv4 addresses come from? Not rfc 1918, not arin, … and we are just talking about customer ip addresses, not considering towers, backend systems, call centers, retail ….
So the genesis of 464xlat / rfc 6877 is that ipv4 cannot go where we need to go, the mobile architecture must be ipv6 to be comply with the e2e principle and not constrain the scaling of the customers / edge. Other cell carriers believe in operating many unique ipv4 networks … like a 10.0.0.0/8 per metro, but even that breaks down and cannot scale… and you end up with proxies / nats / sbcs everywhere just to make internal apps like ims work, which is a lot of state.
464, that's what I was looking for... there are so many transition schemes I wasn't sure which one they chose. So it's essentially double NAT'ing. Does that require TURN too for streaming? I can't remember what the limitations of STUN are.
Are carriers NAT'ing somewhere along the line? If so, where? Like does the phone encapsulate v4 in 4-in-6? Or does the phone get a net 10 address and it gets NAT'd by the carrier?
~80% of traffic goes to fb, goog, yt, netflix, bing, o364, hbomax, apple tv, … all of which are ipv6. So, only 20% of traffic requires nat, when you have ipv6. I am hoping tiktoc and aws move to be default on for ipv6 soon.
Yeah, AWS is the most glaring, since it probably hosts a significant portion of the long tail. It appears that AWS only supports v6 within VPCs. Google only appears to support v6 if you use their load balancer. Sad. Mike
Ca By wrote:
First, consider that the 3 major cell carriers in the usa each have 100 million customers. Also, consider they all now have a home broadband angle. Where do 100 million ipv4 addresses come from? Not rfc 1918, not arin, … and we are just talking about customer ip addresses, not considering towers, backend systems, call centers, retail ….
Are you saying mobile terminals must be identified by IP addresses? Masataka Ohta
On Sun, Oct 24, 2021 at 9:28 AM Masataka Ohta < mohta@necom830.hpcl.titech.ac.jp> wrote:
Ca By wrote:
First, consider that the 3 major cell carriers in the usa each have 100 million customers. Also, consider they all now have a home broadband angle. Where do 100 million ipv4 addresses come from? Not rfc 1918, not arin, … and we are just talking about customer ip addresses, not considering towers, backend systems, call centers, retail ….
Are you saying mobile terminals must be identified by IP addresses?
Nodes in an IP network require IP addresses. 4G and 5G are IP networks for both voice and data. I do not believe it is either technically or economically feasible to run a 4G or 5G network without an IP address on the UE.
Masataka Ohta
On 10/24/21 19:19, Ca By wrote:
Nodes in an ip network require ip addresses. 4G and 5G are ip networks for both voice and data.
I do not believe it is either technically nor economically feasible to run a 4g or 5g network without an ip address on the ue.
*shaking_my_head* For the avoidance of doubt, Cameron is speaking sense. Mark.
I’m typing this on an LTE UE on our network with a NAT’d IPv4 IP address. Feels relevant. —L.B. Ms. Lady Benjamin PD Cannon of Glencoe, ASCE 6x7 Networks & 6x7 Telecom, LLC CEO lb@6by7.net <mailto:lb@6by7.net> "The only fully end-to-end encrypted global telecommunications company in the world.” FCC License KJ6FJJ
On Oct 24, 2021, at 10:58 PM, Mark Tinka <mark@tinka.africa> wrote:
On 10/25/21 01:35, Masataka Ohta wrote:
So are IP entities behind NAT. So?
Your point being...?
Mark.
On 10/25/21 08:29, Lady Benjamin Cannon of Glencoe, ASCE wrote:
I’m typing this on an LTE UE on our network with a NAT’d IPv4 IP address.
Feels relevant.
We may be missing each other here... From the point of view of TCP/IP, a node behind a CGN has a unique IP address. So what I'm trying to understand is, regardless of whether a connection is pure or NAT'ed, how does a device on the Internet expect to communicate without an IP address? Mark.
On 10/23/21 9:30 AM, Ca By wrote:
Until IPv6 provides a way to make money for the ISP, I don't see it being offered outside of the datacenter.
87% of mobiles in the usa are ipv6
Mobile is different; v6 makes financial sense, as CGNAT doesn't scale to 400m cell phones in North America (does NANOG scope include Mexico?). That said, most (all) IPv6 cellular providers still don't use it for end-to-end connectivity, as inbound connections are silently dropped. In the US, if you want inbound connectivity to work via cellular, you must buy the static IP service from Verizon, and it has no IPv6 support, and no plans for it in the future. Oddly enough, the MVNO services over T-Mobile seem to allow inbound IPv6, but TMO proper doesn't. V6 that works everywhere would simplify a _huge_ connectivity problem for me. -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net
Hi again, On 22-10-21 at 17:13, Job Snijders wrote:
Tl;DR
Not at all. This was a very interesting read! Thank you. While pondering it, I noticed that the ns[1234].fastly.net servers are nicely anycasted throughout the globe. If anyone could turn on IPv6 on their authoritatives without the risk of losing too much performance, I reckon it would be them... or Cloudflare. But they already did it. ;-)
work in progress!
I have high hopes. Rumour has it that Fastly employs some very smart people. I'm sure we'll see nice things happening when the time is right. -- Marco
On Fri, Oct 22, 2021 at 05:13:09PM +0200, Job Snijders via NANOG wrote:
Hi everyone, good afternoon Marco!
On Fri, Oct 22, 2021 at 01:40:42PM +0200, Marco Davids via NANOG wrote:
We currently live in times where it is actually fun to go IPv6-only. In my case, as in: running a FreeBSD kernel compiled without the IPv4 stack.
Indeed, this is fun experimentation. Shaking the (source code) trees through exercises like these is a valuable way to identify gaps.
It turns out that there are underlying CDNs with domain names such as ‘l-msedge.net’ and ‘trafficmanager.net’ (Microsoft) or 'fastly.net', that reside on authoritative name servers that *only* have an IPv4 address.
As some observant readers noticed (hint: https://ip6.nl/#!deb.debian.org), Fastly is working hard with select customers and friends to support IPv6 for everyone.
** SNIP **
as BGP traffic engineering) might be reluctant to offer IPv6 services "as if they are the same as IPv4". More study is required.
Tl;DR - work in progress! :-)
Some of the other CDNs do have IPv6 on their authoritative servers and should work without issues, e.g.: dig -6 +trace www.akamai.com. - Jared -- Jared Mauch | pgp key available via finger from jared@puck.nether.net clue++; | http://puck.nether.net/~jared/ My statements are only mine.
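Jared's dig one-liner generalises into a quick script. A sketch with dnspython (assuming it is installed) that reports whether a zone's listed name servers themselves have AAAA records, which is the exact gap that started this thread:

import dns.resolver  # pip install dnspython

def ns_aaaa_report(zone: str) -> None:
    """Print, for each NS of `zone`, whether it has an AAAA record."""
    for ns in dns.resolver.resolve(zone, "NS"):
        host = str(ns.target).rstrip(".")
        try:
            aaaa = [r.address for r in dns.resolver.resolve(host, "AAAA")]
            print(f"{host}: {', '.join(aaaa)}")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(f"{host}: no AAAA -- invisible to an IPv6-only resolver")

ns_aaaa_report("fastly.net")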
participants (40)
- Baldur Norddahl
- Brian Johnson
- Bryan Fields
- Ca By
- Christopher Morrow
- Dave Bell
- Dave Taht
- David Conrad
- Frank Habicht
- Fred Baker
- Gary Buhrmaster
- Grzegorz Janoszka
- Gustav Ulander
- Jared Mauch
- JCLB
- Jean St-Laurent
- Jens Link
- Job Snijders
- John Curran
- John Levine
- Jose Luis Rodriguez
- Lady Benjamin Cannon of Glencoe, ASCE
- Lukas Tribus
- Marco Davids
- Mark Andrews
- Mark Tinka
- Masataka Ohta
- Matthew Walster
- Max Tulyev
- Michael Thomas
- Mikael Abrahamsson
- Mike Hammett
- Oliver O'Boyle
- Owen DeLong
- scott
- Scott Morizot
- sronan@ronan-online.com
- tim@pelican.org
- Tom Hill
- William Herrin