Hi folks, Does anyone know what is used for AWS Elastic IP? Is it LISP? Thanks. Regards, -lmn
On Thu, May 28, 2015 at 11:00 AM, Ca By <cb.list6@gmail.com> wrote:
On Thu, May 28, 2015 at 7:34 AM, Luan Nguyen <lnguyen@opsource.net> wrote:
Hi folks, Does anyone know what is used for AWS Elastic IP? Is it LISP?
AWS does not really talk about things like this, but i highly doubt it is LISP.
it sort of doesn't matter right? it is PROBABLY some form of encapsulation (like gre, ip-in-ip, lisp, mpls, vpls, etc) ... something to remove the 'internal ip network' from what is routed in a datacenter and what is routed externally on the tubes. Maybe a better question: "Why would lisp matter here?" (what makes lisp the thing you grabbed at as opposed to any of the other possible encap options?)
-----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Christopher Morrow Subject: Re: AWS Elastic IP architecture [...] it sort of doesn't matter right? it is PROBABLY some form of encapsulation (like gre, ip-in-ip, lisp, mpls, vpls, etc) ... [...]
I don't know how the public blocks get to the datacenter (e.g. whether they are using MPLS) but after that I think it is pretty straightforward. All of the VMs have only one IPv4 address assigned out of 10/8. This doesn't change when you attach an Elastic IP to them. All that is happening is that they have some NAT device somewhere (maybe even just a redundant pair of VMs?) that has a block of public IPs assigned to it and they are static NAT'ing the Elastic IP to the VM. They control the allocation of the Elastic IPs, so they just pick one that is routed out of that datacenter already. They probably don't need to do anything out of the ordinary to get it there. (See: http://aws.amazon.com/articles/1346 )
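A toy model of the static-NAT arrangement described here, purely illustrative (the table and all addresses below are invented; AWS has never published its actual design):

```python
# Hypothetical 1:1 ("static") NAT table: Elastic IP -> private VM IP.
# Addresses are from documentation ranges; the real mapping is AWS-internal.
NAT_TABLE = {
    "203.0.113.10": "10.1.2.3",
    "203.0.113.11": "10.4.5.6",
}
REVERSE_TABLE = {v: k for k, v in NAT_TABLE.items()}

def rewrite_inbound(dst_ip: str) -> str:
    """Packet arriving from the internet: swap the Elastic IP for the VM's 10/8 address."""
    return NAT_TABLE.get(dst_ip, dst_ip)

def rewrite_outbound(src_ip: str) -> str:
    """Packet leaving the VM: make it appear to come from the Elastic IP."""
    return REVERSE_TABLE.get(src_ip, src_ip)

print(rewrite_inbound("203.0.113.10"))   # 10.1.2.3
print(rewrite_outbound("10.1.2.3"))      # 203.0.113.10
```

Under this model the VM only ever sees its 10/8 address, consistent with the observation above that attaching an Elastic IP does not change the instance's configured address.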
On Thu, May 28, 2015 at 11:59 AM, Michael Helmeste <elf@ubertel.net> wrote:
-----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Christopher Morrow Subject: Re: AWS Elastic IP architecture [...] it sort of doesn't matter right? it is PROBABLY some form of encapsulation (like gre, ip-in-ip, lisp, mpls, vpls, etc) ... [...]
I don't know how the public blocks get to the datacenter (e.g. whether they are using MPLS) but after that I think it is pretty straightforward. All of the VMs have only one IPv4 address assigned out of 10/8. This doesn't change when you attach an Elastic IP to them.
right, so they encap somewhere between 'tubez' and 'vm'. and likely have a simple 'swap the ip header' function somewhere before the vm as well.
All that is happening is that they have some NAT device somewhere (maybe even just a redundant pair of VMs?) that has a block of public IPs assigned to it and they
i'd question scalability of that sort of thing... but sure, sounds like a reasonable model to think about.
-----Original Message----- From: christopher.morrow@gmail.com Subject: Re: AWS Elastic IP architecture
[...] All that is happening is that they have some NAT device somewhere (maybe even just a redundant pair of VMs?) that has a block of public IPs assigned to it and they
i'd question scalability of that sort of thing... but sure, sounds like a reasonable model to think about.
I agree it appears ugly from a traditional network service provider perspective, but to my understanding much of the large scale cloud stuff is built using the cheapest, dumbest switching you can find and as little rich L3 routing gear (e.g. ASR/MX) as you can get away with. The more functionality you can pack into software (with the universal building block being a VM), the less you have to worry about buying network hardware to any particular requirement other than "forwards Ethernet most of the time." It gives more control and agility to the developers of the platform, and spending a few gigabytes of RAM for every /23 and adding a little more latency and jitter ultimately becomes an economical trade off. You can also move the network stuff up to the hypervisor layer (which I am sure they have done for things like Security Groups), but it makes rolling out updates harder and increases the general hack-level.
On Thu, May 28, 2015 at 2:39 PM, Michael Helmeste <elf@ubertel.net> wrote:
and spending a few gigabytes of RAM for every /23
it's not clear to me that you need ram at all for this... there are multiple dimensions to the scaling problem I was aiming at, this is but one of them. anyway, unless an EC2/aws/etc person speaks up I bet we can 'conjecturebate'(tm) forever without success.
On May 28, 2015, at 10:03 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Thu, May 28, 2015 at 11:59 AM, Michael Helmeste <elf@ubertel.net> wrote:
-----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Christopher Morrow Subject: Re: AWS Elastic IP architecture [...] it sort of doesn't matter right? it is PROBABLY some form of encapsulation (like gre, ip-in-ip, lisp, mpls, vpls, etc) ... [...]
I don't know how the public blocks get to the datacenter (e.g. whether they are using MPLS) but after that I think it is pretty straightforward. All of the VMs have only one IPv4 address assigned out of 10/8. This doesn't change when you attach an Elastic IP to them.
right, so they encap somewhere between 'tubez' and 'vm'. and likely have a simple 'swap the ip header' function somewhere before the vm as well.
It doesn’t sound like they have to encap/decap anything. Sounds like the packet comes in, gets NAT’d and then gets routed to the 10/8 address by normal means. Why do you assume some encap/decap process somewhere in this process?
All that is happening is that they have some NAT device somewhere (maybe even just a redundant pair of VMs?) that has a block of public IPs assigned to it and they
i'd question scalability of that sort of thing... but sure, sounds like a reasonable model to think about.
They are known to be running multiple copies of RFC-1918 in disparate localities already. In terms of scale, modulo the nightmare that must make of their management network and the fragility of what happens when company A in datacenter A wants to talk to company A in datacenter B and they both have the same 10-NET addresses, the variety of things that are inherently broken by NAT or multi-layer NAT, and a few other relatively well-known problems, the biggest scalability problem I see in such a solution is the lack of available public IPv4 addresses to give to elastic IP utilization. However, this is a scale problem shared by the entire IPv4 internet.

The solution is to add IPv6 capabilities to your hosts/software/etc. Unfortunately, if you build your stuff on AWS, said solution is not possible and Amazon, despite repeated public prodding, has not announced any plans, intention, or date for making IPv6 available in a meaningful way to things hosted on their infrastructure.

Suggestion: If you care about scale or about your application being able to function in the future (say more than the next couple of years), don’t build it on AWS… Build it somewhere that has IPv6 capabilities such as (in no particular order): Linode, Host Virtual[1], SoftLayer, etc.

Owen

[1] Full disclosure: I have no affiliation with any of the companies listed except Host Virtual (vr.org <http://vr.org/>). I have done some IPv4 and IPv6 consulting for them. I have no skin in the game promoting any of the above organizations, including Host Virtual. To the best of my knowledge, all of the organizations have ethical business practices and offer excellent customer service.
On Fri, May 29, 2015 at 4:22 AM, Owen DeLong <owen@delong.com> wrote:
Why do you assume some encap/decap process somewhere in this process?
why do you think they have a single 10/8 deployment per location and not per customer? if it's per customer, they have to provide some encap (I'd think) to avoid lots and lots of headaches. I don't imagine that if aws/ec2 is 'millions of customers' running on 'cheapest ethernet reference platform possible' they can do much fancy stuff with respect to virtual networking. I'd expect almost all of that to have to happen at the vm-host (not the guest), and that there's just some very simple encapsulation of traffic from the 'edge' to the vm-host and then 'native' (for some sense of that word) up to the 'vm'.
On May 29, 2015, at 8:27 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, May 29, 2015 at 4:22 AM, Owen DeLong <owen@delong.com> wrote:
Why do you assume some encap/decap process somewhere in this process?
why do you think they have a single 10/8 deployment per location and not per customer? if it's per customer, they have to provide some encap (I'd think) to avoid lots and lots of headaches. I don't imagine that if aws/ec2 is 'millions of customers' running on 'cheapest ethernet reference platform possible' they can do much fancy stuff with respect to virtual networking. I'd expect almost all of that to have to happen at the vm-host (not the guest), and that there's just some very simple encapsulation of traffic from the 'edge' to the vm-host and then 'native' (for some sense of that word) up to the 'vm'.
Because that’s what one of their engineers told me at one point in the past. Admittedly, it may have changed. My understanding was along the lines of a very large flat L2 space among the VM Hosts with minimal routing on the hosts and a whole lot of /32 routes. Again, my information may be incomplete, obsolete, or incorrect. Memories of bar conversations get fuzzy after 12+ months. Owen
On May 28, 2015, at 8:00 AM, Ca By <cb.list6@gmail.com> wrote:
On Thu, May 28, 2015 at 7:34 AM, Luan Nguyen <lnguyen@opsource.net> wrote:
Hi folks, Does anyone know what is used for AWS Elastic IP? Is it LISP?
AWS does not really talk about things like this, but i highly doubt it is LISP.
Yeah, if it were LISP, they could probably handle IPv6. Owen
On Fri, May 29, 2015 at 3:45 AM, Owen DeLong <owen@delong.com> wrote:
Yeah, if it were LISP, they could probably handle IPv6.
why can't they do v6 with any other encap? the encap really doesn't matter at all to the underlying ip protocol used, or shouldn't... you decide at the entrance to the 'virtual network' that 'thingy is in virtual-network-5 and encap the packet... regardless of ip version of the thing you are encapsulating.
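The version-agnostic point can be shown with a toy encapsulation. The 4-byte virtual-network ID header below is invented for illustration and is not any real wire format (not GRE, VXLAN, or LISP):

```python
import struct

def encap(vni: int, inner_packet: bytes) -> bytes:
    """Prepend a 4-byte virtual-network ID. The inner packet is opaque
    bytes, so the overlay neither knows nor cares whether it is IPv4 or IPv6."""
    return struct.pack("!I", vni) + inner_packet

def decap(frame: bytes) -> tuple[int, bytes]:
    """Strip the header and recover (virtual-network ID, inner packet)."""
    (vni,) = struct.unpack("!I", frame[:4])
    return vni, frame[4:]

v4_pkt = bytes([0x45]) + bytes(19)  # first byte 0x45: IPv4, IHL 5
v6_pkt = bytes([0x60]) + bytes(39)  # high nibble 6: IPv6

for pkt in (v4_pkt, v6_pkt):
    vni, inner = decap(encap(5, pkt))
    assert vni == 5 and inner == pkt  # round-trips identically for both versions
```

The decap side keys only on the virtual-network ID; nothing in the overlay inspects the inner IP version, which is the crux of the argument that the choice of encap shouldn't block IPv6.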
On May 29, 2015, at 8:23 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, May 29, 2015 at 3:45 AM, Owen DeLong <owen@delong.com> wrote:
Yeah, if it were LISP, they could probably handle IPv6.
why can't they do v6 with any other encap?
That’s not my point.
the encap really doesn't matter at all to the underlying ip protocol used, or shouldn't... you decide at the entrance to the 'virtual network' that 'thingy is in virtual-network-5 and encap the packet... regardless of ip version of the thing you are encapsulating.
Whatever encapsulation or other system they are using, clearly they can’t do IPv6 for some reason because they outright refuse to even offer so much as a verification that IPv6 is on any sort of roadmap or is at all likely to be considered for deployment any time in the foreseeable future. So, my point wasn’t that LISP is the only encapsulation that supports IPv6. Indeed, I didn’t even say that. What I said was that their apparent complete inability to do IPv6 makes it unlikely that they are using an IPv6-capable encapsulation system. Thus, it is unlikely they are using LISP. I only referenced LISP because it was specifically mentioned by the poster to whom I was responding. Please try to avoid putting words in my mouth in the future. Owen
i love that you are always combative, it makes for great tv. On Fri, May 29, 2015 at 9:04 PM, Owen DeLong <owen@delong.com> wrote:
On May 29, 2015, at 8:23 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, May 29, 2015 at 3:45 AM, Owen DeLong <owen@delong.com> wrote:
Yeah, if it were LISP, they could probably handle IPv6.
why can't they do v6 with any other encap?
That’s not my point.
sort of seemed like part of your point.
the encap really doesn't matter at all to the underlying ip protocol used, or shouldn't... you decide at the entrance to the 'virtual network' that 'thingy is in virtual-network-5 and encap the packet... regardless of ip version of the thing you are encapsulating.
Whatever encapsulation or other system they are using, clearly they can’t do IPv6 for some reason because they outright refuse to even offer so much as a verification that IPv6 is on any sort of roadmap or is at all likely to be considered for deployment any time in the foreseeable future.
it's totally possible that they DO LISP and simply disable ipv6 for some other unspecified reason too, right? Maybe they are just on a jihad against larger ip numbers? or their keyboards have no colons?
So, my point wasn’t that LISP is the only encapsulation that supports IPv6. Indeed, I didn’t even say that. What I said was that their apparent complete inability to do IPv6 makes it unlikely that they are using an IPv6-capable encapsulation system. Thus, it is unlikely they are using LISP. I only referenced LISP because it was specifically mentioned by the poster to whom I was responding.
Please try to avoid putting words in my mouth in the future.
you have so many words there already it's going to be fun fitting more in if I did try. have a swell weekend!
On May 29, 2015, at 6:14 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
i love that you are always combative, it makes for great tv.
On Fri, May 29, 2015 at 9:04 PM, Owen DeLong <owen@delong.com> wrote:
On May 29, 2015, at 8:23 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, May 29, 2015 at 3:45 AM, Owen DeLong <owen@delong.com> wrote:
Yeah, if it were LISP, they could probably handle IPv6.
why can't they do v6 with any other encap?
That’s not my point.
sort of seemed like part of your point.
I swear, it really wasn’t.
the encap really doesn't matter at all to the underlying ip protocol used, or shouldn't... you decide at the entrance to the 'virtual network' that 'thingy is in virtual-network-5 and encap the packet... regardless of ip version of the thing you are encapsulating.
Whatever encapsulation or other system they are using, clearly they can’t do IPv6 for some reason because they outright refuse to even offer so much as a verification that IPv6 is on any sort of roadmap or is at all likely to be considered for deployment any time in the foreseeable future.
it's totally possible that they DO LISP and simply disable ipv6 for some other unspecified reason too, right? Maybe they are just on a jihad against larger ip numbers? or their keyboards have no colons?
I suppose, but according to statements made by their engineers, it has to do with the “way that they have structured their backend networks to the virtual hosts”. I’m pretty sure that I’ve ruled the last two out based on discussions I’ve had with their engineers, but you’re right, I was probably a little more glib about it than was 100% accurate. Bottom line, however, is it doesn’t matter what the reason, they are utterly incapable of doing IPv6 and utterly and completely unrepentant about it.
So, my point wasn’t that LISP is the only encapsulation that supports IPv6. Indeed, I didn’t even say that. What I said was that their apparent complete inability to do IPv6 makes it unlikely that they are using an IPv6-capable encapsulation system. Thus, it is unlikely they are using LISP. I only referenced LISP because it was specifically mentioned by the poster to whom I was responding.
Please try to avoid putting words in my mouth in the future.
you have so many words there already it's going to be fun fitting more in if I did try.
LoL
have a swell weekend!
You too. Owen
They could do 6rd by just flipping a switch on one of their routers. Granted it is not native IPv6 but maybe better than nothing. Regards Baldur
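For context on why 6rd is nearly "flip a switch": each customer's IPv6 prefix is derived mechanically from their IPv4 address (RFC 5969), so the provider keeps no per-customer state. A minimal sketch, using documentation prefixes only:

```python
import ipaddress

def sixrd_prefix(sp_6rd_prefix: str, customer_v4: str) -> ipaddress.IPv6Network:
    """Derive a customer's 6rd delegated prefix by appending the full
    32-bit IPv4 address to the provider's 6rd prefix (per RFC 5969)."""
    net = ipaddress.IPv6Network(sp_6rd_prefix)
    v4 = int(ipaddress.IPv4Address(customer_v4))
    plen = net.prefixlen + 32
    addr = int(net.network_address) | (v4 << (128 - plen))
    return ipaddress.IPv6Network((addr, plen))

# Provider 6rd prefix 2001:db8::/32 + customer address 192.0.2.1
print(sixrd_prefix("2001:db8::/32", "192.0.2.1"))  # 2001:db8:c000:201::/64
```

Because the mapping is pure arithmetic, the relay router needs no lookup table, which is what makes 6rd cheap to bolt on even if, as noted, it isn't native IPv6.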
On Fri, May 29, 2015 at 9:45 PM, Owen DeLong <owen@delong.com> wrote:
On May 29, 2015, at 6:14 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
i love that you are always combative, it makes for great tv.
On Fri, May 29, 2015 at 9:04 PM, Owen DeLong <owen@delong.com> wrote:
On May 29, 2015, at 8:23 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, May 29, 2015 at 3:45 AM, Owen DeLong <owen@delong.com> wrote:
Yeah, if it were LISP, they could probably handle IPv6.
why can't they do v6 with any other encap?
That’s not my point.
sort of seemed like part of your point.
I swear, it really wasn’t.
sweet! :)
the encap really doesn't matter at all to the underlying ip protocol used, or shouldn't... you decide at the entrance to the 'virtual network' that 'thingy is in virtual-network-5 and encap the packet... regardless of ip version of the thing you are encapsulating.
Whatever encapsulation or other system they are using, clearly they can’t do IPv6 for some reason because they outright refuse to even offer so much as a verification that IPv6 is on any sort of roadmap or is at all likely to be considered for deployment any time in the foreseeable future.
it's totally possible that they DO LISP and simply disable ipv6 for some other unspecified reason too, right? Maybe they are just on a jihad against larger ip numbers? or their keyboards have no colons?
I suppose, but according to statements made by their engineers, it has to do with the “way that they have structured their backend networks to the virtual hosts”.
I’m pretty sure that I’ve ruled the last two out based on discussions I’ve had with their engineers, but you’re right, I was probably a little more glib about it than was 100% accurate.
Bottom line, however, is it doesn’t matter what the reason, they are utterly incapable of doing IPv6 and utterly and completely unrepentant about it.
it is sort of a bummer, they WILL have to do it eventually though (you'd think)... and 'sooner rather than later' makes a lot of sense to work out the bugs and problems and 'we should have thoughta that!'s...not to mention as they sit and grow it becomes more painful everyday to make the move :( Amazon doesn't even offer a v4/v6 LoadBalancer service right? (I had thought they did, but I guess I'm mis-remembering)
So, my point wasn’t that LISP is the only encapsulation that supports IPv6. Indeed, I didn’t even say that. What I said was that their apparent complete inability to do IPv6 makes it unlikely that they are using an IPv6-capable encapsulation system. Thus, it is unlikely they are using LISP. I only referenced LISP because it was specifically mentioned by the poster to whom I was responding.
Please try to avoid putting words in my mouth in the future.
you have so many words there already it's going to be fun fitting more in if I did try.
LoL
have a swell weekend!
You too.
so far so good! (hoping for a little rain to cool/clean things)
Amazon doesn't even offer a v4/v6 LoadBalancer service right? (I had thought they did, but I guess I'm mis-remembering)
They sort of do, but it’s utterly incompatible with all of their modern capabilities. You have to use some pretty antiquated VM provisioning and such to use it if I understood people correctly. Owen
Only EC2 Classic has dual stack anything. VPC load balancers (and, indeed, everything about VPC) are IPv4 only. And EC2 Classic is being phased out, so dualstack is sort of dying on AWS. However, I do have some solid information that they're scrambling to retrofit, but seeing as how we know AWS operates internally (compartmentalizing information to the point of paranoia), I reckon it will be another year or two before we even see IPv6 support extend to CloudFront (their CDN) endpoints. Don't hold your breath on seeing v6 inside VPC/EC2 anytime soon...is what I was told. On Sat, May 30, 2015 at 3:49 PM, Owen DeLong <owen@delong.com> wrote:
Amazon doesn't even offer a v4/v6 LoadBalancer service right? (I had thought they did, but I guess I'm mis-remembering)
They sort of do, but it’s utterly incompatible with all of their modern capabilities. You have to use some pretty antiquated VM provisioning and such to use it if I understood people correctly.
Owen
-- Blair Trosper p.g.a. S2 Entertainment Partners Desk: 469-333-8008 Cell: 512-619-8133 Agent/Rep: WME (Los Angeles, CA) - 310-248-2000 PR/Manager: BORG (Dallas, TX) - 844-THE-BORG
Oh, and the only thing dual stack about EC2 Classic was ELBs (elastic load balancers). Instances had no means of IPv6 communication except via an ELB. That is the FULL extent of IPv6 implementation on AWS at present...and most people do not have EC2 classic. On Sat, May 30, 2015 at 4:20 PM, Blair Trosper <blair.trosper@gmail.com> wrote:
Only EC2 classic has dual stack anything. VPC load balancers (and, indeed, everything about VPC) is IPv4 only.
And EC2 classic is being phased out, so dualstack is sort of dying on AWS. However, I do have some solid information that they're scrambling to retrofit, but seeing as how we know AWS operates internally (compartmentalizing information to the point of paranoia), I reckon it will be another year or two before we even see IPv6 support extend to CloudFront (their CDN) endpoints.
Don't hold your breath on seeing v6 inside VPC/EC2 anytime soon...is what I was told.
On Sat, May 30, 2015 at 3:49 PM, Owen DeLong <owen@delong.com> wrote:
Amazon doesn't even offer a v4/v6 LoadBalancer service right? (I had thought they did, but I guess I'm mis-remembering)
They sort of do, but it’s utterly incompatible with all of their modern capabilities. You have to use some pretty antiquated VM provisioning and such to use it if I understood people correctly.
Owen
Perhaps if the energy spent on raging had instead been spent on a Google search, all those words would have been unnecessary.

As it turns out, IPv6 has been available on ELBs since 2011: https://aws.amazon.com/blogs/aws/elastic-load-balancing-ipv6-zone-apex-suppo...

Official documentation: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-in...

Netflix has been using it since 2012, per their techblog: http://techblog.netflix.com/2012/07/enabling-support-for-ipv6.html

Regards, Andras

On Sat, May 30, 2015 at 11:04 AM, Owen DeLong <owen@delong.com> wrote:
On May 29, 2015, at 8:23 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, May 29, 2015 at 3:45 AM, Owen DeLong <owen@delong.com> wrote:
Yeah, if it were LISP, they could probably handle IPv6.
why can't they do v6 with any other encap?
That’s not my point.
the encap really doesn't matter at all to the underlying ip protocol used, or shouldn't... you decide at the entrance to the 'virtual network' that 'thingy is in virtual-network-5 and encap the packet... regardless of ip version of the thing you are encapsulating.
Whatever encapsulation or other system they are using, clearly they can’t do IPv6 for some reason because they outright refuse to even offer so much as a verification that IPv6 is on any sort of roadmap or is at all likely to be considered for deployment any time in the foreseeable future.
So, my point wasn’t that LISP is the only encapsulation that supports IPv6. Indeed, I didn’t even say that. What I said was that their apparent complete inability to do IPv6 makes it unlikely that they are using an IPv6-capable encapsulation system. Thus, it is unlikely they are using LISP. I only referenced LISP because it was specifically mentioned by the poster to whom I was responding.
Please try to avoid putting words in my mouth in the future.
Owen
On Sat, May 30, 2015 at 11:38 AM, Andras Toth <diosbejgli@gmail.com> wrote:
Perhaps if that energy which was spent on raging, instead was spent on a Google search, then all those words would've been unnecessary.
As it turns out that IPv6 is already available on ELBs since 2011: https://aws.amazon.com/blogs/aws/elastic-load-balancing-ipv6-zone-apex-suppo...
ah! I thought I'd remembered this for ~v6day or something similar. cool! so at least for some LB services you can get v6 entrance services.
Official documentation: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-in...
Netflix is using it already as per their techblog since 2012: http://techblog.netflix.com/2012/07/enabling-support-for-ipv6.html
neat!
Regards, Andras
On May 30, 2015, at 8:38 AM, Andras Toth <diosbejgli@gmail.com> wrote:
Perhaps if that energy which was spent on raging, instead was spent on a Google search, then all those words would've been unnecessary.
As it turns out that IPv6 is already available on ELBs since 2011: https://aws.amazon.com/blogs/aws/elastic-load-balancing-ipv6-zone-apex-suppo... <https://aws.amazon.com/blogs/aws/elastic-load-balancing-ipv6-zone-apex-support-additional-security/>
See other posts… ELB is being phased out and works only with EC2 Classic. As I said, it does not work with modern Amazon VPC.
Official documentation: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-in... <http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-internet-facing-load-balancers.html#internet-facing-ip-addresses>
All well and good and equally irrelevant.
Netflix is using it already as per their techblog since 2012: http://techblog.netflix.com/2012/07/enabling-support-for-ipv6.html <http://techblog.netflix.com/2012/07/enabling-support-for-ipv6.html>
Yes… This token checkbox effort which doesn’t work unless you are running on old hosts without access to any current storage technologies and face other limitations is available. My statements stand, as far as I am concerned. Owen
On Sun, May 31, 2015 at 01:38:05AM +1000, Andras Toth wrote:
Perhaps if that energy which was spent on raging, instead was spent on a Google search, then all those words would've been unnecessary.
Official documentation: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-in...
Congratulations, you've managed to find exactly the same info as Owen already covered: "Load balancers in a VPC support IPv4 addresses only." and "Load balancers in EC2-Classic support both IPv4 and IPv6 addresses." - Matt
Congratulations for missing the point Matt, when I sent my email (which by the way went for moderation) there wasn't a discussion about Classic vs VPC yet. The discussion was "no ipv6 in AWS" which is not true as I mentioned in my previous email. I did not state it works everywhere, but it does work.

In fact as Owen mentioned the following, I assumed he is talking about Classic because this statement is only true there. In VPC you can define your own IP subnets and it can overlap with other customers, so basically everyone can have their own 10.0.0.0/24 for example.

"They are known to be running multiple copies of RFC-1918 in disparate localities already. In terms of scale, modulo the nightmare that must make of their management network and the fragility of what happens when company A in datacenter A wants to talk to company A in datacenter B and they both have the same 10-NET addresses"

Andras

On Sun, May 31, 2015 at 7:18 PM, Matt Palmer <mpalmer@hezmatt.org> wrote:
On Sun, May 31, 2015 at 01:38:05AM +1000, Andras Toth wrote:
Perhaps if that energy which was spent on raging, instead was spent on a Google search, then all those words would've been unnecessary.
Official documentation: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-in...
Congratulations, you've managed to find exactly the same info as Owen already covered:
"Load balancers in a VPC support IPv4 addresses only."
and
"Load balancers in EC2-Classic support both IPv4 and IPv6 addresses."
- Matt
Point of clarification: AWS customer IP subnets can overlap, but customer VPCs that encompass overlapping subnets cannot peer with each other. In other words, the standard arguments in favor of address uniqueness still apply. TV On May 31, 2015 7:23:37 AM EDT, Andras Toth <diosbejgli@gmail.com> wrote:
Congratulations for missing the point Matt, when I sent my email (which by the way went for moderation) there wasn't a discussion about Classic vs VPC yet. The discussion was "no ipv6 in AWS" which is not true as I mentioned in my previous email. I did not state it works everywhere, but it does work.
In fact as Owen mentioned the following, I assumed he is talking about Classic because this statement is only true there. In VPC you can define your own IP subnets and it can overlap with other customers, so basically everyone can have their own 10.0.0.0/24 for example. "They are known to be running multiple copies of RFC-1918 in disparate localities already. In terms of scale, modulo the nightmare that must make of their management network and the fragility of what happens when company A in datacenter A wants to talk to company A in datacenter B and they both have the same 10-NET addresses"
Andras
On Sun, May 31, 2015 at 7:18 PM, Matt Palmer <mpalmer@hezmatt.org> wrote:
On Sun, May 31, 2015 at 01:38:05AM +1000, Andras Toth wrote:
Perhaps if that energy which was spent on raging, instead was spent on a Google search, then all those words would've been unnecessary.
Official documentation:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-in...
Congratulations, you've managed to find exactly the same info as Owen already covered:
"Load balancers in a VPC support IPv4 addresses only."
and
"Load balancers in EC2-Classic support both IPv4 and IPv6 addresses."
- Matt
-- Sent from my Android device with K-9 Mail. Please excuse my brevity.
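The overlap point above (every VPC can define its own 10.0.0.0/24, but two VPCs whose CIDRs collide cannot be peered, because their routes would be ambiguous) can be sketched with a quick check. The CIDR values and helper name below are illustrative, not taken from any real deployment:

```python
# Sketch: why two VPCs that reuse the same RFC 1918 block cannot peer.
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/24")  # customer A's VPC CIDR (hypothetical)
vpc_b = ipaddress.ip_network("10.0.0.0/24")  # customer B reuses the same block
vpc_c = ipaddress.ip_network("10.1.0.0/24")  # a non-overlapping block

def can_peer(net1, net2):
    """Peering requires unambiguous routing, i.e. no CIDR overlap."""
    return not net1.overlaps(net2)

print(can_peer(vpc_a, vpc_b))  # False: same 10.0.0.0/24 on both sides
print(can_peer(vpc_a, vpc_c))  # True: routes are unambiguous
```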
I wasn’t being specific about VPC vs. Classic. The support for IPv6 in Classic is extremely limited and basically useless for 99+% of applications. I would argue that there is, therefore, effectively no meaningful support for IPv6 in AWS, period.

What you describe below seems to me like it would only make the situation I described worse in the VPC world, not better.

Owen
On May 31, 2015, at 4:23 AM, Andras Toth <diosbejgli@gmail.com> wrote:
[...]
Disagree, and so does AWS. IPv6 has a huge utility: being a universal, inter-region management network (a network that unites traffic between regions on public and private netblocks). Plus, at least the CDN and ELBs should be dual-stack, since more and more ISPs are turning on IPv6.

On Sun, May 31, 2015 at 8:40 AM, Owen DeLong <owen@delong.com> wrote:
[...]
Sigh… IPv6 has huge utility. AWS’ implementation of IPv6 is brain-dead and mostly useless for most applications.

I think if you will review my track record over the last 5+ years, you will plainly see that I am fully aware of the utility of and need for IPv6: http://lmgtfy.com/?q=owen+delong+ipv6

My network (AS1734) is fully dual-stacked, unlike AWS. If AWS is so convinced of the utility of IPv6, why do they continue to refuse to do a real implementation that provides IPv6 capabilities to users of their current architecture?

Currently, on AWS, the only IPv6 is via ELB for classic EC2 hosts. You cannot put a native IPv6 address on an AWS virtual server at all (EC2 or VPC). Unless your application is satisfied by running an IPv4-only web server which has an IPv6 VIP proxy in front of it, with some extra headers added by the proxy to help you parse out the actual source address of the connection, your application cannot use IPv6 on AWS.

As such, I stand by my statement that there is effectively no meaningful support for IPv6 in AWS, period. AWS may disagree and think that ELB for classic EC2 is somehow meaningful, but their lack of support for any of their modern architectures, and the fact that they are in the process of phasing out classic EC2, makes that a pretty hard case to make.

Owen
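For what it's worth, the proxy-header workaround described above looks roughly like this on the backend. ELB's HTTP listeners do prepend an X-Forwarded-For header; everything else here (helper name, addresses, header values) is a hypothetical sketch, not AWS's actual mechanism:

```python
# Sketch: an IPv4-only backend recovering the real (possibly IPv6) client
# address from the X-Forwarded-For header a proxy/load balancer prepends.
import ipaddress

def real_client_addr(headers, peer_addr):
    """Prefer the left-most X-Forwarded-For entry; fall back to the TCP peer."""
    xff = headers.get("X-Forwarded-For", "")
    first = xff.split(",")[0].strip()
    try:
        return ipaddress.ip_address(first)
    except ValueError:
        # Header absent or malformed: all we know is the proxy's address.
        return ipaddress.ip_address(peer_addr)

# The backend's TCP peer is the (IPv4) proxy, but the header carries the
# original IPv6 client (example addresses from the documentation ranges).
addr = real_client_addr({"X-Forwarded-For": "2001:db8::1, 10.0.0.5"}, "10.0.0.5")
print(addr, addr.version)  # 2001:db8::1 6
```

The catch Owen describes is that only the proxy, not the application's own sockets, ever speaks IPv6.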
On May 31, 2015, at 9:01 AM, Blair Trosper <blair.trosper@gmail.com> wrote:
[...]
Since your network has IPv6, I fail to see the issue. Nobody is anywhere near being able to go single-stack on IPv6, so AWS is just another network your customers will continue to reach over v4. So what?

Heck, if v6 support from a cloud hosting company is so important, I see a great business opportunity in your future.

Matthew Kaufman
(Sent from my iPhone)
On May 31, 2015, at 10:57 AM, Owen DeLong <owen@delong.com> wrote:
[...]
AWS built their network first... before IPv6 "popped", so you can appreciate the huge task they have of retrofitting all their products to support it. I don't envy the task, but they have said publicly and privately that it's a priority. But it's also a massive undertaking, and you can't expect them to snap their fingers and turn it out over a weekend, man... The prize of being first cuts both ways when newer technologies at lower network levels start taking off and you don't have support built into something proprietary.

Would it be great if they had it faster? Obviously yes. Are they working on it as a priority? Yes. Can they go any faster? Probably. Are there other choices for cloud providers that are full dual-stack, if this really is a live-or-die issue for you? Yes.

Access to dual-stack isn't a fundamental human right. If you don't like what AWS is doing, then use someone else who has dual-stack. I don't get the outrage... and it's so irrational that you've caused me to actually *defend* AWS.

bt

On Sun, May 31, 2015 at 1:29 PM, Matthew Kaufman <matthew@matthew.at> wrote:
[...]
On May 31, 2015, at 11:36 AM, Blair Trosper <blair.trosper@gmail.com> wrote:
AWS built their network first...before IPv6 "popped", so you can appreciate the huge task they have of retrofitting all their products to support it.
Sure, and if they said “We have a plan, and it will take X amount of time”, I would respect that. If they said “We have a plan and we’re not sure how long it will take”, I would continue to poke them about how sooner is better than later, and how having a target date helps people to plan. “We don’t think IPv6 matters and we aren’t announcing any plans to get it implemented or any date by which it will be available”, on the other hand, being what they have actually repeatedly said to me until very recently, not so much.

Now they’re saying (essentially) “We think IPv6 might matter, but we aren’t announcing any plans to get it implemented or any date by which it will be available”. To me, this is still a problematic situation for their customers, especially when you look at the impact it has on the rest of the internet. Review Lee Howard’s Denver ION presentation about the per-user-per-year costs of delivering IPv4 over the next several years, and it rapidly becomes clear that Amazon’s failure to make dual stack available is actually one of the major factors preventing eyeball carriers from being able to plan IPv6 migration on any reasonable timeframe, and a major factor in their CGN costs.
I don't envy the task, but they have said publicly and privately that it's a priority. But it's also a massive undertaking, and you can't expect them to snap their fingers and turn it out over a weekend, man…
They haven’t, really, exactly said that. They’ve sort of hinted that they might be working on it in some places. They’ve sort-a-kind-a paid it some lip service. They haven’t announced plans, dates, or any firm commitment in any form.
The prize of being first cuts both ways when newer technologies at lower network levels start taking off and you don't have support built in to something proprietary.
I started talking to folks at Amazon about this issue more than 5 years ago. At the time, they told me flat out that it was not a priority. I gave them half a decade to figure out that it was a priority and do something about it, while remaining relatively quiet about it publicly. At this point, the damage that occurs as a result of applications being deployed on such a dead-end service, and the limitations that service imposes on those applications, can no longer be tolerated.
Would it be great if they had it faster? Obviously yes.
Agreed.
Are they working on it as a priority? Yes.
Do you have any evidence to support this claim?
Can they go any faster? Probably.
Isn’t that answer alone a sign that perhaps it isn’t so much of a priority to them?
Are there other choices for cloud providers that are full dual stack if this really is a live or die issue for you? Yes.
This represents one of the most common fallacies in people’s thinking about IPv6. Your failure to implement IPv6 doesn’t just impact you and your customers. Especially when you’re something like AWS. It impacts the customers of your customers and their service providers, too. If Amazon and Skype were IPv6 capable, you would actually find a relatively significant fraction of traffic that is likely to get CGN’d today would be delivered over IPv6 instead. That’s a HUGE win and a HUGE cost savings to lots of eyeball ISPs out there. None of them are likely AWS customers. None of them are likely to be perceived by AWS as “demand” for IPv6, yet, they are in fact the source of the majority of the demand.
Access to dual-stack isn't a fundamental human right. If you don't like what AWS is doing, then use someone else who has dualstack.
Again, you are ignoring the larger consequences of their failure. You can rest assured that I am not purchasing service from AWS due to their failed policies toward IPv6. However, that doesn’t fully mitigate the impact to me from those bad decisions. So, in an effort to both further mitigate those impacts and to help others avoid them, I have started vocally encouraging people to take a serious look at AWS’ lack of IPv6 and consider alternatives when selecting a cloud hosting provider.
I don't get the outrage...and it's so irrational, that you've caused me to actually *defend* AWS.
I hope I have explained the reasons for my position a bit better so that you no longer feel the need to do so. I am not outraged by AWS’ actions. They are free to do what they wish. However, I want to make sure that application developers are aware of the impact this has on their application, should they choose to deploy it in AWS and I want to encourage current users of AWS to consider IPv6-capable alternatives for the good of the internet. Owen
On 5/31/15, 3:11 PM, "Owen DeLong" <owen@delong.com> wrote:
[...]
At the risk of feeding the troll...

This isn't just an AWS problem.

"All Compute Engine networks use the IPv4 protocol. Compute Engine currently does not support IPv6. However, Google is a major advocate of IPv6 and it is an important future direction." https://cloud.google.com/compute/docs/networking

"The foundational work to enable IPv6 in the Azure environment is well underway. However, we are unable to share a date when IPv6 support will be generally available at this time." http://azure.microsoft.com/en-us/pricing/faq/

This is only marginally better, as it acknowledges that it's important, but still has no actual committed timeline, and doesn't even reference any available ELB hacks.

Anyone else want to either name and shame, or highlight cloud providers that actually *support* IPv6 as an alternative to these, so that one might be able to vote with one's wallet?

This E-mail and any of its attachments may contain Time Warner Cable proprietary information, which is privileged, confidential, or subject to copyright belonging to Time Warner Cable. This E-mail is intended solely for the use of the individual or entity to which it is addressed. If you are not the intended recipient of this E-mail, you are hereby notified that any dissemination, distribution, copying, or action taken in relation to the contents of and attachments to this E-mail is strictly prohibited and may be unlawful. If you have received this E-mail in error, please notify the sender immediately and permanently delete the original and any copy of this E-mail and any printout.
As I said before:

Host Virtual (vr.org)
Softlayer (softlayer.com)
Linode (linode.com)

All have full dual-stack support. I’m sure there are others.

Owen
On May 31, 2015, at 2:49 PM, George, Wes <wesley.george@twcable.com> wrote:
On 5/31/15, 3:11 PM, "Owen DeLong" <owen@delong.com> wrote:
if they said “We have a plan, and it will take X amount of time”, I would respect that.
If they said “We have a plan and we’re not sure how long it will take”, I would continue to poke them about sooner is better than later and having a target date helps people to plan.
“We don’t think IPv6 matters and we aren’t announcing any plans to get it implemented or any date by which it will be available”, on the other hand, being what they have actually repeatedly said to me until very recently, not so much.
Now, they’re saying (essentially) “We think IPv6 might matter, but we aren’t announcing any plans to get it implemented or any date by which it will be available” . To me, this is still a problematic situation for their customers.
At the risk of feeding the troll...
This isn't just an AWS problem.
"All Compute Engine networks use the IPv4 protocol. Compute Engine currently does not support IPv6. However, Google is a major advocate of IPv6 and it is an important future direction." https://cloud.google.com/compute/docs/networking
"The foundational work to enable IPv6 in the Azure environment is well underway. However, we are unable to share a date when IPv6 support will be generally available at this time."
http://azure.microsoft.com/en-us/pricing/faq/
This is only marginally better, as it acknowledges that it's important, but still has no actual committed timeline and doesn't even reference any available ELB hacks.
Anyone else want to either name and shame, or highlight cloud providers that actually *support* IPv6 as an alternative to these so that one might be able to vote with one's wallet?
On Sun, May 31, 2015 at 9:07 PM, Owen DeLong <owen@delong.com> wrote:
As I said before:
Host Virtual (vr.org <http://vr.org/>) Softlayer (softlayer.com <http://softlayer.com/>) Linode (Linode.com <http://linode.com/>)
All have full dual-stack support.
<snip>
At the risk of feeding the troll...
This isn't just an AWS problem.
So... ok. What does it mean, for a customer of a cloud service, to be ipv6 enabled? What really matters for a cloud service user? What information could be surfaced to the cloud providers in order to get the most important ipv6 'stuff' done 'now'?

o Is it most important to be able to address every VM you create with an ipv6 address?
o Is it most important to be able to talk to backend services (perhaps at your prem) over ipv6?
o Is it most important that administrative interfaces to the VM systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be ipv6 reachable?
o Is it most important to be able to terminate ipv6 connections (or datagrams) on a VM service for the public to use?

I don't see, especially if the vm networking is unique to each customer, that 'ipv6 address on vm' is hugely important as a first/important goal. I DO see that landing publicly available services on an ipv6 endpoint is super helpful.

Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?

-chris
On Sun, May 31, 2015 at 10:46:02PM -0400, Christopher Morrow wrote:
So... ok. What does it mean, for a customer of a cloud service, to be ipv6 enabled?
IPv6 feature-parity with IPv4. My must-haves, sorted in order of importance (most to least):
o Is it most important to be able to terminate ipv6 connections (or datagrams) on a VM service for the public to use?
o Is it most important to be able to address every VM you create with an ipv6 address?
o Is it most important to be able to talk to backend services (perhaps at your prem) over ipv6?
If, by "backend services", you mean things like RDS, S3, etc, this is in the right place.
o Is it most important that administrative interfaces to the VM systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be ipv6 reachable?
I don't see, especially if the vm networking is unique to each customer, that 'ipv6 address on vm' is hugely important as a first/important goal. I DO see that landing publicly available services on an ipv6 endpoint is super helpful.
Being able to address VMs over IPv6 (and have VMs talk to the outside world over IPv6) is *really* useful. Takes away the need to NAT anything.
Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?
No. I'm currently building an infrastructure which is entirely v6-native internally; the only parts which are IPv4 are public-facing incoming service endpoints, and outgoing connections to other parts of the Internet, which are proxied. Everything else is talking amongst themselves entirely over IPv6.

- Matt

--
"After years of studying math and encountering surprising and counterintuitive results, I came to accept that math is always reasonable, but my intuition of what is reasonable is not always reasonable." -- Steve VanDevender, ASR
On Mon, Jun 1, 2015 at 1:19 AM, Matt Palmer <mpalmer@hezmatt.org> wrote:
On Sun, May 31, 2015 at 10:46:02PM -0400, Christopher Morrow wrote:
So... ok. What does it mean, for a customer of a cloud service, to be ipv6 enabled?
IPv6 feature-parity with IPv4.
My must-haves, sorted in order of importance (most to least):
o Is it most important to be able to terminate ipv6 connections (or datagrams) on a VM service for the public to use?
and would a headerswapping 'proxy' be ok? there's (today) a 'header swapping proxy' doing 'nat' (sort of?) for you, so I imagine that whether the 'headerswapping' is v4 to v4 or v6 to v4 you get the same end effect: "People can see your kitten gifs".
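(Editor's note: nobody outside AWS has confirmed what their edge actually does, but the 1:1 "header swap" being discussed here is easy to sketch. A minimal Python illustration of rewriting an IPv4 header's destination address, the static-NAT step an Elastic-IP-style edge device would perform inbound; all addresses are RFC 5737 / RFC 1918 documentation examples, not real AWS internals:)

```python
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """Internet checksum: one's-complement sum of the header's 16-bit words."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def swap_dst(header: bytes, new_dst: str) -> bytes:
    """Rewrite the destination address (bytes 16-19) of an IPv4 header
    and recompute the header checksum (bytes 10-11) -- the 1:1 static
    NAT rewrite a public-to-private edge device performs."""
    hdr = bytearray(header)
    hdr[16:20] = socket.inet_aton(new_dst)
    hdr[10:12] = b"\x00\x00"  # checksum field must be zero while summing
    struct.pack_into("!H", hdr, 10, ipv4_checksum(bytes(hdr)))
    return bytes(hdr)

# A 20-byte header: some internet host -> the public 'elastic' address.
public = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                     socket.inet_aton("198.51.100.1"),   # remote client
                     socket.inet_aton("203.0.113.7"))    # public elastic IP
private = swap_dst(public, "10.0.0.5")                   # VM's 10/8 address
```

(Checksumming a header that already carries a correct checksum yields zero, which makes the rewrite easy to sanity-check. TCP/UDP pseudo-header checksums would need the same fix-up; omitted here.)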
o Is it most important to be able to address every VM you create with an ipv6 address?
why is this bit important though? I see folk, I think, get hung up on this, but I can't figure out WHY this is as important as folk seem to want it to be?

all the vms have names, you end up using the names not the ips... and thus the underlying ip protocol isn't really important? Today those names translate to v4 public ips, which get 'headerswapped' into v4 private addresses on the way through the firehedge at AWS. Tomorrow they may get swapped from v6 to v4... or there may be v6 endpoints.
o Is it most important to be able to talk to backend services (perhaps at your prem) over ipv6?
If, by "backend services", you mean things like RDS, S3, etc, this is in the right place.
I meant 'your oracle financials installation at $HOMEBASE'. Things like 'internal amazon services' to me are a named endpoint and:

1) the name you use could be resolving to something different than the external view
2) it's a name not an ip version... provided you have the inny and it's an outy, I'm not sure that what ip protocol you use on the RESTful request matters a bunch.
o Is it most important that administrative interfaces to the VM systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be ipv6 reachable?
I don't see, especially if the vm networking is unique to each customer, that 'ipv6 address on vm' is hugely important as a first/important goal. I DO see that landing publicly available services on an ipv6 endpoint is super helpful.
Being able to address VMs over IPv6 (and have VMs talk to the outside world over IPv6) is *really* useful. Takes away the need to NAT anything.
but the nat isn't really your concern right (it all happens magically for you)? presuming you can talk to 'backend services' and $HOMEBASE over ipv6 you'd also be able to make connections to other v6 endpoints as well. there's little difference REALLY between v4 and v6 ... and jabbing a connection through a proxy to get v6 endpoints would work 'just fine'. (albeit protocol limitations at the higher levels could be interesting if the connection wasn't just 'swapping headers')
Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?
No. I'm currently building an infrastructure which is entirely v6-native internally; the only parts which are IPv4 are public-facing incoming service endpoints, and outgoing connections to other parts of the Internet, which are proxied. Everything else is talking amongst themselves entirely over IPv6.
that's great, but I'm not sure that 'all v6 internally!' matters a whole bunch? I look at aws/etc as "bunch of goo doing computation/calculation/storage/etc" with some public VIP (v4, v6, etc) that are well defined and which are tailored to your userbase's needs/abilities.

You don't actually ssh to 'ipv6 literal' or 'ipv4 literal', you ssh to 'superawesome.vm.mine.com' and provide http/s (or whatever) services via 'external-service-name.com'. Whether the 1200 vms in your private network cloud are ipv4 or ipv6 isn't important (really) since they also talk to each other via names, not literal ip numbers. There isn't NAT that you care about there either; the name/ip translation does the right thing (or should) such that 'superawesome.vm.availzone1.com' and 'superawesome.vm.availzone2.com' can chat freely by name without concerns for underlying ip version numbers used (and even without caring that 'chrissawesome.vm.availzone1.com' is 10.0.0.1 as well).
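(Editor's note: the "names, not IPs" argument above is essentially what getaddrinfo() already gives every client for free; a quick sketch -- the hostname in the comment is the hypothetical one from the thread, not a real service:)

```python
import socket

def candidate_endpoints(host: str, port: int):
    """Return every (family, address) pair the resolver offers for a name.
    A client walks this list and connects to whichever answers first, so
    whether the service lands on v4, v6, or both is invisible to it."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return [(socket.AddressFamily(family).name, sockaddr[0])
            for family, _type, _proto, _canon, sockaddr in infos]

# e.g. candidate_endpoints("superawesome.vm.mine.com", 22)  # hypothetical name
```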
-----Original Message-----
From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Christopher Morrow
Sent: Monday, June 01, 2015 7:24 AM
To: Matt Palmer
Cc: nanog list
Subject: Re: AWS Elastic IP architecture
[...]
Look at the problem in the other direction, and you will see that addresses often matter. What if you want to deny ssh connections from a particular address range? The source isn't going to tell you the name it is coming from. What if you want to deploy an MTA on your VM and block connections from SPAM-A-LOT data centers? How do you do that when the header-swap function presents useless crap from an external proxy mapping function?

That said, if you double nat from the vip to the stack in a way that masks the internal transport of the service (so that a native stack on the VM behaves as if it is directly attached to the outside world), then it doesn't matter what the service internal transport is. What I read in your line of comments to Owen is that the service only does a header swap once and expects the application on the VM to compensate. In that case there is an impact on the cost of deployment and overall utility.

Tony
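(Editor's note: the filtering examples above hinge on the packet carrying the real client address. The policy itself is trivial -- a sketch using Python's ipaddress module, with documentation-range prefixes standing in for the hypothetical "SPAM-A-LOT" ranges -- the point being that it silently stops meaning anything once a proxy has replaced the source:)

```python
import ipaddress

# Hypothetical deny-list; RFC 5737 / RFC 3849 documentation prefixes.
BLOCKED = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "2001:db8:bad::/48")]

def allow_connection(src: str) -> bool:
    """Network-layer policy on the source address. Behind a
    header-swapping proxy, src would be the proxy's address, so this
    check (and any ssh/MTA equivalent) matches the wrong thing."""
    addr = ipaddress.ip_address(src)
    return not any(addr in net for net in BLOCKED)
```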
On Mon, Jun 1, 2015 at 11:41 AM, Tony Hain <alh-ietf@tndh.net> wrote:
[...]
Look at the problem in the other direction, and you will see that addresses often matter. What if you want to deny ssh connections from a particular address range?
this sounds like a manage-your-vm question... keep that on v4 only until native v6 gets to your vm. (short term vs long term solution space)
The source isn't going to tell you the name it is coming from. What if you want to deploy an MTA on your VM and block connections from SPAM-A-LOT data centers? How do you do that when the header-swap function presents useless crap from an external proxy mapping function?
yup, 'not http' services are harder to deal with in a 'swap headers' world.
That said, if you double nat from the vip to the stack in a way that masks the internal transport of the service (so that a native stack on the VM behaves as if it is directly attached to the outside world), then it doesn't matter what the service internal transport is.
I was mostly envisioning this, but the header-swap seems easy for a bunch of things (to me at least).
What I read in your line of comments to Owen is that the service only does a header swap once and expects the application on the VM to compensate. In that case there is an impact on the cost of deployment and overall utility.
'compensate'? do you mean 'get some extra information about the real source address for further policy-type questions to be answered'?

I would hope that in the 'header swap' service there's as little overhead applied to the end system as possible... I'd like my apache server to answer v6 requests without having a v6 address-listening-port on my machine. For 'web' stuff 'X-forwarded-for' seems simple, but breaks for https :(

Oh, so what if the 'header swap' service simply shoveled the v6 into a gre (or equivalent) tunnel and dropped that on your doorstep? potentially with an 'apt-get install aws-tunnelservice'? I would bet in the 'vm network' you could solve a bunch of this easily enough, and provide a v6 address inside the tunnel on the vm providing the services. loadbalancing is a bit rougher (more state management) but .. is doable.

-chris
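(Editor's note: the usual escape hatch for "X-Forwarded-For breaks for https" is to carry the original addresses ahead of the payload rather than inside it -- e.g. the HAProxy-originated PROXY protocol, which works for any TCP service because the balancer prepends one plaintext line before the application data. A sketch of parsing the v1 form:)

```python
def parse_proxy_v1(preamble: bytes):
    """Parse a PROXY protocol v1 line such as
    b'PROXY TCP6 2001:db8::1 2001:db8::2 51234 443\\r\\n'.
    Returns (family, src, dst, srcport, dstport), or None when the
    balancer reports UNKNOWN. Raises ValueError on anything else."""
    if not preamble.endswith(b"\r\n"):
        raise ValueError("PROXY line must end in CRLF")
    fields = preamble[:-2].decode("ascii").split(" ")
    if fields[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 line")
    if fields[1] == "UNKNOWN":
        return None
    family, src, dst, sport, dport = fields[1:6]
    return family, src, dst, int(sport), int(dport)
```

(This still requires the application, or a local shim, to read one extra line before its own protocol starts -- so it is exactly the 'compensate' cost being debated, just a well-specified version of it.)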
I would hope that in the 'header swap' service there's as little overhead applied to the end system as possible... I'd like my apache server to answer v6 requests without having a v6 address-listening-port on my machine.
Why? Honestly: why would you want to abstract v6 up into the application layer rather than just having the network address and address family visible right at the network layer?
For 'web' stuff 'X-forwarded-for' seems simple, but breaks for https :(
Which is an example of why people are pushing to just do the network layer stuff in the network layer, rather than pushing it up the stack.
'compensate' ? do you mean 'get some extra information about the real source address for further policy-type questions to be answered' ?
Hopefully without speaking for someone else, but probably yes. It also necessitates having application-/protocol-specific methods for extracting that information (e.g. the X-forwarded-for you cited).

So the options are:

1. Go with the header-swap-type setup you describe, which requires protocol-/application-specific means to extract network-layer information out of the application layer, and will also eventually require your application to do this with native v6 addresses down the line as well (i.e. do a bunch of work now only to have to redo it later).

or

2. Just do it properly the first time around.

I would opt for #2.

--
Hugo

On Mon 2015-Jun-01 11:52:15 -0400, Christopher Morrow <morrowc.lists@gmail.com> wrote:
[...]
On Mon, Jun 1, 2015 at 12:21 PM, Hugo Slabbert <hugo@slabnet.com> wrote:
2. Just do it properly the first time around.
I would opt for #2.
sure, so would everyone... but they didn't, so... what gets you enough there to help customers and also doesn't require a forklift of your running operation?
On Mon 2015-Jun-01 13:20:57 -0400, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Mon, Jun 1, 2015 at 12:21 PM, Hugo Slabbert <hugo@slabnet.com> wrote:
2. Just do it properly the first time around.
I would opt for #2.
sure, so would everyone... but they didn't, so... what gets you enough there to help customers and also doesn't require a forklift of your running operation?
Sorry; I worded this poorly. Obviously they didn't go dual stack right at the outset. Options #1 and #2 I listed were done from the perspective that it's currently a v4-only environment, and some measure of work has to be done to get it to have v6 capability of *some* form.

I'm working with the (soon to be not) unspoken assumption of a future state of the platform where we've "v6'd all the things", checking off all of the boxes in your original message on this; paraphrased:

- Every VM has a v6 address
- VMs can talk to backend services (including on your prem) over v6
- VM/system admin interfaces are reachable over v6
- You can serve up v6-accessible services from your VM(s)

If my (previously unspoken) assumption of a fully v6-capable future state of the platform holds, I'm saying that going with a proxy-type solution as an interim stopgap carries a whole bunch of additional labor/operational cost. Implementing either option #1 or option #2 carries some combination of cost from hardware, software, and elbow grease, with values of 0 to $bigint in each category. To be fair: some of the additional elbow grease cost from option #1 is externalized from the hoster to either customers and/or the developers of the software stacks used by customers. That notwithstanding: if you're going to need to do #2 at some point anyway, why not just skip #1 and put your energy into #2 to start with?

To be honest: I don't think we are diametrically opposed on this. Backing up a bit:
Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?
Relieve some pressure and possibly generate at least *some* forward momentum? Sure. And AWS is obviously free to do as they see fit. I think a lot of folks in this discussion are just tired of seeing half measures that expend a bunch of resources to delay the inevitable and push us further into CGN hell, when those resources could rather have been allocated to a proper dual-stack or v6-only solution.

--
Hugo
snip
What I read in your line of comments to Owen is that the service only does a header swap once and expects the application on the VM to compensate. In that case there is an impact on the cost of deployment and overall utility.
'compensate' ? do you mean 'get some extra information about the real source address for further policy-type questions to be answered' ?
Yes. Since that is not a required step on a native machine, there would be development / extra configuration required. While people that are interested in IPv6 deployment would likely do the extra work, those who "just want it to work" would delay IPv6 services until someone created the magic. Unfortunately that describes most of the people that use hosted services, so external proxy / nat approaches really do nothing to further any use of IPv6.
I would hope that in the 'header swap' service there's as little overhead applied to the end system as possible... I'd like my apache server to answer v6 requests without having a v6 address-listening-port on my machine. For 'web' stuff 'X-forwarded-for' seems simple, but breaks for https :(
So to avoid the exceedingly simple config change of "Listen 80" rather than "Listen x.x.x.x:80" you would rather not open the IPv6 port? If the service internal transport is really transparent, https would work for free.

I don't have any data to base it on, but I always thought that scaling an e-commerce site was the primary utility in using a hosted VM service. If that is true, it makes absolutely no sense to do a proxy VIP thingy for IPv6 port 80 to fill the cart, then fail the connection when trying to check out. As IPv4 becomes more fragile with the additional layering of nats, the likelihood of that situation goes up, causing even more people to want to turn off the IPv6 vip. It is better for the service to appear to be down at the start than to have customers spend time then fail at the point of gratification, because they are much more likely to forget about an apparent service outage than to forgive wasting their time.
Oh, so what if the 'header swap' service simply shoveled the v6 into a gre (or equivalent) tunnel and dropped that on your doorstep? potentially with an 'apt-get install aws-tunnelservice' ? I would bet in the 'vm network' you could solve a bunch of this easily enough, and provide a v6 address inside the tunnel on the vm providing the services.
loadbalancing is a bit rougher (more state management) but .. is doable.
I think tunneling would be more efficient and manageable overall. I have not thought through the trade-offs between terminating it on the host vs inside the VM, but gut feel says that for the end-user / application it might be better inside the vm so there is a clean interface, while for service manageability it would be better on the host, even though some information might get lost in the interface translation.

As long as the IP header that the VM stack presents to the application is the same as the one presented to the vip (applies outbound as well), the rest is a design detail that is best left to each organization.

Tony
On May 31, 2015, at 7:46 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Sun, May 31, 2015 at 9:07 PM, Owen DeLong <owen@delong.com> wrote:
As I said before:
Host Virtual (vr.org), Softlayer (softlayer.com), Linode (linode.com)
All have full dual-stack support.
<snip>
At the risk of feeding the troll...
This isn't just an AWS problem.
So... ok. What does it mean, for a customer of a cloud service, to be ipv6 enabled?
It means that I should be able to develop my cloud application(s) with full native IPv6 support and expect to be able to serve IPv4-only, IPv6-only, and dual-stack customers using native IP stacks on my virtual machines without requiring external proxies, translators, etc. from the cloud service provider.
What really matters for a cloud service user? What information could be surfaced to the cloud providers in order to get the most important ipv6 'stuff' done 'now’?
Ideally, simple native routing of IPv6 to provisioned hosts should suffice in most cases.
o Is it most important to be able to address every VM you create with an ipv6 address?
Yes.
o Is it most important to be able to talk to backend services (perhaps at your prem) over ipv6?
It’s hard to imagine how you could provide the first one above without having this one come along for the ride.
o Is it most important that administrative interfaces to the VM systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be ipv6 reachable?
This would be the one where I would most be able to tolerate a delay.
o Is it most important to be able to terminate ipv6 connections (or datagrams) on a VM service for the public to use?
If you can’t get the first one, this might be adequate as a short-term fallback for some applications. However, it’s far from ideal and not all that useful.
I don't see, especially if the vm networking is unique to each customer, that 'ipv6 address on vm' is hugely important as a first/important goal. I DO see that landing publicly available services on an ipv6 endpoint is super helpful.
Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where:

1. The services basically have to be supported by some form of proxy/nat/etc. So long as you are using a supported L4 protocol, that might be fine, but not everything fits in TCP/UDP/ICMP. Generally supporting GRE, IKE, and application-specific protocols becomes an issue.

2. The developer has to develop and maintain an IPv4-compatible codebase rather than be able to use the dual stack capabilities of a host with IPV6_V6ONLY=FALSE in the socket options. This delays the ability to produce native IPv6 applications.

3. Proxies and translators add complexity, increase fragility, and reduce performance.
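[The IPV6_V6ONLY point in item 2 can be sketched at the socket level. A minimal illustration, not any cloud provider's API; note that some OSes (e.g. OpenBSD) refuse to disable V6ONLY and force separate v4/v6 sockets:]

```python
import socket

# One AF_INET6 listener with IPV6_V6ONLY disabled accepts native v6
# connections AND v4 connections (presented as v4-mapped ::ffff:a.b.c.d),
# so the application keeps a single code path.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(("::", 0))          # wildcard address; port 0 = let the OS pick
s.listen(5)
host, port = s.getsockname()[:2]
assert host == "::" and port > 0
s.close()
```

With that one socket open, a v4 client shows up to the application as `::ffff:192.0.2.1`, which is exactly the "no separate IPv4 codebase" property being argued for.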
Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?
For the reasons outlined above, I don’t really think so. Owen
On Mon, Jun 1, 2015 at 3:06 AM, Owen DeLong <owen@delong.com> wrote:
On May 31, 2015, at 7:46 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Sun, May 31, 2015 at 9:07 PM, Owen DeLong <owen@delong.com> wrote:
As I said before:
Host Virtual (vr.org), Softlayer (softlayer.com), Linode (linode.com)
All have full dual-stack support.
<snip>
At the risk of feeding the troll...
This isn't just an AWS problem.
So... ok. What does it mean, for a customer of a cloud service, to be ipv6 enabled?
It means that I should be able to develop my cloud application(s) with full native IPv6 support and expect to be able to serve IPv4-only, IPv6-only, and dual-stack customers using native IP stacks on my virtual machines without requiring external proxies, translators, etc. from the cloud service provider.
Ok, I suppose as a long term goal that seems fine. I don't get why 'ipv6 address on my vm' matters a whole bunch (in a world where v4 is still available to you, I mean), but as a long term goal, sure, fine.

In the short term though, that was my question: "What should be prioritized, and then get that information to aws/gce/rackspace/etc product managers", because I suspect their engineers are not silly; they know that v6 should be part of the plan... but the PM folk are saying: "No one is asking for this v6 thing, but lots of people want that other shiny bauble... so build the shiny!!"
What really matters for a cloud service user? What information could be surfaced to the cloud providers in order to get the most important ipv6 'stuff' done 'now’?
Ideally, simple native routing of IPv6 to provisioned hosts should suffice in most cases.
o Is it most important to be able to address every VM you create with an ipv6 address?
Yes.
long term sure, short term though.. really?? isn't it more important to be able to terminate v6 services from 'public' users? (and v4 as well because not everyone has v6 at home/work/mobile)
o Is it most important to be able to talk to backend services (perhaps at your prem) over ipv6?
It’s hard to imagine how you could provide the first one above without having this one come along for the ride.
o Is it most important that administrative interfaces to the VM systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be ipv6 reachable?
This would be the one where I would most be able to tolerate a delay.
o Is it most important to be able to terminate ipv6 connections (or datagrams) on a VM service for the public to use?
If you can’t get the first one, this might be adequate as a short-term fallback for some applications. However, it’s far from ideal and not all that useful.
I agree it's not ideal, and I was making a list of 'short term goals' that could be prioritized and get us all to the v6 utopia later.
I don't see, especially if the vm networking is unique to each customer, that 'ipv6 address on vm' is hugely important as a first/important goal. I DO see that landing publicly available services on an ipv6 endpoint is super helpful.
Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where:
1. The services basically have to be supported by some form of proxy/nat/etc. So long as you are using a supported L4 protocol, that might be fine, but not everything fits in TCP/UDP/ICMP. Generally supporting GRE, IKE, and application-specific protocols becomes an issue.
I figured the simplest path from v6 to v4 was: "Rip the v6 header and extension headers off, make a v4 packet, deliver to the vm." Aside from "yes, your request came from <ipv6 literal>", that should 'just work'. You do have to maintain some state to deliver the reply back, depending on the implementation (but even that is probably able to be hidden from the 'user' and provisioned/capacity-planned independent of the 'user').
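[The "maintain some state to deliver back" part is essentially the per-flow table a NAT64-style translator keeps. A toy sketch of just that state management; the class name, pool address, and port range are illustrative, not any real AWS service:]

```python
# Minimal state a v6->v4 "header swap" front end must keep: map each
# (v6 client, source port) flow to a port on the pool v4 address, so
# return traffic from the VM can be matched back to the v6 client.
class HeaderSwapTable:
    def __init__(self, pool_v4, port_range=(20000, 60000)):
        self.pool_v4 = pool_v4       # public v4 presented toward the VM
        self.next_port = port_range[0]
        self.limit = port_range[1]
        self.by_flow = {}            # (v6_src, sport) -> pool port
        self.by_port = {}            # pool port -> (v6_src, sport)

    def outbound(self, v6_src, sport):
        """v6 client -> v4 VM: allocate (or reuse) a pool port."""
        key = (v6_src, sport)
        if key not in self.by_flow:
            if self.next_port > self.limit:
                raise RuntimeError("port pool exhausted")
            self.by_flow[key] = self.next_port
            self.by_port[self.next_port] = key
            self.next_port += 1
        return (self.pool_v4, self.by_flow[key])

    def inbound(self, pool_port):
        """v4 VM reply -> v6 client: look up the original v6 flow."""
        return self.by_port.get(pool_port)

t = HeaderSwapTable("192.0.2.10")
src, pool_port = t.outbound("2001:db8::1", 51515)
assert t.inbound(pool_port) == ("2001:db8::1", 51515)
```

Real translators (RFC 6146 NAT64) also handle timeouts, checksums, ICMP, and fragmentation, which is where the "not everything fits in TCP/UDP/ICMP" objection upthread bites.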
2. The developer has to develop and maintain an IPv4-compatible codebase rather than be able to use the dual stack capabilities of a host with IPV6_V6ONLY=FALSE in the socket options. This delays the ability to produce native IPv6 applications.
3. Proxies and translators add complexity, increase fragility, and reduce performance.
I think this point is cogent, but... it's also part of the pie that 'aws' pays for on behalf of you, the user. Or rather, they run a 'service' which takes care of this, and has SLAs and SLOs... "All packets delivered with 99.99% having Xms extra latency!" "Service has 99.999% uptime!" "Throughput comparable (99.99% at peak, 99% of the time) to straight/native v4"
Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?
For the reasons outlined above, I don’t really think so.
ok. I figured that in the short term, while the providers figure out "Oh yea, this v6 thing is pretty simple in our encap'd world... why didn't we just turn it on originally?", getting v6 endpoints with little hassle for the 'user' (vm user) would help us all a bunch.
On Mon, Jun 01, 2015 at 11:30:00AM -0400, Christopher Morrow wrote:
I don't get why 'ipv6 address on my vm' matters a whole bunch (*in a world where v4 is still available to you I mean),
It simplifies infrastructure management considerably. Having to balance between "how many subnets will I ever need?" vs "how many machines could I end up with in a subnet?" is something I never thought would become annoying, until I had the opportunity to not worry about it... then it was frustrating to have to go back to it.

Not having to use a VPN/NAT/jump box to hit all my infrastructure seems like a small benefit, but it saves having to maintain a VPN/jump box (and all its attendant annoyances). Oh, yeah, never having to faff around with split-horizon DNS management... "Family Guy Tooth Fairy" on YouTube. <grin>

In short, there's a whole pile of dodgy hacks we deploy almost without thinking about it, because "that's just how things are done", to work around limitations in IPv4 deployments. Having IPv6 everywhere *within* the infrastructure makes all of those hacks disappear, and like most things we "just do because we have to", you don't realise how much of a PITA they were until they're gone.

- Matt

--
And Jesus said unto them, "And whom do you say that I am?" They replied, "You are the eschatological manifestation of the ground of our being, the ontological foundation of the context of our very selfhood revealed." And Jesus replied, "What?" -- Seen on the 'net
On Mon, Jun 1, 2015 at 6:36 PM, Matt Palmer <mpalmer@hezmatt.org> wrote:
On Mon, Jun 01, 2015 at 11:30:00AM -0400, Christopher Morrow wrote:
I don't get why 'ipv6 address on my vm' matters a whole bunch (*in a world where v4 is still available to you I mean),
It simplifies infrastructure management considerably. Having to balance between "how many subnets will I ever need?" vs "how many machines could I end up with in a subnet?" is something I never thought would become annoying, until I had the opportunity to not worry about it... then it was frustrating to have to go back to it. Not having to use a VPN/NAT/jump box to hit all my infrastructure seems like a small benefit, but it saves having to maintain a VPN/jump box (and all its attendant annoyances). Oh, yeah, never having to faff around with split-horizon DNS management... "Family Guy Tooth Fairy" on YouTube. <grin>
sure, most of that you have to worry about if you're building your own cloud thingy... but in that case, why not just do the 'right thing' as you see fit (which you seem to have done, yay!). If you're just using aws/ec2/gce/whatever... all of that is taken care of for you, so there's nothing to set up and what ip address the vm has just isn't relevant. Whether or not they use ipv6 isn't relevant really either, honestly (for the management and even interprocess comms).
In short, there's a whole pile of dodgy hacks we deploy almost without thinking about it, because "that's just how things are done", to work around limitations in IPv4 deployments. Having IPv6 everywhere *within* the infrastructure makes all of those hacks disappear, and like most things we "just do because we have to", you don't realise how much of a PITA they were until they're gone.
so... the 'dodgy hacks' only really matter if you have to keep them running (keep a nat box and a bastion and ...). if that's all done for you by the chosen provider, then none of these arguments hold. your bit about subnet sizing and numbering also glosses over a slew of 'where did machine X go?' (naming) problems, which, incidentally, you avoid with "dhcp address and name" in the v6 world.

So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago, so you'd get practice and operational experience and ...) but for the rest, sure, it's 'nice' and 'cute', but really not required for operations (unless you have v6 only customers)

-chris
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)
Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT. Sometimes you can get away w/o IPv6, sometimes you can't.

In all cases IPv4 is getting more and more expensive to support as more customers share public IP addresses, even if it is just having to re-tune rate limits to account for the sharing.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org
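[A toy illustration of the rate-limit re-tuning point: a per-address budget sized for one household misfires when a CGN puts hundreds of subscribers behind one address. The pool prefix and users-per-IP factor below are made-up operational guesses, not a real API:]

```python
BASE_RPS = 5            # requests/sec considered fair for one user

def allowed_rps(ip: str, cgn_pools: dict) -> int:
    """Return the rate budget for ip, inflated if it falls in a known
    CGN pool: the whole pool's users share that one public address."""
    for prefix, users_per_ip in cgn_pools.items():
        if ip.startswith(prefix):
            return BASE_RPS * users_per_ip
    return BASE_RPS

pools = {"203.0.113.": 256}   # hypothetical CGN pool, ~256 subscribers/IP
assert allowed_rps("203.0.113.7", pools) == 1280   # shared address
assert allowed_rps("198.51.100.9", pools) == 5     # ordinary address
```

The flip side of the inflated budget is that one abusive subscriber now gets 256 users' worth of headroom, which is exactly why per-IP limits get "more expensive to support" as sharing increases.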
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com>, Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)
Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT. Sometimes you can get away w/o IPv6, sometimes you can't. In all cases IPv4 is getting more and more expensive to support as more customers share public IP addresses, even if it is just having to re-tune rate limits to account for the sharing.
Agreed. Here is some data.
It's worth noting that the Samsung Galaxy S6 launched with IPv6 on by default at AT&T, Verizon, Sprint, and T-Mobile. And the majority of the T-Mobile and Verizon customer base is on IPv6, so IPv4 is the minority right now in mobile.

Oh, and when I say ipv4 is the minority, I mean NAT44. Proper public ipv4 is not even on the mobile radar, but ipv6 is.

http://www.worldipv6launch.org/measurements/

CB
On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)
Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.
ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right?
Agreed. Here is some data.
It's worth noting that the Samsung Galaxy S6 launched with IPv6 on by default at AT&T, Verizon, Sprint, T-Mobile.
And the majority of the T-Mobile and Verizon customer base is on IPv6, so IPv4 is the minority right now in mobile. Oh, and when I say ipv4 is the minority, I mean NAT44.
Proper public ipv4 is not even on the mobile radar, but ipv6 is
but.. http/s to an ipv4 address works, so...
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com>, Christopher Morrow writes:

On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)
Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.
ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right?
It depends on what you are trying to do. Take having something in the cloud manage something at home: you can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6. This will happen to business as well. The ability to call out to everyone is lost if the cloud provider doesn't fully support IPv6.

There are a whole segment of applications that don't work, or don't work well, or don't work without a whole lot of additional investment when one end is behind a CGN (which covers all the above, as IPv4 is supplied over a CGN).

This attitude of "we don't have to invest in IPv6 yet because we have lots of public IPv4 addresses" stinks to high heaven these days, whether you are an ISP, cloud provider or someone else.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org
On Mon, Jun 1, 2015 at 9:32 PM, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com>, Christopher Morrow writes:

On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)
Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.
ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right?
It depends on what you are trying to do. Take having something in the cloud manage something at home: you can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6. This will happen to business as well. The ability to call out to everyone is lost if the cloud provider doesn't fully support IPv6.
so, I totally agree that long term v6 must also appear in the cloud-spaces... I was (long back in this thread) asking: "sure, v6 is great, what top 1-3 things could a cloud provider prioritize NOW to get the ball rolling" (presuming they have some 'real' reason why v6 'just can not be added to interface configs').
There are a whole segment of applications that don't work, or don't work well, or don't work without a whole lot of additional investment when one end is behind a CGN (covers all the above as IPv4 is supplied over a CGN).
'additional investment' == 'client initiates connection to server' right? :)
This attitude of "we don't have to invest in IPv6 yet because we have lots of public IPv4 addresses" stinks to high heaven these days, whether you are an ISP, cloud provider or someone else.
yup, agreed. I was (and am still) reacting to the 'everything is horrible and broken because I can't talk the v6's to all my internal machines' when ... that seems (to me at least) to be completely immaterial when 'there is a v6 endpoint for your http/https/xmpp/etc' available 'now'. (or could be in relatively short order).

-chris
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com>, Christopher Morrow writes:

On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)

Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.
ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right?

It depends on what you are trying to do. Take having something in the cloud manage something at home: you can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that but you need both ends to be able to speak IPv6.
...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list.

IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmares still waiting to be discovered. And it for sure isn't free.

Matthew Kaufman
In message <556D35DF.8080901@matthew.at>, Matthew Kaufman writes:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com>, Christopher Morrow writes:

On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)

Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.
ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right?

It depends on what you are trying to do. Take having something in the cloud manage something at home: you can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that but you need both ends to be able to speak IPv6.
...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list.
For IPv4 you port forward in the NAT, possibly doing port translation as well as address translation. For IPv6 you open the port inbound in the firewall, or just move the firewalling to the host. IPv6 is easier.

With modern machines you really can get rid of the firewall in front of the machine. Lots of the equipment that connects to the home nets spends plenty of time fully exposed to the Internet w/o a firewall. If it does that, why does it need a firewall at home? There is a myth that you need a firewall at home.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmares still waiting to be discovered. And it for sure isn't free.
It also removes a whole lot of complications. It simplifies the security profile.
Matthew Kaufman

--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org
On 6/1/15 10:12 PM, Mark Andrews wrote:
In message <556D35DF.8080901@matthew.at>, Matthew Kaufman writes:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com>, Christopher Morrow writes:

On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com>, Christopher Morrow writes:

> So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers)

Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.

ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right?

It depends on what you are trying to do. Take having something in the cloud manage something at home: you can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that but you need both ends to be able to speak IPv6.

...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list.

For IPv4 you port forward in the NAT, possibly doing port translation as well as address translation.
Takes about 30 seconds in the web interface of your CPE.
For IPv6 you open the port inbound in the firewall or just move the firewalling to the host.
Takes about 30 seconds in the web interface of your CPE.
IPv6 is easier. With modern machines you really can get rid of the firewall in front of the machine.
For the good of the botnet operators, I encourage doing this. I can't run my laser printer without a firewall in front of it, and I can't even guess how secure the controller in the septic system pump box might be... so I don't risk it. And I *know* that some of the webcams I have are vulnerable and have no updates available.
Lots of the equipment that connects to the home nets spends plenty of time fully exposed to the Internet w/o a firewall.
I don't believe that's accurate.
If it does that why does it need a firewall at home?
There is a myth that you need a firewall at home.
Perpetuated by all the actual cases of poorly designed operating systems and embedded systems getting attacked and making the news as a result (http://www.insecam.org/ among others)
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmare still waiting to be discovered. And it for sure isn't free. It also remove a whole lot of complications. Simplifies the security profile.
If you think that the NDP DOS exposure is a "simplification" of security, then I'd love to hear more about the benefits of IPv6. Matthew Kaufman
Tell me, how do you plan to find a printer in a /64 subnet? Scan it? On 02.06.2015 18:08, Matthew Kaufman wrote:
I can't run my laser printer without a firewall in front of it, and I can't even guess how secure the controller in the septic system pump box might be... so I don't risk it. And I *know* that some of the webcams I have are vulnerable and have no updates available.
On Tue, Jun 02, 2015 at 07:21:12PM +0300, Nikolay Shopik wrote:
Tell me, how do you plan to find a printer in a /64 subnet? Scan it?
On 02.06.2015 18:08, Matthew Kaufman wrote:
I can't run my laser printer without a firewall in front of it, and I can't even guess how secure the controller in the septic system pump box might be... so I don't risk it. And I *know* that some of the webcams I have are vulnerable and have no updates available.
Security by obscurity? Come, now. -- Mike Andrews, W5EGO mikea@mikea.ath.cx Tired old sysadmin
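For scale, the arithmetic behind the /64 question is easy to sketch; the probe rate below is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope: exhaustively probing every address in one /64 subnet.
addresses = 2 ** 64
probes_per_second = 1_000_000  # an assumed, generously fast scanner
seconds = addresses / probes_per_second
years = seconds / (365 * 24 * 3600)
print(f"~{years:,.0f} years to sweep a single /64")
```

Brute-force scanning is indeed off the table, which is why the discussion turns to other ways of learning addresses.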
Ah, the "IPv6 subnets are so big you can't find the hosts" myth. Let's see... to find which hosts are active in IPv6 I can: - run a popular web service that people connect to, revealing their addresses - run a DNS server that lots of folks directly use (see Google) - use the back door login your router vendor provided and ask - query your unsecured public SNMP and ask - get you to install software that sends back a list of what's on your subnet - make educated guesses about your non-privacy IP addresses based on the MAC address ranges of popular hardware that is available in stores this year to reduce the search space to a manageable size - hack the site where you get automatic updates from and use its logs That's just off the top of my head Matthew Kaufman (Sent from my iPhone)
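Kaufman's sixth point, guessing non-privacy addresses from MAC ranges, works because classic SLAAC derives the interface ID from the MAC via modified EUI-64, shrinking the unknown bits from 64 to the vendor-assigned 24. A minimal sketch (the MAC below is a made-up example):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface ID a classic (non-privacy)
    SLAAC host would use for this MAC address."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02          # flip the universal/local bit
    b[3:3] = b"\xff\xfe"  # insert ff:fe between the OUI and the NIC bits
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))

# With a known vendor OUI (first three octets), only the last 24 bits vary,
# so the search space per vendor is 2**24, not 2**64.
print(eui64_interface_id("00:1b:63:aa:bb:cc"))
```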
On Jun 2, 2015, at 9:21 AM, Nikolay Shopik <shopik@inblock.ru> wrote:
Tell me, how do you plan to find a printer in a /64 subnet? Scan it?
On 02.06.2015 18:08, Matthew Kaufman wrote:
I can't run my laser printer without a firewall in front of it, and I can't even guess how secure the controller in the septic system pump box might be... so I don't risk it. And I *know* that some of the webcams I have are vulnerable and have no updates available.
Matthew, Good list. - Windows doesn't use non-privacy addresses, so that won't work next time. - If you could guess the address of the router, props to you. - Before using SNMP you still need the device's address. - If you can install software on a remote PC, then you can probably get the same result in the IPv4 world. - If you are able to run a popular web/DNS server, you probably already have enough money from elsewhere, unless someone offers you more money to sell that info; and this applies to both IPv4 and IPv6, regardless of firewall. So I'm not saying IPv6 does just fine w/o a firewall, just that it does much better than IPv4 and its NAT with security through obscurity, especially against simple attacks. On 02.06.2015 19:35, Matthew Kaufman wrote:
Ah, the "IPv6 subnets are so big you can't find the hosts" myth.
Let's see... to find which hosts are active in IPv6 I can: - run a popular web service that people connect to, revealing their addresses - run a DNS server that lots of folks directly use (see Google) - use the back door login your router vendor provided and ask - query your unsecured public SNMP and ask - get you to install software that sends back a list of what's on your subnet - make educated guesses about your non-privacy IP addresses based on the MAC address ranges of popular hardware that is available in stores this year to reduce the search space to a manageable size - hack the site where you get automatic updates from and use its logs
That's just off the top of my head
Matthew Kaufman
(Sent from my iPhone)
On Jun 2, 2015, at 9:21 AM, Nikolay Shopik <shopik@inblock.ru> wrote:
Tell me, how do you plan to find a printer in a /64 subnet? Scan it?
On 02.06.2015 18:08, Matthew Kaufman wrote:
I can't run my laser printer without a firewall in front of it, and I can't even guess how secure the controller in the septic system pump box might be... so I don't risk it. And I *know* that some of the webcams I have are vulnerable and have no updates available.
On Tue, 02 Jun 2015 09:35:11 -0700, Matthew Kaufman said:
Ah, the "IPv6 subnets are so big you can't find the hosts" myth.
Let's see... to find which hosts are active in IPv6 I can: - run a popular web service that people connect to, revealing their addresses
If your vulnerable laser printer or webcam is calling out to Hotmail or Google or whatever, you got *bigger* problems, dude....
On Wed 2015-Jun-03 13:11:34 -0400, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:
On Tue, 02 Jun 2015 09:35:11 -0700, Matthew Kaufman said:
Ah, the "IPv6 subnets are so big you can't find the hosts" myth.
Let's see... to find which hosts are active in IPv6 I can: - run a popular web service that people connect to, revealing their addresses
If your vulnerable laser printer or webcam is calling out to Hotmail or Google or whatever, you got *bigger* problems, dude....
Not to support Mr. Kaufman's line of reasoning, but: https://h30495.www3.hp.com/c/46775/US/en/?jumpid=in_R11549%2Feprintcenter https://www.google.com/cloudprint/#printers :(
IoT says your toaster will be uploading your breakfast to 10 social media accounts and your socks will be connected to the hospital. Your fridge is also a spambot now too! http://www.businessinsider.com/hackers-use-a-refridgerator-to-attack-busines... IoT means everything gets hacked. Maybe someone can make a Cryptolocker variant that locks you out of your fridge until you pay a ransom. We are entering a whole new era of exciting vulnerabilities. Steve Mikulasik -----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Valdis.Kletnieks@vt.edu Sent: Wednesday, June 03, 2015 11:12 AM To: Matthew Kaufman Cc: nanog@nanog.org Subject: Re: AWS Elastic IP architecture On Tue, 02 Jun 2015 09:35:11 -0700, Matthew Kaufman said:
Ah, the "IPv6 subnets are so big you can't find the hosts" myth.
Let's see... to find which hosts are active in IPv6 I can: - run a popular web service that people connect to, revealing their addresses
If your vulnerable laser printer or webcam is calling out to Hotmail or Google or whatever, you got *bigger* problems, dude....
In message <556DC6FD.7040801@matthew.at>, Matthew Kaufman writes:
On 6/1/15 10:12 PM, Mark Andrews wrote:
In message <556D35DF.8080901@matthew.at>, Matthew Kaufman writes:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail. com
, Christopher Morrow writes: On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote: > In message > <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> >, Christopher Morrow writes: >> So... I don't really see any of the above arguments for v6 in a vm >> setup to really hold water in the short term at least. I think for >> sure you'll want v6 for public services 'soon' (arguably like 10 yrs >> ago so you'd get practice and operational experience and ...) but for >> the rest sure it's 'nice', and 'cute', but really not required for >> operations (unless you have v6 only customers) > Everyone has effectively IPv6-only customers today. IPv6 native + > CGN only works for services. Similarly DS-Lite and 464XLAT. ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right? It depends on what you are trying to do. Having something in the cloud manage something at home. You can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6. ...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list. For IPv4 you port forward in the NAT, possibly doing port translation as well as address translation.
Takes about 30 seconds in the web interface of your CPE.
For IPv6 you open the port inbound in the firewall or just move the firewalling to the host.
Takes about 30 seconds in the web interface of your CPE.
IPv6 is easier. With modern machines you really can get rid of the firewall in front of the machine.
For the good of the botnet operators, I encourage doing this.
I can't run my laser printer without a firewall in front of it, and I can't even guess how secure the controller in the septic system pump box might be... so I don't risk it. And I *know* that some of the webcams I have are vulnerable and have no updates available.
Well, send the printer back as defective, which it is. As for the controller of the septic system pump box, it should be able to be on the net without a firewall in front of it.
Lots of the equipment that connects to the home nets spends plenty of time fully exposed to the Internet w/o a firewall.
I don't believe that's accurate.
All the laptops, phones, tablets, e-readers spend some time connected without a firewall in front of them. A Windows box hasn't needed a firewall in front of it since Windows XP SP2. Macs don't need firewalls in front of them. Linux boxes don't need firewalls in front of them. About the only thing the border router should be doing is preventing spoofed "internal" packets from coming in and filtering non-locally sourced packets leaving. There really shouldn't be any need to filter legitimately addressed packets. If there is, then the product is defective.
If it does that why does it need a firewall at home?
There is a myth that you need a firewall at home.
Perpetuated by all the actual cases of poorly designed operating systems and embedded systems getting attacked and making the news as a result (http://www.insecam.org/ among others)
So send the cameras back to the manufacturer/retailer for a fix/refund.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmares still waiting to be discovered. And it for sure isn't free. It also removes a whole lot of complications. Simplifies the security profile.
If you think that the NDP DOS exposure is a "simplification" of security, then I'd love to hear more about the benefits of IPv6.
Even with that it simplifies security. Routers will get code to work around that.
Matthew Kaufman -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Jun 2, 2015, at 5:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com
, Christopher Morrow writes: On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> , Christopher Morrow writes:
So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers) Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT.
ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right? It depends on what you are trying to do. Having something in the cloud manage something at home. You can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6.
...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list.
IPv4 with NAT, you can open one host at home to remote access, or, in some cases, you can select different hosts by using the port number in lieu of the host name/address. IPv6 — I add a permit statement to the firewall to allow the traffic in to each host/group of hosts that I want and I am done. I do not see the above as being equal effort or as yielding equal results. As such, I’d say that your statement gets added to the great myths of Matthew Kaufman rather than there being any myth about this being an IPv6 advantage. I can assure you that it is MUCH easier for me to remote-manage my mother’s machines over their IPv6 addresses than to get to them over IPv4. I live in California and have native dual-stack without NAT. She lives in Texas and has native dual-stack with NAT for her IPv4.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmare still waiting to be discovered. And it for sure isn't free.
IPv6 comes with at least one design-advantage — More addresses. However, more addresses, especially on the scale IPv6 delivers them comes with MANY benefits: 1. Simplified addressing 2. Better aggregation 3. Direct access where permitted 4. Elimination of NAT and its security flaws and disadvantages 5. Simplified topologies 6. Better sunbathing 7. Better network planning with sparse allocations 8. Simpler application code 9. Reduced complexity in: Proxies Applications Firewalls Logging Monitoring Audit Intrusion Detection Intrusion Prevention DDoS mitigation 10. The ability to write software with hope of your codebase working for the next 10 years or more. I’m sure there are other benefits as well, but this gives you at least 10. There are those that would argue that other design advantages include: 1. Fixed length simplified header 2. Stateless Address Autoconfiguration 3. Mobile IP 4. A much cleaner implementation of IPSEC I’m sure there are more, but this is a quick list off the top of my head. Owen
On 6/2/15 2:35 AM, Owen DeLong wrote:
On Jun 2, 2015, at 5:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com
, Christopher Morrow writes: On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote:
In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com>, Christopher Morrow writes: > So... I don't really see any of the above arguments for v6 in a vm > setup to really hold water in the short term at least. I think for > sure you'll want v6 for public services 'soon' (arguably like 10 yrs > ago so you'd get practice and operational experience and ...) but for > the rest sure it's 'nice', and 'cute', but really not required for > operations (unless you have v6 only customers) Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT. ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right? It depends on what you are trying to do. Having something in the cloud manage something at home. You can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6. ...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list. IPv4 with NAT, you can open one host at home to remote access, or, in some cases, you can select different hosts by using the port number in lieu of the host name/address.
IPv4 with NAT, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Almost nobody configures their firewall to open up anything.
IPv6 — I add a permit statement to the firewall to allow the traffic in to each host/group of hosts that I want and I am done.
IPv6, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Still almost nobody configures their firewall to open up anything. For those who do, the work needed to open up a few host/port mappings in IPv4 is basically identical to opening up a few hosts and ports for IPv6.
I do not see the above as being equal effort or as yielding equal results.
For the automatic traversal cases, the end-user effort is identical. For the incredibly rare case of manual configuration (which as NANOG participants we often forget, since we're adjusting our routers all the time) there is almost no difference for most use cases. Yes, the results are marginally superior in the IPv6 case. Nobody cares.
As such, I’d say that your statement gets added to the great myths of Matthew Kaufman rather than there being any myth about this being an IPv6 advantage.
I can assure you that it is MUCH easier for me to remote-manage my mother’s machines over their IPv6 addresses than to get to them over IPv4.
Only because you've insisted on doing it the hard way, instead of using something like teamviewer or logmein which "just works".
I live in California and have native dual-stack without NAT. She lives in Texas and has native dual-stack with NAT for her IPv4.
And I assume your mother opened up her IPv6 firewall all on her own? Most people won't have the technical skills to adjust the IPv6 firewall that comes with their CPE and/or "Wireless Router" any more than they have the skills to set up IPv4 port mapping.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmare still waiting to be discovered. And it for sure isn't free. IPv6 comes with at least one design-advantage — More addresses.
However, more addresses, especially on the scale IPv6 delivers them comes with MANY benefits:
1. Simplified addressing 2. Better aggregation 3. Direct access where permitted 4. Elimination of NAT and its security flaws and disadvantages 5. Simplified topologies 6. Better sunbathing 7. Better network planning with sparse allocations
All of those are benefits for the network operator, not the end user.
8. Simpler application code
Might be true *if* there was only IPv6. If there's also the need to support IPv4 then supporting *both* is harder than supporting just one.
9. Reduced complexity in: Proxies Applications Firewalls Logging Monitoring Audit Intrusion Detection Intrusion Prevention DDoS mitigation
Matters not to most home users. Doesn't help corporate users because they also need all that for IPv4 indefinitely.
10. The ability to write software with hope of your codebase working for the next 10 years or more.
I'll bet that there's IPv4 devices running happily 10 years from now.
I’m sure there are other benefits as well, but this gives you at least 10.
There are those that would argue that other design advantages include:
1. Fixed length simplified header
Maybe.
2. Stateless Address Autoconfiguration
Disaster, given that I'm still stuck needing DHCPv6 to configure everything that SLAAC won't. Or things that SLAAC only picked up recently (like setting your DNS server) and so are still unsupported in all sorts of gear.
3. Mobile IP
Remains to be seen if this matters.
4. A much cleaner implementation of IPSEC
Sure, but the IPv4 IPSEC seems to work just fine everywhere I've deployed it.
I’m sure there are more, but this is a quick list off the top of my head.
There's a bunch of myths too... and "every device will be directly reachable from the entire Internet" is on there. Matthew Kaufman
On Jun 2, 2015, at 4:08 PM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/2/15 2:35 AM, Owen DeLong wrote:
On Jun 2, 2015, at 5:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com
, Christopher Morrow writes: On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote:
On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote: > In message > <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com> >, Christopher Morrow writes: >> So... I don't really see any of the above arguments for v6 in a vm >> setup to really hold water in the short term at least. I think for >> sure you'll want v6 for public services 'soon' (arguably like 10 yrs >> ago so you'd get practice and operational experience and ...) but for >> the rest sure it's 'nice', and 'cute', but really not required for >> operations (unless you have v6 only customers) > Everyone has effectively IPv6-only customers today. IPv6 native + > CGN only works for services. Similarly DS-Lite and 464XLAT. ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right? It depends on what you are trying to do. Having something in the cloud manage something at home. You can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6. ...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list. IPv4 with NAT, you can open one host at home to remote access, or, in some cases, you can select different hosts by using the port number in lieu of the host name/address.
IPv4 with NAT, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Almost nobody configures their firewall to open up anything.
HuH? How do I SSH into my host behind my home NAT firewall without configuration of the firewall? You are making no sense here. NAT traversal techniques provide for outbound connections and/or a way that a pseudo-service can create an inbound connection that looks like an outbound connection to the firewall. They do not in any way provide for generic inbound access to ordinary services without configuration.
IPv6 — I add a permit statement to the firewall to allow the traffic in to each host/group of hosts that I want and I am done.
IPv6, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Still almost nobody configures their firewall to open up anything.
Why would one use NAT with IPv6… You’re making no sense there.
For those who do, the work needed to open up a few host/port mappings in IPv4 is basically identical to opening up a few hosts and ports for IPv6.
Not really… For example, let’s say you have 20 machines for whom you want to allow inbound SSH access. In the IPv4 world, with NAT, you have to configure an individual port mapping for each machine and you have to either configure all of the SSH clients, or, specify the particular port for the machine you want to get to on the command line. On the other hand, with IPv6, let’s say the machines are all on 2001:db8::/64. Further, let’s say that I group machines for which I want to provide SSH access within 2001:db8::22:0:0:0/80. I can add a single firewall entry which covers this /80 and I’m done. I can put many millions of hosts within that range and they all are accessible directly for SSH from the outside world. Takes about 20 seconds to configure my firewall once and then I never really need to worry about it again. Further, in the IPv4 case, I need special client configuration or client invocation effort every time, while with the IPv6 case, I can simply put the hostname in DNS and then use the name thereafter.
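Owen's grouping scheme is easy to check with Python's `ipaddress` module. This is an illustrative sketch using the documentation prefix from his post, not his actual firewall config:

```python
import ipaddress

# One /80 inside 2001:db8::/64 holds every host we want reachable via SSH;
# a single firewall permit for this prefix then covers all of them.
ssh_group = ipaddress.ip_network("2001:db8:0:0:22::/80")

inside = ipaddress.ip_address("2001:db8::22:0:0:1")
outside = ipaddress.ip_address("2001:db8::dead:beef")

print(inside in ssh_group)      # True: matched by the one SSH rule
print(outside in ssh_group)     # False: same /64, but outside the SSH group
print(ssh_group.num_addresses)  # 2**48 possible hosts under that one rule
```

The IPv4/NAT equivalent would be one port-forward entry (and one non-standard client port) per host, which is the asymmetry being argued here.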
I do not see the above as being equal effort or as yielding equal results.
For the automatic traversal cases, the end-user effort is identical.
Sure, but automatic traversal is the exception not the rule when considering internet services.
For the incredibly rare case of manual configuration (which as NANOG participants we often forget, since we're adjusting our routers all the time) there is almost no difference for most use cases.
Not true as noted above.
Yes, the results are marginally superior in the IPv6 case. Nobody cares.
I would argue that it’s more than marginal.
As such, I’d say that your statement gets added to the great myths of Matthew Kaufman rather than there being any myth about this being an IPv6 advantage.
I can assure you that it is MUCH easier for me to remote-manage my mother’s machines over their IPv6 addresses than to get to them over IPv4.
Only because you've insisted on doing it the hard way, instead of using something like teamviewer or logmein which "just works”.
So does vnc://hostname (if I have IPv6 or non-NAT IPv4).
I live in California and have native dual-stack without NAT. She lives in Texas and has native dual-stack with NAT for her IPv4.
And I assume your mother opened up her IPv6 firewall all on her own?
As a matter of fact, she did open up the first connection which then provided me the necessary access to configure the rest for her.
Most people won't have the technical skills to adjust the IPv6 firewall that comes with their CPE and/or "Wireless Router" any more than they have the skills to set up IPv4 port mapping.
My mother is about as non-technical as you would imagine the typical grandmother portrayed in most such examples. Technically, she’s a great grandmother these days.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmare still waiting to be discovered. And it for sure isn't free. IPv6 comes with at least one design-advantage — More addresses.
However, more addresses, especially on the scale IPv6 delivers them comes with MANY benefits:
1. Simplified addressing 2. Better aggregation 3. Direct access where permitted 4. Elimination of NAT and its security flaws and disadvantages 5. Simplified topologies 6. Better subnetting 7. Better network planning with sparse allocations
All of those are benefits for the network operator, not the end user.
I can see that all of those benefit the network operator. However, with the exception of 2 and 7, I would argue that all of them are also of benefit to at least some fraction of end-users. I am an end user and I am seeing benefits from all but 2. I use sparse address allocation to allow me to classify hosts for convenience, for example. Note, the original item 6 (corrected above) was autocorrected from subnetting to sunbathing. I have restored it to subnetting.
8. Simpler application code
Might be true *if* there was only IPv6. If there's also the need to support IPv4 then supporting *both* is harder than supporting just one.
Only because the need to support NAT traversal comes as overhead in supporting IPv4. Otherwise, most applications can be written for IPv6 and set the socket option IPV6_V6ONLY=FALSE (which should be the default), and on a dual-stack host the application will simply work and deal with both protocols without ever caring about IPv4. (IPv4 connections are presented to the application as IPv6 connections from hosts with v4-mapped addresses of the form ::ffff:a.b.c.d, where a.b.c.d is the 32-bit IPv4 address of the source host.)
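The v4-mapped behaviour described above can be sketched in a few lines of Python (assuming a host whose kernel has an IPv6 stack; the peer address is a documentation example):

```python
import ipaddress
import socket

# A single IPv6 listening socket that also accepts IPv4 clients:
# clearing IPV6_V6ONLY is the IPV6_V6ONLY=FALSE setting described above.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.close()

# An IPv4 peer then shows up to the application as a v4-mapped IPv6 address,
# and the embedded 32-bit IPv4 address can be recovered from it:
peer = ipaddress.IPv6Address("::ffff:192.0.2.10")
print(peer.ipv4_mapped)  # 192.0.2.10
```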
9. Reduced complexity in: Proxies Applications Firewalls Logging Monitoring Audit Intrusion Detection Intrusion Prevention DDoS mitigation
Matters not to most home users.
Until it does. I’ve had lots of complaints from end users where they didn’t know that the issue was a proxy problem, application coding error with NAT traversal, Firewall problem, etc., but upon investigation, these were exactly the source of said end-user’s problem.
Doesn't help corporate users because they also need all that for IPv4 indefinitely.
See above. The more traffic that corporate users can get off of IPv4, the less trouble these things will cause for them. As such, your argument falls flat.
10. The ability to write software with hope of your codebase working for the next 10 years or more. I'll bet that there's IPv4 devices running happily 10 years from now.
Maybe, but I bet within 10 years, there’s very little, if any, IPv4 running across the backbone of the internet.
I’m sure there are other benefits as well, but this gives you at least 10.
There are those that would argue that other design advantages include:
1. Fixed length simplified header
Maybe.
2. Stateless Address Autoconfiguration
Disaster, given that I'm still stuck needing DHCPv6 to configure everything that SLAAC won't. Or things that SLAAC only picked up recently (like setting your DNS server) and so are still unsupported in all sorts of gear.
Not a disaster, but perhaps slightly less convenient for your particular situation. In my situation, most hosts don’t need anything more than an IP address, default gateway, and Recursive Nameserver. RAs provide all of that and my hosts just work fine with SLAAC out of the box.
3. Mobile IP
Remains to be seen if this matters.
I’ve found it quite useful as have several other people I know.
4. A much cleaner implementation of IPSEC
Sure, but the IPv4 IPSEC seems to work just fine everywhere I've deployed it.
For some definition of work. The additional overhead, increased complexity of configuring it, incompatible implementations, and other nightmares I have encountered in IPv4 IPSEC make the relative ease and convenience of using it in IPv6 strike me as quite worthwhile. A treadle sewing machine works if you choose to use one. That doesn’t mean that an electric sewing machine with CNC stitching capabilities for embroidery, etc. is not a better alternative.
I’m sure there are more, but this is a quick list off the top of my head.
There's a bunch of myths too... and "every device will be directly reachable from the entire Internet" is on there.
That’s not a myth, it’s just an incomplete statement that people like you love to truncate for the sake of claiming it is a myth. The complete statement is “Every device can be directly reachable from the entire internet to the extent allowed by policy.” The complete statement is quite true. The incomplete statement is false only because it contains a scope which is beyond the ability of what a protocol can deliver, due to the missing words. Owen
On 6/3/2015 4:56 AM, Owen DeLong wrote:
On Jun 2, 2015, at 4:08 PM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/2/15 2:35 AM, Owen DeLong wrote:
On Jun 2, 2015, at 5:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 6:32 PM, Mark Andrews wrote:
In message <CAL9jLaaQUP1UzoKag3Kuq8a5bMcB2q6Yg=B_=1fFWxRN6K-bNA@mail.gmail.com>, Christopher Morrow writes: On Mon, Jun 1, 2015 at 9:02 PM, Ca By <cb.list6@gmail.com> wrote: On Monday, June 1, 2015, Mark Andrews <marka@isc.org> wrote: In message <CAL9jLaYXCdfViHbUPx-=rs4vSx5mFECpfuE8b7VQ+Au2hCXpMQ@mail.gmail.com>, Christopher Morrow writes: So... I don't really see any of the above arguments for v6 in a vm setup to really hold water in the short term at least. I think for sure you'll want v6 for public services 'soon' (arguably like 10 yrs ago so you'd get practice and operational experience and ...) but for the rest sure it's 'nice', and 'cute', but really not required for operations (unless you have v6 only customers) Everyone has effectively IPv6-only customers today. IPv6 native + CGN only works for services. Similarly DS-Lite and 464XLAT. ok, and for the example of 'put my service in the cloud' ... the service is still accessible over ipv4 right? It depends on what you are trying to do. Having something in the cloud manage something at home. You can't reach the home over IPv4 more and more these days. IPv6 is the escape path for that, but you need both ends to be able to speak IPv6. ...and for firewalls to not exist. Since they do, absolutely all the techniques required to "reach something at home" over IPv4 are required for IPv6. This is on the "great myths of the advantages of IPv6" list. IPv4 with NAT, you can open one host at home to remote access, or, in some cases, you can select different hosts by using the port number in lieu of the host name/address.
IPv4 with NAT, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Almost nobody configures their firewall to open up anything.
HuH?
How do I SSH into my host behind my home NAT firewall without configuration of the firewall?
Nobody but you and a few hundred other people on this mailing list SSH into hosts at your home. Everyone else in the entire world reaches hosts at their house through their firewall just fine because those hosts are their Nest thermostat, or their Dropcam, or their PC running Skype, or maybe (in rare cases) something like LogMeIn. None of those people ever touch the settings of the device they had delivered by their ISP and/or purchased at Best Buy. Not ever.
You are making no sense here. NAT Traversal techniques provide for outbound connections and/or a way that a pseudo-service can create an inbound connection that looks like an outbound connection to the firewall.
It does not in any way provide for generic inbound access to ordinary services without configuration.
So what? Nobody (to several levels of statistical significance) needs "generic inbound access to ordinary services". Heck, the only "ordinary services" that exist any more are HTTP/HTTPS.
IPv6 — I add a permit statement to the firewall to allow the traffic in to each host/group of hosts that I want and I am done.
IPv6, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Still almost nobody configures their firewall to open up anything.
Why would one use NAT with IPv6… You’re making no sense there.
I didn't say you would... but you need firewall traversal, and the "standard NAT and firewall traversal techniques" are how you traverse your IPv6 firewall.
For those who do, the work needed to open up a few host/port mappings in IPv4 is basically identical to opening up a few hosts and ports for IPv6.
Not really…
For example, let’s say you have 20 machines for which you want to allow inbound SSH access. In the IPv4 world, with NAT, you have to configure an individual port mapping for each machine, and you have to either configure all of the SSH clients or specify the particular port for the machine you want to reach on the command line.
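To illustrate the per-host bookkeeping being described, the IPv4 side might look like this on a Linux gateway (a sketch only; the RFC 1918 addresses and external port numbers are made up):

```
# One DNAT rule per inside machine: each host gets its own external
# port, and every SSH client must then be told which non-standard
# port reaches which machine.
ip6tables -V >/dev/null 2>&1  # (requires root and netfilter; illustrative)
iptables -t nat -A PREROUTING -p tcp --dport 2201 \
  -j DNAT --to-destination 192.168.1.101:22
iptables -t nat -A PREROUTING -p tcp --dport 2202 \
  -j DNAT --to-destination 192.168.1.102:22
# ... 18 more of these, one per machine ...
```

Clients then need invocations like `ssh -p 2202 gateway.example.net` to pick a machine.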
Ok, you go find me 1000 households where nobody in the house is on the NANOG list but where there are 20 machines running SSH already installed.
On the other hand, with IPv6, let’s say the machines are all on 2001:db8::/64. Further, let’s say that I group machines for which I want to provide SSH access within 2001:db8::22:0:0:0/80. I can add a single firewall entry which covers this /80 and I’m done. I can put many millions of hosts within that range and they all are accessible directly for SSH from the outside world.
Takes about 20 seconds to configure my firewall once and then I never really need to worry about it again.
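The single-rule version described above could be expressed like this (illustrative; chain name is an assumption, prefix as in the example):

```
# One permit covering the whole 2001:db8::22:0:0:0/80 group: any host
# numbered inside that /80 is reachable on TCP/22 directly, with no
# per-host mappings and no client-side port juggling.
ip6tables -A FORWARD -p tcp --dport 22 -d 2001:db8::22:0:0:0/80 -j ACCEPT
```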
Yeah, so there you are manually configuring your firewall again. Which isn't surprising, because you also want to have 20 hosts offering SSH to the world... which makes you an outlier.
Further, in the IPv4 case, I need special client configuration or client invocation effort every time, while with the IPv6 case, I can simply put the hostname in DNS and then use the name thereafter.
Sure... because like most homeowners, you're proficient at editing BIND config files.
I do not see the above as being equal effort or as yielding equal results.
For the automatic traversal cases, the end-user effort is identical.
Sure, but automatic traversal is the exception not the rule when considering internet services.
I don't know what world you're living in, but every single person I know is a user of one or more software or hardware devices that work just fine behind a firewall, either because they're just uploading things to a service, or periodically checking in with a service, or using automatic traversal techniques.
For the incredibly rare case of manual configuration (which as NANOG participants we often forget, since we're adjusting our routers all the time) there is almost no difference for most use cases.
Not true as noted above.
Most people never configure. Most of the people who do configure need exactly one address and port to work, in which case the work is about the same. You have 20 SSH hosts. You're an outlier, and so yes, in IPv6 there's less typing. I'm an outlier too... my house has a Juniper SRX-240 that I configure from the command line at its border. That's not normal. I'm not normal. And that's ok.
Yes, the results are marginally superior in the IPv6 case. Nobody cares.
I would argue that it’s more than marginal.
You are free to argue that.
As such, I’d say that your statement gets added to the great myths of Matthew Kaufman rather than there being any myth about this being an IPv6 advantage.
I can assure you that it is MUCH easier for me to remote-manage my mother’s machines over their IPv6 addresses than to get to them over IPv4.
Only because you've insisted on doing it the hard way, instead of using something like teamviewer or logmein which "just works”.
So does vnc://hostname (if I have IPv6 or non-NAT IPv4).
With default out-of-the-box firewall settings as your device arrives from Best Buy?
I live in California and have native dual-stack without NAT. She lives in Texas and has native dual-stack with NAT for her IPv4.
And I assume your mother opened up her IPv6 firewall all on her own?
As a matter of fact, she did open up the first connection which then provided me the necessary access to configure the rest for her.
So it isn't actually that hard. Just like it isn't that hard for one address and port for the user of a Slingbox or whatever other random product you can find that doesn't have full traversal capabilities in it.
Most people won't have the technical skills to adjust the IPv6 firewall that comes with their CPE and/or "Wireless Router" any more than they have the skills to set up IPv4 port mapping.
My mother is about as non-technical as you would imagine the typical grandmother portrayed in most such examples. Technically, she’s a great grandmother these days.
Congratulations to her.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmares still waiting to be discovered. And it for sure isn't free.

IPv6 comes with at least one design-advantage: more addresses.
However, more addresses, especially on the scale IPv6 delivers them comes with MANY benefits:
1. Simplified addressing
2. Better aggregation
3. Direct access where permitted
4. Elimination of NAT and its security flaws and disadvantages
5. Simplified topologies
6. Better subnetting
7. Better network planning with sparse allocations
All of those are benefits for the network operator, not the end user.
I can see that all of those benefit the network operator.
However, with the exception of 2 and 7, I would argue that all of them are also of benefit to at least some fraction of end-users.
Some fraction, sure. But since that fraction is well under 0.1%, I'm sticking with my position.
I am an end user and I am seeing benefits from all but 2. I use sparse address allocation to allow me to classify hosts for convenience, for example.
Outlier.
Note, the original item 6 (corrected above) was autocorrected from subnetting to sunbathing. I have restored it to subnetting.
I preferred sunbathing.
8. Simpler application code
Might be true *if* there was only IPv6. If there's also the need to support IPv4 then supporting *both* is harder than supporting just one.
Only because the need to support NAT traversal comes as overhead in supporting IPv4.
The same code is needed to do IPv6 firewall traversal.
Otherwise, most applications can be written for IPv6 and set the socket option IPV6_V6ONLY=FALSE (which should be the default), and on a dual-stack host the application will simply work and deal with both protocols without ever caring about IPv4. (IPv4 connections are presented to the application as IPv6 connections from a host with the IPv4-mapped address ::ffff:a.b.c.d, where a.b.c.d is the 32-bit IPv4 address of the source host.)
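A minimal sketch of what that looks like in practice (Python used for illustration; the port is chosen by the OS):

```python
import socket

# IPv6 listening socket that also accepts IPv4 clients. Clearing
# IPV6_V6ONLY means IPv4 connections show up as IPv4-mapped IPv6
# addresses of the form ::ffff:a.b.c.d, so the application only
# ever deals with one (IPv6) address family.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))   # any free port, all interfaces
srv.listen(5)

host, port = srv.getsockname()[:2]
print("dual-stack listener on [%s]:%d" % (host, port))
```

An IPv4 client connecting to that port shows up in `accept()` with a peer address like `::ffff:127.0.0.1`.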
For an operating system and TCP/IP stack where that is true, sure. When you're trying to squeeze things into a Cortex M4 that you want to hang on someone's wall, dual-stack takes more Flash.
9. Reduced complexity in:
   - Proxies
   - Applications
   - Firewalls
   - Logging
   - Monitoring
   - Audit
   - Intrusion Detection
   - Intrusion Prevention
   - DDoS mitigation
Matters not to most home users.
Until it does. I’ve had lots of complaints from end users where they didn’t know that the issue was a proxy problem, application coding error with NAT traversal, Firewall problem, etc., but upon investigation, these were exactly the source of said end-user’s problem.
Define "lots". Most users are having great luck with their current IPv4-only connection and the devices they're buying to use with it.
Doesn't help corporate users because they also need all that for IPv4 indefinitely.
See above. The more traffic that corporate users can get off of IPv4, the less trouble these things will cause for them. As such, your argument falls flat.
If I'm still getting support tickets due to IPv4 issues, the problems haven't gone away. Again, tell me when I can switch IPv4 off entirely.
10. The ability to write software with hope of your codebase working for the next 10 years or more.

I'll bet that there's IPv4 devices running happily 10 years from now.
Maybe, but I bet within 10 years, there’s very little, if any, IPv4 running across the backbone of the internet.
I'd take that bet.
I’m sure there are other benefits as well, but this gives you at least 10.
There are those that would argue that other design advantages include:
1. Fixed-length simplified header
Maybe.
2. Stateless Address Autoconfiguration
Disaster, given that I'm still stuck needing DHCPv6 to configure everything that SLAAC won't. Or things that SLAAC only picked up recently (like setting your DNS server) and so are still unsupported in all sorts of gear.
Not a disaster, but perhaps slightly less convenient for your particular situation.
In my situation, most hosts don’t need anything more than an IP address, default gateway, and Recursive Nameserver. RAs provide all of that and my hosts just work fine with SLAAC out of the box.
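Everything those hosts need can be carried in router advertisements; a minimal radvd.conf sketch (interface name and documentation-prefix addresses are placeholders):

```
# RAs carry everything a typical SLAAC host needs: the on-link prefix
# (for address autoconfiguration), the advertising router itself
# (default gateway), and an RDNSS option (recursive nameserver).
interface eth0 {
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS 2001:db8::53 { };
};
```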
Not too many WinXP machines at your house I guess.
3. Mobile IP
Remains to be seen if this matters.
I’ve found it quite useful as have several other people I know.
Outlier.
4. A much cleaner implementation of IPSEC
Sure, but the IPv4 IPSEC seems to work just fine everywhere I've deployed it.
For some definition of work. The additional overhead, increased complexity of configuration, incompatible implementations, and other nightmares I have encountered in IPv4 IPSEC, versus the relative ease and convenience of using it in IPv6, make the IPv6 version seem quite worthwhile to me.
A treadle sewing machine works if you choose to use one. That doesn’t mean that an electric sewing machine with CNC stitching capabilities for embroidery, etc. is not a better alternative.
Given that the security provided by both is identical, I'm not sure that's a good example, but it was creative.
I’m sure there are more, but this is a quick list off the top of my head.
There's a bunch of myths too... and "every device will be directly reachable from the entire Internet" is on there.
That’s not a myth, it’s just an incomplete statement that people like you love to truncate for the sake of claiming it is a myth.
The complete statement is “Every device can be directly reachable from the entire internet to the extent allowed by policy.”
The complete statement is quite true. The incomplete statement is false only because it contains a scope which is beyond the ability of what a protocol can deliver, due to the missing words.
Yes. But those extra words mean that I need to carry around exactly the same tricks I use to get through IPv4 NAT/firewall cases. Matthew Kaufman
IPv4 with NAT, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Almost nobody configures their firewall to open up anything.
HuH?
How do I SSH into my host behind my home NAT firewall without configuration of the firewall?
Nobody but you and a few hundred other people on this mailing list SSH into hosts at your home.
SSH, VNC, HTTP, HTTPs, LPD, whatever… Pick your service. SSH was just an example.
Everyone else in the entire world reaches hosts at their house through their firewall just fine because those hosts are their Nest thermostat, or their Dropcam, or their PC running Skype, or maybe (in rare cases) something like LogMeIn.
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services. Even if you want to consider all of those services, the reality is that their codebases could be substantially improved and simplified as well as their security posture improved by eliminating NAT.
None of those people ever touch the settings of the device they had delivered by their ISP and/or purchased at Best Buy. Not ever.
Sure… They all live like sheeple not even realizing that they’ve been handed a deficient and limited internet incapable of living up to its potential. They remain blissfully ignorant that they are a rat in a digital cage because they are unaware of life outside the cage. What’s your point? Are you claiming this makes cages a good thing? Are you claiming that since the rats are not demanding to be let out of their cages, we shouldn’t seek to create an environment where cages are not needed?
You are making no sense here. NAT Traversal techniques provide for outbound connections and/or a way that a pseudo-service can create an inbound connection that looks like an outbound connection to the firewall.
It does not in any way provide for generic inbound access to ordinary services without configuration.
So what?
Nobody (to several levels of statistical significance) needs "generic inbound access to ordinary services". Heck, the only "ordinary services" that exist any more are HTTP/HTTPS.
This simply isn’t true. To the limited extent that it is true, the reality is that it is a consequence of the limitations of IPv4 and NAT rather than a state that anyone other than you ever really considered desirable.
IPv6 — I add a permit statement to the firewall to allow the traffic in to each host/group of hosts that I want and I am done.
IPv6, standard NAT/firewall traversal techniques are used so that things inside your house are reachable as necessary. Still almost nobody configures their firewall to open up anything.
Why would one use NAT with IPv6… You’re making no sense there.
I didn't say you would... but you need firewall traversal, and the "standard NAT and firewall traversal techniques" are how you traverse your IPv6 firewall.
That’s an awful lot of icky overhead vs. the simple clean solution of permitting desired traffic. I suspect that in the IPv6 world, eventually, rather than silly hacks like UPnP, STUN, etc., we will see, instead, standardized APIs for authenticating with the firewall and automated mechanisms for adding permission after authentication.
For those who do, the work needed to open up a few host/port mappings in IPv4 is basically identical to opening up a few hosts and ports for IPv6.
Not really…
For example, let’s say you have 20 machines for which you want to allow inbound SSH access. In the IPv4 world, with NAT, you have to configure an individual port mapping for each machine, and you have to either configure all of the SSH clients or specify the particular port for the machine you want to reach on the command line.
Ok, you go find me 1000 households where nobody in the house is on the NANOG list but where there are 20 machines running SSH already installed.
OK, half a dozen Video Game consoles or whatever other service you want to imagine. Just because the standard way to do things today is with silly workarounds required by the lack of address transparency created by NAT, that doesn’t mean we have to continue to do things so badly in the future.
On the other hand, with IPv6, let’s say the machines are all on 2001:db8::/64. Further, let’s say that I group machines for which I want to provide SSH access within 2001:db8::22:0:0:0/80. I can add a single firewall entry which covers this /80 and I’m done. I can put many millions of hosts within that range and they all are accessible directly for SSH from the outside world.
Takes about 20 seconds to configure my firewall once and then I never really need to worry about it again.
Yeah, so there you are manually configuring your firewall again. Which isn't surprising, because you also want to have 20 hosts offering SSH to the world... which makes you an outlier.
Right… Once per service, or, worst case, once per host per service. Eventually, there will be APIs for this, but today, it’s manual. It’s manual today because being able to make use of it makes me an outlier. In the future, with address transparency, being able to deploy services will not make you an outlier.
Further, in the IPv4 case, I need special client configuration or client invocation effort every time, while with the IPv6 case, I can simply put the hostname in DNS and then use the name thereafter.
Sure... because like most homeowners, you're proficient at editing BIND config files.
Who needs to edit a BIND config file to put something in DNS? Sure, you can, and I do, but there are at least a dozen web-based DNS providers where you can host a zone for free and manage it through a web GUI.
I do not see the above as being equal effort or as yielding equal results.
For the automatic traversal cases, the end-user effort is identical.
Sure, but automatic traversal is the exception not the rule when considering internet services.
I don't know what world you're living in, but every single person I know is a user of one or more software or hardware devices that work just fine behind a firewall, either because they're just uploading things to a service, or periodically checking in with a service, or using automatic traversal techniques.
I’m living in a world where at least some people want to be peers rather than mere consumers. A world where the thought of being able to make phone calls without a phone company has appeal to at least some people. A world where peer-to-peer information and communication is considered a good and healthy thing. Sure, if you want to live in the ABC/Disney/Comcast/NBC/Micr0$0ft fantasy world where people are mindless subjects of their corporate overlords, strictly consuming information, and only that which has the approval of said corporate overlords, then the path you advocate makes perfect sense.
For the incredibly rare case of manual configuration (which as NANOG participants we often forget, since we're adjusting our routers all the time) there is almost no difference for most use cases.
Not true as noted above.
Most people never configure.
Most of the people who do configure need exactly one address and port to work, in which case the work is about the same.
Try configuring port forwarding for a household where there is an Xbox One and a PS3. Not all that uncommon a scenario. Common enough that both Micr0$0ft and Sony technical support have stock scripts for telling you which home gateways you can buy that will work and how to configure them.
You have 20 SSH hosts. You're an outlier, and so yes, in IPv6 there's less typing.
Picking apart the specifics of a random example doesn’t actually make the general situation less relevant.
I'm an outlier too... my house has a Juniper SRX-240 that I configure from the command line at its border. That's not normal. I'm not normal. And that's ok.
On this, we can agree.
As such, I’d say that your statement gets added to the great myths of Matthew Kaufman rather than there being any myth about this being an IPv6 advantage.
I can assure you that it is MUCH easier for me to remote-manage my mother’s machines over their IPv6 addresses than to get to them over IPv4.
Only because you've insisted on doing it the hard way, instead of using something like teamviewer or logmein which "just works”.
So does vnc://hostname (if I have IPv6 or non-NAT IPv4).
With default out-of-the-box firewall settings as your device arrives from Best Buy?
If I were to answer strictly the question as asked, yes. However, no, but let’s compare the costs, assuming a 3-year life-cycle on that piece of shit you bought at Best Buy...

Logmein Pro for Users (cheapest alternative I could find): $99/year
One-time configuration of firewall: $PIZZA for the local computer whiz kid, every 3 years.

I’d argue that the need to configure the firewall is a very cost-effective alternative with a less than one month break-even. Over a 20-year timeframe, you’ve gone from $1980 (Logmein) to $105 (assuming you pay as much as $15 per pizza, which is high).
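The back-of-envelope comparison above, written out (prices as assumed in that paragraph):

```python
# Remote-access cost over 20 years: subscription service vs. one
# firewall permit redone each 3-year hardware cycle.
YEARS = 20
logmein_total = 99 * YEARS           # $99/year subscription
cycles = -(-YEARS // 3)              # ceiling(20/3) = 7 pizza visits
pizza_total = 15 * cycles            # one $15 pizza per visit
print(logmein_total, pizza_total)    # prints: 1980 105
```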
I live in California and have native dual-stack without NAT. She lives in Texas and has native dual-stack with NAT for her IPv4.
And I assume your mother opened up her IPv6 firewall all on her own?
As a matter of fact, she did open up the first connection which then provided me the necessary access to configure the rest for her.
So it isn't actually that hard. Just like it isn't that hard for one address and port for the user of a Slingbox or whatever other random product you can find that doesn't have full traversal capabilities in it.
It wasn’t hard because there was address transparency via IPv6 and a simple permit was all that was needed. No port forwarding, no mapping, just an “Allow inbound packets to port XX TCP”.
Most people won't have the technical skills to adjust the IPv6 firewall that comes with their CPE and/or "Wireless Router" any more than they have the skills to set up IPv4 port mapping.
My mother is about as non-technical as you would imagine the typical grandmother portrayed in most such examples. Technically, she’s a great grandmother these days.
Congratulations to her.
IPv6 has exactly one benefit... there's more addresses. It comes with a whole lot of new pain points, and probably a bunch of security nightmares still waiting to be discovered. And it for sure isn't free.

IPv6 comes with at least one design-advantage: more addresses.
However, more addresses, especially on the scale IPv6 delivers them comes with MANY benefits:
1. Simplified addressing
2. Better aggregation
3. Direct access where permitted
4. Elimination of NAT and its security flaws and disadvantages
5. Simplified topologies
6. Better subnetting
7. Better network planning with sparse allocations
All of those are benefits for the network operator, not the end user.
I can see that all of those benefit the network operator.
However, with the exception of 2 and 7, I would argue that all of them are also of benefit to at least some fraction of end-users.
Some fraction, sure. But since that fraction is well under 0.1%, I'm sticking with my position.
I disagree. I think 3, especially, will be a growing fraction of users as address transparency becomes the norm and developers start to make use of it.
I am an end user and I am seeing benefits from all but 2. I use sparse address allocation to allow me to classify hosts for convenience, for example.
Outlier.
Today. We’re talking about the future. I’m talking about the advantages of moving forward. You’re making excuses for remaining mired in the past.
Note, the original item 6 (corrected above) was autocorrected from subnetting to sunbathing. I have restored it to subnetting.
I preferred sunbathing.
Please, go make yourself as crispy as you desire. Fortunately, for you, the sun is not behind a NAT firewall.
8. Simpler application code
Might be true *if* there was only IPv6. If there's also the need to support IPv4 then supporting *both* is harder than supporting just one.
Only because the need to support NAT traversal comes as overhead in supporting IPv4.
The same code is needed to do IPv6 firewall traversal.
But IPv6 firewall traversal isn’t necessary.
Otherwise, most applications can be written for IPv6 and set the socket option IPV6_V6ONLY=FALSE (which should be the default), and on a dual-stack host the application will simply work and deal with both protocols without ever caring about IPv4. (IPv4 connections are presented to the application as IPv6 connections from a host with the IPv4-mapped address ::ffff:a.b.c.d, where a.b.c.d is the 32-bit IPv4 address of the source host.)
For an operating system and TCP/IP stack where that is true, sure. When you're trying to squeeze things into a Cortex M4 that you want to hang on someone's wall, dual-stack takes more Flash.
Yes, having two stacks in the box takes marginally more flash than one. According to http://www.ipso-alliance.org/wp-content/media/lightweight_os.pdf, an IPv6 stack shouldn’t require more than about 15k of flash. These days, that’s not a lot, even in a small ARM Cortex M4.
9. Reduced complexity in:
   - Proxies
   - Applications
   - Firewalls
   - Logging
   - Monitoring
   - Audit
   - Intrusion Detection
   - Intrusion Prevention
   - DDoS mitigation
Matters not to most home users.
Until it does. I’ve had lots of complaints from end users where they didn’t know that the issue was a proxy problem, application coding error with NAT traversal, Firewall problem, etc., but upon investigation, these were exactly the source of said end-user’s problem.
Define "lots". Most users are having great luck with their current IPv4-only connection and the devices they're buying to use with it.
No, most users are suffering in silence because they have no idea where to go to get their problems solved. Most users haven’t experienced a working internet, so the current level of dysfunction is normal to them and they tolerate it because they have no idea that there are better alternatives available.
Doesn't help corporate users because they also need all that for IPv4 indefinitely.
See above. The more traffic that corporate users can get off of IPv4, the less trouble these things will cause for them. As such, your argument falls flat.
If I'm still getting support tickets due to IPv4 issues, the problems haven't gone away. Again, tell me when I can switch IPv4 off entirely.
Sure… I didn’t say reducing IPv4 utilization would eliminate IPv4 issues. I said it would reduce them. However, I’m confused… if nobody has problems with IPv4, as you describe above, why are you now complaining about the number of IPv4 tickets you get? Seems you want to have your cake and eat it too here.
10. The ability to write software with hope of your codebase working for the next 10 years or more.

I'll bet that there's IPv4 devices running happily 10 years from now.
Maybe, but I bet within 10 years, there’s very little, if any, IPv4 running across the backbone of the internet.
I'd take that bet.
I’ll put $100 on it.
I’m sure there are other benefits as well, but this gives you at least 10.
There are those that would argue that other design advantages include:
1. Fixed length simplified header
Maybe.
2. Stateless Address Autoconfiguration
Disaster, given that I'm still stuck needing DHCPv6 to configure everything that SLAAC won't. Or things that SLAAC only picked up recently (like setting your DNS server) and so are still unsupported in all sorts of gear.
Not a disaster, but perhaps slightly less convenient for your particular situation.
In my situation, most hosts don’t need anything more than an IP address, default gateway, and Recursive Nameserver. RAs provide all of that and my hosts just work fine with SLAAC out of the box.
Not too many WinXP machines at your house I guess.
Nope… I’m very proud to say that there are NO Micr0$0ft boxes in my house.
3. Mobile IP
Remains to be seen if this matters.
I’ve found it quite useful as have several other people I know.
Outlier.
This seems to be your answer to anything that you don’t like. Outlier or not, the protocol is useful.
4. A much cleaner implementation of IPSEC
Sure, but the IPv4 IPSEC seems to work just fine everywhere I've deployed it.
For some definition of work. The additional overhead, increased complexity of configuration, incompatible implementations, and other nightmares I have encountered in IPv4 IPSEC, versus the relative ease and convenience of using it in IPv6, make the IPv6 version seem quite worthwhile to me.
A treadle sewing machine works if you choose to use one. That doesn’t mean that an electric sewing machine with CNC stitching capabilities for embroidery, etc. is not a better alternative.
Given that the security provided by both is identical, I'm not sure that's a good example, but it was creative.
The security provided by IPv4 IPSEC and IPv6 IPSEC is nearly identical as well. The difference is the difficulty of use.
I’m sure there are more, but this is a quick list off the top of my head.
There's a bunch of myths too... and "every device will be directly reachable from the entire Internet" is on there.
That’s not a myth, it’s just an incomplete statement that people like you love to truncate for the sake of claiming it is a myth.
The complete statement is “Every device can be directly reachable from the entire internet to the extent allowed by policy.”
The complete statement is quite true. The incomplete statement is false only because it contains a scope which is beyond the ability of what a protocol can deliver, due to the missing words.
Yes. But those extra words mean that I need to carry around exactly the same tricks I use to get through IPv4 NAT/firewall cases.
Not really… If policy permits you to pass the firewall, then you can pass without tricks. If policy doesn’t allow it, you may be able to traverse the firewall with tricks, but then the question becomes: should you?

True, if you are the one determining the policy, there is that pesky step of expressing the policy to the firewall. In your preferred world, this is done by adding substantial overhead to each and every piece of software and creating profound limitations on how you can use your software. In my preferred world, this is done by configuring the policy in the firewall once, simplifying the application code and increasing the probability that the security policy enforced is the security policy intended.

I guess to each their own.

Owen
On Thu, Jun 4, 2015 at 5:11 AM, Owen DeLong <owen@delong.com> wrote:
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services.
This is an interesting argument/point, but if you remove the rendezvous service, then how do you find the thing in your house? Now the user has to manage DNS, or the service in question has to manage a DNS entry for the customer, right? You'll be moving (some of) the pain from 'nat' to 'dns' (or, more generally, naming and identification). I think, though, that in a better world, a service related to the thing you want to prod from outside would manage this stuff for you. It's important (I think) not to simplify the discussion as "Oh, with ipv6 magic happens!" because there are still problems and design things to overcome, even with unhindered end-to-end connectivity.
Subject: Re: AWS Elastic IP architecture Date: Thu, Jun 04, 2015 at 01:16:03PM -0400 Quoting Christopher Morrow (morrowc.lists@gmail.com):
On Thu, Jun 4, 2015 at 5:11 AM, Owen DeLong <owen@delong.com> wrote:
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services.
This is an interesting argument/point, but if you remove the rendezvous service then how do you find the thing in your house? now the user has to manage DNS, or the service in question has to manage a dns entry for the customer, right?
Or something.
you'll be moving (some of) the pain from 'nat' to 'dns' (or more generally naming and identification). I think though that in a better world, a service related to the thing you want to prod from outside would manage this stuff for you.
Possibly.
It's important (I think) to not simplify the discussion as: "Oh, with ipv6 magic happens!" because there are still problems and design things to overcome even with unhindered end-to-end connectivity.
You have successfully demonstrated that users will need some locating service. More so with the cure-all IPv6, because remembering hex is hard for People(tm).

You have, however, not shown that all the possible ways of building a locating service that become available once the end-points are uniquely reachable (and thus, as long as we're OK with finding just the right host, identifiable) present an equal level of suckage. I believe that while the work indeed can be daunting for a sufficiently pessimal selection of users, the situation so improves (if we look at simplicity of protocol design and resulting fragility) when the end-points can ignore any middleboxes that the net result, measured as inconvenience imposed on a standard End User, will improve.

-- Måns Nilsson primary/secondary/besserwisser/machina MN-1334-RIPE +46 705 989668 Why is everything made of Lycra Spandex?
On Thu, Jun 4, 2015 at 1:44 PM, Måns Nilsson <mansaxel@besserwisser.org> wrote:
You have successfully demonstrated that users will need some locating service. More so with the cure-all IPv6; because remembering hex is hard for People(tm).
but it's not just hex. Even today you (if given a bare ipv4 address) would need some naming/locating service I suspect. Folk can barely remember their email address, nevermind the hostname of their printer/etc for remote use. Today we 'win' because there's some third-party aggregating 'your device' and 'you' and connecting them together 'properly'.
You have, however, not shown that all the possible ways of building a locating service that become available once the end-points are uniquely reachable (and thus, as long as we're OK with finding just the right host, identifiable) present an equal level of suckage.
sure, I wasn't really trying to accomplish that, just to point out that: you still have to find me in the haystack! and 'well then put dns records in your domain' isn't an answer for 99.99+% of users. Even if Owen's swag of 'thousands' of users 'use ssh' is on target there are ~100m users in the US on cable/dsl plant... (so with 10 ssh users ~.01%) that will basically never 'get it'.
I believe that while the work indeed can be daunting for a sufficiently pessimal selection of users, the situation so improves (if we look at simplicity of protocol design and resulting fragility) when the end-points can ignore any middleboxes that the net result, measured as inconvenience imposed on a standard End User, will improve.
I bet we end up with the same rendezvous services though... perhaps we won't have to worry about the 'printer' making a long-term (or even periodic?) connection to that service, but I imagine there'll still be some service complexity. It may be better than the current situation, but that's still to be seen.
On Thu, Jun 4, 2015 at 12:16 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Thu, Jun 4, 2015 at 5:11 AM, Owen DeLong <owen@delong.com> wrote:
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services.
This is an interesting argument/point, but if you remove the rendezvous service then how do you find the thing in your house? now the user has to manage DNS, or the service in question has to manage a dns entry for the customer, right?
You do not remove the locating service; what you remove is the remote proxy service. For example, with a webcam on IPv4, you would connect to a website to download the video. The camera would also connect to the website to upload the video. On IPv6 the webcam would connect to the website to say that it is alive and what its IP is. You would connect to the website and your computer would get the IP and connect directly to the webcam. If there were multiple people connecting, you may even be able to use multicast.
In message <CABidiTJH=+oKpF7OwU+2V4MELaigMTqe3ZdFr51jUKRTpHFdtA@mail.gmail.com>, Philip Dorr writes:
On Thu, Jun 4, 2015 at 12:16 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Thu, Jun 4, 2015 at 5:11 AM, Owen DeLong <owen@delong.com> wrote:
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services.
This is an interesting argument/point, but if you remove the rendezvous service then how do you find the thing in your house? now the user has to manage DNS, or the service in question has to manage a dns entry for the customer, right?
You do not remove the locating service, what you remove is the remote proxy service.
And the DNS is the simplest location service. Windows boxes and Macs can register themselves in the DNS today using standardised protocols. This really isn't a hard thing to do. All you need is a fully qualified hostname, addresses, and update credentials (a username/password (TSIG) or a public key pair (SIG(0))), and you can update the address records using DNS UPDATE. Windows uses GSS-TSIG (Kerberos) to authenticate the UPDATE request. In theory it could also use plain TSIG and/or SIG(0).

What is hard is giving them a globally unique address today, because one doesn't exist for 99.9% of the devices connected in the world, the world having run out of IPv4 addresses roughly 20 years ago. At the moment we are at ~1 address per household for IPv4, and we are heading into < 1 address per household for most of the households in the world.

For a Mac you go to System Preferences -> Sharing -> Edit, tick "Use dynamic global hostname", and add the hostname and TSIG credentials (user/password). The Mac will save them and then update the address records for itself as they change. What has to happen is making this a regular part of setting up a machine for the first time. This requires other OS vendors adding equivalent functionality to their OSes.
For example with a webcam on IPv4, you would connect to website to download the video. The camera would also connect to the website to upload the video.
On IPv6 the webcam would connect to the website to say that it is alive and what its IP is. You would connect to the website and your computer would get the IP and directly connect to the webcam. If there were multiple people connecting, you may even be able to use multicast.
-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On 06/04/2015 01:16 PM, Christopher Morrow wrote:
On Thu, Jun 4, 2015 at 5:11 AM, Owen DeLong <owen@delong.com> wrote:
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services.

This is an interesting argument/point, but if you remove the rendezvous service then how do you find the thing in your house? now the user has to manage DNS, or the service in question has to manage a dns entry for the customer, right?

A large part of my heartburn with this is the proliferation of unidentified rendezvous services, with no hint of an SLA or anything, that are burned into things like door locks, thermostats, washing machines, etc. (also no hint of where, or even what country, the rendezvous in question lives in...). Once I've equipped my house with IoT devices, there will be a bunch (a hundred?) of outbound connections to different rendezvous services. Nothing in the box or literature identifies the server(s) in question either (and likely most of them don't even use https). You want your door lock and thermostat to effectively publish when you are away for a couple of weeks, onto someone else's unidentified server? At least DNS rendezvous allows endpoint security, if the manufacturer even thinks about that...
-- Pete
On Jun 4, 2015, at 6:16 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Thu, Jun 4, 2015 at 5:11 AM, Owen DeLong <owen@delong.com> wrote:
I’d argue that SSH is several thousand, not a few hundred. In any case, I suppose you can make the argument that only a few people are trying to access their home network resources remotely other than via some sort of proxy/rendezvous service. However, I would argue that such services exist solely to provide a workaround for the deficiencies in the network introduced by NAT. Get rid of the stupid NAT and you no longer need such services.
This is an interesting argument/point, but if you remove the rendezvous service then how do you find the thing in your house? now the user has to manage DNS, or the service in question has to manage a dns entry for the customer, right?
DNS is pretty easy. There are dozens of free web-UI based DNS services out there. Some of them are even run by registrars.
you'll be moving (some of) the pain from 'nat' to 'dns' (or more generally naming and identification). I think though that in a better world, a service related to the thing you want to prod from outside would manage this stuff for you.
I’m unconvinced. Perhaps I prefer to create an entry once vs. pay for some other service to do this and charge me on a monthly basis for a one-time action.
It's important (I think) to not simplify the discussion as: "Oh, with ipv6 magic happens!" because there are still problems and design things to overcome even with unhindered end-to-end connectivity.
I made no attempt to declare that there was any magic with IPv6. Indeed, my claim is that less magic is required. Owen
On Wed, Jun 3, 2015 at 7:56 AM, Owen DeLong <owen@delong.com> wrote:
For example, let’s say you have 20 machines for whom you want to allow inbound SSH access. In the IPv4 world, with NAT, you have to configure an individual port mapping for each machine and you have to either configure all of the SSH clients, or, specify the particular port for the machine you want to get to on the command line.
in the original case in question the fact that there's nat happening isn't material... so all of this discussion of NAT is a red herring, right? the user of AWS services cares not that 'nat is happening', because they can simply RESTful up a VM instance and ssh into it in ~30 seconds, no config required. let's skip all NAT discussions on this topic from here on out, yes?
we are starting to waste packets arguing over some private intellectual property

On Wed, Jun 3, 2015 at 3:24 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Wed, Jun 3, 2015 at 7:56 AM, Owen DeLong <owen@delong.com> wrote:
For example, let’s say you have 20 machines for whom you want to allow inbound SSH access. In the IPv4 world, with NAT, you have to configure an individual port mapping for each machine and you have to either configure all of the SSH clients, or, specify the particular port for the machine you want to get to on the command line.
in the original case in question the fact that there's nat happening isn't material... so all of this discussion of NAT is a red herring, right? the user of AWS services cares not that 'nat is happening', because they can simply RESTful up a VM instance and ssh into it in ~30 seconds, no config required.
let's skip all NAT discussions on this topic from here on out, yes?
On Jun 3, 2015, at 9:24 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Wed, Jun 3, 2015 at 7:56 AM, Owen DeLong <owen@delong.com> wrote:
For example, let’s say you have 20 machines for whom you want to allow inbound SSH access. In the IPv4 world, with NAT, you have to configure an individual port mapping for each machine and you have to either configure all of the SSH clients, or, specify the particular port for the machine you want to get to on the command line.
in the original case in question the fact that there's nat happening isn't material... so all of this discussion of NAT is a red herring, right? the user of AWS services cares not that 'nat is happening', because they can simply RESTful up a VM instance and ssh into it in ~30 seconds, no config required.
That depends… If they have a public address ON their machine or dedicated to their machine, then, they MAY not care that NAT is occurring. If they want to run SIP or some other protocol which depends on being able to tell the far end where to connect for secondary channels, then they may care anyway. You can reduce the number of things that NAT breaks, but you can’t eliminate them all.
let's skip all NAT discussions on this topic from here on out, yes?
Only if you can promise me 100% that the NAT in question will not break anything. Owen
On Thu, Jun 4, 2015 at 5:16 AM, Owen DeLong <owen@delong.com> wrote:
On Jun 3, 2015, at 9:24 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
let's skip all NAT discussions on this topic from here on out, yes?
Only if you can promise me 100% that the NAT in question will not break anything.
:) people don't seem to be bothered today.
On Jun 4, 2015, at 6:10 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Thu, Jun 4, 2015 at 5:16 AM, Owen DeLong <owen@delong.com> wrote:
On Jun 3, 2015, at 9:24 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
let's skip all NAT discussions on this topic from here on out, yes?
Only if you can promise me 100% that the NAT in question will not break anything.
:) people don't seem to be bothered today.
People seem to tolerate it today. It is not clear to what extent they are not bothered vs. to what extent they suffer in silence because they do not know of any viable alternative. Owen
On Jun 1, 2015, at 4:30 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Mon, Jun 1, 2015 at 3:06 AM, Owen DeLong <owen@delong.com> wrote:
On May 31, 2015, at 7:46 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Sun, May 31, 2015 at 9:07 PM, Owen DeLong <owen@delong.com> wrote:
As I said before:
Host Virtual (vr.org), Softlayer (softlayer.com), Linode (linode.com)
All have full dual-stack support.
<snip>
At the risk of feeding the troll...
This isn't just an AWS problem.
So... ok. What does it mean, for a customer of a cloud service, to be ipv6 enabled?
It means that I should be able to develop my cloud application(s) with full native IPv6 support and expect to be able to serve IPv4-only, IPv6-only, and dual-stack customers using native IP stacks on my virtual machines without requiring external proxies, translators, etc. from the cloud service provider.
Ok, I suppose as a long term goal that seems fine. I don't get why 'ipv6 address on my vm' matters a whole bunch (*in a world where v4 is still available to you I mean), but as a long term goal, sure fine.
Short term, I don’t want to have to pay to develop my application twice. I want to develop once for IPv6 and be done. Using the IPV6_V6ONLY=FALSE socket option, I should be able to support both protocols from an application developed for IPv6 on machines which have both protocol capabilities.

If you deliver to my native IPv6 or native IPv4 address, then I get the real source address of the originator of the connection, and I can develop my application normally and not have to re-engineer it around a protocol change later. If you don’t, then, if I care about the source of the connection request in my application, I either have to write the application to look elsewhere for it if the connection came in on IPv4, or I have to get even more creative about it. Worse, it means that I’m having to maintain IPv4-specific cruft in my application, some of which may be necessary to support IPv6 connections that arrive over IPv4 because I can’t get native IPv6.

No, this is not a long-term goal; it’s a short-term need. Otherwise, my development costs are increased and my development time is extended. Additionally, the extra code creates additional opportunities for errors, security holes, etc. In short, there’s nothing good about not being able to develop a native application, and there are many drawbacks.
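The dual-stack pattern Owen is invoking can be shown in a few lines of Python. This is a minimal sketch: one AF_INET6 listener with IPV6_V6ONLY cleared accepts IPv4 clients too, which show up as v4-mapped addresses (::ffff:a.b.c.d), so a single accept loop serves both protocols. The port is ephemeral and the values are illustrative.

```python
import socket

# One IPv6 listening socket serving both protocols: with IPV6_V6ONLY
# cleared, IPv4 clients appear as v4-mapped addresses (::ffff:a.b.c.d).
# Binding port 0 asks the kernel for an ephemeral port.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # accept v4 too
srv.bind(("::", 0))  # all interfaces
srv.listen(5)

host, port = srv.getsockname()[:2]
print(f"dual-stack listener on [{host}]:{port}")
srv.close()
```

The application then only ever sees IPv6-shaped addresses from accept(), which is exactly why one codebase suffices; whether the OS defaults IPV6_V6ONLY on or off varies by platform, so setting it explicitly before bind() is the portable move.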
In the short term though, that was my question: "What should be prioritized, and then get that information to aws/gce/rackspace/etc product managers" because I suspect their engineers are not silly, they know that v6 should be part of the plan... but the PM folk are saying: "No one is asking for this v6 thing, but lots of people want that other shiney bauble... so build the shiney!"
I have little doubt that this is true. However, by the time all of their customers start screaming for IPv6, do you think they’re going to be ready/willing/able to wait the 6-18 months it will take to deploy it after that point? I think they’re going to be coming in crisis mode wondering how they can get IPv6 turned on yesterday.
What really matters for a cloud service user? What information could be surfaced to the cloud providers in order to get the most important ipv6 'stuff' done 'now’?
Ideally, simple native routing of IPv6 to provisioned hosts should suffice in most cases.
o Is it most important to be able to address every VM you create with an ipv6 address?
Yes.
long term sure, short term though.. really?? isn't it more important to be able to terminate v6 services from 'public' users? (and v4 as well because not everyone has v6 at home/work/mobile)
Yes, but I need to be able to terminate the v6 services on the v6 socket on my VM, not on some proxy that masks the source address. If you have a 6->4 proxy/nat/whatever that somehow hands me a packet with an IPv6 from address in the IPv4 packet header, that’s a pretty neat trick. If you have operating system stacks that will support this and hand said source address to the application through the standard accept() API, that’s an even better trick. If you don’t have that for every platform supported on AWS (Linux/BSD/Win/etc.), then I think we need the above mentioned native connectivity, no?
o Is it most important to be able to talk to backend services (perhaps at your prem) over ipv6?
It’s hard to imagine how you could provide the first one above without having this one come along for the ride.
o Is it most important that administrative interfaces to the VM systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be ipv6 reachable?
This would be the one where I would most be able to tolerate a delay.
o Is it most important to be able to terminate ipv6 connections (or datagrams) on a VM service for the public to use?
If you can’t get the first one, this might be adequate as a short-term fallback for some applications. However, it’s far from ideal and not all that useful.
I agree it's not ideal, and I was making a list of 'short term goals' that could be prioritized and get us all to the v6 utopia later.
The thing is that in most cases, the sum of implementing this and managing it over a 3-6 month period is more effort than simply routing native IPv6 and adding IPv6 to your provisioning systems. As such, it’s hard for me to justify putting this short term goal in place vs. just deploying native IPv6 connectivity.
I don't see, especially if the vm networking is unique to each customer, that 'ipv6 address on vm' is hugely important as a first/important goal. I DO see that landing publicly available services on an ipv6 endpoint is super helpful.
Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where:
1. The services basically have to be supported by some form of proxy/nat/etc. So long as you are using a supported L4 protocol, that might be fine, but not everything fits in TCP/UDP/ICMP. Generally supporting GRE, IKE, and application-specific protocols becomes an issue.
I figured the simplest path from v6 to v4 was: "Rip the header and extension headers off, make a v4 packet, deliver to the VM"
How do you do that without masking the source address of the IPv6 host at the other end of the connection? If you don’t have a good answer to that question… This fails.
aside from "yes, your request came from <ipv6 literal>" that should 'just work', you do have to maintain some state to deliver back, depending on the implementation (but even that is probably able to be hidden from the 'user' and provisioned/capacity-planned independent of the 'user').
Sure, this is all fine and dandy from the user perspective, but it sucks in very big ways from the application developer perspective. Imagine running a service (let’s say not a web-based service for the moment) with 100s of thousands or even millions of users from all over the world. Now, imagine if every connection in your application logs came from 192.9.200.5. How, exactly, do you track where your users are coming from? How do you tell who connected from what network? How do you track down the abuser after you find him in log entries which share the same source address as everything else?
2. The developer has to develop and maintain an IPv4-compatible codebase rather than be able to use the dual stack capabilities of a host with IPV6_V6ONLY=FALSE in the socket options. This delays the ability to produce native IPv6 applications.
3. Proxies and translators add complexity, increase fragility, and reduce performance.
I think this point is cogent, but... it's also part of the pie that 'aws' manages for you, the user. Or rather, they run a 'service' which takes care of this and has SLAs and SLOs... "All packets delivered with 99.99% having Xms extra latency!" "Service has 99.999% uptime!" "Throughput comparable (99.99% at peak, 99% of the time) to straight/native v4!"
I have yet to see a proxy/nat implementation that delivers on those numbers. Forgive my skepticism, but until this is demonstrated, I think I’d rather focus efforts on native deployments than miserable bandaids.
Would AWS (or any other cloud provider that's not currently up on the v6 bandwagon) enabling a loadbalanced ipv6 vip for your public service (perhaps not just http/s services even?) be enough to relieve some of the pressure on other parties and move the ball forward meaningfully enough for the cloud providers and their customers?
For the reasons outlined above, I don’t really think so.
ok, I figured that for short-term while the providers figure out: "Oh yea, this v6 thing is pretty simple in our encap'd world... why didn't we just turn it on originally?"
getting v6 endpoints with little hassle for the 'user' (vm user) would help us all a bunch.
It might help some, but given the extent to which it complicates application development and breaks things unless you’re running the simplest case of web server, I don’t think it helps much. Owen
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally). Would you rather have:

1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6, or it has some IPv6 support that is always lagging behind and whose bugs don't get fixed in a timely manner. Or,

2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space, where all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite that connectivity often being worse than the IPv4 path), and the 2 people who are on IPv6-only networks can reach your services too.

Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc., all work *better* when the native environment is IPv6, everyone will be choosing #2. And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because with dual stack you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4).

Matthew Kaufman
On Mon, Jun 1, 2015 at 10:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
Facebook selected IPv6-only as outlined above http://blog.ipspace.net/2014/03/facebook-is-close-to-having-ipv6-only.html
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4)
Matthew Kaufman
fb is not a 'cloud provider'. it's orthogonal to the question.

t

On Mon, Jun 1, 2015 at 2:36 PM, Ca By <cb.list6@gmail.com> wrote:
On Mon, Jun 1, 2015 at 10:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
Facebook selected IPv6-only as outlined above
http://blog.ipspace.net/2014/03/facebook-is-close-to-having-ipv6-only.html
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4)
Matthew Kaufman
Originally I asked because I was in the process of thinking out loud about what options there are for disaster recovery. I could do anycast BGP: advertise out, say, a /24 of "elastic IP" and internally have that block running inside our data center interconnect dmvpn tunnels. We do have WAN opt, so it probably will be faster running inside the tunnel; not sure what impact it might have on some applications though. So if a VM moves, I need to generate 2 host routes, public and private IPv4 (those can be the same); some customers have VPN/direct connect, so they also need to get to the private IPv4 as well as the public one... it could get to be a pain to manage.

Dimension Data Cloud does have dual stack for cloud VMs. Personally I like IPv6 since it's so much easier to plan around... Customers can bring their own IPv4 addresses to the cloud, while the IPv6 is unique, so if you don't want to NAT your private IPv4, you could just use IPv6. Our orchestration tools are dual stack as well... making a REST API call with Python to IPv6 is as easy as to IPv4. IPv6 makes a big difference for us in monitoring customer VMs, since they can bring their own IPv4 which might overlap.

Honestly, I don't think people care much about IPv6 yet, but hey, we just dual stack for the future. So if you want to load balance IPv6 to either IPv6 or IPv4 real servers, knock yourself out. It's for marketing as well... AWS doesn't do it, but we do, and with the intercloud solution from Cisco, moving from AWS to DiData would be just a click away :)

On Mon, Jun 1, 2015 at 2:43 PM, Todd Underwood <toddunder@gmail.com> wrote:
fb is not a 'cloud provider'.
it's orthogonal to the question.
t
On Mon, Jun 1, 2015 at 2:36 PM, Ca By <cb.list6@gmail.com> wrote:
On Mon, Jun 1, 2015 at 10:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
Facebook selected IPv6-only as outlined above
http://blog.ipspace.net/2014/03/facebook-is-close-to-having-ipv6-only.html
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4)
Matthew Kaufman
The question that Matthew Kaufman proposed was specifically asking about app architecture deployments, so what Facebook is choosing to do is entirely germane.

- Matt

On Mon, Jun 01, 2015 at 02:43:27PM -0400, Todd Underwood wrote:
fb is not a 'cloud provider'.
it's orthogonal to the question.
t
On Mon, Jun 1, 2015 at 2:36 PM, Ca By <cb.list6@gmail.com> wrote:
On Mon, Jun 1, 2015 at 10:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
Facebook selected IPv6-only as outlined above
http://blog.ipspace.net/2014/03/facebook-is-close-to-having-ipv6-only.html
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4)
Matthew Kaufman
-- Designing an effective undergrad CS degree is hard. It's no wonder so many ivy-league schools have more or less given up and turned into Java Certification shops. -- Steve Yegge, http://steve-yegge.blogspot.com/2007/06/rich-programmer-food.html
The question that Matthew Kaufman proposed was specifically asking about app architecture deployments, so what Facebook is choosing to do is entirely germane.
I'd lean more on the "ipv6 evangelism" side of the discussion, but: Facebook controls the whole stack and can require buy-in from their apps people to push IPv6 first. If that costs them dev time to patch some OSS to handle it gracefully, that's their business decision.

In the "cloud host" domain, you're dealing with a much more heterogeneous environment where the provider doesn't control the whole stack up to the apps. Making the platform as frictionless as possible for customers is key; customer X is not going to like your platform much if widget Y doesn't run properly "because IPv6". Sure, widget Y should get its excrement together and handle it, but all customer X sees is "widget Y fails on provider A, but runs fine on provider B" where provider A was v6-only internal but provider B is either v4-only or dual stack. Guess where customer X spends their dollars now?

I'm on your side, here: I run my own stuff v6-first wherever possible and have filed bug reports, submitted workarounds/patches, etc. We need people doing that to push things forward. On this given point, though: Facebook -ne generic hosting platform

-- Hugo
Hugo Slabbert wrote:
snip
On this given point, though: Facebook -ne generic hosting platform
True, but it does represent a business decision to choose IPv6. The relevant point here is that the "NEXT" facebook/twitter/snapchat/... is likely being pushed by clueless investors into outsourcing their infrastructure to AWS/Azure/Google-cloud. This will prevent them from making the same business decision about system efficiency and long term growth that Facebook made due to decisions made by the cloud service operator.

From my perspective, most of this conversation has centered on the needs of the service, and tried very hard to ignore the needs of the customer despite Owen and others repeatedly raising the point. While the needs of the service do impact the cost of delivery, a broken service is still broken. Personally I would consider "free" to be overpriced for a broken service, but maybe that is just me.

In any case, if the VM interface doesn't present what looks like a native IPv6 service to the application developer, IPv6 usage will be curtailed and IPv4 growing pains will continue to get worse.

Tony
Agree with everything in your post.

-- Hugo

----- Original Message -----
From: Tony Hain <alh-ietf@tndh.net>
Sent: 2015-06-01 - 16:20
To: 'Hugo Slabbert' <hugo@slabnet.com>, 'Matt Palmer' <mpalmer@hezmatt.org>
Subject: RE: AWS Elastic IP architecture
Hugo Slabbert wrote:
snip
On this given point, though: Facebook -ne generic hosting platform
True, but it does represent a business decision to choose IPv6. The relevant point here is that the "NEXT" facebook/twitter/snapchat/... is likely being pushed by clueless investors into outsourcing their infrastructure to AWS/Azure/Google-cloud. This will prevent them from making the same business decision about system efficiency and long term growth that Facebook made due to decisions made by the cloud service operator.
From my perspective, most of this conversation has centered on the needs of the service, and tried very hard to ignore the needs of the customer despite Owen and others repeatedly raising the point. While the needs of the service do impact the cost of delivery, a broken service is still broken. Personally I would consider "free" to be overpriced for a broken service, but maybe that is just me.
In any case, if the VM interface doesn't present what looks like a native IPv6 service to the application developer, IPv6 usage will be curtailed and IPv4 growing pains will continue to get worse.
Tony
On Mon, Jun 1, 2015 at 7:20 PM, Tony Hain <alh-ietf@tndh.net> wrote:
True, but it does represent a business decision to choose IPv6. The relevant point here is that the "NEXT" facebook/twitter/snapchat/... is likely being pushed by clueless investors into outsourcing their infrastructure to AWS/Azure/Google-cloud.
;; ANSWER SECTION:
www.snapchat.com.	3433	IN	CNAME	ghs.google.com.
ghs.google.com.		21599	IN	CNAME	ghs.l.google.com.
ghs.l.google.com.	299	IN	A	64.233.176.121

snapchat seems to be doing just fine on 'google cloud services' though? oh:

;; ANSWER SECTION:
www.snapchat.com.	3388	IN	CNAME	ghs.google.com.
ghs.google.com.		21599	IN	CNAME	ghs.l.google.com.
ghs.l.google.com.	299	IN	AAAA	2607:f8b0:4002:c06::79

ha!
-----Original Message-----
From: christopher.morrow@gmail.com [mailto:christopher.morrow@gmail.com] On Behalf Of Christopher Morrow
Sent: Monday, June 01, 2015 5:10 PM
To: Tony Hain
Cc: Hugo Slabbert; Matt Palmer; nanog list
Subject: Re: AWS Elastic IP architecture
On Mon, Jun 1, 2015 at 7:20 PM, Tony Hain <alh-ietf@tndh.net> wrote:
True, but it does represent a business decision to choose IPv6. The relevant point here is that the "NEXT" facebook/twitter/snapchat/... is likely being pushed by clueless investors into outsourcing their infrastructure to AWS/Azure/Google-cloud.
;; ANSWER SECTION:
www.snapchat.com.	3433	IN	CNAME	ghs.google.com.
ghs.google.com.		21599	IN	CNAME	ghs.l.google.com.
ghs.l.google.com.	299	IN	A	64.233.176.121

snapchat seems to be doing just fine on 'google cloud services' though? oh:

;; ANSWER SECTION:
www.snapchat.com.	3388	IN	CNAME	ghs.google.com.
ghs.google.com.		21599	IN	CNAME	ghs.l.google.com.
ghs.l.google.com.	299	IN	AAAA	2607:f8b0:4002:c06::79
ha!
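The dig checks above can be approximated in a few lines of Python. Note that getaddrinfo resolves address literals locally, so this sketch runs without a DNS query (the literals are the ones from the dig output; the function name is mine):

```python
import socket

def record_types(host, port=443):
    """Return which address families (A vs AAAA) a host resolves to."""
    families = set()
    for family, *_rest in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
        families.add("AAAA" if family == socket.AF_INET6 else "A")
    return families

# literals from the dig output; these resolve locally, no DNS needed:
print(record_types("64.233.176.121"))          # {'A'}
print(record_types("2607:f8b0:4002:c06::79"))  # {'AAAA'}
# record_types("www.snapchat.com") would do a real lookup and may return both
```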
Try https://snapchat.com and see if you ever get an IPv6 connection... Yes an application aware proxy can hack some services into appearing to work, but they really fail the service customer because a site may appear to be up over IPv6 until the user switches to https, then having to switch to IPv4 end up appearing dead because IPv4 routing is having a bad hair day.
On Tue, Jun 2, 2015 at 12:25 AM, Tony Hain <alh-ietf@tndh.net> wrote:
-----Original Message-----
From: christopher.morrow@gmail.com [mailto:christopher.morrow@gmail.com] On Behalf Of Christopher Morrow
Sent: Monday, June 01, 2015 5:10 PM
To: Tony Hain
Cc: Hugo Slabbert; Matt Palmer; nanog list
Subject: Re: AWS Elastic IP architecture
On Mon, Jun 1, 2015 at 7:20 PM, Tony Hain <alh-ietf@tndh.net> wrote:
True, but it does represent a business decision to choose IPv6. The relevant point here is that the "NEXT" facebook/twitter/snapchat/... is likely being pushed by clueless investors into outsourcing their infrastructure to AWS/Azure/Google-cloud.
;; ANSWER SECTION:
www.snapchat.com.	3433	IN	CNAME	ghs.google.com.
ghs.google.com.		21599	IN	CNAME	ghs.l.google.com.
ghs.l.google.com.	299	IN	A	64.233.176.121

snapchat seems to be doing just fine on 'google cloud services' though? oh:

;; ANSWER SECTION:
www.snapchat.com.	3388	IN	CNAME	ghs.google.com.
ghs.google.com.		21599	IN	CNAME	ghs.l.google.com.
ghs.l.google.com.	299	IN	AAAA	2607:f8b0:4002:c06::79
ha!
Try https://snapchat.com and see if you ever get an IPv6 connection... Yes an application aware proxy can hack some services into appearing to work, but they really fail the service customer because a site may appear to be up over IPv6 until the user switches to https, then having to switch to IPv4 end up appearing dead because IPv4 routing is having a bad hair day.

;; QUESTION SECTION:
;snapchat.com.			IN	AAAA

there is no AAAA for the bare domain... and the bare domain appears to be served from amazon space (54.192.48.27)

~$ openssl s_client -connect snapchat.com:443
CONNECTED(00000003)
139892295607968:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:770:

aside from that .... no https listener. Your wang shots are not worth encrypting I suppose?
On Mon, 01 Jun 2015 21:25:52 -0700, Tony Hain said:
Try https://snapchat.com and see if you ever get an IPv6 connection...
Obviously some gremlins got busy when they got called out on NANOG...

% wget https://www.snapchat.com
--2015-06-03 13:13:00--  https://www.snapchat.com/
Resolving www.snapchat.com (www.snapchat.com)... 2607:f8b0:400d:c06::79, 74.125.22.121
Connecting to www.snapchat.com (www.snapchat.com)|2607:f8b0:400d:c06::79|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'index.html'

index.html    [ <=> ]   4.35K  --.-KB/s    in 0s

2015-06-03 13:13:03 (33.5 MB/s) - 'index.html' saved [4458]

When I hit it with Firefox, IPFox reports the connection is ipv6 as well (but a bit harder to get a screenshot)...
On Monday, June 1, 2015, Tony Hain <alh-ietf@tndh.net> wrote:
Hugo Slabbert wrote:
snip
On this given point, though: Facebook -ne generic hosting platform
True, but it does represent a business decision to choose IPv6. The relevant point here is that the "NEXT" facebook/twitter/snapchat/... is likely being pushed by clueless investors into outsourcing their infrastructure to AWS/Azure/Google-cloud. This will prevent them from making the same business decision about system efficiency and long term growth that Facebook made due to decisions made by the cloud service operator.
This is the exact case for www.duckduckgo.com. They were IPv6, moved to AWS, and lost support - no AAAA today.

To be honest, $dayjob may be next to lose IPv6 if/when www goes to the cloud.
From my perspective, most of this conversation has centered on the needs of the service, and tried very hard to ignore the needs of the customer despite Owen and others repeatedly raising the point. While the needs of the service do impact the cost of delivery, a broken service is still broken. Personally I would consider "free" to be overpriced for a broken service, but maybe that is just me.
In any case, if the VM interface doesn't present what looks like a native IPv6 service to the application developer, IPv6 usage will be curtailed and IPv4 growing pains will continue to get worse.
Tony
On Mon, Jun 1, 2015 at 1:49 PM, Matthew Kaufman <matthew@matthew.at> wrote:
1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space...
this point keeps coming up... I don't see that 'overlapping ipv4' matters at all here. it is presented to the customer (vm operator) as 'a flat-ish lan' where you poke from machine to machine via names.

(so it seems like a rathole/FUD-problem we can just stop talking about now)

-chris
On 6/1/2015 12:12 PM, Christopher Morrow wrote:
On Mon, Jun 1, 2015 at 1:49 PM, Matthew Kaufman <matthew@matthew.at> wrote:
1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space...
this point keeps coming up... I don't see that 'overlapping ipv4' matters at all here. it is presented to the customer (vm operator) as 'a flat-ish lan' where you poke from machine to machine via names.
(so it seems like a rathole/FUD-problem we can just stop talking about now)
-chris
I have deployed services in clouds where the overlapping RFC1918 space did present challenges to the software stack that was trying to exchange node reachability as IP/port. So yes, there were and still are cases where existing software that is not aware of potential overlapped assignments can break. Matthew Kaufman
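A cheap guard against the failure mode Matthew describes - a node advertising its own IP/port into a cluster when that IP comes from (possibly reused) RFC 1918 space - can be sketched with the stdlib ipaddress module. The function name and usage are mine, not from any particular cluster stack:

```python
import ipaddress

def is_ambiguous(addr: str) -> bool:
    """True if the address is not globally unique (RFC 1918, link-local,
    documentation space, etc.), meaning another tenant in the same cloud
    may hold the same address, so it is unsafe to advertise as
    "how to reach me" beyond the local network."""
    return not ipaddress.ip_address(addr).is_global

print(is_ambiguous("10.20.30.40"))   # True  - 10/8, may be reused per tenant
print(is_ambiguous("8.8.8.8"))       # False - globally routable
print(is_ambiguous("2001:db8::1"))   # True  - documentation prefix
```

Software that refuses (or at least warns) when asked to gossip a non-global address across network boundaries fails fast instead of mysteriously connecting to the wrong tenant's node.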
On 6/1/15, 1:49 PM, "Matthew Kaufman" <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
“fraction” is nearly 1/5 in the U.S., and growing fast: https://www.vyncke.org/ipv6status/project.php

I don’t know your source for “often being worse,” but I have several sources saying “lower latency” (see NANOG60, IPv6 Performance Panel, and see Facebook’s numbers from World IPv6 Congress).
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
For certain values of “everyone.”
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4)
I agree with that. Lee
Matthew Kaufman
On Mon, Jun 01, 2015 at 10:49:09AM -0700, Matthew Kaufman wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
I'd much rather have this. In fact, I'm building this at the moment. It simplifies so many things, like not having to worry about address assignment, choosing appropriately-sized subnets, address re-use, etc. Having direct access to everything from the outside world without having to deal with NAT/VPN/a jump box makes so many things smoother, too. Everything I've deployed (and yes, all the components are OSS) has dealt with IPv6 just fine, and everything I've considered deploying claims IPv6 support. I've had to submit one patch for fixing an IPv6 bug, and because it's OSS, I can do that! - Matt
On Jun 1, 2015, at 6:49 PM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
Yes. For one thing, if AWS did this, the open source software would very quickly catch up and IPv4 would be the stack no longer getting primary maintenance in that software. Additionally, it’s easy for me to hide an IPv4 address in a translated to IPv6 packet header. Not so much the other way around.
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
There are a lot more than 2 people on IPv6-only networks at this point. Most T-Mobile Droid users, for example are on an IPv6-only network at this time. True, T-Mo provides 464Xlat services for those users (which is why T-Mo doesn’t work for iOS users), but that’s still an IPv6-only network.
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
To the best of my knowledge, Postgres, MySQL, Oracle, and NoSQL all support IPv6. Squid and several other caches have full IPv6 support. We don’t even need better… We just need at least equal.

However, if, instead of taking a proactive approach and deploying IPv6 in a useful way prior to calamity, you would rather wait until the layers and layers of hacks that are increasingly necessary to keep IPv4 alive reach such a staggering proportion that no matter how bad the IPv6 environment may be, IPv4 is worse, I suppose that’s one way we can handle the transition.

Personally, I’d rather opt for #1 early and use it to drive the necessary improvements to the codebases you mentioned and probably some others you didn’t.
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4)
Sure, sort of. You actually do get some savings with dual-stack, because you reduce the cost and the rate at which the IPv4 complexity continues to increase. You also reduce the time before you’ll be able to fully deprecate IPv4 and the amount of additional capital that has to be expended hacking, cobbling, and otherwise creating cruft for the sole purpose of keeping IPv4 alive and somewhat functional.

Owen
On May 31, 2015, at 11:29 AM, Matthew Kaufman <matthew@matthew.at> wrote:
Since your network has IPv6, I fail to see the issue.
Nobody is anywhere near being able to go single-stack on IPv6, so AWS is just another network your customers will continue to reach over v4. So what?
Sigh… The point is that all of the services and applications being built on and delivered over AWS are stuck in the IPv4 mud until such time as they can get IPv6 from AWS or move to a different cloud provider.
Heck, if v6 support from a cloud hosting company is so important, I see a great business opportunity in your future.
There are already several cloud hosting companies that provide full dual-stack support. I already mentioned several of them earlier in the thread, so this is a rather silly conclusion to draw from the thread as a whole.

Remember where this all started… Someone asked if the internal Amazon structure was using LISP for encapsulation. I made the semi-sarcastic comment that if they were using LISP, they probably wouldn’t have so much difficulty supporting IPv6, therefore they probably aren’t using LISP. My statement was taken all sorts of other ways by various people.

Nonetheless, the bottom line remains the same: AWS can’t do IPv6 outside of a very tiny limited space which provides a solution only for one particular application (pretending to provide IPv6 web services from an IPv4-only web server through a proxy). People who are building applications and considering hosting their applications in the cloud should seriously consider whether this limitation in AWS matters to them.

IMHO, forward-thinking application developers will eschew AWS in favor of clouds that have dual-stack support and build dual-stack capable applications. YMMV.

Owen
Matthew Kaufman
(Sent from my iPhone)
On May 31, 2015, at 10:57 AM, Owen DeLong <owen@delong.com> wrote:
Sigh…
IPv6 has huge utility.
AWS’ implementation of IPv6 is brain-dead and mostly useless for most applications.
I think if you will review my track record over the last 5+ years, you will plainly see that I am fully aware of the utility and need for IPv6.
http://lmgtfy.com/?q=owen+delong+ipv6
My network (AS1734) is fully dual-stacked, unlike AWS.
If AWS is so convinced of the utility of IPv6, why do they continue to refuse to do a real implementation that provides IPv6 capabilities to users of their current architecture.
Currently, on AWS, the only IPv6 is via ELB for classic EC2 hosts. You cannot put a native IPv6 address on an AWS virtual server at all (EC2 or VPC). Unless your application is satisfied by running an IPv4-only web server which has an IPv6 VIP proxy in front of it with some extra headers added by the proxy to help you parse out the actual source address of the connection, then your application cannot use IPv6 on AWS.
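For the one pattern that does work - an IPv6 VIP proxy in front of an IPv4-only server - the application has to recover the real client address from the headers the proxy adds. A sketch, assuming the ELB-style X-Forwarded-For convention (first entry is the client, later entries are intermediate proxies; trust it only when the request genuinely arrived via your proxy, since clients can forge the header):

```python
def client_addr(headers: dict, peer_ip: str) -> str:
    """Recover the original client address behind a v6-terminating proxy.
    Assumes the proxy appends the client to X-Forwarded-For; the first
    entry is the client, the rest are proxies along the path."""
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_ip  # direct connection, no proxy in the path

print(client_addr({"X-Forwarded-For": "2001:db8::9, 10.0.0.2"}, "10.0.0.2"))
# 2001:db8::9 - the v6 client, even though the app's socket saw only v4
```

This is exactly the "extra headers added by the proxy to help you parse out the actual source address" workaround described above; anything below HTTP (raw TCP/UDP services) gets no such help.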
As such, I stand by my statement that there is effectively no meaningful support for IPv6 in AWS, period.
AWS may disagree and think that ELB for classic EC2 is somehow meaningful, but their lack of other support for any of their modern architectures and the fact that they are in the process of phasing out classic EC2 makes me think that’s a pretty hard case to make.
Owen
On May 31, 2015, at 9:01 AM, Blair Trosper <blair.trosper@gmail.com> wrote:
Disagree, and so does AWS. IPv6 has a huge utility: being a universal, inter-region management network (a network that unites traffic between regions on public and private netblocks). Plus, at least the CDN and ELBs should be dual-stack, since more and more ISPs are turning on IPv6.
On Sun, May 31, 2015 at 8:40 AM, Owen DeLong <owen@delong.com> wrote:

I wasn’t being specific about VPC vs. Classic.
The support for IPv6 in Classic is extremely limited and basically useless for 99+% of applications.
I would argue that there is, therefore, effectively no meaningful support for IPv6 in AWS, period.
What you describe below seems to me that it would only make the situation I described worse, not better in the VPC world.
Owen
On May 31, 2015, at 4:23 AM, Andras Toth <diosbejgli@gmail.com> wrote:
Congratulations for missing the point Matt, when I sent my email (which by the way went for moderation) there wasn't a discussion about Classic vs VPC yet. The discussion was "no ipv6 in AWS" which is not true as I mentioned in my previous email. I did not state it works everywhere, but it does work.
In fact, as Owen mentioned the following, I assumed he is talking about Classic, because this statement is only true there. In VPC you can define your own IP subnets and they can overlap with other customers, so basically everyone can have their own 10.0.0.0/24, for example.

"They are known to be running multiple copies of RFC-1918 in disparate localities already. In terms of scale, modulo the nightmare that must make of their management network and the fragility of what happens when company A in datacenter A wants to talk to company A in datacenter B and they both have the same 10-NET addresses"
Andras
On Sun, May 31, 2015 at 7:18 PM, Matt Palmer <mpalmer@hezmatt.org> wrote:
On Sun, May 31, 2015 at 01:38:05AM +1000, Andras Toth wrote:

Perhaps if that energy which was spent on raging, instead was spent on a Google search, then all those words would've been unnecessary.
Official documentation: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-internet-facing-load-balancers.html#internet-facing-ip-addresses
Congratulations, you've managed to find exactly the same info as Owen already covered:
"Load balancers in a VPC support IPv4 addresses only."
and
"Load balancers in EC2-Classic support both IPv4 and IPv6 addresses."
- Matt
On 5/31/2015 11:57 AM, Owen DeLong wrote:
People who are building applications and considering hosting their applications in the cloud should seriously consider whether this limitation in AWS matters to them.
It doesn't, because everyone "on the Internet" can reach IPv4-hosted services.
IMHO, forward-thinking application developers will eschew AWS in favor of clouds that have dual-stack support and build dual-stack capable applications.
Forward-thinking developers are using big clouds that have the resources to enable IPv6 long before having IPv6 actually matters. Matthew Kaufman
At re:Invent they started releasing a surprising amount of detail on how they designed the VPC networking (both the layering/encapsulation itself and distributing routing data). Like Michael mentioned, they really stuff as much as possible into software on the VM hosts. That presentation is https://www.youtube.com/watch?v=Zd5hsL-JNY4

While looking for that video I stumbled on a couple of others that look along those same lines:
https://www.youtube.com/watch?v=HexrVfuIY1k (all the connectivity options)
https://www.youtube.com/watch?v=YoX_frLHbEs (talks about public IP options)

On Thu, May 28, 2015 at 9:34 AM, Luan Nguyen <lnguyen@opsource.net> wrote:
Hi folks, Anyone knows what is used for the AWS Elastic IP? is it LISP?
Thanks. Regards, -lmn
participants (26)
- Andras Toth
- Baldur Norddahl
- Blair Trosper
- Ca By
- Christopher Morrow
- George, Wes
- Hugo Slabbert
- Jeremy Mooney
- Lee Howard
- Luan Nguyen
- Mark Andrews
- Matt Palmer
- Matthew Kaufman
- Michael Helmeste
- mikea
- Måns Nilsson
- Nikolay Shopik
- Owen DeLong
- Pete Carah
- Philip Dorr
- Rafael Possamai
- Steve Mikulasik
- Todd Underwood
- Tony Hain
- tvest
- Valdis.Kletnieks@vt.edu