I asked because I was in the process of thinking out loud about what options there are for disaster recovery. I could do anycast BGP: advertise out, say, a /24 of "elastic IP" space and internally have that block running inside our data center interconnect DMVPN tunnels. We do have WAN optimization, so it will probably be faster running inside the tunnel; I'm not sure what impact it might have on some applications, though. So if a VM moves, I need to generate two host routes, public and private IPv4 (those can be the same). Some customers have VPN/Direct Connect, so they also need to get to the private IPv4 as well as the public one. It could get to be a pain to manage.

Dimension Data Cloud does have dual stack for cloud VMs. Personally, I like IPv6 since it's so much easier to plan around. Customers can bring their own IPv4 addresses to the cloud, while the IPv6 is unique, so if you don't want to NAT your private IPv4, you could just use IPv6. Our orchestration tools are dual stack as well; making a REST API call with Python over IPv6 is as easy as over IPv4.

IPv6 makes a big difference for us in monitoring customer VMs, though, since they can bring their own IPv4 and the ranges might overlap.

Honestly, I don't think people care much about IPv6 yet, but hey, we just dual stack for the future. So if you want to load balance IPv6 to either IPv6 or IPv4 real servers, knock yourself out. It's good for marketing as well: AWS doesn't do it, but we do, and with the Intercloud solution from Cisco, moving from AWS to DiData would be just a click away :)

On Mon, Jun 1, 2015 at 2:43 PM, Todd Underwood <toddunder@gmail.com> wrote:
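To illustrate the dual-stack orchestration point: in Python, the only visible difference for an IPv6 endpoint is the brackets around the address literal in the URL (per RFC 3986); the rest of the client code path is identical. This is just a sketch with a made-up documentation address (2001:db8::/32), not a real API endpoint:

```python
import socket
from urllib.parse import urlsplit

# Hypothetical orchestration API endpoint; 2001:db8::/32 is IPv6
# documentation space, not a real service.
url = "https://[2001:db8::10]:8443/api/v1/servers"

# urlsplit strips the brackets and separates the port for us.
parts = urlsplit(url)
print(parts.hostname, parts.port)  # 2001:db8::10 8443

# getaddrinfo handles the literal without DNS and returns an AF_INET6
# result, so the same urllib/requests call works for v4 or v6 targets.
info = socket.getaddrinfo(parts.hostname, parts.port,
                          proto=socket.IPPROTO_TCP)
print(info[0][0] == socket.AF_INET6)  # True
```

From here, `urllib.request.urlopen(url)` (or `requests.get(url)`) behaves exactly as it would against an IPv4 address.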
fb is not a 'cloud provider'.
it's orthogonal to the question.
t
On Mon, Jun 1, 2015 at 2:36 PM, Ca By <cb.list6@gmail.com> wrote:
On Mon, Jun 1, 2015 at 10:49 AM, Matthew Kaufman <matthew@matthew.at> wrote:
On 6/1/2015 12:06 AM, Owen DeLong wrote:
... Here’s the thing… In order to land IPv6 services without IPv6 support on the VM, you’re creating an environment where...
Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
Would you rather have: 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6 or they put some IPv6 support in but it is always lagging behind and the bugs don't get fixed in a timely manner. Or,
Facebook selected IPv6-only as outlined above
http://blog.ipspace.net/2014/03/facebook-is-close-to-having-ipv6-only.html
2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
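For context on the translator mentioned above: the usual way an IPv6-only network reaches IPv4 destinations is NAT64, which embeds the IPv4 address in an IPv6 prefix per RFC 6052 (commonly the well-known prefix 64:ff9b::/96). A minimal sketch of that mapping using Python's standard ipaddress module, with an address from documentation space:

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def to_nat64(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    base = int(NAT64_PREFIX.network_address)
    return ipaddress.IPv6Address(base + int(ipaddress.IPv4Address(v4)))

# 192.0.2.1 (documentation range) -> 64:ff9b::c000:201
print(to_nat64("192.0.2.1"))
```

The translator in the data path performs this same mapping statelessly for the destination address; hosts inside the v6-only network just need DNS64 (or a resolver doing the equivalent) handing them these synthesized addresses.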
Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4).
Matthew Kaufman