On Thu, May 28, 2015 at 11:44 AM, Luan Nguyen (CBU) <luan.nguyen@dimensiondata.com> wrote:
> What I am trying to get at is: yeah, you still need the L2 extension encapsulation, but on top you need something for disaster recovery and machine mobility between data centers, sort of like vShield Edge using NAT – you can
Probably what the VM mobility looks like is a change in the L2 path, right? Why make it any more complicated than that? Inside a single availability domain I would expect that the L2 domain a VM sees doesn't change, even if the VM itself is moved from physical machine to physical machine. Making it more complex at the VM level is probably a bunch of work that doesn't have to happen.
> change the NAT pool and update the DNS record, but the internal address would remain
That sounds like a bunch of work though, which I don't think is really necessary. I'm just a plumber, though, so I don't actually know what anyone does with this stuff.
> the same no matter where you move it to. LISP seems like a simple solution… so does specific host route injection, which for enterprise shouldn't
LISP wasn't really finalized (still sort of isn't) when AWS/EC2 started going like gangbusters. They might have changed technology under the hood, but it doesn't seem like they would have had to (not in a drastic 'change encap type' sort of way, at least).
> be much of a problem, but for a DRaaS cloud provider this could balloon the routing table pretty quickly.
How so? Does the external and internal view from the VM have to be the same? Do the public /32s have to be individually routed? Inside what scope at the data center?
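(A back-of-the-envelope sketch, in Python, of the table-growth concern above. The VM and tenant counts are made up for illustration, not from the thread; the point is that one injected /32 per mobile VM grows the routed table with total VM count, while NAT- or LISP-style indirection only needs routes for the per-site pools.)

    # Illustrative numbers only: compare /32-per-VM host routes against
    # routing just the per-site NAT/LISP pools.
    vms_per_tenant = 500
    tenants = 2000
    sites = 4

    host_routes = vms_per_tenant * tenants   # one /32 per VM -> 1,000,000
    pool_routes = tenants * sites            # one pool per tenant per site -> 8,000

    print(f"host-route injection: {host_routes:,} routes")
    print(f"pooled/indirected:    {pool_routes:,} routes")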
> What does Google use? :)
No idea; probably rabbits with different colored carrots?
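(A minimal sketch, in Python, of the failover pattern Luan describes: the guest keeps its internal address, and a move between data centers is absorbed by rebinding the external NAT mapping and updating DNS. The nat and dns objects are hypothetical stand-ins, not a real vShield Edge or DNS API.)

    # Hypothetical sketch: only the public side changes when a VM moves.
    INTERNAL_IP = "10.1.2.3"          # stays the same across sites

    NAT_POOLS = {                     # illustrative per-site public pools
        "dc-east": "203.0.113.10",
        "dc-west": "198.51.100.10",
    }

    def fail_over(hostname, target_site, nat, dns):
        # `nat` and `dns` stand in for whatever edge-NAT and DNS APIs
        # are actually in use; this only shows the order of operations.
        public_ip = NAT_POOLS[target_site]
        nat.bind(external=public_ip, internal=INTERNAL_IP)  # new DNAT rule
        dns.update_a_record(hostname, public_ip, ttl=60)    # low TTL eases cutover
        # Nothing inside the guest changes: same internal IP, same L2 view.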
I can tell you that EC2 Classic and VPC EIPs come from separate netblocks... if that gives you any hints whatsoever. There's no crossover between the two platforms in IP space.
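(Blair's point in sketch form: if the two platforms draw EIPs from disjoint netblocks, a simple prefix-membership check tells you which platform an address came from. The prefixes below are RFC 5737 documentation ranges standing in for the real AWS blocks, which aren't given in the thread.)

    import ipaddress

    # Placeholder prefixes (RFC 5737 documentation ranges), standing in
    # for the real, disjoint EC2 Classic and VPC EIP netblocks.
    CLASSIC_BLOCKS = [ipaddress.ip_network("192.0.2.0/24")]
    VPC_BLOCKS = [ipaddress.ip_network("198.51.100.0/24")]

    def platform_for(eip):
        addr = ipaddress.ip_address(eip)
        if any(addr in net for net in CLASSIC_BLOCKS):
            return "ec2-classic"
        if any(addr in net for net in VPC_BLOCKS):
            return "vpc"
        return "unknown"

    print(platform_for("198.51.100.42"))  # -> vpc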