On Fri, Jan 22, 2021 at 03:43:43PM -0800, Sabri Berisha wrote:
> No, but the NOC that sits in between does need to access both.
A single NOC sitting in the middle of a single address space. I believe I'm detecting an architectural paradigm on the order of "bouncy castle." Tell me, do you also permit customer A's secondary DNS server to reach out and touch customer B's tertiary MongoDB replica in some other AZ for any particular reason? Or are these networks segregated in some meaningful way -- a way which might, say, vacate the entire point of having a completely de-conflicted 1918 address space?
> Sure, you can use jumphosts, but now you're delaying troubleshooting of a potentially costly outage.
Who's using jumphosts? I very deliberately employed one of my least favorite networking "technologies" in order to give you direct connections. I just had to break a different fundamental networking principle to steal the bits from another header. No biggie. You won't even miss the lack of ICMP or the squished MTU. Honest. It's just "your" stuff anyway. The customers have all that delicious 10/8 to use. Imagine how nice troubleshooting that would be, where anything that's 172.16/12 is "yours" and anything 10/8 is "theirs."
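(Purely for the sake of illustration, a little Python sketch of what that split buys the person on call. The 172.16/12 and 10/8 prefixes are the ones named above; the function name and the example addresses are my own invention.)

import ipaddress

# The clean split described above: infrastructure lives in 172.16/12,
# customers live in 10/8. "Whose address is this?" stops being a
# spreadsheet lookup and becomes a one-liner.
INFRA = ipaddress.ip_network("172.16.0.0/12")     # "yours": NOC, management
CUSTOMERS = ipaddress.ip_network("10.0.0.0/8")    # "theirs": customer space

def whose(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip in INFRA:
        return "infrastructure"
    if ip in CUSTOMERS:
        return "customer"
    return "neither; not part of this plan"

print(whose("172.17.3.1"))   # -> infrastructure
print(whose("10.200.4.9"))   # -> customer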
> NAT'ing RFC1918 to other RFC1918 space inside the same datacenter, or even company, is a nightmare. If you've ever been on call for any decently sized network, you'll know that.
And that's different from NATing non-1918 addresses to a 1918 address space how? Four bytes is four bytes, no? Or are 1918 addresses magic when it comes to the mechanical process of address translation?

As far as being on call and troubleshooting goes, I'd think that identically configured rack-based networks would be ideal, no? In the context of the rack, everything is very familiar. That 192.168.0.1 is always the gateway for the rack hosts. That 192.168.3.254 is always the iSCSI target on the SAN. (Or is it more correctly NAS, since any random PDU in Walla Walla, WA can hit my disks in Perth via its unique address on a machine which lives "not at all hypothetically" under the raised floor or something. Maybe sitting in the 76th-80th RU.)

Maybe I should investigate these "jumphosts" of which you speak, too. They might have some advantages. But I'm sure using your spreadsheets to look up everything all the time works even better. Especially when you start having to slice your networks thinner and thinner and renumber stuff. But I'm sure no customer would ever say they needed more address space than was initially allocated to them. It should be trivial to throw them another /24 from elsewhere in the 10 space, get it all routed and filtered, and troubleshoot that on call. Much easier than handing them their very own 10/8.
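(Again, purely illustrative Python, not anybody's actual deployment: the well-known inside addresses are the ones from the paragraph above, while the rack numbering and the notion of handing each rack a single outside address out of 172.16/12 are assumptions of mine.)

import ipaddress

# Identical inside every rack: the plan never changes from rack to rack.
INSIDE_PLAN = {
    "gateway":      ipaddress.ip_address("192.168.0.1"),    # always the rack gateway
    "iscsi_target": ipaddress.ip_address("192.168.3.254"),  # always the iSCSI target
}

# One outside (NAT) address per rack, handed out sequentially from 172.16/12.
OUTSIDE_POOL = ipaddress.ip_network("172.16.0.0/12")

def rack_outside_address(rack_number: int) -> ipaddress.IPv4Address:
    # Rack 0 gets the first usable host; skip the network and broadcast addresses.
    if not 0 <= rack_number < OUTSIDE_POOL.num_addresses - 2:
        raise ValueError("more racks than the /12 can name")
    return OUTSIDE_POOL[rack_number + 1]

# Rack 4711's gateway is still 192.168.0.1 inside; from the NOC it is reached
# via that rack's unique outside address.
print(INSIDE_PLAN["gateway"], "->", rack_outside_address(4711))

Same inside plan in every rack; the only per-rack difference anyone ever has to look up is that one outside address.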
> We both know that this is A. An operational nightmare, and B. Simply not the way things work in the real world.
Right. What would I know about the real world? What madman would ever deploy a system in any way other than the flat star pattern you suggest? Who even approaches that scale and scope?
> not plan for the networks to grow to the size they became. Just like we would never run out of the 640k of memory, people thought they would never run out of RFC1918 space. Until they did.
Yes. Whoever could have seen that coming? If only we had developed mechanisms for extending the existing IPv4 address space. Maybe by making multiple hosts share a single address using some kind of "proxy," or by committing a horrible sin and stealing bits from a different layer. Or perhaps we could even deploy a different protocol with an even larger address space. It could even be done in parallel. Well. I can dream, can't I?
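(If you want rough numbers for those escape hatches, the back-of-the-envelope arithmetic below is mine and only mine.)

import ipaddress

# All of RFC1918 put together...
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
print(f"all of RFC1918:       {sum(n.num_addresses for n in rfc1918):,} addresses")

# ...versus overloading a single address with the 16-bit transport port field
# ("stealing bits from a different layer")...
print(f"one address via NAPT: {2**16:,} ports to multiplex")

# ...versus simply running the bigger protocol in parallel: one IPv6 /64.
print(f"one IPv6 /64:         {2**64:,} addresses")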
> And when that James May moment arrives, people start looking at a quick fix (i.e., let's use unannounced public space), rather than redesigning and reimplementing networks that have been in use for a long long time.
A long long time indeed. Why, I remember back in the late 1990s when the cloud wars started. They were saying Microsoft would have to divest Azure. Barnes and Noble had just started selling MMX-optimized instances for machine learning. The enormous web farms at Geocities were really pushing the envelope of the possible when it came to high availability concurrent web connections by leveraging CDNs. Very little has changed since then. We've hardly had the opportunity to look at these networks, let alone consider rebuilding them. Who has the time or opportunity? That Cisco 2600 may be dusty, but it's been holding the fort all this time.
> TL;DR: in theory, I agree with you 100%. In practice, that stuff just doesn't work.
Well, thanks for sharing. I think we've all learned a lot.