----- On Jan 22, 2021, at 2:42 PM, Izaac izaac@setec.org wrote:

> Hi,
> On Fri, Jan 22, 2021 at 01:03:15PM -0800, Sabri Berisha wrote:
> > TL;DR: a combination of scale and incompetence means you can run
> > out of 10/8 really quick.
>
> Indeed. Thank you for providing a demonstration of my point.
> I'd question the importance of having a console target in Singapore
> be able to directly address a BMC in Phoenix (wait for it), but I'm
> sure that's a mission requirement.
No, but the NOC that sits in between does need to access both. Sure,
you can use jump hosts, but now you're delaying troubleshooting of a
potentially costly outage.
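To put rough numbers on "running out of 10/8", here's a
back-of-the-envelope sketch in Python. Every figure in it is invented
for illustration, but the shape of the problem is real: it's the
carving of the space into aggregatable blocks that exhausts it, not
the raw host count.

  from ipaddress import ip_network

  ten_eight = ip_network("10.0.0.0/8")

  # Assumption: each datacenter pod gets a /16 so routes aggregate
  # cleanly at the borders.
  pods_max = 2 ** (16 - ten_eight.prefixlen)   # only 256 pods, ever

  # Assumption: one /24 per rack (servers, BMCs, ToR, spares),
  # 120 racks actually deployed per pod.
  racks_per_pod = 120
  used = pods_max * racks_per_pod * 256

  total = ten_eight.num_addresses
  print(f"pods available at a /16 each: {pods_max}")
  print(f"{used:,} of {total:,} addresses in use ({used / total:.0%})")
  print("yet 10/8 is fully allocated: aggregation, not hosts, ate it")

Under those (made-up) assumptions you cap out at 256 pods with barely
half the addresses in actual use. Shrink the per-pod block and you
lose aggregation; keep it and you lose the space.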
> But just in case you'd like to reconsider, can I interest you in
> NAT? Like nutmeg, a little will add some spice to your recipe -- but
> too much will cause nausea and hallucinations.
NAT'ing RFC1918 space to other RFC1918 space inside the same
datacenter, or even the same company, is a nightmare. If you've ever
been on call for any decently sized network, you know that.
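For concreteness, here's roughly what that RFC1918-to-RFC1918
rewriting looks like, sketched in Python with the ipaddress module.
The prefixes are made up, and the netmap() helper is mine, mimicking
the 1:1 prefix translation that e.g. iptables' NETMAP target does:

  from ipaddress import ip_address, ip_network

  def netmap(addr, src_net, dst_net):
      # Rewrite addr to the same offset within dst_net (1:1 prefix NAT).
      offset = int(ip_address(addr)) - int(src_net.network_address)
      return ip_address(int(dst_net.network_address) + offset)

  phx_real = ip_network("10.0.0.0/16")     # Phoenix as Phoenix sees it
  phx_alias = ip_network("10.200.0.0/16")  # Phoenix as Singapore sees it

  print(netmap("10.0.42.7", phx_real, phx_alias))   # -> 10.200.42.7

The address in your Singapore-side logs (10.200.42.7) is not the
address configured on the device (10.0.42.7), and every site needs
its own alias map for every other overlapping site. That's O(n^2)
translations to keep straight at 3 AM during an outage.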
> Let's just magic a rack controller to handle the NAT. We can just
> cram it into the extra-dimensional space where the switches live.
>
> And all less than an hour's chin pulling.
We both know that this is (a) an operational nightmare, and (b)
simply not the way things work in the real world. The people who
designed most of the legacy networks I've worked on did not plan for
those networks to grow to the size they eventually reached. Just as
nobody thought we'd ever need more than 640K of memory, people
thought they would never run out of RFC1918 space. Until they did.

And when that James May moment arrives, people reach for a quick fix
(i.e., let's use unannounced public space) rather than redesigning
and reimplementing networks that have been in production for a long,
long time.

TL;DR: in theory, I agree with you 100%. In practice, that stuff
just doesn't work.

Thanks,

Sabri