On Wed, Oct 10, 2018 at 02:21:40PM +0000, Naslund, Steve wrote:
> Allowing an internal server with sensitive data out to "any" is a serious mistake and so basic that I would fire that contractor immediately (or better yet impose huge monetary penalties).
I concur, and have been designing/building/running based on this premise for a long time. It's usually not very difficult or painful when starting fresh; it can be much more so when modifying an already-operational environment. But even in the latter case, it's worth the effort and expense: it much more than pays for itself the first time it stops something from getting out.

The most difficult part of this process is often convincing people that it's sadly necessary. I say "sadly" because it wasn't always so, and that was a kinder, happier time. But that was then and this is now. And now the worst threat often comes from the inside.

It also has three perhaps-not-quite-obvious benefits.

First, it forces discipline. Things don't "just work", and that's a feature, not a bug. It requires thinking through what's required to make services functional, and thus (hopefully) also thinking through what the potential consequences are. I'm no longer surprised by how many chief technology officers don't actually know what their technology is doing (to borrow a phrase from Ranum) and are puzzled when they find out. The clarity provided by this approach removes that puzzlement.

Second, it greatly reduces the extraneous noise that might make nefarious activity harder to spot. There's an entire market sector built around products that ferret out signal from noise; I find it easier not to allow the noise in the first place.

Third, every attack we see coming in, every byte of abuse we see arriving, is the consequence of someone else *not* implementing default-deny, and the collective cost of that across all operations is enormous. If we can avoid contributing to that, then we've done a small bit of good for everyone else.

---rsk
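For what a default-deny egress policy looks like in practice, here is a minimal sketch using nftables. The interface, the resolver and proxy addresses (drawn from the RFC 5737 documentation range), and the permitted ports are all illustrative assumptions, not anything from an actual deployment:

```shell
#!/usr/sbin/nft -f
# Hypothetical default-deny egress ruleset: outbound traffic is dropped
# unless explicitly permitted, so every allowed flow has to be thought
# through in advance.

table inet egress {
    chain output {
        type filter hook output priority 0; policy drop;

        # Loopback traffic is harmless and needed locally.
        oif "lo" accept

        # Replies belonging to connections already permitted inbound.
        ct state established,related accept

        # DNS, but only to the organization's own resolver
        # (192.0.2.53 is an example address).
        ip daddr 192.0.2.53 udp dport 53 accept
        ip daddr 192.0.2.53 tcp dport 53 accept

        # HTTPS, but only via a designated update proxy
        # (192.0.2.80 is an example address).
        ip daddr 192.0.2.80 tcp dport 443 accept

        # Log whatever the policy is about to drop; with the noise
        # suppressed, these entries are nearly all signal.
        log prefix "egress-denied: " counter
    }
}
```

Everything not listed simply fails, which is the point: each new outbound dependency forces an explicit, reviewable rule change rather than silently working.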