Yes, the place in question was very understaffed. The long-term remediation plan I helped them with after the Blaster incident was to deploy SUS and acquire a volume license for an AV product (they had very spotty, and at some sites nonexistent, AV coverage on the client machines). With pressure from upper management, I got the IT manager to do some "basic" patch testing (install the patches manually on the computers in the IT office and see if anything blew up), then push them out via SUS.

I have seen some fairly reasonable methodologies for deploying patches. These days, being behind on patches (especially with Microsoft products) is playing with fire. (That is not to say it is a good idea to be behind on your *nix updates; those systems are just as vulnerable to exploitation if they are running old versions of internet-accessible apps.)

Some strategies I have seen that work reasonably well at mitigating the risk of damage caused by patches:

- Deploy patches to a small number of computers (one or two per department), so you get coverage of all the apps in use. After a day or two with no complaints, push the patches out to the rest of the computers.

- Maintain a collection of test machines running all of the critical apps, and test each patch on them first.

- Wait a few days before patching. During that time, monitor mailing lists/blogs/news sites/etc. for reports of problems; if none surface, patch.

It should also be noted that over the last few years Microsoft has gotten a lot better at internally testing patches (remember the NT4 service packs?). So for many of my smaller, less-staffed customers, and for private individuals, I advise configuring the machines for automatic updating.
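For what it is worth, the client side of that setup is just a handful of registry values, which can be pushed with a .reg import or a login script. A rough sketch (http://sus.example.org is a placeholder for your own SUS box, and the values are from memory, so double-check them against Microsoft's SUS documentation before rolling anything out):

Windows Registry Editor Version 5.00

; Point the Automatic Updates client at the internal SUS server
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://sus.example.org"
"WUStatusServer"="http://sus.example.org"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
; 0 = Automatic Updates enabled
"NoAutoUpdate"=dword:00000000
; 1 = pull approved updates from the SUS server above
"UseWUServer"=dword:00000001
; 4 = download automatically and install on the schedule below
"AUOptions"=dword:00000004
; 0 = install every day, at 03:00 local time
"ScheduledInstallDay"=dword:00000000
"ScheduledInstallTime"=dword:00000003

If the machines are in a domain, the same settings are cleaner to deliver through the wuau.adm Group Policy template than through raw registry edits.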
Adam Stasiniewicz

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Steven M. Bellovin
Sent: Sunday, February 11, 2007 12:49 PM
To: Dave Pooser
Cc: nanog
Subject: Re: Every incident is an opportunity (was Re: Hackers hit key Internet traffic computers)

On Sun, 11 Feb 2007 10:49:30 -0600 Dave Pooser <dave.nanog@alfordmedia.com> wrote:

> > He was both right and wrong -- patches do break a lot of stuff. He was
> > facing two problems: the probability of being off the air because of
> > an attack versus the probability of being off the air because of bad
> > interactions between patches and applications. Which is a bigger risk?
> That's an argument for an organizational test environment and testing
> patches before deployment, no? Not an argument against patching. That
> said, I would LOVE to see MS ship a monthly/quarterly unified updater
> that's a one-step way to bring fresh systems up to date without
> slipstreaming the install CD. Then press a zillion of 'em and put them
> everywhere you can find an AOL CD, for all those folks on dial-up who
> see a 200MB download and curl up in the fetal position and whimper.
Surveys have shown an inverse correlation between the size of a company and how quickly it installed XP SP2. Yes, you're right; a good test environment is the right answer. As I think most of us on this list know, it's expensive, hard to do right, and still doesn't catch everything.

If I recall correctly, the post I was replying to said that it was a non-profit; reading between the lines, it wasn't heavily staffed for IT, or they wouldn't have needed a consultant to help clean up after Blaster. And there's one more thing -- at what point have you done enough testing, given how rapidly some exploits are developed after the patch comes out?

--Steve Bellovin, http://www.cs.columbia.edu/~smb