On Tue, 28 Jan 2003, The New York Times wrote:
A spokesman for Microsoft, Rick Miller, confirmed that a number of the company's machines had gone unpatched, and that Microsoft Network services, like many others on the Internet, experienced a significant slowdown. "We, like the rest of the industry, struggle to get 100 percent compliance with our patch management," he said.
Many different companies were hit hard by the Slammer worm, some with better than average reputations for security awareness. They bought the finest firewalls, they had two-factor biometric locks on their data centers, they installed anti-virus software, they paid for SAS70 audits by the premier auditors, they hired the best managed security consulting firms. Yet they still were hit. It's not as simple as "don't use Microsoft," because worms have hit other popular platforms too. Are there practical answers that actually work in the real world with real users and real business needs?
Sean,

--On 28 January 2003 03:10 -0500 Sean Donelan <sean@donelan.com> wrote:
Are there practical answers that actually work in the real world with real users and real business needs?
1. Employ clueful staff
2. Make their operating environment (procedures etc.) best able to exploit their clue

In the general case this is a people issue. Sure there are piles of whizzbang technical solutions that address individual problems (some of which your clueful staff might even think of themselves), but in the final analysis, having people with clue architect, develop and operate your systems is far more important than anything CapEx will buy you alone.

Note it is not difficult to envisage how this attack could have been far far worse with a few code changes...

Alex Bligh
On Tue, 28 Jan 2003 10:42:05 -0000 Alex Bligh wrote:
How does one find a "clueful" person to hire? Can you recognize one by their hat or badge of office? Is there a guild to which they all belong? If one wants to get a "clue", how does one find a master to join as an apprentice?

I would argue that sooner or later network security must become an engineering discipline whose practitioners can design a security system that cost-effectively meets the unique needs of each client. Engineering requires that well-accepted ("best") practices be documented and adopted by all practitioners. Over time, there emerges a body of such best practices which provides a foundation upon which new technologies and practices are adopted as technical consensus emerges among the practitioners. Part of the training of an engineer involves learning the existing body of best practices.

Engineering also is quantitative, which means that design incorporates measurements and calculations so that the solution is good enough to do the job required, but no more, albeit with commonly accepted margins of safety.

Society requires that some kinds of engineers be licensed because they are responsible for the safety of others, such as engineers who design buildings, bridges, roads, nuclear power plants, sanitation, etc. However, some are not (yet?) required to be licensed, like engineers who design cars, trucks, buses, ships, airplanes, factory process control systems and the computer networks that monitor and control them.

This is therefore a request for all of those who possess this "clue" to write down their wisdom and share it with the rest of us, so we can address what clearly is a need for discipline in the design of networks and network security, since computer networks are an infrastructure upon which people are becoming dependent, even to the point of their personal safety.

- Andy
--On 28 January 2003 10:42 -0600 Andy Putnins <putnins@lett.com> wrote:
How does one find a "clueful" person to hire? Can you recognize one by their hat or badge of office? Is there a guild to which they all belong? If one wants to get a "clue", how does one find a master to join as an apprentice?
In the long term one might presume market forces would provide better answers than speculation & ...
Society requires that some kinds of engineers be licensed
... economic theory suggests that licensing etc. is only a good idea when the externalities of failure cases exceed the benefits of licensing by more than the costs of its imposition (including barriers to entry etc.). I do not think we have come to the point where this has been demonstrated yet. Note licensing does not have a 100% success record in protecting against failure (viz. Andersen).
This is therefore a request for all of those who possess this "clue" to write down their wisdom and share it with the rest of us, so we can
This industry has been pretty good at that, despite recent economic circumstances militating against it. No argument there.

Alex Bligh
On Tue, 28 Jan 2003, Andy Putnins wrote:
This is therefore a request for all of those who possess this "clue" to write down their wisdom and share it with the rest of us
I can't tell you what clue is, but I know when I don't see it. In some cases our clients have had Code Red, Nimda, and Sapphire hit the same friggin machines.

To borrow from the exploding car analogy: if you're the highway dept. and you notice that only *some* people's cars seem to explode, maybe you build the equivalent of an HOV lane with concrete dividers and funnel them all into it, so at least they don't blow up the more conscientious drivers/mechanics in the next lane over. Providers who were negatively affected might want to look at their lists, compare with past incident lists, and schedule a maintenance window to aggregate the repeat offenders' ports where feasible, to isolate the impact of the next worm.

We've tried to share clue with clients via security announcements, encouraging everyone to get on their vendors' security lists and follow BUGTRAQ, and providing relevant signup URLs.

Mike
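Mike's idea of comparing current and past incident lists to find the repeat offenders reduces to simple set intersection. A minimal sketch, with entirely hypothetical per-worm infection logs (the addresses are documentation-range placeholders, not real customer data):

```python
# Hypothetical incident logs: the set of customer IPs seen infected per worm.
code_red = {"192.0.2.10", "192.0.2.25", "192.0.2.77"}
nimda    = {"192.0.2.10", "192.0.2.25", "192.0.2.90"}
sapphire = {"192.0.2.10", "192.0.2.25", "192.0.2.77", "192.0.2.90"}

# Hosts compromised by every worm are the repeat offenders --
# candidates for aggregation onto an isolated port or segment.
repeat_offenders = code_red & nimda & sapphire
print(sorted(repeat_offenders))   # -> ['192.0.2.10', '192.0.2.25']
```

In practice the sets would be built from ticketing or flow data, but the "compare with past incident lists" step is exactly this intersection.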
SD> Date: Tue, 28 Jan 2003 03:10:18 -0500 (EST)
SD> From: Sean Donelan

[ snip firewalls, audits, et cetera ]

As most people on this list hopefully know, security is a process... not a product. Tools are useless if they are not applied properly.

SD> Are there practical answers that actually work in the real
SD> world with real users and real business needs?

It depends. If "real business needs" means management ego gets in the way of letting talented staff do their jobs, having to form a committee to conduct a feasibility study re whether to apply a one-hour patch that closes a critical hole, drooling over paper certs... the answer is no.

Automobiles require periodic maintenance. Household appliances require repair from time to time. People get sick and require medicine. Reality is that people need to deal with the need for proper systems administration. It might not be exciting or make people feel good, but it's necessary. Failure has consequences. Inactivity is a vote cast for "it's worth the risk".

Sure, worm authors are to blame for their creations. Software developers are to blame for bugs. Admins are to blame for lack of administration. The question is who should take what share, and absorb the pain when something like this occurs.

Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Mon, 21 May 2001 11:23:58 +0000 (GMT)
From: A Trap <blacklist@brics.com>
To: blacklist@brics.com
Subject: Please ignore this portion of my mail signature.

These last few lines are a trap for address-harvesting spambots. Do NOT send mail to <blacklist@brics.com>, or you are likely to be blocked.
ED> Date: Tue, 28 Jan 2003 12:42:41 +0000 (GMT)
ED> From: E.B. Dreger
ED> Sure, worm authors are to blame for their creations.
ED> Software developers are to blame for bugs. Admins are to

s/Admins/Admins and their management/

Eddy
| Many different companies were hit hard by the Slammer worm, some with
| better than average reputations for security awareness. They bought
| finest firewalls, they had two-factor biometric locks on their data
| centers, they installed anti-virus software, they paid for SAS70
| audits by the premier auditors, they hired the best managed security
| consulting firms. Yet, they still were hit.

Because they hired people (staff or outsourced) that made them feel comfortable, instead of getting the job done.

| Its not as simple as don't use microsoft, because worms have hit other
| popular platforms too.

But this worm required external access to an internal server (SQL Servers are not front-end ones); even with a bad or no patch management system, this simply wouldn't happen on a properly configured network. Whoever got slammered has more problems than just this worm. Even with no firewall or screening router, use of RFC1918 private IP addresses on the SQL Server would have prevented this worm attack.

| Are there practical answers that actually work in the real world with
| real users and real business needs?

Yes, the simple ones that have been known for decades:
- Minimum-privilege networks (access is blocked by default, permitted to known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel

There are no shortcuts.

Rubens Kuhl Jr.
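The "blocked by default, permitted to known and required traffic" principle can be illustrated with a toy packet-filter decision function. This is a sketch only; the allow rules are invented for illustration, not a recommended policy:

```python
# Minimal default-deny filter: traffic is dropped unless it matches an
# explicit allow rule of (protocol, destination port). Rules are illustrative.
ALLOW = {("tcp", 80), ("tcp", 443), ("udp", 53)}

def permit(proto: str, dport: int) -> bool:
    """Return True only for traffic on the explicit allow list;
    everything else is implicitly denied."""
    return (proto, dport) in ALLOW

assert permit("tcp", 80)          # web traffic is explicitly allowed
assert not permit("udp", 1434)    # Slammer's port falls to the default deny
```

The point of the default-deny shape is that the Slammer port never needed to appear in anyone's config to be blocked; it was simply never allowed.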
At 11:13 AM 1/28/03 -0200, Rubens Kuhl Jr. et al postulated:
| Are there practical answers that actually work in the real world with | real users and real business needs?
Yes, the simple ones that are known for decades:
- Minimum-privilege networks (access is blocked by default, permitted to known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel
I would just add, as has been mentioned by others (but bears repeating):
- A commitment by management
There are no shortcuts.
Agreed.

Ted Fischer
Rubens Kuhl Jr.
But this worm required external access to an internal server (SQL Servers are not front-end ones); even with a bad or no patch management system, this simply wouldn't happen on a properly configured network. Whoever got slammered, has more problems than just this worm. Even with no firewall or screening router, use of RFC1918 private IP address on the SQL Server would have prevented this worm attack
RFC1918 addresses would not have prevented this worm attack. RFC1918 != security
at Thursday, January 30, 2003 12:01 AM, bdragon@gweep.net <bdragon@gweep.net> was seen to say:

RFC1918 addresses would not have prevented this worm attack. RFC1918 != security

Indeed. More accurately though, "don't have an SQL Server port exposed to the general internet, you bloody fools" might be closer to the correct advice to customers :) I have been trying *hard* but can't think of a single decent reason a random visitor to a site needs SQL Server access from the outside.
On Tue, Jan 28, 2003 at 11:13:19AM -0200, rkjnanog@ieg.com.br said: [snip]
But this worm required external access to an internal server (SQL Servers are not front-end ones); even with a bad or no patch management system, this simply wouldn't happen on a properly configured network. Whoever got slammered, has more problems than just this worm. Even with no firewall or screening router, use of RFC1918 private IP address on the SQL Server would have prevented this worm attack
Only if the worm's randomly-chosen IP addresses were picked from the valid IP space (i.e. not RFC1918 addresses), and although I am not sure, I doubt the worm's author(s) was that conscientious. Later, on Wed, Jan 29, 2003 at 19:01:25 -0500 (EST), <bdragon@gweep.net> replied:
RFC1918 addresses would not have prevented this worm attack. RFC1918 != security
All too true. However, using NAT/packet filtering can at least prevent casual/automated network scans. Of course, if one was implementing proper filtering, 1434/udp wouldn't be accepting connections from outside sources, whether directly or through NAT/port forwarding. But then, this observation has been made many times already ... -- -= Scott Francis || darkuncle (at) darkuncle (dot) net =- GPG key CB33CCA7 has been revoked; I am now 5537F527 illum oportet crescere me autem minui
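A back-of-the-envelope calculation supports the "RFC1918 != security" point: a worm that picks uniformly random 32-bit addresses will land in private space a small but measurable fraction of the time, so once one infected host is inside the NAT boundary, internal targets get scanned too. A quick sketch with the standard library:

```python
import ipaddress

# The three RFC1918 blocks: 10/8, 172.16/12, 192.168/16.
private = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
private_count = sum(n.num_addresses for n in private)

fraction = private_count / 2**32
print(f"{private_count} addresses, {fraction:.2%} of the 32-bit space")
# -> 17891328 addresses, 0.42% of the 32-bit space
# At Slammer's reported scan rates (thousands of probes per second per
# host), even ~0.4% means internal nets are hit within seconds.
```

The obscurity only holds while the worm is outside; the filtering of 1434/udp is what actually matters.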
Sean,

Ultimately, all mass-distributed software is vulnerable to software bugs. Much as we all like to bash Microsoft, the same problem can and has occurred through buffer overruns elsewhere.

One thing that companies can do to mitigate a failure is to detect it faster, and stop the source. Since you don't know what the failure will look like, the best you can do is determine what is ``nominal'' through profiling, and use IDSes to report to NOCs for considered action. There are two reasons companies don't want to do this:

1. It's hard (and expensive). Profiling nominal means installing IDSes everywhere in one's environment at a time when you think things are actually working, and making the assumption that *other* behavior is to be reported. Worse, network behavior is often cyclical, and you need to know how that cycle will impact what is nominal. Indeed you can have a daily, weekly, monthly, quarterly, and annual cycle. Add to this ongoing software deployment and you have something of a moving target.

2. It doesn't solve all attacks. Only attacks that break the profile will be captured. Those are going to be the ones that use new or unusual ports, existing "bad" signatures, or excessive bandwidth.

On the other hand, in *some* environments, IDS and an active NOC may improve predictability by reducing the time needed to diagnose the problem. Who knows? Perhaps some people did benefit through these methods.

I'm very curious about netmatrix's view of the whole matter, as compared to comparable events. NANOG presentation, Peter?

Eliot
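Eliot's "profile what is nominal, alert on deviation" approach can be sketched as a trivial baseline detector. This is a stand-in for real IDS profiling; the function name, threshold, and traffic samples are all invented for illustration:

```python
import statistics

def anomalous(history, current, k=3.0):
    """Flag `current` if it deviates more than k population standard
    deviations from the historical baseline -- a crude stand-in for
    profiling 'nominal' and reporting everything else to the NOC."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return abs(current - mean) > k * stdev

# Hypothetical packets/sec samples on UDP/1434 during normal operation...
baseline = [2, 3, 2, 4, 3, 2, 3, 3]
# ...versus the moment a Slammer-infected host starts scanning.
print(anomalous(baseline, 3))       # False: within the profile
print(anomalous(baseline, 26000))   # True: report for considered action
```

Eliot's caveat about cyclical behavior applies directly: a real deployment would need per-hour or per-day baselines rather than one flat history.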
In a message written on Tue, Jan 28, 2003 at 03:10:18AM -0500, Sean Donelan wrote:
They bought finest firewalls,
A firewall is a tool, not a solution. Firewall companies advertise much like Home Depot (Lowes, etc), "everything you need to build a house". While anyone with 3 brain cells realizes that going into Home Depot and buying truck loads of building materials does not mean you have a house, it's not clear to me that many of the decision makers in companies understand that buying a spiffy firewall does not mean you're secure. Even those that do understand, often only go to the next step. They hire someone to configure the firewall. That's similar to hiring the carpenter with your load of tools and building materials. You're one step closer to the right outcome, but you still have no plans. A carpenter without plans isn't going to build something very useful. Very few companies get to the final step, hiring an architect. Actually, the few that get here usually don't do that, they buy some off the shelf plans (see below, managed security) and hope it's good enough. If you want something that really fits you have to have the architect really understand your needs, and then design something that fits.
they had two-factor biometric locks on their data centers,
This is the part that never made sense to me. Companies are installing new physical security systems at an amazing pace. I know some colos that have had four new security systems in a year. The thing that fascinates me is that, unless someone is covering up the numbers, /people don't break into data centers/. The common thief isn't too interested: too much security/video already, people notice when the stuff goes offline, and most importantly it's too hard to fence for the common man. The thief really interested in what's in the data center, the data, is going to take the easiest vector, which until we fix other problems is going to be the network. I think far too many people spend money on new security systems because they don't know what else to do, which may be a sign that they aren't the people you want to trust with your network data.
they installed anti-virus software,
Which is a completely different problem. Putting the bio-hazard in a secure setting where it can't infect anyone and developing an antidote in case it does are two very different things. One is prevention, one is cure.
they paid for SAS70 audits by the premier auditors,
Which means absolutely nothing. Those audits are the equivalent of walking into a doctor's office, making sure he has a working stethoscope and a box of tongue depressors, and maybe, just maybe, making the doctor use both to verify that he knows how to use them. While interesting, that doesn't mean that when you walk in with a disease the doctor will cure you. Just like it doesn't mean that when the network virus/worm/trojan comes you will be immune.
they hired the best managed security consulting firms.
This goes back to my first comment. Managed security consulting firms do good work, but what they can't do is specialized work. To extend the house analogy they are like the spec architects who make one "ok" plan and then sell it thousands of times to the people who don't want to spend money on a custom architect. It's better than nothing, and in fact for a number of firms it's probably a really good fit. What the larger and more complex firms seem to fail to realize is that as your needs become more complex you need to step up to the fully customized approach, which no matter how hard these guys try to sell it to you they are unlikely to be able to provide. At some level you need someone on staff who understands security, but, and here's the hard part, understands all of your applications as well. How many people have seen the firewall guy say something like "well I opened up port 1234 for xyzsoft for the finance department. I have no idea what that program does or how it works, but their support people told me I needed that port open". Yeah. That's security. Your firewall admin doesn't need to know how to use the finance software, but he'd better have an understanding of what talks to what, what platforms it runs on, what is normal traffic and what is abnormal traffic, and so on.
Are there practical answers that actually work in the real world with real users and real business needs?
I think there are two fundamental problems:

* The people securing networks are very often underqualified for the task at hand. If there is one place you need a "generalist" network/host understands-it-all type person, it's in security -- but that's not where you find them. Far too often "network" security people are crossovers from the physical security world, and while they understand security concepts, I find much of the time they are lost at how to apply them to the network.

* Companies need to hold each other responsible for bad software. Ford is being sued right now because Crown Vic gas tanks blow up. Why isn't Microsoft being sued over buffer overflows? We've known about the buffer overflow problem now for what, 5 years? The fact that new, recent software is coming out with buffer overflows is bad enough; the fact that people are still buying it, without making the companies own up to their mistakes, is amazing. I have to think there's billions of dollars out there for class action lawyers. Right now software companies, and in particular Microsoft, can make dangerously unsafe products and people buy them like crazy, and then don't even complain that much when they break.

--
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
Not to sound too pro-MS, but if they are going to sue, they should be able to sue ALL software makers. And what does that do to open source? Apache, MySQL, OpenSSH, etc. have all had their problems.

Should we sue the nail gun vendor because some moron shoots himself in the head with it? No. It was never designed for flicking flies off his forehead, and they said don't use it for anything other than nailing stuff together. Likewise, MS told people six months ago to fix the hole. "Lack of planning on your part does not constitute an emergency on my part," a wise man once told me. At some point, people have to take SOME responsibility for their organization's deployment of IT assets and systems.

Microsoft is the convenient target right now because they HAVE assets to take. Who's going to pony up when Apache gets sued and loses? How do you sue Apache, or how do you sue Perl? Because, after all, it has bugs. Just because you give it away shouldn't isolate you from liability.

Eric
* Companies need to hold each other responsible for bad software. Ford is being sued right now because Crown Vic gas tanks blow up. Why isn't Microsoft being sued over buffer overflows? We've known about the buffer overflow problem now for what, 5 years? The fact that new, recent software is coming out with buffer overflows is bad enough, the fact that people are still buying it, and also making the companies own up to their mistakes is amazing. I have to think there's billions of dollars out there for class action lawyers. Right now software companies, and in particular Microsoft, can make dangerously unsafe products and people buy them like crazy, and then don't even complain that much when they break.
From: "Eric Germann"
Not to sound to pro-MS, but if they are going to sue, they should be able to sue ALL software makers. And what does that do to open source? Apache, MySQL, OpenSSH, etc have all had their problems. Should we sue the nail gun vendor because some moron shoots himself in the head with it?
With all the resources at their disposal, is MS doing enough to inform customers of new fixes? Are the fixes and latest security patches in an easy-to-find location that any idiot admin can spot? Have they done due diligence in ensuring that proper notification is done? I ask because it appears they didn't tell part of their own company that a patch needed to be applied.

If I want the latest info on Apache, I hit the main website and the first thing I see is a list of security issues and resolutions. Navigating MS's website isn't quite so simple. Liability isn't necessarily in the bug but in the education and notification.

Jack Bates
BrightNet Oklahoma
XP has autoupdate notifications that nag you. They could make it automatic, but then everyone would sue them if it mucked up their system. And MS has their HFCHECK program which checks which hotfixes should be installed. Again, not automatic, because they would like the USER to sign off on installing it.

On the open source side, you sort of have that when you build from source. Maybe Apache should build a util to routinely go out and scan their source and all the myriad add-on modules and build a new version when one of them has a fix, but we leave that to the sysadmin. Why? Because the permutations are too many. Which is why we have Windows. To paraphrase a phone company line I heard in a sales meeting when reaming them, "we may suck, but we suck less ...". It ain't the best, but for the most part it does what the user wants and is relatively consistent across a number of machines. User learns at home and can operate at work. No retraining.

Sort of like the person who sued McD's when they dumped their own coffee in their lap because it was "too hot". Somewhere in the equation, the sysadmin/enduser, whether Unix or Windows, has to take some responsibility. To turn the argument around, people don't pay for IIS either, but everyone would love to sue MS for its vulnerabilities (i.e. CR/Nimda, etc). As has been said, no one writes perfect software. And again, sometime, the user has to share some responsibility. Maybe if the users get burned enough, the problem will get solved. Either they will get fired, the software will change to another platform, or they'll install the patches. People only change behaviors through pain, either mental or physical.

Eric
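The hotfix-audit idea behind tools like HFCHECK reduces to a set difference between what the vendor's advisory feed says is required and what the host reports installed. A minimal sketch; the bulletin IDs below are placeholders, not real advisory numbers:

```python
# Sketch of a patch audit in the spirit of HFCHECK: diff the vendor's
# required-patch list against what the host reports as installed.
# The IDs are hypothetical placeholders, not a real advisory feed.
required  = {"MS-001", "MS-002", "MS-003"}
installed = {"MS-001", "MS-003"}

missing = required - installed
if missing:
    print("unpatched:", sorted(missing))   # -> unpatched: ['MS-002']
```

The hard part in practice is not this diff but, as the thread notes, whether anyone signs off on actually installing what turns up missing.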
-----Original Message-----
From: Jack Bates [mailto:jbates@brightok.net]
Sent: Tuesday, January 28, 2003 10:36 AM
To: ekgermann@cctec.com; Leo Bicknell; nanog@merit.edu
Cc: Eric Germann
Subject: Re: What could have been done differently?

[snip]
On Tue, Jan 28, 2003 at 07:10:52PM -0500, ekgermann@cctec.com said: [snip]
As has been said, no one writes perfect software. And again, sometime, the user has to share some responsibility. Maybe if the users get burned enough, the problem will get solved. Either they will get fired, the software will change to another platform, or they'll install the patches. People only change behaviors through pain, either mental or physical.
There's a difference between having the occasional bug in one's software (Apache, OpenSSH) and having a track record of remotely exploitable vulnerabilities in virtually EVERY revision of EVERY product one ships, on the client-side, the server side and in the OS itself. Microsoft does not care about security, regardless of what their latest marketing ploy may be. If they did, they would not be releasing the same exact bugs in their software year after year after year. </rant> -- -= Scott Francis || darkuncle (at) darkuncle (dot) net =- GPG key CB33CCA7 has been revoked; I am now 5537F527 illum oportet crescere me autem minui
On Tue, 28 Jan 2003 19:10:52 EST, Eric Germann <ekgermann@cctec.com> said:
Sort of like the person who sued McD's when they dumped their own coffee in their lap because it was "too hot". Somewhere in the equation, the sysadmin/enduser, whether Unix or Windows, has to take some responsibility.
Bad Example. Or at least it's a bad example for your point. That particular case has a *LOT* of similarities with the other big-M company we're discussing. Cross out "hot coffee" and write in "buffer overflow" and see how it reads:
1: For years, McDonald's had known they had a problem with the way they make their coffee - that their coffee was served much hotter (at least 20 degrees more so) than at other restaurants.

2: McDonald's knew its coffee sometimes caused serious injuries - more than 700 incidents of scalding coffee burns in the past decade have been settled by the Corporation - and yet they never so much as consulted a burn expert regarding the issue.

3: The woman involved in this infamous case suffered very serious injuries - third degree burns on her groin, thighs and buttocks that required skin grafts and a seven-day hospital stay.

4: The woman, an 81-year old former department store clerk who had never before filed suit against anyone, said she wouldn't have brought the lawsuit against McDonald's had the Corporation not dismissed her request for compensation for medical bills.

5: A McDonald's quality assurance manager testified in the case that the Corporation was aware of the risk of serving dangerously hot coffee and had no plans to either turn down the heat or to post warnings about the possibility of severe burns, even though most customers wouldn't think it was possible.

6: After careful deliberation, the jury found McDonald's was liable because the facts were overwhelmingly against the company. When it came to the punitive damages, the jury found that McDonald's had engaged in willful, reckless, malicious, or wanton conduct, and rendered a punitive damage award of 2.7 million dollars. (The equivalent of just two days of coffee sales; McDonald's Corporation generates revenues in excess of 1.3 million dollars daily from the sale of its coffee, selling 1 billion cups each year.)

7: On appeal, a judge lowered the award to $480,000, a fact not widely publicized in the media.

8: A report in Liability Week, September 29, 1997, indicated that Kathleen Gilliam, 73, suffered first degree burns when a cup of coffee spilled onto her lap.
Reports also indicate that McDonald's consistently keeps its coffee at 185 degrees, still approximately 20 degrees hotter than at other restaurants. Third degree burns occur at this temperature in just two to seven seconds, requiring skin grafting, debridement and whirlpool treatments that cost tens of thousands of dollars and result in permanent disfigurement, extreme pain and disability to the victims for many months, and in some cases, years.
In a message written on Tue, Jan 28, 2003 at 10:23:09AM -0500, Eric Germann wrote:
Not to sound to pro-MS, but if they are going to sue, they should be able to sue ALL software makers. And what does that do to open source? Apache, MySQL, OpenSSH, etc have all had their problems. Should we sue the nail gun
IANAL, but I think this is all fairly well worked out, in a legal sense. Big companies are held to a higher standard. Sadly it's often because lawyers pursue the dollars, but it's also because they have the resources to test, and they have a larger public responsibility to do that work.

That is, I think there is a big difference between a company the size of Microsoft saying "we've known about this problem for 6 months but didn't consider it serious so we didn't do anything about it", and an open source developer saying "I've known about it for 6 months, but it's a hard problem to solve, I work on this in my spare time, and my users know that."

Just like I expect a Ford to pass federal government safety tests, to have been put through a battery of product tests by Ford, etc., and be generally reliable and safe; but when I go to my local custom shop and have them build me a low-volume or one-off street rod, or chopper, I cannot reasonably expect the same.

The responsibility is the sum total of the number of product units out in the market, the risk to the end consumer, the company's ability to foresee the risk, and the steps the company was able to reasonably take to mitigate the risk.

So, if someone can make a class action lawsuit against OpenSSH, go right ahead. In all likelihood, though, there isn't enough money in it to get the lawyers interested, and even if there were, it would be hard to prove that "a couple of guys" should have exhaustively tested the product like a big company should have done. It was once said, "there is risk in hiring someone to do risk analysis."
use for anything other than nailing stuff together. Likewise, MS told people six months ago to fix the hole. "Lack of planning on your part does not constitute an emergency on my part"
It is for this very reason I suspect no one could collect on this specific problem. Microsoft, from all I can tell, acted responsibly in this case. Sean asked for general ways to solve this type of problem. I gave what I thought was the best solution in general. It doesn't apply very directly to the specific events of the last few days. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
On Tue, Jan 28, 2003 at 11:22:13AM -0500, bicknell@ufp.org said: [snip]
That is, I think there is a big difference between a company the size of Microsoft saying "we've known about this problem for 6 months but didn't consider it serious so we didn't do anything about it", and an open source developer saying "I've known about it for 6 months, but it's a hard problem to solve, I work on this in my spare time, and my users know that."
Just like I expect a Ford to pass federal government safety tests, to have been put through a battery of product tests by Ford, etc., and be generally reliable and safe; but when I go to my local custom shop and have them build me a low-volume or one-off street rod or chopper, I cannot reasonably expect the same.
The responsibility is the sum total of the number of product units out in the market, the risk to the end consumer, the company's ability to foresee the risk, and the steps the company was able to reasonably take to mitigate the risk.
*applause* Very well stated. I've been trying for some time now to express my thoughts on this subject, and failing - you just expressed _exactly_ what I've been trying to say.
use for anything other than nailing stuff together. Likewise, MS told people six months ago to fix the hole. "Lack of planning on your part does not constitute an emergency on my part"
It is for this very reason I suspect no one could collect on this specific problem. Microsoft, from all I can tell, acted responsibly in this case. Sean asked for general ways to solve this type of problem. I gave what I thought was the best solution in general. It doesn't apply very directly to the specific events of the last few days.
Yes, in this particular case Microsoft did The Right Thing. It's not their fault (this time) that admins failed to apply patches. Of course, when one has a handful of new patches every _week_ for all manner of software from MS, ranging from browsers to mail clients to office software to OS holes to SMTP and HTTP daemons to databases ... well, one can understand why the admins might have missed this patch. It doesn't remove responsibility, but it does make the lack of action understandable. One could easily hire a full-time position, in any medium-sized enterprise that runs MS gear, just to apply patches and stay on top of security issues for MS software.

Microsoft is not alone in this - they just happen to be the poster child, and with the market share they have, if they don't lead the way in making security a priority, I can't see anybody else in the commercial software biz taking it seriously. The problem was not this particular software flaw. The problem here is the track record, and the attitude, of MANY large software vendors with regard to security. It just doesn't matter to them, and that will not change until they have a reason to care about it. -- -= Scott Francis || darkuncle (at) darkuncle (dot) net =- GPG key CB33CCA7 has been revoked; I am now 5537F527 illum oportet crescere me autem minui
ekgermann@cctec.com ("Eric Germann") writes:
Not to sound too pro-MS, but if they are going to sue, they should be able to sue ALL software makers. And what does that do to open source? Apache, MySQL, OpenSSH, etc. have all had their problems. ...
Don't forget BIND, we've had our problems as well. Our license says:

/*
 * [Portions] Copyright (c) xxxx-yyyy by Internet Software Consortium.
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS
 * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE
 * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
 * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
 * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
 * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
 * SOFTWARE.
 */

I believe that Apache and the others you mention do the same. Disclaiming fitness for use, and requiring that the maker be held harmless, only works when the software is fee-free. Microsoft can get you to click "Accept" as often as they want and keep records of the fact that you clicked it, but in every state I know about, fitness for use is implied by the presence of a fee and cannot be disclaimed even by explicit agreement from the end user. (B2B considerations are different -- I'm talking about consumer rights, not overall business liability.)

In any case, all of these makers (including Microsoft) seem to make a very good faith effort to get patches out when vulnerabilities are uncovered. I wish we could have put time bombs in older BINDs to force folks to upgrade, but that brings more problems than it takes away, so a lot of folks run old broken software even though our web page tells them not to.

Note: IANAL. -- Paul Vixie
## On 2003-01-28 17:49 -0000 Paul Vixie typed:

PV> In any case, all of these makers (including Microsoft) seem to make a very
PV> good faith effort to get patches out when vulnerabilities are uncovered. I
PV> wish we could have put time bombs in older BINDs to force folks to upgrade,
PV> but that brings more problems than it takes away, so a lot of folks run old
PV> broken software even though our web page tells them not to.

Hi Paul,

What do you think of OpenBSD still installing BIND4 as part of the default base system, and recommending it as secure in the OpenBSD FAQ? (See Section 6.8.3 in <http://www.openbsd.org/faq/faq6.html#DNS>)

-- Thanks Rafi
On Tue, Jan 28, 2003 at 08:53:59PM +0200, rafi-nanog@meron.openu.ac.il said: [snip]
Hi Paul,
What do you think of OpenBSD still installing BIND4 as part of the default base system and recommended as secure by the OpenBSD FAQ ? (See Section 6.8.3 in <http://www.openbsd.org/faq/faq6.html#DNS> )
OpenBSD ships a highly-audited, chrooted version of BIND4 that bears little resemblance to the original code (I'm sure Paul can correct me here if I'm off-base). The reasons for the team's decision are well-documented on various lists and FAQs. Given the choices at hand (use the exhaustively audited, chrooted BIND4 already in production; go with a newer BIND version that hasn't been through the wringer yet; write their own dns daemon; use tinydns (licensing issues); use some other less well-known dns software), I think they made the right one. I'm sure they'll move to a newer version when somebody on the team gets a chance to give it a thorough code audit, and run it through sufficient testing prior to release. -- -= Scott Francis || darkuncle (at) darkuncle (dot) net =- GPG key CB33CCA7 has been revoked; I am now 5537F527 illum oportet crescere me autem minui
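The confinement technique mentioned above can be sketched with the generic chroot(8) mechanism; the paths, user name, and file layout below are illustrative assumptions for a typical BIND 4 era setup, not OpenBSD's exact defaults:

```shell
# Sketch of running a name server confined to a chroot jail (paths and
# user name are assumptions). The jail must contain everything named
# needs at runtime: its config, zone files, and any required devices.
mkdir -p /var/named/etc /var/named/namedb
cp /etc/named.boot /var/named/etc/

# Start the daemon inside the jail as an unprivileged user, so a
# compromise of named exposes only the jail's contents, not the host.
chroot /var/named /usr/sbin/named -u named
```

The point of the jail is damage containment: even a remote code execution hole in the daemon leaves the attacker trapped in a directory tree holding nothing but zone data.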
--On Tuesday, January 28, 2003 18:06:47 -0800 Scott Francis <darkuncle@darkuncle.net> wrote:
I'm sure they'll move to a newer version when somebody on the team gets a chance to give it a thorough code audit, and run it through sufficient testing prior to release.
The -current tree now is at BIND 9.2.2rc-whatever, and has been so for roughly a month. Thank Jakob Schlyter. -- Måns Nilsson Systems Specialist +46 70 681 7204 KTHNOC MN1334-RIPE We're sysadmins. To us, data is a protocol-overhead.
On Mon, Feb 03, 2003 at 11:27:46AM +0100, mansaxel@sunet.se said:
--On Tuesday, January 28, 2003 18:06:47 -0800 Scott Francis <darkuncle@darkuncle.net> wrote:
I'm sure they'll move to a newer version when somebody on the team gets a chance to give it a thorough code audit, and run it through sufficient testing prior to release.
The -current tree now is at BIND 9.2.2rc-whatever, and has been so for roughly a month. Thank Jakob Schlyter.
*nod* Just noticed this when going through misc@ mail earlier. With sufficient testing, this will probably be in the 3.3 release in May ... -- -= Scott Francis || darkuncle (at) darkuncle (dot) net =- GPG key CB33CCA7 has been revoked; I am now 5537F527 illum oportet crescere me autem minui
On Tue, 28 Jan 2003, Eric Germann wrote:
Not to sound to pro-MS, but if they are going to sue, they should be able to sue ALL software makers. And what does that do to open source?
A law can be crafted in such a way as to create a distinction between selling for profit (and assuming liability) and giving away for free, as-is. In fact, you don't have Goodwill sign papers to the effect that it won't sue you if they decide later that you've brought them junk - because you know they won't win in court. However, that does not protect you if you bring them a bomb disguised as a valuable.

The reason for this is: if someone sells you stuff, and it turns out not to be up to your reasonable expectations, you suffered demonstrable loss because the vendor misled you (_not_ because the stuff is bad). I.e., the amount of that loss is the price you paid, and, therefore, this is the vendor's direct liability. When someone gives you something for free, his direct liability is, correspondingly, zero.

So, what you want is a law permitting direct liability (i.e. a "lemon law", like the ones regulating sale of cars or houses) but setting much higher standards (i.e. willfully deceptive advertisement, maliciously dangerous software, etc.) for suing for punitive damages. Note that in class actions it is often much easier to prove the malicious intent of a defendant in cases concerning deceptive advertisement - it is one thing when someone gets cold feet and claims he's been misled, and quite another when you have thousands of independent complaints.

Because there's nothing to gain suing non-profits (unless they're churches:) the reluctance of class action lawyers to work for free would protect non-profits from that kind of abuse.

A lemon law for software may actually be a boost for proprietary software, as people will realize that the vendors have an incentive to deliver on promises.

--vadim
Not to sound too pro-MS, but if they are going to sue, they should be able to sue ALL software makers. And what does that do to open source? Apache, MySQL, OpenSSH, etc. have all had their problems. Should we sue the nail gun vendor because some moron shoots himself in the head with it? No. It was never designed for flicking flies off his forehead. And they said: don't use it for anything other than nailing stuff together. Likewise, MS told people six months ago to fix the hole. "Lack of planning on your part does not constitute an emergency on my part" was once told to me by a wise man.

At some point, people have to take SOME responsibility for their organization's deployment of IT assets and systems. Microsoft is the convenient target right now because they HAVE assets to take. Who's going to pony up when Apache gets sued and loses? How do you sue Apache, or how do you sue Perl, because, after all, it has bugs? Just because you give it away shouldn't isolate you from liability.
Eric
Similarly, you _pay_ MS for a product - a product which is repeatedly vulnerable. You don't typically pay for Apache. If you pay for a closed-source product, security should be part of the price you've paid. If you acquire an open-source product, you either accept the limitations or you pay to have someone check it over, which is possible, since it is open source. Some companies which believe certain open-source products perform better than certain other closed-source products do just this: they pay someone to support that product. If you only use open source, or non-commercial closed source (probably the most dangerous) because it is cheap/free, then you get what you pay for.
Similarly, you _pay_ MS for a product. A product which is repeatedly vulnerable.
I think this is key. People (individuals/corporations) keep buying crappy software. As long as people keep paying the software vendors for these broken products, what incentive do they have to actually fix them? Imagine if your car had to be recalled for problems every week (for years and years) [and you had to install the fixes yourself]. Do you think that the manufacturer of that car would still be selling cars, or at least that model? Not likely. Why do we as consumers put up with this for software but not other products? It doesn't make any sense. - Mike Hogsett
On Tue, Jan 28, 2003 at 03:10:18AM -0500, sean@donelan.com said: [snip]
Many different companies were hit hard by the Slammer worm, some with better-than-average reputations for security awareness. They bought the finest firewalls, they had two-factor biometric locks on their data centers, they installed anti-virus software, they paid for SAS70 audits by the premier auditors, they hired the best managed-security consulting firms. Yet they still were hit.
It's not as simple as "don't use Microsoft," because worms have hit other popular platforms too.
True. But few platforms have as dismal a record in this regard as MS. Whether that's due to the number of bugs or to market penetration is a matter for debate. Personally, I think it's clear that the focus, from MS and many other vendors, is on time-to-market and feature creep. Security is an afterthought, at best (regardless of "Trustworthy Computing", which is looking to be just another marketing initiative). The first step toward good security is choosing vendors/software with a reputation for caring about security. I realize that for many of us, this is not an option at this stage of the game. And in some arenas, there just aren't any good choices - the best you can do is choose the lesser of multiple evils. Which leads me to the next point:
Are there practical answers that actually work in the real world with real users and real business needs?
I think a good place to start is to have at least one person, if not more, whose job description includes checking errata/patch lists daily for the software in use on the network. This can be semi-automated by just subscribing to the right mailing lists. Now, deciding whether or not a patch is worth applying is another story, but there's no excuse for being ignorant of published security updates for software on one's network. Yes, it's a hassle wading through the voluminous cross-site scripting posts on BUGTRAQ, but it's worth it when you do occasionally get that vital bit of information. Sometimes vendors aren't as quick to release bug information, much less patches, as forums like BUGTRAQ/VulnWatch/etc.

Stay on top of security releases, and patch anything that is a security issue. I realize this is problematic for larger networks, in which case I would add: start with the most critical machines and work your way down. If this requires downtime, well, better to spend a few hours of rotating downtime to patch holes in your machines than to end up compromised, or contributing to the kind of chaos we saw this last weekend.

Simple answer, practical for some folks, maybe less so for others. I know I've been guilty of not following my own advice in this area before, but that doesn't make it any less pertinent. -- -= Scott Francis || darkuncle (at) darkuncle (dot) net =- GPG key CB33CCA7 has been revoked; I am now 5537F527 illum oportet crescere me autem minui
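The semi-automated tracking described above boils down to comparing installed versions against published advisories. A minimal sketch of that comparison follows; the package names, version numbers, and advisory records are illustrative placeholders (in practice they would come from the local package manager and a vendor feed or mailing list), not real inventory data:

```python
# Sketch: flag installed packages that have a published security advisory
# fixing a version newer than what is currently installed. All data here
# is illustrative; a real tool would query the package manager and parse
# an advisory feed instead of using hard-coded dictionaries.

def parse_version(v):
    """Turn a dotted version string like '1.3.27' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def pending_advisories(installed, advisories):
    """Return advisories whose fix version is newer than the installed one."""
    pending = []
    for adv in advisories:
        current = installed.get(adv["package"])
        if current is not None and parse_version(current) < parse_version(adv["fixed_in"]):
            pending.append(adv)
    return pending

# Hypothetical inventory and advisory list for demonstration.
installed = {"mssql-server": "8.0.194", "apache": "1.3.27"}
advisories = [
    {"package": "mssql-server", "fixed_in": "8.0.534", "id": "MS02-039"},
    {"package": "apache", "fixed_in": "1.3.26", "id": "CAN-2002-0392"},
]

for adv in pending_advisories(installed, advisories):
    print(f"UNPATCHED: {adv['package']} needs {adv['fixed_in']} ({adv['id']})")
```

Run nightly from cron against a real inventory, something this simple would have flagged the unpatched SQL Server hole months before Slammer hit; deciding when to schedule the downtime is still a human's job.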
participants (20)
- Alex Bligh
- Andy Putnins
- bdragon@gweep.net
- David Howe
- E.B. Dreger
- Eliot Lear
- Eric Germann
- Jack Bates
- Leo Bicknell
- Mike Hogsett
- Mike Lewinski
- Måns Nilsson
- Paul Vixie
- Rafi Sadowsky
- Rubens Kuhl Jr.
- Scott Francis
- Sean Donelan
- Ted Fischer
- Vadim Antonov
- Valdis.Kletnieks@vt.edu