Does anyone know of any studies on user adoption of security s/w (AV and FW products), including how often people update and how regularly? -Dennis
On Tue, 08 Jun 2004 17:29:51 CDT, Dennis Dayman <dennis@thenose.net> said:
Does anyone know of any studies on user adoption of security s/w (AV and FW products), including how often people update and how regularly?
Two papers that might help:

A writeup on the OpenSSL holes, the Slapper worm, and when/why users patched their systems. 17 pages, PDF.
http://www.rtfm.com/upgrade.pdf

Lots of interesting conclusions about user behavior, which we probably need to consider when planning. There's some non-trivial math/stats, but they explain what the results mean in plain English too, so feel free to skip over the formulas to the "this clearly shows..." parts.

Crispin Cowan's presentation from Usenix LISA:
http://wirex.com/~crispin/time-to-patch-usenix-lisa02.ps.gz

Both of these papers are somewhat flawed in that they focus on the mostly-broken idea that the admin/user would even know a patch if it came by and bit them on the posterior.....
On Wed, 9 Jun 2004 Valdis.Kletnieks@vt.edu wrote:
A writeup on the OpenSSL holes, the Slapper worm, and when/why users patched their systems. 17 pages, PDF.
http://www.rtfm.com/upgrade.pdf
Lots of interesting conclusions about user behavior, which we probably need to consider when planning. There's some non-trivial math/stats, but they explain what the results mean in plain English too, so feel free to skip over the formulas to the "this clearly shows..." parts.
I've been calling this the 40/40 rule. What's interesting is how consistent it remains, regardless of the timeline, exploit, or publicity. About 40% of the vulnerable population patches before the exploit. About 40% of the vulnerable population patches after the exploit.

The numbers vary a little, e.g. 38% or 42%, but the speed or severity or publicity doesn't change them much. If it is six months before the exploit, about 40% will be patched (60% unpatched). If it is 2 weeks, about 40% will be patched (60% unpatched). It's a strange "invisible hand" effect: as exploits show up sooner, the people who were going to patch anyway just patch sooner. The ones that don't, still don't.

Businesses aren't that different from consumers. A business is like a super-cluster of PCs. Don't think of individual PCs, but of clusters of sysadmins. The difference is that the patching occurs in clusters. Sysadmin clusters follow the same 40/40 rule. If you have 1,000 businesses, each with 10-1,000 computers, within a sysadmin cluster it tends to be binary: patched or not patched for 99% of the computers in the same cluster. But across 1,000 clusters of PCs, things don't look that different. About 40% of the clusters are patched before the exploit, about 40% are patched after the exploit. Sometimes the cluster has 1,000 patched computers, sometimes the cluster has 10 patched computers, sometimes the cluster has 1,000 unpatched computers. Don't mistake size for being better managed.
Both of these papers are somewhat flawed in that they focus on the mostly-broken idea that the admin/user would even know a patch if it came by and bit them on the posterior.....
The good news is that, after the exploit, thanks to the invisible hand, about 80% of the patching behavior occurs without a lot of extra prompting. The bad news is that, regardless of what actions are taken, about 60% of PCs/clusters will be vulnerable when an exploit is released, no matter how long the patch has been available.
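To make the cluster model above concrete, here is a toy Monte Carlo sketch; the adoption fractions and cluster sizes below are illustrative assumptions for the example, not measurements from the papers cited in this thread.

import random

# Toy model of the "40/40 rule" applied to sysadmin clusters, as described
# above. All numbers here are illustrative assumptions, not measurements.
P_BEFORE = 0.40   # fraction of clusters patched before the exploit appears
P_AFTER  = 0.40   # fraction patched only after the exploit appears
                  # (the remaining ~20% never patch)

def simulate(n_clusters=1000, seed=42):
    rng = random.Random(seed)
    machines_total = 0
    patched_before = 0
    patched_eventually = 0
    for _ in range(n_clusters):
        size = rng.randint(10, 1000)           # 10-1,000 PCs per business
        machines_total += size
        r = rng.random()
        if r < P_BEFORE:                        # whole cluster patched early
            patched_before += size
            patched_eventually += size
        elif r < P_BEFORE + P_AFTER:            # whole cluster patched late
            patched_eventually += size
        # else: cluster never patches
    print(f"patched before exploit:    {patched_before / machines_total:.0%}")
    print(f"patched eventually:        {patched_eventually / machines_total:.0%}")
    print(f"vulnerable at exploit day: {1 - patched_before / machines_total:.0%}")

if __name__ == "__main__":
    simulate()

Because each cluster patches (or doesn't) as a unit, the per-machine totals still come out close to the same 40/40 split across many clusters, which is the point being made above.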
On Wed, 09 Jun 2004 18:45:55 EDT, Sean Donelan <sean@donelan.com> said:
The numbers vary a little, e.g. 38% or 42%, but the speed or severity or publicity doesn't change them much. If it is six months before the exploit, about 40% will be patched (60% unpatched). If it is 2 weeks, about 40% will be patched (60% unpatched). It's a strange "invisible hand" effect: as exploits show up sooner, the people who were going to patch anyway just patch sooner. The ones that don't, still don't.
Remember that the black hats almost certainly had 0-days for the holes, and before the patch comes out, the 0-day is 100% effective. Once the patch comes out and is widely deployed, the usefulness of the 0-day drops. Most probably, 40% is a common value for "I might as well release this one and get some recognition". After that point, the residual value starts dropping quickly.

Dave Aucsmith of Microsoft seems to think there's a flurry of activity to reverse engineer the patch: http://news.bbc.co.uk/1/hi/technology/3485972.stm

In fact, half of them are probably just sitting there playing "chicken": wait too long, and somebody else gets the recognition as "best reverse engineer" by Aucsmith, but release too soon, and you give up your 0-day while it still has some effectiveness. Somebody else can turn the crank on the game-theory machine and figure out what the mathematically optimum release point is....
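As a rough illustration of the crank-turning alluded to above, here is a back-of-the-envelope sketch; the adoption curve, the rival-release rate, and the payoff values are all invented for the example, not taken from anything in this thread.

import math

# Back-of-the-envelope model of when to burn a 0-day after the patch ships.
# Every curve and constant here is an invented assumption for illustration.

def adoption(t, midpoint=20.0, steepness=0.2):
    """Guessed fraction of vulnerable hosts patched by day t (logistic)."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def p_still_first(t, rival_rate=0.05):
    """Chance nobody else has published the exploit by day t
    (rival releases modeled as a Poisson process)."""
    return math.exp(-rival_rate * t)

def expected_payoff(release_day, use_value_per_day=1.0, recognition=10.0):
    # Holding the 0-day earns private use value against unpatched hosts;
    # releasing earns the recognition only if you are still first.
    private_use = sum(1.0 - adoption(d) for d in range(release_day))
    return private_use + recognition * p_still_first(release_day)

if __name__ == "__main__":
    best = max(range(0, 91), key=expected_payoff)
    print(f"optimal release day under these made-up numbers: day {best}")

The trade-off is exactly the "chicken" described above: waiting accrues private value against the shrinking unpatched population, while the chance of being scooped on the recognition grows.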
Valdis.Kletnieks@vt.edu writes:
On Wed, 09 Jun 2004 18:45:55 EDT, Sean Donelan <sean@donelan.com> said:
The numbers vary a little, e.g. 38% or 42%, but the speed or severity or publicity doesn't change them much. If it is six months before the exploit, about 40% will be patched (60% unpatched). If it is 2 weeks, about 40% will be patched (60% unpatched). It's a strange "invisible hand" effect: as exploits show up sooner, the people who were going to patch anyway just patch sooner. The ones that don't, still don't.
Remember that the black hats almost certainly had 0-days for the holes, and before the patch comes out, the 0-day is 100% effective.
What makes you think that black hats already know about your average hole?
Once the patch comes out and is widely deployed, the usefulness of the 0-day drops.
Most probably, 40% is a common value for "I might as well release this one and get some recognition". After that point, the residual value starts dropping quickly.
I don't think this assessment is likely to be correct. If you look, for instance, at the patching curve on page 1 of "Security holes... Who cares?" (http://www.rtfm.com/upgrade.pdf), there's a pretty clear flat spot from about 25 days (roughly 60% patch adoption) to 45 days (release of the Slapper worm). So, once that 2-3 week initial period has passed, the value of an exploit is roughly constant for a long period of time. -Ekr
On Thu, 10 Jun 2004 08:50:18 PDT, Eric Rescorla said:
Valdis.Kletnieks@vt.edu writes:
Remember that the black hats almost certainly had 0-days for the holes, and before the patch comes out, the 0-day is 100% effective.
What makes you think that black hats already know about your average hole?
Because unlike a role playing game, in the real world the lawful-good white hats don't have any deity-granted magic ability to spot holes that remain hidden from the chaotic-neutral/evil dark hats. Explain to me why, given that MS03-039, MS03-041, MS03-043, MS03-044, and MS03-045 all affected systems going all the way back to NT/4, and that exploits surfaced quite quickly for all of them, there is *any* reason to think that only white hats who have been sprinkled with magic pixie dust were able to find any of those holes in all the intervening years?
Valdis.Kletnieks@vt.edu writes:
On Thu, 10 Jun 2004 08:50:18 PDT, Eric Rescorla said:
Valdis.Kletnieks@vt.edu writes:
Remember that the black hats almost certainly had 0-days for the holes, and before the patch comes out, the 0-day is 100% effective.
What makes you think that black hats already know about your average hole?
Because unlike a role playing game, in the real world the lawful-good white hats don't have any deity-granted magic ability to spot holes that remain hidden from the chaotic-neutral/evil dark hats.
Explain to me why, given that MS03-039, MS03-041, MS03-043, MS03-044, and MS03-045 all affected systems going all the way back to NT/4, and that exploits surfaced quite quickly for all of them, there is *any* reason to think that only white hats who have been sprinkled with magic pixie dust were able to find any of those holes in all the intervening years?
Actually, I think that the persistence of vulnerabilities is an argument against the theory that the black hats in general know about vulnerabilities before they're released. I.e., given that the white hats put a substantial amount of effort into finding vulnerabilities, and yet many vulnerabilities persist in software for a long period of time without being found and disclosed, that suggests that the probability of white hats finding any particular vulnerability is relatively small. If we assume that the black hats aren't vastly more capable than the white hats, then it seems reasonable to believe that the probability of the black hats having found any particular vulnerability is also relatively small.

For more detail on this general line of argument, see my paper "Is finding security holes a good idea?" at WEIS '04.
Paper: http://www.dtc.umn.edu/weis2004/rescorla.pdf
Slides: http://www.dtc.umn.edu/weis2004/weis-rescorla.pdf

WRT the relatively rapid appearance of exploits, I don't think that's much of a signal one way or the other. As I understand it, once one knows about a vulnerability it's often (though not always) quite easy to write an exploit. And as you observe, the value of an exploit is highest before people have had time to patch. -Ekr
----- Original Message -----
From: "Eric Rescorla" <ekr@rtfm.com>
To: <Valdis.Kletnieks@vt.edu>
Cc: "Sean Donelan" <sean@donelan.com>; "'Nanog'" <nanog@merit.edu>
Sent: Thursday, June 10, 2004 2:37 PM
Subject: Re: AV/FW Adoption Studies

-- snip ---
If we assume that the black hats aren't vastly more capable than the white hats, then it seems reasonable to believe that the probability of the black hats having found any particular vulnerability is also relatively small.
and yet, some of the most damaging vulns were kept secret for months before they got leaked and published. i won't pretend to have the answer, but fact remains fact. paul
Paul G <paul@rusko.us> wrote:
----- Original Message -----
From: "Eric Rescorla" <ekr@rtfm.com>
To: <Valdis.Kletnieks@vt.edu>
Cc: "Sean Donelan" <sean@donelan.com>; "'Nanog'" <nanog@merit.edu>
Sent: Thursday, June 10, 2004 2:37 PM
Subject: Re: AV/FW Adoption Studies
-- snip ---
If we assume that the black hats aren't vastly more capable than the white hats, then it seems reasonable to believe that the probability of the black hats having found any particular vulnerability is also relatively small.
and yet, some of the most damaging vulns were kept secret for months before they got leaked and published. i won't pretend to have the answer, but fact remains fact.
I don't think that this contradicts what I was saying. My hypothesis is that the sets of bugs independently found by white hats and black hats are basically disjoint. So, you'd definitely expect that there were bugs found by the black hats and then used as zero-days and eventually leaked to the white hats. So, what you describe above is pretty much what one would expect. -Ekr
----- Original Message -----
From: "Eric Rescorla" <ekr@rtfm.com>
Paul G <paul@rusko.us> wrote:
----- Original Message -----
From: "Eric Rescorla" <ekr@rtfm.com>
-- snip ---
If we assume that the black hats aren't vastly more capable than the white hats, then it seems reasonable to believe that the probability of the black hats having found any particular vulnerability is also relatively small.
and yet, some of the most damaging vulns were kept secret for months before they got leaked and published. i won't pretend to have the answer, but fact remains fact.
I don't think that this contradicts what I was saying.
My hypothesis is that the sets of bugs independently found by white hats and black hats are basically disjoint. So, you'd definitely expect that there were bugs found by the black hats and then used as zero-days and eventually leaked to the white hats. So, what you describe above is pretty much what one would expect.
there is a fair chance that the same bug will be found if several people audit the same piece of code, such as a very widespread, high profile piece of software. in fact, i know of at least one serious bug that was discovered independently by two different groups of people. in general, however, what you are saying makes complete sense. paul
On Thu, 10 Jun 2004 11:54:31 PDT, Eric Rescorla said:
My hypothesis is that the sets of bugs independently found by white hats and black hats are basically disjoint. So, you'd definitely expect that there were bugs found by the black hats and then used as zero-days and eventually leaked to the white hats. So, what you describe above is pretty much what one would expect.
Well.. for THAT scenario to happen, two things have to be true:

1) Black hats are able to find bugs too

2) The white hats aren't as good at finding bugs as we might think, because some of their finds are leaked 0-days rather than their own work, inflating their numbers.

Remember what you said:
relatively small. If we assume that the black hats aren't vastly more capable than the white hats, then it seems reasonable to believe that the probability of the black hats having found any particular vulnerability is also relatively small.
More likely, the software actually leaks like a sieve, and NEITHER group has even scratched the surface..

Remember - every single 0-day that surfaces was something the black hats found first. The only thing you're really measuring by looking at the 0-day rate is the speed at which an original black exploit gets leaked from a black hat to a very dark grey hat to a medium grey hat and so on, until it gets to somebody whose hat is close enough to white to publish openly.

Data point: When did Steve Bellovin point out the issues with non-random TCP ISNs? When did Mitnick use an exploit for this against Shimomura? And now ask yourself - when did we *first* start seeing SYN flood attacks (which were *originally* used to shut the flooded machine up and prevent it from talking while you spoofed its address to some OTHER machine)?
Valdis.Kletnieks@vt.edu writes:
On Thu, 10 Jun 2004 11:54:31 PDT, Eric Rescorla said:
My hypothesis is that the sets of bugs independently found by white hats and black hats are basically disjoint. So, you'd definitely expect that there were bugs found by the black hats and then used as zero-days and eventually leaked to the white hats. So, what you describe above is pretty much what one would expect.
Well.. for THAT scenario to happen, two things have to be true:
1) Black hats are able to find bugs too
2) The white hats aren't as good at finding bugs as we might think, because some of their finds are leaked 0-days rather than their own work, inflating their numbers.
Both of these seem fairly likely to me. I've certainly seen white hat bug reports that are clearly from leaks (i.e. where they acknowledge that openly).
Remember what you said:
relatively small. If we assume that the black hats aren't vastly more capable than the white hats, then it seems reasonable to believe that the probability of the black hats having found any particular vulnerability is also relatively small.
More likely, the software actually leaks like a sieve, and NEITHER group has even scratched the surface..
That's more or less what I believe the situation to be, yes. I'm not sure we disagree. All I was saying was that I don't think we have a good reason to believe that the average bug found independently by a white hat is already known to a black hat. Do you disagree? -Ekr
On Thu, 10 Jun 2004 12:23:42 PDT, Eric Rescorla said:
I'm not sure we disagree. All I was saying was that I don't think we have a good reason to believe that the average bug found independently by a white hat is already known to a black hat. Do you disagree?
Actually, yes.

Non-obvious bugs (ones with a non-100% chance of being spotted on careful examination) will often be found by both groups. Let's say we have a bug that has a 0.5% chance of being found at any given attempt to find it. Now take 100 white hats and 100 black hats, and compute the likelihood that at least one attempt within a group of 100 finds it: I figure it at about 39% for each group (1 - 0.995^100).

For bonus points, extend a bit further: make multiple series of attempts, and compute the probability that, for any given pair of 100-attempt runs, both groups find it, exactly one finds it, or neither finds it. It turns out that about 16% of the time both groups will find it, about 48% of the time exactly one will find it, and about 37% of the time *neither* will find it.

And in fact, the chance of overlap is much higher, because the two series of 100 runs *aren't* independent. Remember that for the most part, the info that suggested "Look over HERE" to the white hat was also available to the black hat.....
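For anyone who wants to check the arithmetic, the calculation above works out as follows; the 0.5%-per-attempt figure and the 100 attempts per group are the illustrative assumptions given above, not measured values.

# Check of the numbers above: a bug with a 0.5% chance of being found per
# attempt, examined by two independent groups of 100 attempts each.
p_miss_one_attempt = 0.995
p_group_finds = 1 - p_miss_one_attempt ** 100            # ~0.39 per group

p_both        = p_group_finds ** 2                        # ~0.16
p_exactly_one = 2 * p_group_finds * (1 - p_group_finds)   # ~0.48
p_neither     = (1 - p_group_finds) ** 2                  # ~0.37

print(f"one group finds it:   {p_group_finds:.0%}")
print(f"both groups find it:  {p_both:.0%}")
print(f"exactly one finds it: {p_exactly_one:.0%}")
print(f"neither finds it:     {p_neither:.0%}")
# Conditional on the white hats finding the bug, the chance the black hats
# also found it is just p_group_finds again (~39%) under independence.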
Valdis.Kletnieks@vt.edu writes:
On Thu, 10 Jun 2004 12:23:42 PDT, Eric Rescorla said:
I'm not sure we disagree. All I was saying was that I don't think we have a good reason to believe that the average bug found independently by a white hat is already known to a black hat. Do you disagree?
Actually, yes.
Non-obvious bugs (ones with a non-100% chance of being spotted on careful examination) will often be found by both groups. Let's say we have a bug that has a 0.5% chance of being found at any given attempt to find it. Now take 100 white hats and 100 black hats, and compute the likelihood that at least one attempt within a group of 100 finds it: I figure it at about 39% for each group (1 - 0.995^100). For bonus points, extend a bit further: make multiple series of attempts, and compute the probability that, for any given pair of 100-attempt runs, both groups find it, exactly one finds it, or neither finds it. It turns out that about 16% of the time both groups will find it, about 48% of the time exactly one will find it, and about 37% of the time *neither* will find it.
The problem with this a priori analysis is that it predicts an incredibly high probability that any given bug will be found by white hats. However, in practice, we know that bugs persist for years without being found, so we know that that probability, as a function of time, must actually be quite low. Otherwise, we wouldn't see the data we actually see, which is a more or less constant stream of bugs at a steady rate.

On the other hand, if the probability that a given bug will be found is low [0], then the chance that when you find a bug it will also be found by someone else is correspondingly low. -Ekr

[0] Note that this doesn't require that the chance of finding any particular bug upon inspection of the code be very low, but merely that there not be very deep coverage of any particular code section.
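A quick numerical illustration of the persistence argument above; the per-year discovery rates below are assumed values, chosen only to show the shape of the argument, not estimates from any study.

# If the per-year chance of a given bug being found were as high as the
# a priori model above suggests (~39%), very few bugs would survive five
# years unfound -- yet many do. Rates below are illustrative assumptions.
for p_found_per_year in (0.39, 0.10, 0.03):
    p_survives_5y = (1 - p_found_per_year) ** 5
    print(f"p(found per year) = {p_found_per_year:4.0%}  ->  "
          f"p(still unfound after 5 years) = {p_survives_5y:.0%}")
# Under independence, the same low per-year rate is also roughly the chance
# that a bug a white hat just found was already known to a black hat.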
On Thu, 10 Jun 2004 13:30:41 PDT, Eric Rescorla said:
[0] Note that this doesn't require that the chance of finding any particular bug upon inspection of the code be very low, but merely that there not be very deep coverage of any particular code section.
Right. However, if you hand the team of white hats and the team of black hats the same "Chatter has it there's a 0-day in Apache's mod_foo handler"....

Note that the rumored 0-day doesn't even have to exist - one has to wonder how many of the bugs found in Windows by hats of all colors were inspired by Allchin's comment under oath that there was an API flaw in Windows so severe that publishing the API could endanger national security.....
Valdis.Kletnieks@vt.edu writes:
On Thu, 10 Jun 2004 13:30:41 PDT, Eric Rescorla said:
[0] Note that this doesn't require that the chance of finding any particular bug upon inspection of the code be very low, but merely that there not be very deep coverage of any particular code section.
Right. However, if you hand the team of white hats and the team of black hats the same "Chatter has it there's a 0-day in Apache's mod_foo handler"....
Ok, now we're getting somewhere.

I'm asking the question: If you find some bug in the normal course of your operations (i.e. nobody told you where to look) how likely is it that someone else has already found it?

And you're asking a question more like: Given that you hear about a bug before its release, how likely is it that some black hat already knows?

I think that the answer to the first question is probably "fairly low". I agree that the answer to the second question is probably "reasonably high". -Ekr
On Thu, 10 Jun 2004 13:50:47 PDT, Eric Rescorla said:
I'm asking the question: If you find some bug in the normal course of your operations (i.e. nobody told you where to look) how likely is it that someone else has already found it?
And you're asking a question more like: Given that you hear about a bug before its release, how likely is it that some black hat already knows?
I think that the answer to the first question is probably "fairly low". I agree that the answer to the second question is probably "reasonably high".
Third case: Exploit in one package identified because of info from a similar exploit against some *other* package....

Back in March 2000, I spotted a rather nasty security bug in Sendmail (fixed in 8.10.1) when running under AIX or SunOS. Since the problem is a documented *feature* of the system linker, a *lot* of software had the problem - and the Sendmail release notes give enough info to make it "game over". At that point, the 3 big things left were (a) writing a general-case exploit (trivial if you use another of the basic design goals of the AIX linker against itself), (b) creating a shell one-liner to identify vulnerable programs, and (c) running the script from (b). Of the three, (c) was actually the most time-consuming.

3 years later, another package (OpenSSH) hit the same hole:
http://www.securityfocus.com/archive/1/320149/2003-04-30/2003-05-06/0

And it was a known issue months before I tripped over it:
http://mail.gnome.org/archives/gtk-devel-list/1999-November/msg00047.html

I'd be most surprised if the black hats did *not* have an exploit for the OpenSSH variant, having been pointed at the issue by my finding a similar issue in Sendmail.....

And there's *plenty* of evidence that when a novel attack is found, you see lots of people posting "So I was bored and decided to see what *else* had the same sort of bug..." (think "buffer overflow" ;)
In message <200406101919.i5AJJVUM000657@turing-police.cc.vt.edu>, Valdis.Kletnieks@vt.edu writes:

Data point: When did Steve Bellovin point out the issues with non-random TCP ISNs? When did Mitnick use an exploit for this against Shimomura?

Actually, it was Morris, not me, who first pointed it out.

And now ask yourself - when did we *first* start seeing SYN flood attacks (which were *originally* used to shut the flooded machine up and prevent it from talking while you spoofed its address to some OTHER machine)?
That's not quite correct. While flooding can work, Morris found an implementation bug that made it easier to gag the alleged source. I'd have to spend a while trying to figure out the exact details; roughly, though, you picked a port on which the alleged source was in LISTEN state, created enough half-open connections to fill its queue, and then used that port (in the privileged range) in launching your spoofing attack on the real victim. The SYN+ACK packets would be dropped, rather than eliciting an RST, because they appeared to be SYNs for a service with a full queue.

The difference is that this scheme takes many fewer packets than a SYN flood -- 5, back in 1985 when the attack was published -- and works very reliably, with no statistical dependencies. That bug has long since been fixed on just about everything out there, but in the meantime we've seen lots more ways to take hosts off the air...

--Steve Bellovin, http://www.research.att.com/~smb
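A heavily simplified toy model of the queue behavior described above, written only to illustrate why the SYN+ACKs got dropped instead of answered with an RST; it is not a faithful model of any real 1985 stack, and the ports and backlog size are made up.

# Toy model of the pre-fix TCP behavior described above: a listening port
# whose half-open queue is full silently drops *any* incoming segment, even
# a stray SYN+ACK that should normally elicit an RST.

class OldTcpStack:
    def __init__(self, listen_port, backlog):
        self.listen_port = listen_port
        self.backlog = backlog
        self.half_open = []          # queue of pending (half-open) connections

    def receive(self, src, sport, dport, flags):
        if dport == self.listen_port:
            if len(self.half_open) >= self.backlog:
                return "dropped silently"        # the bug: no RST, no reply
            if flags == "SYN":
                self.half_open.append((src, sport))
                return "SYN+ACK sent (half-open connection queued)"
        # Segment for a port/connection we know nothing about: answer RST,
        # which would tear down a spoofed connection elsewhere.
        return "RST sent"

if __name__ == "__main__":
    gagged_host = OldTcpStack(listen_port=513, backlog=5)

    # Attacker fills the half-open queue with a handful of SYNs.
    for i in range(5):
        print(gagged_host.receive("10.0.0.99", 40000 + i, 513, "SYN"))

    # The real victim's SYN+ACK (a reply to the attacker's spoofed SYN that
    # claimed to come from the gagged host, source port 513) now arrives...
    print(gagged_host.receive("victim", 23, 513, "SYN+ACK"))
    # ...and is dropped instead of triggering an RST, so the spoofed
    # connection on the victim survives long enough to be exploited.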
More likely, the software actually leaks like a sieve, and NEITHER group has even scratched the surface..
How many leaks did the OpenBSD team find when they proactively audited their entire codebase for the first time a few years ago? This would be an indication of just how leaky an O/S might be expected to be.
Remember - every single 0-day that surfaces was something the black hats found first.
And 0-day exploits are only the ones that the blackhats are willing to talk about. If they keep quiet about an exploit and only use it for industrial espionage and other electronic crimes, then we are unlikely to hear about it until a whitehat stumbles across the blackhat's activities. Rather like The Cuckoo's Egg or the recent complex exploit involving IE and the MS Help tool.

Have any of your customers ever asked you for a traffic audit report showing every IP address that has ever sourced traffic to them or received traffic from them?

--Michael Dillon
Valdis.Kletnieks@vt.edu wrote:
Remember - every single 0-day that surfaces was something the black hats found first.
* Michael.Dillon@radianz.com [Fri 11 Jun 2004, 12:29 CEST]:
And 0-day exploits are only the ones that the blackhats are willing to talk about. If they keep quiet about an exploit and only use it for industrial espionage and other electronic crimes, then we are unlikely to hear about it until a whitehat stumbles across the blackhat's activities. Rather like The Cuckoo's Egg or the recent complex exploit involving IE and the MS Help tool.
This "black hat" vs. other shade "hats" is unnecessarily polarising. A security researcher may, during the normal course of his employment, find a security vulnerability. Not talking about it could be a commercial advantage (if she does security audits, the discovery could potentially be used to gain access to otherwise closed portions of a customer's network) and not necessarily a sign of an evil mind.
Have any of your customers ever asked you for a traffic audit report showing every IP address that has ever sourced traffic to them or received traffic from them?
Surely this would be for comparison against their own logs of what they sent and received and not because they aren't logging their own very important data traffic? -- Niels.
participants (8)
- Dennis Dayman
- Eric Rescorla
- Michael.Dillon@radianz.com
- Niels Bakker
- Paul G
- Sean Donelan
- Steven M. Bellovin
- Valdis.Kletnieks@vt.edu