Security team successfully cracks SSL using 200 PS3's and MD5 flaw.
A team of security researchers and academics has broken a core piece of Internet technology. They made their work public at the 25th Chaos Communication Congress in Berlin today. The team was able to create a rogue certificate authority and use it to issue valid SSL certificates for any site they want. The user would have no indication that their HTTPS connection was being monitored/modified.

http://hackaday.com/2008/12/30/25c3-hackers-completely-break-ssl-using-200-p...
http://phreedom.org/research/rogue-ca/

--
[ Rodrick R. Brown ]
http://www.rodrickbrown.com
http://www.linkedin.com/in/rodrickbrown
A team of security researchers and academics has broken a core piece of Internet technology. They made their work public at the 25th Chaos Communication Congress in Berlin today. The team was able to create a rogue certificate authority and use it to issue valid SSL certificates for any site they want. The user would have no indication that their HTTPS connection was being monitored/modified.
http://hackaday.com/2008/12/30/25c3-hackers-completely-break-ssl-using-200-p... http://phreedom.org/research/rogue-ca/
That's a bit of a stretch. It doesn't seem that they've actually broken "a core piece of Internet technology." It's more like they've nibbled at a known potential problem enough to make it a real problem.

According to the quoted article, "They collected 30K certs from Firefox trusted CAs. 9K of them were MD5 signed. 97% of those came from RapidSSL." I've seen other discussions of the topic that suggest that a variety of CAs (one such discussion talked about "VeriSign resellers", and I believe RapidSSL ~== VeriSign) are vulnerable.

Anyways, I was under the impression that the whole purpose of the revocation capabilities of SSL was to deal with problems like this, and that a large part of the justification of the cost of an SSL certificate was the administrative burden associated with guaranteeing and maintaining the security of the chain.

It seems like the major browser vendors and anyone else highly reliant on SSL should be putting VeriSign and any other affected CAs on notice that their continued existence as trusted CAs in software distributions is dependent on their rapidly forcing customers to update their certificates, an obligation that they should have expected to undertake every now and then, even though they'll obviously not *want* to have to do that.

I'm aware that the VeriSign position is that their existing certificates are "not vulnerable" to "this attack," which I believe may be the case for some values of those terms. However, it is often the case that a limited-effectiveness example such as this is soon replaced by a more generally-effective exploit, and the second URL suggests to me that what VeriSign is saying may not be true anyways.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again."
- Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Fri, 2 Jan 2009, Joe Greco wrote:
Anyways, I was under the impression that the whole purpose of the revocation capabilities of SSL was to deal with problems like this, and
How to revoke the CA is actually in the file. The fake CA they created didn't have any revocation. MD5 is broken, don't use it for anything important.

The reason for their exercise is just as you said: they executed in practice what had been said to be possible since around 2004-2006. This is obviously needed for people to start paying attention.

--
Mikael Abrahamsson    email: swmike@swm.pp.se
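Mikael's "MD5 is broken" can be made concrete with a few lines of Python. This is a minimal illustration only; the data being hashed is a hypothetical stand-in for a to-be-signed certificate body, not anything from the researchers' actual attack:

```python
import hashlib

# Hypothetical stand-in for a to-be-signed certificate body.
tbs = b"CN=www.bank.example, serial=42"

# MD5 is no longer collision-resistant: it is practical to craft two
# different inputs with the same MD5 digest, so a CA's signature over
# the MD5 hash of one input also "validates" the colliding input.
print("md5:   ", hashlib.md5(tbs).hexdigest())

# A stronger hash (SHA-1 at the time of this thread; SHA-256 now, for
# which no practical collisions are known) closes that avenue.
print("sha256:", hashlib.sha256(tbs).hexdigest())
```

The attack hinges entirely on collisions, which is why the same MD5 primitive can still be acceptable in keyed settings, as discussed further down-thread.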
Hank Nussbacher wrote:
On Fri, 2 Jan 2009, Mikael Abrahamsson wrote:
MD5 is broken, don't use it for anything important.
You mean like for BGP neighbors? Wanna suggest an alternative? :-)
MD5 on BGP sessions has already been proven not to be that effective anyhow, for the purpose that it was intended for. I don't think these findings will make any difference there.

Kind regards,
Martin List-Petersen
--
Airwire - Ag Nascadh Pobal an Iarthar
http://www.airwire.ie
Phone: 091-865 968
On Sat, 3 Jan 2009, Hank Nussbacher wrote:
You mean like for BGP neighbors? Wanna suggest an alternative? :-)
Well, most likely MD5 is better than the alternative today, which is to run no authentication/encryption at all. But we should push whoever is developing these standards to go for SHA-1 or equivalent instead of MD5 in the longer term.

--
Mikael Abrahamsson    email: swmike@swm.pp.se
At 06:44 PM 03-01-09 +0100, Mikael Abrahamsson wrote:
On Sat, 3 Jan 2009, Hank Nussbacher wrote:
You mean like for BGP neighbors? Wanna suggest an alternative? :-)
Well, most likely MD5 is better than the alternative today which is to run no authentication/encryption at all.
But we should push whoever is developing these standards to go for SHA-1 or equivalent instead of MD5 in the longer term.
Who is working on this? I don't find anything here:
http://www.ietf.org/html.charters/idr-charter.html

All I can find is:
http://www.ietf.org/rfc/rfc2385.txt
http://www.ietf.org/rfc/rfc3562.txt
http://www.ietf.org/rfc/rfc4278.txt

Nothing on replacing MD5 for BGP.

-Hank
* Hank Nussbacher:
Who is working on this? I don't find anything here: http://www.ietf.org/html.charters/idr-charter.html
I think this belongs to the tcpm WG or the btns WG.
Yeap:
http://www.ietf.org/internet-drafts/draft-ietf-tcpm-tcp-auth-opt-02.txt

TCPM WG                                                      J. Touch
Internet Draft                                                USC/ISI
Obsoletes: 2385                                             A. Mankin
Intended status: Proposed Standard                Johns Hopkins Univ.
Expires: May 2009                                           R. Bonica
                                                     Juniper Networks
                                                     November 3, 2008

Rubens

On Sun, Jan 4, 2009 at 8:26 AM, Florian Weimer <fw@deneb.enyo.de> wrote:
* Hank Nussbacher:
Who is working on this? I don't find anything here: http://www.ietf.org/html.charters/idr-charter.html
I think this belongs to the tcpm WG or the btns WG.
There is a discussion of this going on in CFRG.

https://www.irtf.org/mailman/listinfo/cfrg

Regards
Marshall

On Jan 4, 2009, at 2:22 AM, Hank Nussbacher wrote:
At 06:44 PM 03-01-09 +0100, Mikael Abrahamsson wrote:
On Sat, 3 Jan 2009, Hank Nussbacher wrote:
You mean like for BGP neighbors? Wanna suggest an alternative? :-)
Well, most likely MD5 is better than the alternative today which is to run no authentication/encryption at all.
But we should push whoever is developing these standards to go for SHA-1 or equivalent instead of MD5 in the longer term.
Who is working on this? I don't find anything here: http://www.ietf.org/html.charters/idr-charter.html
All I can find is:
http://www.ietf.org/rfc/rfc2385.txt
http://www.ietf.org/rfc/rfc3562.txt
http://www.ietf.org/rfc/rfc4278.txt
Nothing on replacing MD5 for BGP.
-Hank
On Sun, Jan 4, 2009 at 9:37 AM, Marshall Eubanks <tme@multicasttech.com> wrote:
There is a discussion of this going on in CFRG.
sadly, and apropos I suppose, www.irtf.org is serving up a *.ietf.org ssl cert :( and the archives require membership to view them.

-chris
On Sun, Jan 4, 2009 at 11:40 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Sun, Jan 4, 2009 at 9:37 AM, Marshall Eubanks <tme@multicasttech.com> wrote:
There is a discussion of this going on in CFRG.
sadly, and apropos I suppose, www.irtf.org is serving up a *.ietf.org ssl cert :( and the archives require membership to view them.
oops I should have included the cert ...

subject=/O=*.ietf.org/CN=*.ietf.org/OU=Domain Control Validated
issuer=/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certificates.starfieldtech.com/repository/CN=Starfield Secure Certification Authority/serialNumber=10688435

with the chain:

Certificate chain
 0 s:/O=*.ietf.org/CN=*.ietf.org/OU=Domain Control Validated
   i:/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certificates.starfieldtech.com/repository/CN=Starfield Secure Certification Authority/serialNumber=10688435
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certificates.starfieldtech.com/repository/CN=Starfield Secure Certification Authority/serialNumber=10688435
   i:/C=US/O=Starfield Technologies, Inc./OU=Starfield Class 2 Certification Authority
 2 s:/C=US/O=Starfield Technologies, Inc./OU=Starfield Class 2 Certification Authority
   i:/L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=info@valicert.com
 3 s:/L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=info@valicert.com
   i:/L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=info@valicert.com

-chris
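The reason a *.ietf.org cert is no good for www.irtf.org can be sketched in a few lines. This is a simplified version of the RFC 2818-style hostname check browsers perform (real implementations handle more corner cases); the hostnames are just the ones from this thread:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Simplified RFC 2818-style check: '*' matches exactly one
    left-most DNS label; comparison is case-insensitive."""
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):
        return False
    # The left-most label may be a wildcard; the rest must match exactly.
    if p[0] != "*" and p[0] != h[0]:
        return False
    return p[1:] == h[1:]

# The *.ietf.org certificate matches IETF hosts...
print(hostname_matches("*.ietf.org", "www.ietf.org"))   # True
# ...but not www.irtf.org, so a browser should warn on this server.
print(hostname_matches("*.ietf.org", "www.irtf.org"))   # False
```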
Date: Sun, 04 Jan 2009 09:22:06 +0200 From: Hank Nussbacher <hank@efes.iucc.ac.il>
At 06:44 PM 03-01-09 +0100, Mikael Abrahamsson wrote:
On Sat, 3 Jan 2009, Hank Nussbacher wrote:
You mean like for BGP neighbors? Wanna suggest an alternative? :-)
Well, most likely MD5 is better than the alternative today which is to run no authentication/encryption at all.
But we should push whoever is developing these standards to go for SHA-1 or equivalent instead of MD5 in the longer term.
Who is working on this? I don't find anything here: http://www.ietf.org/html.charters/idr-charter.html
All I can find is:
http://www.ietf.org/rfc/rfc2385.txt
http://www.ietf.org/rfc/rfc3562.txt
http://www.ietf.org/rfc/rfc4278.txt
Nothing on replacing MD5 for BGP.
I don't see why this is an issue (today). As far as I understand it, the vulnerability in MD5 is that, with time and cycles, it is possible to create a collision where two files have the same MD5 hash, so the counterfeit cert would check as valid. For the MD5 signature on a TCP packet, this is not relevant. Am I missing something? (I will admit to not being a cryptography person, so I may totally misunderstand.)

I don't object to moving to a stronger hash, but, considering the expense and time involved, I'd suggest waiting for the new hash algorithm that the NIST challenge will hopefully provide.

In other words, stick to MD5 in places where it is not believed to be vulnerable and where converting to SHA-1 or SHA-256 would be expensive. Where it IS believed vulnerable, the cost/benefit ratio would have to determine when the conversion is justified. For X.509 certs, I believe the answer is clearly that it is justified and has been for at least 2 years.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net    Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
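Kevin's distinction is the key one: RFC 2385 uses MD5 as a *keyed* digest over each segment, not as a signature over attacker-chosen data. A rough Python sketch of the idea (field packing here is illustrative, not wire-exact):

```python
import hashlib
import struct

def tcp_md5_digest(src_ip: bytes, dst_ip: bytes, tcp_header: bytes,
                   payload: bytes, password: bytes) -> bytes:
    """Rough sketch of the RFC 2385 TCP MD5 option: MD5 over the
    pseudo-header, the TCP header (checksum zeroed), the payload,
    and the shared password appended last."""
    seg_len = len(tcp_header) + len(payload)
    # Pseudo-header: source/dest addresses, zero byte, protocol 6 (TCP), length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, seg_len)
    return hashlib.md5(pseudo + tcp_header + payload + password).digest()

# A collision attack requires the attacker to choose *both* colliding
# inputs; here the password is secret and the addresses/ports are fixed
# by the session, so collisions don't obviously help an off-path attacker.
d = tcp_md5_digest(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02",
                   b"\x00" * 20, b"UPDATE...", b"s3cret")
print(d.hex())
```

Forging a segment here would need something closer to a (second-)preimage attack on the keyed construction, which is not what the 25C3 work demonstrated.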
Hank Nussbacher wrote:
You mean like for BGP neighbors? Wanna suggest an alternative? :-)
tcp/md5 + gtsm (assuming directly connected peers) makes messing around with bgp sessions rather difficult. Filtering BGP packets at the edge and borders slightly more so. If you have CPU and sufficient quantities of administrivium to spare, you can use ipsec on your routers for these sessions.

The real issue is how to make compromising bgp sessions sufficiently difficult to make it an unattractive target. Given that the cost of getting write access to the DFZ is not really very high either technically or financially, I'd propose that while gtsm / md5 / filtering aren't perfect, they raise the bar high enough to make it not really worth someone's while trying to break them; and IPsec more so.

Nick
* Hank Nussbacher:
On Fri, 2 Jan 2009, Mikael Abrahamsson wrote:
MD5 is broken, don't use it for anything important.
You mean like for BGP neighbors?
Good point. However, as a defense against potential blind injection attacks, even an unhashed password in a TCP option would do the trick (at least in the non-IXP case, IXPs may pose different challenges).
Wanna suggest an alternative? :-)
Just switch on IPsec. 8-)
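Florian's point about blind injection can be quantified with back-of-the-envelope numbers. A Python sketch, where the window size and secret length are hypothetical figures purely for illustration:

```python
# Odds an off-path attacker faces per spoofed TCP segment. Figures are
# hypothetical: a 16 KB receive window within the 2^32 sequence space,
# and an 8-byte random secret carried in a TCP option.
window = 16 * 1024
seq_space = 2 ** 32

p_seq = window / seq_space   # a spoofed seq number must land in-window
p_secret = 1 / (2 ** 64)     # ...and the secret must also be guessed

print(f"in-window chance per packet: {p_seq:.2e}")
print(f"with secret required:        {p_seq * p_secret:.2e}")
```

Even a plaintext secret multiplies the blind attacker's work by 2^64, which is why an unhashed password "would do the trick" against purely blind attacks, though unlike a keyed digest it offers nothing against an on-path eavesdropper (the IXP case).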
On Fri, 02 Jan 2009 09:58:05 CST, Joe Greco said:
Anyways, I was under the impression that the whole purpose of the revocation capabilities of SSL was to deal with problems like this, and that a large part of the justification of the cost of an SSL certificate was the administrative burden associated with guaranteeing and maintaining the security of the chain.
What percentage of deployed browsers handle CRLs correctly?

Consider this snippet from the phreedom.org page (section 6.1):

"One interesting observation from our work is that the rogue certificate we have created is very hard to revoke using the revocation mechanism available in common browsers. There are two protocols for certificate revocation, CRL and OCSP. Until Firefox 3 and IE 7, certificate revocation was disabled by default. Even in the latest versions, the browsers rely on the certificate to include a URL pointing to a revocation server. Our rogue CA certificate had very limited space and it was impossible to include such a URL, which means that by default both Internet Explorer and Firefox are unable to find a revocation server to check our certificate against."

Hmm... so basically all deployed Firefox and IE either don't even try to do a CRL, or they ask the dodgy certificate "Who can I ask if you're dodgy?"

What's wrong with this picture? (Personally, I consider this a potentially bigger problem than the MD5 issue...)
On Fri, Jan 2, 2009 at 5:44 PM, <Valdis.Kletnieks@vt.edu> wrote:
Hmm... so basically all deployed FireFox and IE either don't even try to do a CRL, or they ask the dodgy certificate "Who can I ask if you're dodgy?"
Hmm. Don't the shipped-with-the-browser trusted root certificates include a CRL URL?
On Fri, 2 Jan 2009 17:53:55 +0100 "Terje Bless" <link@pobox.com> wrote:
On Fri, Jan 2, 2009 at 5:44 PM, <Valdis.Kletnieks@vt.edu> wrote:
Hmm... so basically all deployed FireFox and IE either don't even try to do a CRL, or they ask the dodgy certificate "Who can I ask if you're dodgy?"
Hmm. Don't the shipped-with-the-browser trusted root certificates include a CRL URL?
Every CA runs its own CRL server -- it has to be that way. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On 3/01/2009, at 6:06 AM, Steven M. Bellovin wrote:
On Fri, 2 Jan 2009 17:53:55 +0100 "Terje Bless" <link@pobox.com> wrote:
On Fri, Jan 2, 2009 at 5:44 PM, <Valdis.Kletnieks@vt.edu> wrote:
Hmm... so basically all deployed FireFox and IE either don't even try to do a CRL, or they ask the dodgy certificate "Who can I ask if you're dodgy?"
Hmm. Don't the shipped-with-the-browser trusted root certificates include a CRL URL?
Every CA runs its own CRL server -- it has to be that way.
But the engineered certificate won't be considered trusted if its whole chain back to the root isn't trusted, and one or more of the certificates in that chain should have been shipped with the browser and hopefully includes a CRL URL.

Although they won't want to, surely the roots should revoke their root certificates that issued MD5-signed certificates, and issue new root certificates for issuing SHA-1-signed certificates. Browsers would then stop trusting all the MD5-signed certificates due to them not having a trusted chain back to the root, assuming they bother to check all the certificates in the chain for revocation.

Of course, this will just make the browsers pop up dialog boxes which everyone will click OK on...

--
Jasper Bryant-Greene
Network Engineer, Unleash
ddi: +64 3 978 1222
mob: +64 21 129 9458
Of course, this will just make the browsers pop up dialog boxes which everyone will click OK on...
And brings us to an even more interesting question, since everything is trusting their in-browser root CAs and such. How trustable is the auto-update process? If one does provoke a mass-revocation of certificates and everyone needs to update their browsers... how do the auto-update daemons *know* that what they are getting is the real deal?

[I haven't looked into this, just bringing it up. I'm almost certain it's less secure than the joke that is SSL certification.]

Happy New Year!

Deepak
On Fri, 2 Jan 2009 15:49:24 -0500 Deepak Jain <deepak@ai.net> wrote:
Of course, this will just make the browsers pop up dialog boxes which everyone will click OK on...
And brings us to an even more interesting question, since everything is trusting their in-browser root CAs and such. How trustable is the auto-update process? If one does provoke a mass-revocation of certificates and everyone needs to update their browsers... how do the auto-update daemons *know* that what they are getting is the real deal?
[I haven't looked into this, just bringing it up. I'm almost certain it's less secure than the joke that is SSL certification.]
If done properly, that's actually an easier task: you build the update key into the browser. When it pulls in an update, it verifies that it was signed with the proper key.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
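The pinned-key scheme Steve describes, reduced to a toy. The sketch below uses deliberately tiny textbook RSA (the classic 3233/17/2753 example, with no padding) purely to show the verify-against-a-baked-in-key flow; real updaters use padded RSA or Ed25519 at realistic key sizes:

```python
import hashlib

# Toy textbook RSA, far too small to be secure -- illustration only.
n, e = 3233, 17     # public key: shipped baked into the browser binary
d = 2753            # private key: held only by the software vendor

update_blob = b"browser-update-v2.bin"  # hypothetical update payload
digest = int.from_bytes(hashlib.sha256(update_blob).digest(), "big") % n

# Vendor side: sign the digest of the update.
signature = pow(digest, d, n)

# Client side: recompute the digest and check it against the pinned
# key. No CA chain is consulted at all, which is the point.
accepted = pow(signature, e, n) == digest
print(accepted)
```

Of course, if the pinned private key itself is ever compromised, there is no in-band way to revoke it short of shipping a new binary, which is exactly the concern raised in the follow-ups.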
If done properly, that's actually an easier task: you build the update key into the browser. When it pulls in an update, it verifies that it was signed with the proper key.
If you build it into the browser, how do you revoke it when someone throws 2000 PS3s to crack it, or your hash, or your [pick algorithmic mistake here]?

Deepak
For IE and other things using CryptoAPI on Windows, this should be handled through the automagic root certificate update through Windows Update (if one hasn't disabled it), AFAIK. The question is really whether that mechanism requires a cert rooted at a Microsoft authority or not. The danger being that someone could use an intermediate CA rooted at an md5-signing CA and present a seemingly valid cert through that with the right common name.

Some other Microsoft things (i.e. KMCS) require certs rooted to a single specific root and not just *any* global root, so it's possible that the same is done for root certificate update blobs; however, I don't know for certain, and some research would need to be done. I don't think any of the MS issuing roots use md5, though.

- S

-----Original Message-----
From: Deepak Jain [mailto:deepak@ai.net]
Sent: Friday, January 02, 2009 4:14 PM
To: Steven M. Bellovin
Cc: NANOG
Subject: RE: Security team successfully cracks SSL using 200 PS3's and MD5 flaw.
If done properly, that's actually an easier task: you build the update key into the browser. When it pulls in an update, it verifies that it was signed with the proper key.
If you build it into the browser, how do you revoke it when someone throws 2000 PS3s to crack it, or your hash, or your [pick algorithmic mistake here]. Deepak
On Fri, 2 Jan 2009 16:13:45 -0500 Deepak Jain <deepak@ai.net> wrote:
If done properly, that's actually an easier task: you build the update key into the browser. When it pulls in an update, it verifies that it was signed with the proper key.
If you build it into the browser, how do you revoke it when someone throws 2000 PS3s to crack it, or your hash, or your [pick algorithmic mistake here].
If you use bad crypto, you lose no matter what. If you use good crypto, 2,000,000,000 PS3s won't do the job. --Steve Bellovin, http://www.cs.columbia.edu/~smb
If you use bad crypto, you lose no matter what. If you use good crypto, 2,000,000,000 PS3s won't do the job.
Even if you use good crypto, and someone steals your key (say, someone who previously had access), you need a way to reliably, completely, revoke it. This has been a problem with SSL since its [implementation] inception. Lots of math (crypto) is good on paper and fails at the implementation stage.

Deepak
On Fri, 02 Jan 2009 09:58:05 CST, Joe Greco said:
Anyways, I was under the impression that the whole purpose of the revocation capabilities of SSL was to deal with problems like this, and that a large part of the justification of the cost of an SSL certificate was the administrative burden associated with guaranteeing and maintaining the security of the chain.
What percentage of deployed browsers handle CRL's correctly?
Consider this snippet from the phreedom.org page (section 6.1):
"One interesting observation from our work is that the rogue certificate we have created is very hard to revoke using the revocation mechanism available in common browsers. There are two protocols for certificate revocation, CRL and OCSP. Until Firefox 3 and IE 7, certificate revocation was disabled by default. Even in the latest versions, the browsers rely on the certificate to include a URL pointing to a revocation server. Our rogue CA certificate had very limited space and it was impossible to include such a URL, which means that by default both Internet Explorer and Firefox are unable to find a revocation server to check our certificate against."
Hmm... so basically all deployed FireFox and IE either don't even try to do a CRL, or they ask the dodgy certificate "Who can I ask if you're dodgy?"
What's wrong with this picture? (Personally, I consider this a potentially bigger problem than the MD5 issue...)
I suppose I wasn't sufficiently clear.

It seems that part of the proposed solution is to get people to move from MD5-signed to SHA1-signed. There will be a certain amount of resistance. What I was suggesting was the use of the revocation mechanism as part of the "stick" (think carrot-and-stick) in a campaign to replace MD5-based certs. If there is a credible threat to MD5-signed certs, then forcing their retirement would seem to be a reasonable reaction, but everyone here knows how successful "voluntary" conversion strategies typically are. Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)

As for the points raised in your message, yes, there are additional problems with clients that have not taken this seriously. It is, however, one thing to have locks on your door that you do not lock, and another thing entirely not to have locks (and therefore completely lack the ability to lock). I hope that there is some serious thought going on in the browser groups about this sort of issue.

We cannot continue to justify security failure on the basis that a significant percentage of the clients don't support it, or are broken in their support. That's an argument for fixing the clients.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again."
- Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On 2 Jan 2009, at 12:33, Joe Greco wrote:
We cannot continue to justify security failure on the basis that a significant percentage of the clients don't support it, or are broken in their support. That's an argument for fixing the clients.
At a more basic level, though, isn't failure guaranteed for these kind of clients (web browsers) so long as users are conditioned to click OK/ Continue for every SSL certificate failure that is reported to them? If I was attempting a large-scale man-in-the-middle attack, perhaps I'd be happier to do no work and intercept 5% of sessions (those who click OK on a certificate that is clearly bogus) than I would to do an enormous amount of work and intercept 100% (those who would see no warnings). And surely 5% is a massive under-estimate. Joe
On 2 Jan 2009, at 12:33, Joe Greco wrote:
We cannot continue to justify security failure on the basis that a significant percentage of the clients don't support it, or are broken in their support. That's an argument for fixing the clients.
At a more basic level, though, isn't failure guaranteed for these kind of clients (web browsers) so long as users are conditioned to click OK/ Continue for every SSL certificate failure that is reported to them?
Yes. This is a major problem.
If I was attempting a large-scale man-in-the-middle attack, perhaps I'd be happier to do no work and intercept 5% of sessions (those who click OK on a certificate that is clearly bogus) than I would to do an enormous amount of work and intercept 100% (those who would see no warnings). And surely 5% is a massive under-estimate.
Yet I do not particularly wish to ignore the problem, just because we do not have a completely comprehensive solution to the problem that solves every case and prevents every mistake. The Firefox changes to really draw attention to certificate issues are, regardless of what people have said about "usability" and "practicality," an important step.

However, there's something else being highlighted here. SSL certificates have a major failing in that it is really spectacularly annoying and difficult for some people to acquire them, and/or the value in paying more than a trivial sum (or any sum) is hard to justify, etc. For example, I have absolutely no desire to pay even a modest $15/year per device to get all my various networking devices to have legitimate SSL certificates; instead, we run our own local CA and import our root CA cert into browsers. It's cheaper, *more* secure, etc. Nobody but us will be logging into our devices, and our browsers have the local root CA added. Now, many sites just don't see the need, and self-signed certs are the result.

This would seem to point out some critical shortcomings in the current SSL system; these shortcomings are not necessarily technological, but rather social/psychological. We need the ability for Tom, Dick, or Harry to be able to crank out an SSL cert with a minimum of fuss or cost; having to learn the complexities of SSL is itself a "fuss" which has significantly and negatively impacted Internet security.

Somehow, we managed to figure out how to do this with PGP and keysigning, but it all fell apart (I can hear the "it doesn't scale" already) with SSL.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again."
- Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Fri, Jan 02, 2009 at 15:33:05 -0600, Joe Greco wrote:
This would seem to point out some critical shortcomings in the current SSL system; these shortcomings are not necessarily technological, but rather social/psychological. We need the ability for Tom, Dick, or Harry to be able to crank out an SSL cert with a minimum of fuss or cost; having to learn the complexities of SSL is itself a "fuss" which has significantly and negatively impacted Internet security.
Somehow, we managed to figure out how to do this with PGP and keysigning, but it all fell apart (I can hear the "it doesn't scale" already) with SSL.
If we had DNSSEC, we could do away with SSL CAs entirely. The owner of each domain or host could publish a self-signed cert in a TXT RR, and the DNS chain of trust would be the only form of validation needed.
On 2009-01-05, at 15:18, Jason Uhlenkott wrote:
If we had DNSSEC, we could do away with SSL CAs entirely. The owner of each domain or host could publish a self-signed cert in a TXT RR,
... or even in a CERT RR, as I heard various clever people talking about in some virtual hallway the other day. <http://www.isi.edu/in-notes/rfc2538.txt>.
and the DNS chain of trust would be the only form of validation needed.
Joe
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put? randy
On 2009-01-05, at 15:47, Randy Bush wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
If I can get secure answers to "www.bank.example IN CERT?" and "www.bank.example IN A?" then perhaps when I connect to www.bank.example:443 I can decide to trust the certificate presented by the server based on the trust anchor I extracted from the DNS, rather than whatever trust anchors were bundled with my browser. That presumably would mean that the organisation responsible for bank.example could run their own CA and publish their own trust anchor, without having to buy that service from one of the traditional CA companies. No doubt there is more to it than that. I don't know anything much about X.509. Joe
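A minimal sketch of the comparison Joe describes (this idea later became DANE/TLSA, RFC 6698): trust the presented certificate iff its fingerprint matches one published, DNSSEC-signed, in the owner's zone. All names and data below are hypothetical stand-ins; real DANE matches on more than a raw SHA-256 of the DER:

```python
import hashlib

def cert_matches_dns_anchor(presented_cert_der: bytes,
                            dns_fingerprint_hex: str) -> bool:
    """Accept the server's certificate iff its SHA-256 fingerprint
    matches the record fetched (with DNSSEC validation) from the
    owner's zone -- no browser-bundled CA list involved."""
    fp = hashlib.sha256(presented_cert_der).hexdigest()
    return fp == dns_fingerprint_hex.lower()

# Hypothetical stand-ins: the DER bytes the server presented, and the
# fingerprint record published for www.bank.example.
presented = b"<DER-encoded certificate bytes>"
published = hashlib.sha256(presented).hexdigest()

print(cert_matches_dns_anchor(presented, published))           # True
print(cert_matches_dns_anchor(b"<attacker cert>", published))  # False
```

The security of the whole scheme then rests on the DNSSEC chain of trust rather than on the browser's root store, which is exactly the trust-model substitution Randy questions below.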
On 09.01.06 05:59, Joe Abley wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
If I can get secure answers to "www.bank.example IN CERT?" and "www.bank.example IN A?" then perhaps when I connect to www.bank.example:443 I can decide to trust the certificate presented by the server based on the trust anchor I extracted from the DNS, rather than whatever trust anchors were bundled with my browser.
That presumably would mean that the organisation responsible for bank.example could run their own CA and publish their own trust anchor, without having to buy that service from one of the traditional CA companies.
No doubt there is more to it than that. I don't know anything much about X.509.
x.509 is not the issue. it is your assumption that dns trust is formally transferable to ssl/tls cert trust.

to use your example, the contractor who serves dns for www.bank.example could insert a cert and then fake the web site having (a child of) that cert. whereas, if the site had its cert a descendant of the ca for all banks, this attack would fail.

and i am not interested in quibbling about banks and who issues root cas. the point is that there are two different trust models here, and trust is not transitive.

but then again, i have not even had coffee yet this morning.

randy
On Tue, 06 Jan 2009 06:09:34 +0900, Randy Bush said:
to use your example, the contractor who serves dns for www.bank.example could insert a cert and then fake the web site having (a child of) that cert. whereas, if the site had its cert a descendant of the ca for all banks, this attack would fail.
All you've done *there* is transfer the trust from the contractor to the company that's the "ca for the bank". Yes, the ca-for-banks.com has a vested interest in making sure none of its employees go rogue and do something naughty - but so does the DNS contractor. One could equally well argue that a site using the DNS for certs would be immune to an attack on a CA.
On 09.01.06 05:59, Joe Abley wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
If I can get secure answers to "www.bank.example IN CERT?" and "www.bank.example IN A?" then perhaps when I connect to www.bank.example:443 I can decide to trust the certificate presented by the server based on the trust anchor I extracted from the DNS, rather than whatever trust anchors were bundled with my browser.
That presumably would mean that the organisation responsible for bank.example could run their own CA and publish their own trust anchor, without having to buy that service from one of the traditional CA companies.
No doubt there is more to it than that. I don't know anything much about X.509.
x.509 is not the issue. it is your assumption that dns trust is formally transferable to ssl/tls cert trust.
to use your example, the contractor who serves dns for www.bank.example could insert a cert and then fake the web site having (a child of) that cert. whereas, if the site had its cert a descendant of the ca for all banks, this attack would fail.
and i am not interested in quibbling about banks and who issues root cas. the point is that there are two different trust models here, and trust is not transitive.
Sure it is. At least, sometimes it is.

One of the problems I mentioned previously was that there's such an amount of fuss involved in obtaining SSL certs for relatively-low-value uses, and the end result is that many sites self-sign or simply don't bother. In the case where I've hosted a box with $BigHosterInc, and they've got DNS control of my zones, and they've got hands on the physical box(es), it becomes difficult to determine just how to prevent a bad actor at $BigHosterInc from doing malicious things.

On the flip side, there is very clearly value in differentiating between "a certificate that merely guarantees that the communications between the server and your client is secure" and "not only that, but we certify that you are talking to a FooBank-owned web server."

Trust is all relative. I might trust you, Randy Bush, in some particular way. But if a group of gunmen storm your home and force you to reveal some bit of confidential data I've given to you, is my trust misplaced, or is it simply that there are necessarily some limits and risks in sharing with you that confidential data? What is the difference if the information is something that gets someone killed, vs information that merely results in my company's business plans being known to a competitor?

With that in mind, there could certainly be great uses for delegating some forms of trust through the DNS chain. Not all, though, not all. Banks are a good example of the circumstance where you'd want separation.
but then again, i have not even had coffee yet this morning.
Then have some coffee. ;-) ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Tue, Jan 06, 2009 at 06:09:34 +0900, Randy Bush wrote:
to use your example, the contractor who serves dns for www.bank.example could insert a cert and then fake the web site having (a child of) that cert. whereas, if the site had its cert a descendant of the ca for all banks, this attack would fail.
To be pedantic, it'd have to be the contractor who holds the signing key for the bank.example zone (which may be a separate entity from whoever has operational control of the nameservers).

You're correct that this proposal treats control of a DNS zone as a strong proof of identity, but I'd argue that that's the case already -- whoever controls the zone can easily get a CA to issue them a cert which is valid for the host "www.bank.example". Whether the organization name is "Example Bancorp" or "DomainSquatters'R'Us, Inc." is irrelevant, since nobody ever looks at that.

I'd go so far as to argue that the hostname is the proper *definition* of identity in this context. The client identifies the destination it wishes to connect to by hostname, not by organization name. The purpose of the cert ought to be to ensure that we're talking to the host identified by that hostname (according to a necessarily trusted DNS). Ensuring that the hostname belongs to someone the user really wants to speak to is an orthogonal problem which is impossible to solve without a clueful user in the loop, and at which the current model is failing miserably.
Randy Bush wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
randy
It wouldn't, which is why the original suggestion is a bad idea. They're different issues (finding the actual address of the server you want to talk to vs. authenticating that the server is the server you want to talk to), and the trust doesn't transfer for multiple reasons.

Mostly it isn't a good idea because there's a big "too many eggs in one basket" problem here... compromise of the DNS root keys would not only reduce address lookups to their current level of insecurity (which still works much of the time for many people), but would also make inserting fake self-signed certs trivial.

This is nearly as bad as the argument I've seen that if we had DNSSEC we wouldn't even need SSL's authentication, because you'd be sure you were talking to the right server (never mind that there are demonstrated examples of just how easy it is to reroute someone else's packets from far away). Of course we could secure the entire routing system as well...

Matthew Kaufman
On 01/05/09 12:47, Randy Bush wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
Because I have to trust the DNS anyway. If the DNS redirects my users to a bad site, they may not notice that they are actually entering their personal information into the perfectly-SSL-secured www.bankofamerca.com.

Given the willingness of some CAs (which are trusted by browsers) to give out certs with no verification at all[1], I am not sure there is much to be trusted in the current CA-cartel arrangement, with the exception of EV certs. So banks can continue to use the equivalent of EV certs, and the rest of us who don't need an extra layer of trust can switch to using root certs in the DNS secured via DNSSEC. The trust hierarchy is already there.

I agree that there are two different trust models, one of which I am required to trust and the other of which I don't trust at all.

michael

[1] http://www.theregister.co.uk/2008/12/29/ca_mozzilla_cert_snaf/
On 2009/01/05 10:47 PM Randy Bush wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
I must also be slow. Can someone tell me how DNSSEC is supposed to encrypt my TCP/IP traffic?
In message <4962E096.7070409@karnaugh.za.net>, Colin Alston writes:
On 2009/01/05 10:47 PM Randy Bush wrote:
perhaps i am a bit slow. but could someone explain to me how trust in dns data transfers to trust in an http partner and other uses to which ssl is put?
I must also be slow. Can someone tell me how DNSSEC is supposed to encrypt my TCP/IP traffic?
DNSSEC allows you to go from DNS name -> CERT in a secure manner. The application then checks that the cert used to establish the SSL session is one from the CERT RRset.

Basically, when you pay your $70 or whatever for the cert, you are asking the CA to assert that you have the right to use the domain name. It's expensive because they are not part of the existing DNS trust relationship set up when the domain was delegated in the first place. The natural place to look for DNS trust is in the DNS.

Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: Mark_Andrews@isc.org
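The application-side check Mark describes can be sketched in a few lines. This is an illustrative sketch only: the function name is made up, and it assumes the host's CERT RRset has already been fetched and DNSSEC-validated by a resolver. The client then simply checks that the certificate offered in the TLS handshake appears in the published RRset.

```python
import hashlib

def cert_in_rrset(presented_der: bytes, cert_rrset: list) -> bool:
    """Hypothetical check: does the certificate presented during the
    TLS handshake appear in the (DNSSEC-validated) CERT RRset for the
    host?  Both sides are compared by SHA-256 fingerprint of the DER
    encoding, so ordering and duplicates in the RRset don't matter."""
    presented_fp = hashlib.sha256(presented_der).digest()
    return any(hashlib.sha256(rr).digest() == presented_fp
               for rr in cert_rrset)

# Toy usage with placeholder byte strings standing in for DER certs:
rrset = [b"der-of-old-cert", b"der-of-current-cert"]
print(cert_in_rrset(b"der-of-current-cert", rrset))   # True
print(cert_in_rrset(b"der-of-rogue-cert", rrset))     # False
```

Note that this authenticates the endpoint only; channel encryption still comes from the SSL/TLS handshake itself.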
Joe Abley <jabley@hopcount.ca> writes:
On 2009-01-05, at 15:18, Jason Uhlenkott wrote:
If we had DNSSEC, we could do away with SSL CAs entirely. The owner of each domain or host could publish a self-signed cert in a TXT RR,
... or even in a CERT RR, as I heard various clever people talking about in some virtual hallway the other day. <http://www.isi.edu/in-notes/rfc2538.txt>.
i wasn't clever but i was in that hallway. it's more complicated than RFC 2538, but there does seem to be a way forward involving SSL/TLS (to get channel encryption) but where a self-signed key could be verified using a CERT RR (to get endpoint identity authentication). the attacks recently have been against MD5 (used by some X.509 CA's) and against an X.509 CA's identity verification methods (used at certificate granting time). no recent attack has shaken my confidence in SSL/TLS negotiation or encryption, but frankly i'm a little worried about nondeployability of X.509 now that i see what the CA's are doing operationally when they start to feel margin pressure and need to keep volume up + costs down. i don't have a specific proposal. (yet.) but i'm investigating, and i recommend others do likewise. -- Paul Vixie
In message <20090105201859.GC15107@ferrum.uhlenkott.net>, Jason Uhlenkott write s:
On Fri, Jan 02, 2009 at 15:33:05 -0600, Joe Greco wrote:
This would seem to point out some critical shortcomings in the current SSL system; these shortcomings are not necessarily technological, but rather social/psychological. We need the ability for Tom, Dick, or Harry to be able to crank out a SSL cert with a minimum of fuss or cost; having to learn the complexities of SSL is itself a "fuss" which has significantly and negatively impacted Internet security.
Somehow, we managed to figure out how to do this with PGP and keysigning, but it all fell apart (I can hear the "it doesn't scale" already) with SSL.
If we had DNSSEC, we could do away with SSL CAs entirely. The owner of each domain or host could publish a self-signed cert in a TXT RR, and the DNS chain of trust would be the only form of validation needed.
Or one could use the CERT to publish a cert :-) Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: Mark_Andrews@isc.org
Something worth noting. I am not sure about Firefox, but with IE 7 (and IE 6 when CRL validation is enabled), when the browser encounters a revoked certificate, it does not present the usual "yes/no" box. Instead, one gets a message basically saying "certificate is revoked, you can't continue, period". Now, if the user is smart enough they can go into the advanced settings and turn off CRL validation and get around the error. Though not bullet-proof, that is better than just a "yes/no" box.

In a perfect world, all certificate check failures would result in a non-bypassable error message. But the widespread nature of self-signed certs and the lack of knowledge of many tech staff would make this a very hard position for any web browser vendor to swallow. In the near term, it would be nice to see browsers start to implement mandatory CRL checking for all non-self-signed/root-level certificates, ensuring that each tier of the PKI has a time/signature-valid CRL published (note, many root certificates do not publish CRL paths for themselves, hence the exception for them and self-signed certs).

Speaking of this attack specifically: publishing a CRL that would revoke this certificate would be basically useless, since the CRL path used to validate a certificate is contained in the certificate itself, not in the issuing CA's certificate, and a potential attacker would have little reason to point to a CRL that would incriminate them. (Note, there are a *few* applications that actually do mandate strict CRL checking, and thereby require the certificate to point to a valid CRL signed by the issuing CA, though they are not very common.)

My $0.02, Adam Stasiniewicz -----Original Message----- From: Joe Abley [mailto:jabley@hopcount.ca] Sent: Friday, January 02, 2009 11:40 AM To: Joe Greco Cc: nanog@nanog.org Subject: Re: Security team successfully cracks SSL using 200 PS3's and MD5 On 2 Jan 2009, at 12:33, Joe Greco wrote:
We cannot continue to justify security failure on the basis that a significant percentage of the clients don't support it, or are broken in their support. That's an argument for fixing the clients.
At a more basic level, though, isn't failure guaranteed for these kind of clients (web browsers) so long as users are conditioned to click OK/ Continue for every SSL certificate failure that is reported to them? If I was attempting a large-scale man-in-the-middle attack, perhaps I'd be happier to do no work and intercept 5% of sessions (those who click OK on a certificate that is clearly bogus) than I would to do an enormous amount of work and intercept 100% (those who would see no warnings). And surely 5% is a massive under-estimate. Joe
Joe Greco wrote:
[ .... ]
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
As for the points raised in your message, yes, there are additional problems with clients that have not taken this seriously. It is, however, one thing to have locks on your door that you do not lock, and another thing entirely not to have locks (and therefore completely lack the ability to lock). I hope that there is some serious thought going on in the browser groups about this sort of issue.
[ ... ]
... JG
F Y I, see: SSL Blacklist 4.0 - for a Firefox extension able to detect 'bad' certificates @ http://www.codefromthe70s.org/sslblacklist.aspx Best.
On 2-Jan-09, at 9:56 AM, Robert Mathews (OSIA) wrote:
Joe Greco wrote:
[ .... ]
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
As for the points raised in your message, yes, there are additional problems with clients that have not taken this seriously. It is, however, one thing to have locks on your door that you do not lock, and another thing entirely not to have locks (and therefore completely lack the ability to lock). I hope that there is some serious thought going on in the browser groups about this sort of issue.
[ ... ]
... JG
F Y I, see:
SSL Blacklist 4.0 - for a Firefox extension able to detect 'bad' certificates @ http://www.codefromthe70s.org/sslblacklist.aspx
Best.
Snort rule to detect said... url: http://vrt-sourcefire.blogspot.com/2009/01/md5-actually-harmful.html

alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"POLICY Weak SSL OCSP response -- MD5 usage"; content:"content-type: application/ocsp-response"; content:"2A 86 48 86 F7 0D 01 01 05"; metadata: policy security-ips drop, service http; reference: url, www.win.tue.nl/hashclash/rogue-ca/; classtype: policy-violation; sid:1000001;)

cheers, --dr -- World Security Pros. Cutting Edge Training, Tools, and Techniques Vancouver, Canada March 16-20 2009 http://cansecwest.com London, U.K. May 27/28 2009 http://eusecwest.com pgpkey http://dragos.com/ kyxpgp
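The hex `content` match in the rule above is the DER encoding of a signature-algorithm OID. If you want to check for yourself which algorithm a given byte string names, a few lines decode it (the helper name below is made up; the OID arithmetic is standard X.690 base-128 decoding):

```python
def decode_oid(content: bytes) -> str:
    """Decode the content octets of a DER OBJECT IDENTIFIER into
    dotted-decimal form."""
    # First octet packs the first two arcs as 40*arc1 + arc2.
    arcs = [content[0] // 40, content[0] % 40]
    val = 0
    for byte in content[1:]:
        val = (val << 7) | (byte & 0x7F)
        if not byte & 0x80:        # high bit clear ends a base-128 arc
            arcs.append(val)
            val = 0
    return ".".join(map(str, arcs))

# 1.2.840.113549.1.1.4 = md5WithRSAEncryption
# 1.2.840.113549.1.1.5 = sha1WithRSAEncryption
print(decode_oid(bytes.fromhex("2A864886F70D010104")))  # 1.2.840.113549.1.1.4
print(decode_oid(bytes.fromhex("2A864886F70D010105")))  # 1.2.840.113549.1.1.5
```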
On Fri, 2 Jan 2009, Dragos Ruiu wrote:
www.win.tue.nl/hashclash/rogue-ca/; classtype: policy-violation; sid:1000001;)
You can't really use any snort rule to detect SHA-1 certs created by a fake authority created using the MD5 issue. Yes, this is a serious matter, but it hardly has any operational impact to speak of for users and none for NSPs. Gadi.
On 2-Jan-09, at 6:53 PM, Gadi Evron wrote:
Yes, this is a serious matter, but it hardly has any operational impact to speak of for users and none for NSPs.
Dunno. Last I checked NSPs had web servers too. :-P cheers, --dr -- World Security Pros. Cutting Edge Training, Tools, and Techniques Vancouver, Canada March 16-20 2009 http://cansecwest.com London, U.K. May 27/28 2009 http://eusecwest.com pgpkey http://dragos.com/ kyxpgp
On Fri, Jan 2, 2009 at 10:44 PM, Dragos Ruiu <dr@kyx.net> wrote:
On 2-Jan-09, at 6:53 PM, Gadi Evron wrote:
Yes, this is a serious matter, but it hardly has any operational impact to speak of for users and none for NSPs.
Dunno. Last I checked NSPs had web servers too. :-P
so, aside from 'get a re-issued cert signed SHA-1 from an approved CA that's SHA-1 signed as well' what's the recourse for an NSP? -chris
Dragos Ruiu wrote:
On 2-Jan-09, at 9:56 AM, Robert Mathews (OSIA) wrote:
Joe Greco wrote:
[ .... ]
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
As for the points raised in your message, yes, there are additional problems with clients that have not taken this seriously. It is, however, one thing to have locks on your door that you do not lock, and another thing entirely not to have locks (and therefore completely lack the ability to lock). I hope that there is some serious thought going on in the browser groups about this sort of issue.
[ ... ]
... JG
F Y I, see:
SSL Blacklist 4.0 - for a Firefox extension able to detect 'bad' certificates @ http://www.codefromthe70s.org/sslblacklist.aspx
Best.
Snort rule to detect said...
url: http://vrt-sourcefire.blogspot.com/2009/01/md5-actually-harmful.html
alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"POLICY Weak SSL OCSP response -- MD5 usage"; content:"content-type: application/ocsp-response"; content:"2A 86 48 86 F7 0D 01 01 05"; metadata: policy security-ips drop, service http; reference: url, www.win.tue.nl/hashclash/rogue-ca/; classtype: policy-violation; sid:1000001;)
cheers, --dr
-- World Security Pros. Cutting Edge Training, Tools, and Techniques Vancouver, Canada March 16-20 2009 http://cansecwest.com London, U.K. May 27/28 2009 http://eusecwest.com pgpkey http://dragos.com/ kyxpgp
Everyone seems to be stampeding to SHA-1..yet it was broken in 2005. So we trade MD5 for SHA-1? This makes no sense.
Would using the combination of both MD5 and SHA-1 raise the computational bar enough for now, or are there other good prospects for a harder to crack hash? On Sat, Jan 3, 2009 at 9:35 AM, William Warren < hescominsoon@emmanuelcomputerconsulting.com> wrote:
Dragos Ruiu wrote:
On 2-Jan-09, at 9:56 AM, Robert Mathews (OSIA) wrote:
Joe Greco wrote:
[ .... ]
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
As for the points raised in your message, yes, there are additional problems with clients that have not taken this seriously. It is, however, one thing to have locks on your door that you do not lock, and another thing entirely not to have locks (and therefore completely lack the ability to lock). I hope that there is some serious thought going on in the browser groups about this sort of issue.
[ ... ]
... JG
F Y I, see:
SSL Blacklist 4.0 - for a Firefox extension able to detect 'bad' certificates @ http://www.codefromthe70s.org/sslblacklist.aspx
Best.
Snort rule to detect said...
url: http://vrt-sourcefire.blogspot.com/2009/01/md5-actually-harmful.html
alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"POLICY Weak SSL OCSP response -- MD5 usage"; content:"content-type: application/ocsp-response"; content:"2A 86 48 86 F7 0D 01 01 05"; metadata: policy security-ips drop, service http; reference: url, www.win.tue.nl/hashclash/rogue-ca/; classtype: policy-violation; sid:1000001;)
cheers, --dr
-- World Security Pros. Cutting Edge Training, Tools, and Techniques Vancouver, Canada March 16-20 2009 http://cansecwest.com London, U.K. May 27/28 2009 http://eusecwest.com pgpkey http://dragos.com/ kyxpgp
Everyone seems to be stampeding to SHA-1..yet it was broken in 2005. So
we trade MD5 for SHA-1? This makes no sense.
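Dorn's combination idea above is easy to express (a sketch, not a recommendation): concatenate the two digests so a forger would need an input pair that collides under both functions at once. Note, though, that Joux's 2004 multicollision result shows that for iterated hashes such a concatenation is not much stronger than the stronger of its two halves.

```python
import hashlib

def combo_digest(data: bytes) -> bytes:
    """Concatenated MD5 || SHA-1 digest (16 + 20 = 36 bytes).
    Caveat: Joux (2004) showed that finding a simultaneous collision
    in two iterated hashes costs little more than a collision in the
    stronger one, so this buys less margin than it appears to."""
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()

print(len(combo_digest(b"to-be-signed certificate bytes")))  # 36
```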
On Jan 3, 2009, at 9:38 AM, Dorn Hetzel wrote:
Would using the combination of both MD5 and SHA-1 raise the computational bar enough for now,
I have never seen this recommended (and I do try and follow this).
or are there other good prospects for a harder to crack hash?
The Federal Information Processing Standard 180-2, Secure Hash Standard, specifies algorithms for computing five cryptographic hash functions: SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512.

SHA-256 is thought to be still safe, unlike SHA-1:

http://eprint.iacr.org/2008/271
http://www.schneier.com/blog/archives/2005/02/cryptanalysis_o.html

and its use is recommended by RFC 4509:

http://tools.ietf.org/html/rfc4509

So, I would use SHA-256 if possible. (SHA-224 is a truncation of -256; see RFC 3874.)

There is, BTW, a competition to find a replacement:

http://csrc.nist.gov/groups/ST/hash/sha-3/index.html

Regards Marshall
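All five FIPS 180-2 functions Marshall lists ship in stock OpenSSL and in most scripting languages, so comparing their output sizes takes only a few lines; for example, with Python's hashlib:

```python
import hashlib

data = b"example to-be-signed certificate bytes"
# MD5 included for comparison; it is not part of FIPS 180-2.
for name in ("md5", "sha1", "sha224", "sha256", "sha384", "sha512"):
    h = hashlib.new(name, data)
    print(f"{name:6s} {h.digest_size * 8:3d}-bit  {h.hexdigest()[:16]}...")
```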
On Sat, Jan 3, 2009 at 9:35 AM, William Warren < hescominsoon@emmanuelcomputerconsulting.com> wrote:
Dragos Ruiu wrote:
On 2-Jan-09, at 9:56 AM, Robert Mathews (OSIA) wrote:
Joe Greco wrote:
[ .... ]
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
As for the points raised in your message, yes, there are additional problems with clients that have not taken this seriously. It is, however, one thing to have locks on your door that you do not lock, and another thing entirely not to have locks (and therefore completely lack the ability to lock). I hope that there is some serious thought going on in the browser groups about this sort of issue.
[ ... ]
... JG
F Y I, see:
SSL Blacklist 4.0 - for a Firefox extension able to detect 'bad' certificates @ http://www.codefromthe70s.org/sslblacklist.aspx
Best.
Snort rule to detect said...
url: http://vrt-sourcefire.blogspot.com/2009/01/md5-actually-harmful.html
alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"POLICY Weak SSL OCSP response -- MD5 usage"; content:"content-type: application/ocsp-response"; content:"2A 86 48 86 F7 0D 01 01 05"; metadata: policy security-ips drop, service http; reference: url, www.win.tue.nl/hashclash/rogue-ca/; classtype: policy-violation; sid:1000001;)
cheers, --dr
-- World Security Pros. Cutting Edge Training, Tools, and Techniques Vancouver, Canada March 16-20 2009 http://cansecwest.com London, U.K. May 27/28 2009 http://eusecwest.com pgpkey http://dragos.com/ kyxpgp
Everyone seems to be stampeding to SHA-1..yet it was broken in 2005. So
we trade MD5 for SHA-1? This makes no sense.
On Sat, 03 Jan 2009 09:35:06 -0500 William Warren <hescominsoon@emmanuelcomputerconsulting.com> wrote:
Everyone seems to be stampeding to SHA-1..yet it was broken in 2005. So we trade MD5 for SHA-1? This makes no sense.
(a) SHA-1 was not broken as badly. The best attack is, as I recall, 2^63, which is computationally infeasible without special-purpose hardware. (b) Per a paper Eric Rescorla and I wrote, there's no usable alternative, since too many protocols (including TLS) don't negotiate hash functions before presenting certificates. In particular, this means that a web site can't use SHA-256 because (1) most clients won't support it; and (2) it can't tell which ones do. (Note that this argument applies just as much to combinations of hash functions -- anything that *the large majority of today's* browsers don't implement isn't usable.) These two points lead us to (c): security is a matter of economics, not algorithms. Switching now to something else loses more in connectivity or customers than you would lose from such an expensive attack. --Steve Bellovin, http://www.cs.columbia.edu/~smb
On Sat, Jan 3, 2009 at 10:49 AM, Steven M. Bellovin <smb@cs.columbia.edu> wrote:
On Sat, 03 Jan 2009 09:35:06 -0500 William Warren <hescominsoon@emmanuelcomputerconsulting.com> wrote:
Everyone seems to be stampeding to SHA-1..yet it was broken in 2005. So we trade MD5 for SHA-1? This makes no sense.
(a) SHA-1 was not broken as badly. The best attack is, as I recall, 2^63, which is computationally infeasible without special-purpose hardware.
special purpose? or lots of commodity? like the Amazon-EC2 example used in the cert issue? (or PS3s or...)
(b) Per a paper Eric Rescorla and I wrote, there's no usable alternative, since too many protocols (including TLS) don't negotiate hash functions before presenting certificates. In particular, this means that a web site can't use SHA-256 because (1) most clients won't support it; and (2) it can't tell which ones do. (Note that this argument applies just as much to combinations of hash functions -- anything that *the large majority of today's* browsers don't implement isn't usable.)
This is a function of an upgrade (firefox3.5 coming 'soon!') for browsers, and for OS's as well, yes? So, given a future flag-day (18 months from today no more MD5, only SHA-232323 will be used!!) browsers for the majority of the market could be upgraded. Certainly there are non-browsers out there (eudora, openssl, wget, curl..bittorrent-clients, embedded things) which either will lag more or break altogether.
These two points lead us to (c): security is a matter of economics, not algorithms. Switching now to something else loses more in connectivity or customers than you would lose from such an expensive attack.
only if not staged out with enough time to roll updates in first, right? -Chris
On Sat, 3 Jan 2009 12:31:53 -0500 "Christopher Morrow" <morrowc.lists@gmail.com> wrote:
On Sat, Jan 3, 2009 at 10:49 AM, Steven M. Bellovin <smb@cs.columbia.edu> wrote:
On Sat, 03 Jan 2009 09:35:06 -0500 William Warren <hescominsoon@emmanuelcomputerconsulting.com> wrote:
Everyone seems to be stampeding to SHA-1..yet it was broken in 2005. So we trade MD5 for SHA-1? This makes no sense.
(a) SHA-1 was not broken as badly. The best attack is, as I recall, 2^63, which is computationally infeasible without special-purpose hardware.
special purpose? or lots of commodity? like the Amazon-EC2 example used in the cert issue? (or PS3s or...)
No -- special-purpose chips, along the lines of Deep Crack (http://en.wikipedia.org/wiki/EFF_DES_cracker). Let's do the arithmetic. 'openssl speed sha1' on my desktop -- a 3.4 Ghz Dell -- manages 1583237 16-byte blocks in 2.92 seconds, or ~542204/second. Let's assume that for an attack to be economical, the calculations have to be completed within 30 days. My machine could do 1405B hashes in that time frame. But I need 2^63 of them, which means I need 6.5 million machines cooperating. Not impossible for BOINC, but I don't think that EC2 could handle it.
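Steve's back-of-the-envelope figures check out; redoing the arithmetic with his measured hash rate as the only input:

```python
rate = 1583237 / 2.92                 # 16-byte SHA-1 blocks/sec: ~542,204
per_machine = rate * 30 * 24 * 3600   # hashes one machine does in 30 days: ~1.4e12
machines = 2**63 / per_machine        # machines needed to reach 2^63 in 30 days
print(f"{rate:,.0f}/s, {per_machine:.3e} hashes per machine-month, "
      f"{machines / 1e6:.2f} million machines")
```

This lands at roughly 6.6 million cooperating machines, matching the estimate above.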
(b) Per a paper Eric Rescorla and I wrote, there's no usable alternative, since too many protocols (including TLS) don't negotiate hash functions before presenting certificates. In particular, this means that a web site can't use SHA-256 because (1) most clients won't support it; and (2) it can't tell which ones do. (Note that this argument applies just as much to combinations of hash functions -- anything that *the large majority of today's* browsers don't implement isn't usable.)
This is a function of an upgrade (firefox3.5 coming 'soon!') for browsers, and for OS's as well, yes? So, given a future flag-day (18 months from today no more MD5, only SHA-232323 will be used!!) browsers for the majority of the market could be upgraded. Certainly there are non-browsers out there (eudora, openssl, wget, curl..bittorrent-clients, embedded things) which either will lag more or break altogether.
Have you looked at the statistics on upgrades lately? Not a pretty picture... See, among others, http://www.ews.uiuc.edu/bstats/latest.html http://www.upsdell.com/BrowserNews/stat_trends.htm http://marketshare.hitslink.com/browser-market-share.aspx?qprid=2 http://www.techzoom.net/publications/insecurity-iceberg/index.en
These two points lead us to (c): security is a matter of economics, not algorithms. Switching now to something else loses more in connectivity or customers than you would lose from such an expensive attack.
only if not staged out with enough time to roll updates in first, right?
From all the data I've seen, very many machines are *never* upgraded, so the proper metric for "enough time" is "computer lifetime". Firefox 3 does handle SHA-256/384/512; I don't think IE7 does. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Christopher Morrow wrote:
This is a function of an upgrade (firefox3.5 coming 'soon!') for browsers, and for OS's as well, yes? So, given a future flag-day (18 months from today no more MD5, only SHA-232323 will be used!!) browsers for the majority of the market could be upgraded. Certainly there are non-browsers out there (eudora, openssl, wget, curl..bittorrent-clients, embedded things) which either will lag more or break altogether.
I think you might be downplaying the size of the problem here. X.509 and TLS/SSL isn't just used for browsers, but for a wide variety of places where there is a requirement for PKI based security. So when you talk about a flag day for dealing with SHA-X (where X != 1), have you considered the logistical problems of upgrading all those embedded devices around the world? The credit card terminals? The tiny CPE vpn units? The old machine in the corner which handles corporate sign-on, where the vendor has now gone bust and no-one has the source code. And the large web portal which had a whole bunch of local apache customisations based on apache 1.3.x and where the original developers left for greener pa$ture$, and no-one in-house really understands what they did any longer. Etc, etc. It's different if you have a protocol which allows parameter negotiation to deal with issues like this, but not so good when you don't. Nick
* Nick Hilliard:
I think you might be downplaying the size of the problem here. X.509 and TLS/SSL isn't just used for browsers, but for a wide variety of places where there is a requirement for PKI based security. So when you talk about a flag day for dealing with SHA-X (where X != 1), have you considered the logistical problems of upgrading all those embedded devices around the world?
They won't be affected by the flag day, because the flag day is set by the relying party (that is, the browser), not the CA. If you've got a real PKI deployment, by definition, you've got procedures to deal with sudden advances in published cryptanalysis (even if it involves posting guards at certain buildings, instead of relying on smartcards for access control). The problematic areas are those where cryptography is used to comply with some checklist (or for PR purposes), and not for its security properties. In those environments, algorithm changes can never justify the associated costs.
On Sat, Jan 3, 2009 at 1:41 PM, Nick Hilliard <nick@foobar.org> wrote:
Christopher Morrow wrote:
This is a function of an upgrade (firefox3.5 coming 'soon!') for browsers, and for OS's as well, yes? So, given a future flag-day (18 months from today no more MD5, only SHA-232323 will be used!!) browsers for the majority of the market could be upgraded. Certainly there are non-browsers out there (eudora, openssl, wget, curl..bittorrent-clients, embedded things) which either will lag more or break altogether.
I think you might be downplaying the size of the problem here. X.509 and
I wasn't, not intentionally.. I was trying to address the problem which the researchers harped on, and which seems like the hot-button for many folks: "oh my, someone can intercept https silently!!" I understand there are LOTS of things out there using certs for all manner of not-http things. I also understand that telling a browser class that it shouldn't accept anything but SHA-X seems workable. I suppose having the CA's kick out ONLY SHA-X is a bad plan, but ... maybe letting cert requestors select the hash function (not MD5) is better? (or a step in the right direction at least)
TLS/SSL isn't just used for browsers, but for a wide variety of places where there is a requirement for PKI based security. So when you talk about a flag day for dealing with SHA-X (where X != 1), have you considered the logistical problems of upgrading all those embedded devices around the world? The credit card terminals? The tiny CPE vpn units? The old
I had... yup.
machine in the corner which handles corporate sign-on, where the vendor has now gone bust and no-one has the source code. And the large web portal which had a whole bunch of local apache customisations based on apache 1.3.x and where the original developers left for greener pa$ture$, and no-one in-house really understands what they did any longer. Etc, etc.
It's different if you have a protocol which allows parameter negotiation to deal with issues like this, but not so good when you don't.
agreed, 100%. There are also lots of folks using certs internally for all manner of oddball reasons... signed on their own CA (perhaps chained to a 'real' CA, perhaps not). They'll have to be accommodated as well, of course. -chris
* Joe Greco:
It seems that part of the proposed solution is to get people to move from MD5-signed to SHA1-signed. There will be a certain amount of resistance. What I was suggesting was the use of the revocation mechanism as part of the "stick" (think carrot-and-stick) in a campaign to replace MD5-based certs. If there is a credible threat to MD5-signed certs, then forcing their retirement would seem to be a reasonable reaction, but everyone here knows how successful "voluntary" conversion strategies typically are.
A CA statement that they won't issue MD5-signed certificates in the future should be sufficient. There's no need to reissue old certificates, unless the CA thinks other customers have attacked it.
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
I doubt the NSA is interested in MITM attacks which can be spotted by comparing key material. 8-)
* Joe Greco:
It seems that part of the proposed solution is to get people to move from MD5-signed to SHA1-signed. There will be a certain amount of resistance. What I was suggesting was the use of the revocation mechanism as part of the "stick" (think carrot-and-stick) in a campaign to replace MD5-based certs. If there is a credible threat to MD5-signed certs, then forcing their retirement would seem to be a reasonable reaction, but everyone here knows how successful "voluntary" conversion strategies typically are.
A CA statement that they won't issue MD5-signed certificates in the future should be sufficient. There's no need to reissue old certificates, unless the CA thinks other customers have attacked it.
That would seem to be at odds with what the people who documented this problem believe.
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
I doubt the NSA is interested in MITM attacks which can be spotted by comparing key material. 8-)
Doubting that the NSA might be interested in any given technique is, of course, good for the NSA. Our national security people have been known to use imperfect interception technologies when it suits the task at hand. Do people here really so quickly forget things? There was a talk on Carnivore given in 2000 at NANOG 20, IIRC, and I believe that one of the instigating causes of that talk was problems that Earthlink had experienced when the FBI had deployed Carnivore there. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Fri, Jan 2, 2009 at 3:29 PM, Joe Greco <jgreco@ns.sol.net> wrote:
* Joe Greco: [snip]
Either we take the potential for transparent MitM attacks seriously, or we do not. I'm sure the NSA would prefer "not." :-)
I doubt the NSA is interested in MITM attacks which can be spotted by comparing key material. 8-)
Doubting that the NSA might be interested in any given technique is, of course, good for the NSA. Our national security people have been known to use imperfect interception technologies when it suits the task at hand. Do people here really so quickly forget things? There was a talk on Carnivore given in 2000 at NANOG 20, IIRC, and I believe that one of the instigating causes of that talk was problems that Earthlink had experienced when the FBI had deployed Carnivore there.
Naturally. The NSA isn't filled with theorists who want to get the job done the "right" way. They have a mission to fulfill, and they'll use whatever tool works to get it done.
Neil wrote:
Do people here really so quickly forget things? There was a talk on Carnivore given in 2000 at NANOG 20, IIRC, and I believe that one of the instigating causes of that talk was problems that Earthlink had experienced when the FBI had deployed Carnivore there.
Naturally. The NSA isn't filled with theorists who want to get the job done the "right" way. They have a mission to fulfill, and they'll use whatever tool works to get it done.
Just a slight point of order, here. Carnivore (and its successor, DCS1000) were developed and used by the FBI. The NSA uses other stuff. Let's get back to reality, anyway. -- Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. Brian W. Kernighan
Neil wrote:
Do people here really so quickly forget things? There was a talk on Carnivore given in 2000 at NANOG 20, IIRC, and I believe that one of the instigating causes of that talk was problems that Earthlink had experienced when the FBI had deployed Carnivore there.
Naturally. The NSA isn't filled with theorists who want to get the job done the "right" way. They have a mission to fulfill, and they'll use whatever tool works to get it done.
Just a slight point of order, here. Carnivore (and its successor, DCS1000) were developed and used by the FBI. The NSA uses other stuff. Let's get back to reality, anyway.
Both the FBI and NSA are our "national security people." Since there are not many visible examples of failures of NSA technology, and because such failures are more likely to be covered up, pointing out an obvious screwup that was more readily visible seems reasonable. As a further point of order, I expect that the NSA has access to any tools that the FBI has developed, if and when they ask for them, especially now, in our new and less compartmentalized homeland security infrastructure. Unless, of course, you're saying that you work for the NSA and you actually know this not to be the case, in which case, do tell. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Jan 2, 2009, at 3:29 PM, Joe Greco wrote:
* Joe Greco:
It seems that part of the proposed solution is to get people to move from MD5-signed to SHA1-signed. There will be a certain amount of resistance. What I was suggesting was the use of the revocation mechanism as part of the "stick" (think carrot-and-stick) in a campaign to replace MD5-based certs. If there is a credible threat to MD5-signed certs, then forcing their retirement would seem to be a reasonable reaction, but everyone here knows how successful "voluntary" conversion strategies typically are.
A CA statement that they won't issue MD5-signed certificates in the future should be sufficient. There's no need to reissue old certificates, unless the CA thinks other customers have attacked it.
That would seem to be at odds with what the people who documented this problem believe.
I do not wish to be rude, so don't think that's my intent--however, clarification is required here I believe. From section 7 of http://www.win.tue.nl/hashclash/rogue-ca/ : "An interesting question is whether CAs should revoke existing certificates signed using MD5. One may argue that the present attack scenario has in principle been possible since May 2007, and that therefore all certificates (or all CA certificates) signed with MD5 that have been issued after this date may have been compromised. Whether they really have been compromised is not relevant. What is relevant is that the relying party who needs to trust the certificate does not have a proper way of checking whether the certificate is to be trusted or not. One may even argue that all older certificates based on MD5 should be revoked, as for an attacker constructing rogue certificates it is easy to backdate them to any date in the past he likes, so any MD5-based certificate may be a forgery. On the other hand, one may argue that the likelihood of these scenarios is quite small, and that the cost of replacing many MD5-based certificates may be substantial, so that therefore the risks of continued use of existing MD5-based certificates may be seen as acceptable. Regardless, MD5 should no longer be used for new certificates." Note that they aren't actually recommending that all certs with MD5 signatures be replaced. The authors present two sides of the argument. The only absolute statement is that MD5 should not be used to sign _new_ certificates. This is because the attack doesn't allow the impersonation of the vulnerable CA; the attack merely creates a new intermediate CA that maintains the "chain of trust", so that certificates issued by the rogue intermediate CA will be trusted by most browsers. The weakness isn't that the vulnerable CA root certificate is signed by MD5, the weakness is that it uses MD5 to sign CSRs.
Since I'm probably not explaining this very well, a picture is worth a thousand words: http://www.win.tue.nl/hashclash/rogue-ca/images/certificate4.png Additionally, from section 8: "Question. Are all digital certificates/signatures broken? Answer. No. When digital certificates and signatures are based on secure cryptographic hash functions, our work yields no reason to doubt their security. Our result only applies when digital certificates are signed using the hash function MD5, which is known to be broken. With our method it is only possible to copy digital signatures based on MD5 between two specially constructed digital certificates. It is not possible to copy digital signatures based on MD5 from digital certificates unless the certificates are specially constructed. Even so, our result shows that MD5 is NOT suited for digital signatures. Certification Authorities should stop using the known broken MD5 and move to the widely available, more secure alternatives, such as SHA-2." and "Question. What should websites do that have digital certificates signed with MD5? Answer. Nothing at this point. Digital certificates legitimately obtained from all CAs can be believed to be secure and trusted, even if they were signed with MD5. Our method required the purchase of a specially crafted digital certificate from a CA and does not affect certificates issued to any other regular website." My apologies if you were commenting on some other aspect, or if my understanding is in some way flawed. -- bk CA cert: http://www.smtps.net/pub/smtps-dot-net-ca-2.pem
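Since the distinction above hinges on which hash the CA used for the signature, anyone can inspect a certificate directly. A quick sketch with the stock OpenSSL CLI (the file paths and CN are illustrative, not from the discussion):

```shell
# Generate a throwaway self-signed certificate, then inspect which hash
# algorithm produced its signature. A certificate from an affected CA
# would show "md5WithRSAEncryption" here instead of a SHA variant.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=test.invalid" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep 'Signature Algorithm'
```

The same `openssl x509 -noout -text` inspection works on any saved site certificate, which is how the researchers tallied which CAs were still issuing MD5-signed certs.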
* Brian Keefer:
My apologies if you were commenting on some other aspect, or if my understanding is in some way flawed.
I don't think so. There's a rule of thumb which is easy to remember: Never revoke anything just because some weak algorithm is involved. The rationale is that revocation is absolute and (usually) retroactive, but we generally want a more nuanced approach. If certain algorithms are too weak to be used, it is up to the relying party to decide whether they are fine in a particular case. On the other hand, replacing MD5-signed certificates in the browser PKI is costly, but the overhead is very finely dispersed (assuming that reissuing certificates has very little overhead at the CA). I think it's doable if the browser vendors could agree on a flag date after which MD5 signatures on certificates are no longer considered valid. (The implicit assumptions in that rule of thumb do not always apply. For instance, if weak RSA keys are discovered which occur with sufficiently high probability as the result of the standard key generating algorithms to pose a real problem, the public key may not reveal this property immediately, it may only be evident from the private key, or only after a rather expensive computation. In the latter case, we would be in very deep trouble.)
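A flag-date policy of the sort described above could be sketched as follows (the date and function are hypothetical illustrations, not any vendor's actual behavior):

```python
from datetime import date

# Hypothetical flag date after which relying parties stop accepting
# MD5 signatures on certificates -- an illustrative value only.
MD5_FLAG_DATE = date(2009, 7, 1)

def signature_acceptable(sig_hash_name: str, when: date) -> bool:
    """Decide whether a certificate's signature hash is acceptable
    at verification time `when` under the flag-date policy."""
    if sig_hash_name.lower() != "md5":
        return True  # SHA-1/SHA-2 signatures are unaffected by this rule
    return when < MD5_FLAG_DATE

assert signature_acceptable("sha1", date(2010, 1, 1))
assert signature_acceptable("md5", date(2009, 1, 1))    # before the flag date
assert not signature_acceptable("md5", date(2010, 1, 1))
```

The appeal of this design is that nothing is revoked: existing MD5-signed certificates simply age out of acceptability at a coordinated date, leaving the decision with the relying party rather than the CA.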
* Brian Keefer:
My apologies if you were commenting on some other aspect, or if my understanding is in some way flawed.
I don't think so.
There's a rule of thumb which is easy to remember: Never revoke anything just because some weak algorithm is involved. The rationale is that revocation is absolute and (usually) retroactive, but we generally want a more nuanced approach. If certain algorithms are too weak to be used, it is up to the relying party to decide whether they are fine in a particular case. On the other hand, replacing MD5-signed certificates in the browser PKI is costly, but the overhead is very finely dispersed (assuming that reissuing certificates has very little overhead at the CA). I think it's doable if the browser vendors could agree on a flag date after which MD5 signatures on certificates are no longer considered valid.
(The implicit assumptions in that rule of thumb do not always apply. For instance, if weak RSA keys are discovered which occur with sufficiently high probability as the result of the standard key generating algorithms to pose a real problem, the public key may not reveal this property immediately, it may only be evident from the private key, or only after a rather expensive computation. In the latter case, we would be in very deep trouble.)
Other faulty assumptions are that the "relying party" (usually part/ies/) are actually made aware, and actually make an informed decision, or that revocation is the first step in efforts to motivate replacement of a cert, which probably is exactly opposite what I have suggested... Rules of thumb about weakness of algorithms are suspect because things change over time; your rule of thumb above might have been applied to 40-bit encryption, but I don't see much 40-bit stuff around anymore. :-) The opinions on whether or not it is necessary to replace certs seem to vary depending on whose opinion you're listening to, but a relatively safe rule of thumb for this sort of security issue is to take the path that is most likely to avoid risk, which would seem to be replacing certs. To the extent that VeriSign is already doing this, it would seem that there is a certain level of agreement with that assessment. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Jan 4, 2009, at 12:05 PM, Joe Greco wrote:
The opinions on whether or not it is necessary to replace certs seem to vary depending on whose opinion you're listening to, but a relatively safe rule of thumb for this sort of security issue is to take the path that is most likely to avoid risk, which would seem to be replacing certs. To the extent that VeriSign is already doing this, it would seem that there is a certain level of agreement with that assessment.
I would attribute that much more to desire to avoid the risk of bad PR, rather than the risk that it's possible to clone existing certs. "SSL is cracked, VeriSign to blame!" was pretty much the top security story for several days. They had to do something to turn around the perception, despite accurate analysis and publications by organizations such as Microsoft. Perception is reality, and regardless of the technical merits, a significant number of people seemed to believe that any certificates that mentioned MD5 anywhere in them are at risk of some unknown, but really scary Badness(tm). I agree with VeriSign that offering to reissue certs is the smartest business decision they can make, considering their tagline is "The Value of Trust". I disagree that it was technically necessary. Reissuing existing certificates signed by MD5 accomplishes nothing. Participation is voluntary, so if someone had managed to create a rogue CA, they certainly would not voluntarily destroy it by having their cert reissued! Technically the only thing necessary to prevent this attack has already been done, and that is to stop issuing certs signed with MD5 so that no one else can create a rogue CA via this means. If they truly believed that there was a risk anyone else had done this already, they would need to revoke the CA cert, i.e. every vendor who shipped their CA cert in the default trusted issuer bundle would need to remove or invalidate it with a software update, but that would break _all_ the valid certificates signed by the CA. In order to do that, they would need to proactively contact every customer with a valid cert to make sure they were updated. What percentage of their customers do you think they would be able to reach (haven't changed contact information, etc)? How many application vendors would actually remove the old CA and add the new one in a timely manner? How many of those vendors' customers would actually upgrade to the new version?
So they've done what they need to in order to prevent future exploits, and obviously they aren't that worried that the exploit has actually been performed maliciously in the past. Offering to reissue existing certs is a PR smokescreen (although a necessary one). I think there's a huge fundamental misunderstanding. It seems that the popular belief is that it's possible to use an existing MD5 signature for any evil bits that you choose, which is not the case. The actual exploit in this case is the ability to "unlock" a normal certificate to make it a CA certificate. Of course phrasing it that way wouldn't be quite so sensational (and wouldn't have accomplished the researchers' goal of raising awareness of the weakness of MD5), so now we have mass misperception, which has become reality since anything that is published is automatically true. I'm not saying it's bad that people are shying away from MD5, I just like to be accurate. In any case, it has spawned some healthy discussions so I would say it was worthwhile. -- bk CA cert: http://www.smtps.net/pub/smtps-dot-net-ca-2.pem
"SSL is cracked, VeriSign to blame!" was pretty much the top security story for several days. They had to do something to turn around the perception, despite accurate analysis and publications by organizations such as Microsoft. Perception is reality, and regardless of the technical merits, a significant amount of people seemed to believe that any certificates that mentioned MD5 anywhere in them are at risk of some unknown, but really scary Badness(tm).
Perception is, sadly, not reality, no matter what you wish to argue. (Sorry!) For years, some people had a "perception" that DNS was reasonably safe and secure by virtue of the transaction ID and the difficulty in slipping in a bad update. Some of us were aware that increases in bandwidth and processor power would reduce the difficulty, and certainly the issue had been discussed in some detail even back in the 1990's. The "perception" of DNS security turned into the reality of Our-DNS-House-Is-On-Fire last year.
I agree with VeriSign that offering to reissue certs is the smartest business decision they can make, considering their tagline is "The Value of Trust". I disagree that it was technically necessary.
Reissuing existing certificates signed by MD5 accomplishes nothing.
Incorrect. As the number of MD5-signed certificates dwindles, the feasibility of removing or disabling support for MD5-signed certs increases. Of course that assumes the reissues are signed by SHA.
Participation is voluntary, so if someone had managed to create a rogue CA, they certainly would not voluntarily destroy it by having their cert reissued!
Of course.
Technically the only thing necessary to prevent this attack has already been done, and that is to stop issuing certs signed with MD5 so that no one else can create a rogue CA via this means.
Are we certain that existing certs cannot be subverted?
If they truly believed that there was a risk anyone else had done this already, they would need to revoke the CA cert, i.e. every vendor who shipped their CA cert in the default trusted issuer bundle would need to remove or invalidate it with a software update, but that would break _all_ the valid certificates signed by the CA. In order to do that, they would need to proactively contact every customer with a valid cert to make sure they were updated. What percentage of their customers do you think they would be able to reach (haven't changed contact information, etc)? How many application vendors would actually remove the old CA and add the new one in a timely manner? How many of those vendors' customers would actually upgrade to the new version?
I don't know. We've had fires before. Fires with less obvious solutions and higher costs-to-implement/fix.
So they've done what they need to in order to prevent future exploits, and obviously they aren't that worried that the exploit has actually been performed maliciously in the past. Offering to reissue existing certs is a PR smokescreen (although a necessary one).
I would disagree; we are simply *aware* that MD5 certs have been subverted in this particular way, but clearly this shows a weakness exists, and are you prepared to guarantee that there are no other ways to subvert the current MD5 system, possibly in a much different way? Getting rid of the bad crypto - and come on, it's crypto we have known for several years is bad - is not a PR smokescreen. It's a smart move. Why wait for something truly bad to happen?
I think there's a huge fundamental misunderstanding. It seems that the popular belief is that it's possible to use an existing MD5 signature for any evil bits that you choose, which is not the case. The actual exploit in this case is the ability to "unlock" a normal certificate to make it a CA certificate. Of course phrasing it that way wouldn't be quite so sensational (and wouldn't have accomplished the researchers' goal of raising awareness of the weakness of MD5), so now we have mass misperception, which has become reality since anything that is published is automatically true.
So, any current MD5-signed cert carries with it some vague risk that it could potentially be subverted. I'm ... failing ... to see the huge fundamental misunderstanding you refer to.
I'm not saying it's bad that people are shying away from MD5, I just like to be accurate.
In any case, it has spawned some healthy discussions so I would say it was worthwhile.
Certainly. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Sun, 04 Jan 2009 15:58:34 CST, Joe Greco said:
Technically the only thing necessary to prevent this attack has already been done, and that is to stop issuing certs signed with MD5 so that no one else can create a rogue CA via this means.
Are we certain that existing certs cannot be subverted?
The attack depends on being able to jigger up *two* certs that have the same MD5 hash. Therefore, attacking an existing cert would require either:
1) That the existing cert be one of a pair (in other words, somebody else already knew about the current attack and also did it); or
2) Somebody has found a way to cause a collision to a specified MD5 hash (which is still impractical, AFAIK).
If anybody has a subvertible cert, it's pretty safe to guess that they *know* they have such a cert, because they themselves *built* the cert that way.
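The collision-versus-second-preimage distinction above can be illustrated with a deliberately truncated hash as a toy stand-in (the certificate strings are invented). Against a 16-bit toy hash a second-preimage search succeeds in tens of thousands of tries; against the full 128 bits of MD5 it would take roughly 2^128 work, which is why case 2 remains impractical even though building colliding *pairs* (case 1) is cheap:

```python
import hashlib
import itertools

def toy_hash(data: bytes) -> str:
    # Truncated MD5: only 16 bits of output, so a second preimage is
    # findable in-process. Full MD5 keeps 128 bits, making the same
    # search hopeless, even though MD5 *collisions* are cheap to build.
    return hashlib.md5(data).hexdigest()[:4]

# An "existing" certificate body, as in case 2 above (contents invented).
existing_cert = b"CN=www.example.com, CA:FALSE, serial=1234"
target = toy_hash(existing_cert)

# Second-preimage search: find a *different* input with the same hash.
for n in itertools.count():
    rogue_cert = b"CN=rogue-ca.invalid, CA:TRUE, nonce=%d" % n
    if toy_hash(rogue_cert) == target:
        break

# Since only the digest is signed, a CA signature over existing_cert
# would verify equally well for rogue_cert.
assert rogue_cert != existing_cert
assert toy_hash(rogue_cert) == target
```

The published attack did not do this search; it constructed both members of the colliding pair itself, which is why an honestly generated pre-existing certificate is not at risk from it.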
* Joe Greco:
A CA statement that they won't issue MD5-signed certificates in the future should be sufficient. There's no need to reissue old certificates, unless the CA thinks other customers have attacked it.
That would seem to be at odds with what the people who documented this problem believe.
What do they believe? That the CA should reissue certificates even if the CA assumes that there haven't been other attacks? Or that the CA should not reissue, despite evidence of other attacks?
On 2009-01-02, at 09:04, Rodrick Brown wrote:
A team of security researchers and academics has broken a core piece of Internet technology. They made their work public at the 25th Chaos Communication Congress in Berlin today. The team was able to create a rogue certificate authority and use it to issue valid SSL certificates for any site they want. The user would have no indication that their HTTPS connection was being monitored/modified.
I read a comment somewhere else that while this is interesting, and good work, and well done, in practice it's much easier to social-engineer a certificate with a stolen credit card from a real CA than it is to create a fake CA. (I'd give proper attribution if I could remember who it was, but it put things into perspective for me at the time so I thought I'd share.) Joe
Joe Abley wrote:
On 2009-01-02, at 09:04, Rodrick Brown wrote:
A team of security researchers and academics has broken a core piece of Internet technology. They made their work public at the 25th Chaos Communication Congress in Berlin today. The team was able to create a rogue certificate authority and use it to issue valid SSL certificates for any site they want. The user would have no indication that their HTTPS connection was being monitored/modified.
I read a comment somewhere else that while this is interesting, and good work, and well done, in practice it's much easier to social-engineer a certificate with a stolen credit card from a real CA than it is to create a fake CA.
(I'd give proper attribution if I could remember who it was, but it put things into perspective for me at the time so I thought I'd share.)
It is. But this issue might open the door to man-in-the-middle attacks, which are much harder to mount with legitimately issued certificates. Issued certificates usually also incorporate a check that you control the domain, etc. With engineered certificates you can practically avoid that whole process. Kind regards, Martin List-Petersen -- Airwire - Ag Nascadh Pobal an Iarthar http://www.airwire.ie Phone: 091-865 968
On Fri, 2 Jan 2009, Joe Abley wrote:
On 2009-01-02, at 09:04, Rodrick Brown wrote:
A team of security researchers and academics has broken a core piece of Internet technology. They made their work public at the 25th Chaos Communication Congress in Berlin today. The team was able to create a rogue certificate authority and use it to issue valid SSL certificates for any site they want. The user would have no indication that their HTTPS connection was being monitored/modified.
I read a comment somewhere else that while this is interesting, and good work, and well done, in practice it's much easier to social-engineer a certificate with a stolen credit card from a real CA than it is to create a fake CA.
(I'd give proper attribution if I could remember who it was, but it put things into perspective for me at the time so I thought I'd share.)
My facebook status? :P
Rodrick Brown wrote:
A team of security researchers and academics has broken a core piece of Internet technology. They made their work public at the 25th Chaos Communication Congress in Berlin today. The team was able to create a rogue certificate authority and use it to issue valid SSL certificates for any site they want. The user would have no indication that their HTTPS connection was being monitored/modified.
http://hackaday.com/2008/12/30/25c3-hackers-completely-break-ssl-using-200-p... http://phreedom.org/research/rogue-ca/
-- [ Rodrick R. Brown ] http://www.rodrickbrown.com http://www.linkedin.com/in/rodrickbrown
SSL itself wasn't cracked; they simply exploited the known-vulnerable MD5 hashing. Another hashing method needs to be used.
SSL itself wasn't cracked; they simply exploited the known-vulnerable MD5 hashing. Another hashing method needs to be used.
The encryption algorithm wasn't hacked. Correct. Another hashing method may help. Yup. My problem is with the chain-of-trust and a lack of reasonable or reasonably reliable (pick) ways of revoking certificates. Deepak
participants (34)
- Brian Keefer
- Christopher Morrow
- Colin Alston
- Deepak Jain
- Dorn Hetzel
- Dragos Ruiu
- Etaoin Shrdlu
- Florian Weimer
- Gadi Evron
- Hank Nussbacher
- Jason Uhlenkott
- Jasper Bryant-Greene
- Joe Abley
- Joe Greco
- Kevin Oberman
- Mark Andrews
- Marshall Eubanks
- Martin List-Petersen
- Matthew Kaufman
- Michael Sinatra
- Mikael Abrahamsson
- Neil
- Nick Hilliard
- Paul Vixie
- Randy Bush
- Robert Mathews (OSIA)
- Rodrick Brown
- Rubens Kuhl Jr.
- Skywing
- Stasiniewicz, Adam
- Steven M. Bellovin
- Terje Bless
- Valdis.Kletnieks@vt.edu
- William Warren