Your assertion that using "bought" certificates provides any security benefit whatsoever assumes facts not in evidence. Given recent failures in this space, I would posit that the requirement to use certificates purchased from entities "under the thumb" of government control, clearly motivated only by profit, and with highly questionable moral and ethical standards represents a huge increase in the risk of passive attack and confidentiality failure where such risk did not previously exist.

-------- Original message --------
From: Jimmy Hess <mysidia@gmail.com>
To: Randy <nanog@afxr.net>
Cc: NANOG list <nanog@nanog.org>
Subject: Re: Gmail and SSL
On Sun, Dec 30, 2012 at 3:30 PM, Keith Medcalf <kmedcalf@dessus.com> wrote:
Your assertion that using "bought" certificates provides any security benefit whatsoever assumes facts not in evidence.
Given recent failures in this space, I would posit that the requirement to use certificates purchased from entities "under the thumb" of government control, clearly motivated only by profit, and with highly questionable moral and ethical standards represents a huge increase in the risk of passive attack and confidentiality failure where such risk did not previously exist.
backing up some: I think the problem being solved by requiring 'legitimate' certificates is stopping the obvious problem of MITM attacks, a la mallory-proxy. in the longer term, if the client can know that the server was supposed to present a cert with fingerprint XFOOBYFOOB, and it can see that fingerprint on the cert presented in the session, we all win, right?
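The fingerprint comparison Chris describes can be sketched with nothing but the Python standard library. This is a minimal illustration, not anyone's production pinning code; the host and pinned digest are placeholders the caller supplies:

```python
import hashlib
import hmac
import socket
import ssl

def fetch_leaf_der(host, port=443):
    """Fetch the server's leaf certificate in DER form. Chain
    validation is deliberately disabled here, because the point of
    pinning is to compare against a known fingerprint instead."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False      # must be cleared before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def pin_matches(der_bytes, pinned_sha256_hex):
    """True only if the presented cert hashes to the pinned value."""
    actual = hashlib.sha256(der_bytes).hexdigest()
    return hmac.compare_digest(actual, pinned_sha256_hex.lower())
```

If the client ships with the expected fingerprint, a MITM presenting any other cert (CA-signed or not) fails the `pin_matches` check, which is exactly the "we all win" property above.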
On 12/30/12, Keith Medcalf <kmedcalf@dessus.com> wrote:
Your assertion that using "bought" certificates provides any security benefit whatsoever assumes facts not in evidence.
I would say those claiming that certificates from a public CA provide no assurance of authentication of server identity greater than that of a self-signed one have the burden of proof: they must show that it is no less likely for an attempted forger to obtain a false "bought" certificate from a publicly trusted CA with an audited certification practices statement (a certificate improperly issued contrary to its CPS) than to create a false self-signed certificate themselves. That claim is certainly contrary to some of the basis on which web browser implementations of HTTPS and TLS rely in practice.

While there have been failures in that area, regarding some particular CAs and some particular certificates, the reported occurrences were sufficiently rare that one doubts "obtaining an improperly issued certificate from a widely trusted CA" is an easy feat for the most likely attackers to accomplish.

So I would be very interested in any data you have showing that a CA signature provides no additional assurance; especially when combined with a policy of requiring manual human verification of the certificate fingerprint, and manual human agreement that the CA's CPS is strict enough for this certificate usage, after all the automatic checks that it was properly signed by a well-known CA with an audited CPS statement, with the usage of the certificate key matching an allowed usage declared by the Type/EKU/CA attributes of the subject and issuer certs.

--
-JH
I would say those claiming that certificates from a public CA provide no assurance of authentication of server identity greater than that of a self-signed one have the burden of proof: they must show that it is no less likely for an attempted forger to obtain a false "bought" certificate from a publicly trusted CA with an audited certification practices statement (a certificate improperly issued contrary to its CPS) than to create a false self-signed certificate themselves.
Do you ever buy SSL certificates? For cheap certificates ($9 Geotrust, $8 Comodo, free Startcom, all accepted by Gmail), the entirety of the identity validation is to send an email message to an address associated with the domain, typically one of the WHOIS addresses or hostmaster@domain, and look for a click on an embedded URL. Sometimes they flag names that look particularly funky, such as typos of famous names, but usually they don't.

So the only assurance a signed cert provides is that the person who got the cert has some authority over a name that points to the mail client, which need have no connection to any email address used in mail sent from that server. That doesn't sound like "authentication of server identity" to me.

R's,
John
On 12/30/12, John Levine <johnl@iecc.com> wrote:
Do you ever buy SSL certificates? For cheap certificates ($9 Geotrust, $8 Comodo, free Startcom, all accepted by Gmail), the entirety of the identity validation is to send an email message to an address associated with the domain, typically one of the WHOIS addresses, or hostmaster@domain, and look for a click on an embedded
These CAs will normally require interactions to be done through a web site; there will often be captchas or other methods involved in applying for a certificate that are difficult to automate. They require payment, which requires a credit card, and obtaining a massive number of certificates is not a practical thing for malware to perform unless it also possesses a mass of stolen credit cards and stolen WHOIS e-mail address contacts. On the other hand, self-signed certificates can be generated on the fly by malware, using a simple command or a series of CryptoAPI calls.

I am aware of the procedure the CAs follow, and I am well aware that there are significant theoretical weaknesses inherent in the procedures used to authenticate such "Turbo", "Domain auth" based SSL certificates. (They use an unencrypted e-mail message to send the equivalent of a PIN for getting a certificate signed, in reliance on WHOIS information downloaded over an unencrypted connection: WHOIS data may be tampered with, a MITM may be used to alter the WHOIS response in transit to the CA, the PIN in the confirmation e-mail can be sniffed in transit, or the contact e-mail address may be hosted by an insecure third-party service provider and/or no longer belong to the authorized contact.)

All of these practices have considerable risks, and the risk that _some_ fraudulent requests are approved is significant. The very e-mail server the certificate is to be issued to might be the one that receives the e-mail, and a passive sniffer there may capture the PIN required to authorize the certificate. However, the procedures required to exploit these weaknesses are slightly more complicated than simply producing a self-signed certificate on the fly for man in the middle use -- they require planning, a waiting period, because CAs do not typically issue immediately.
And the use of credit card numbers: either legitimate ones, which provide a trail to trace the attacker, or stolen ones, a requirement that reduces the possible size of an attack (since a worm, or other malware infection, won't have an infinite supply of those to apply for certificates).

But "Does the CA's signature actually represent a guaranteed authentication?" wasn't the question. The only question is... Does it provide an assurance that is at all stronger than a self-signed certificate that can be made on the fly? And it does... not a strong one, but a slightly stronger one.
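For contrast with the CA process Jimmy describes, the on-the-fly self-signed certificate really is a single command. The CN below is a made-up example name; an attacker needs no authority over it:

```shell
# One command, no CA, no payment trail, no waiting period: a key plus a
# self-signed cert for an arbitrary name the generator doesn't control.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout mitm.key -out mitm.crt -days 365 \
  -subj "/CN=pop.victim.example"

# Inspect the result: the subject is whatever was asked for.
openssl x509 -in mitm.crt -noout -subject
```

That asymmetry (seconds versus a validation round-trip, however weak) is the whole substance of the "slightly stronger" claim.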
-- -JH
On Sun, Dec 30, 2012 at 10:26:36PM -0600, Jimmy Hess wrote:
These CAs will normally require interactions to be done through a web site; there will often be captchas or other methods involved in applying for a certificate that are difficult to automate.
You're kidding, right? Captchas have been quite, quite thoroughly beaten for some time now. See, among others:

http://www.physorg.com/news/2011-11-stanford-outsmart-captcha-codes.html
http://cintruder.sourceforge.net/
http://arstechnica.com/security/2012/05/google-recaptcha-brought-to-its-knee...
http://arstechnica.com/news.ars/post/20080415-gone-in-60-seconds-spambot-cra...
http://www.troyhunt.com/2012/01/breaking-captcha-with-automated-humans.html
http://it.slashdot.org/article.pl?sid=08/10/14/1442213

---rsk
However, the procedures required to exploit these weaknesses are slightly more complicated than simply producing a self-signed certificate on the fly for man in the middle use -- they require planning, a waiting period, because CAs do not typically issue immediately.
Hmmn, I guess I was right, you haven't bought any certs lately. Startcom typically issues on the spot, Comodo and Geotrust mail them to you within 15 minutes. I agree that 15 minutes is not exactly the same as immediately, but so what?
And the use of credit card numbers; either legitimate ones, which provide a trail to trace the attacker, or stolen ones, ...
or a prepaid card bought for cash at a convenience or grocery store. Really, this isn't hard to understand. Current SSL signers do no more than tie the identity of the cert to the identity of a domain name. Anyone who's been following the endless crisis at ICANN about bogus WHOIS knows that domain names do not reliably identify anyone.
The only question is... Does it provide an assurance that is at all stronger than a self-signed certificate that can be made on the fly?
And it does... not a strong one, but a slightly stronger one.
I suppose to the extent that 0.2% is greater than 0.1%, perhaps. But not enough for any sensible person to care.

Also keep in mind that this particular argument is about the certs used to submit mail to Gmail, which requires a separate SMTP AUTH within the SSL session before you can send any mail. This isn't belt and suspenders; this is belt and a 1/16-inch piece of duct tape.

R's,
John
On Mon, Dec 31, 2012 at 9:07 AM, John R. Levine <johnl@iecc.com> wrote:
Also keep in mind that this particular argument is about the certs used to submit mail to Gmail, which requires a separate SMTP AUTH within the SSL session before you can send any mail. This isn't belt and suspenders; this is belt and a 1/16-inch piece of duct tape.
wait, no... this was gmail's pop crawlers gathering mail from remote pop services, wasn't it? (or that was my impression at least). so this is, I think, an attempt by gmail/google to protect their users from intermediaries presenting a self-signed certificate for 'floof' instead of iecc.com ... -chris
On Mon, Dec 31, 2012 at 6:07 AM, John R. Levine <johnl@iecc.com> wrote:
Really, this isn't hard to understand. Current SSL signers do no more than tie the identity of the cert to the identity of a domain name. Anyone who's been following the endless crisis at ICANN about bogus WHOIS knows that domain names do not reliably identify anyone.
So you're saying that you'd have no problems getting a well-known-CA signed certificate for, say, pop.mail.yahoo.com? If you can't, then it would seem that the current process provides (at least) a better mechanism than just blindly accepting self-signed certificates, no? Also keep in mind that this particular argument is about the certs used to
submit mail to Gmail, which requires a separate SMTP AUTH within the SSL session before you can send any mail. This isn't belt and suspenders; this is belt and a 1/16-inch piece of duct tape.
Err.. no it's not. It's about the certs used when Gmail connects to a 3rd-party host to collect mail. ie, Google is the client, not the server. Scott
On Sun, Dec 30, 2012 at 10:46 PM, John Levine <johnl@iecc.com> wrote:
So the only assurance a signed cert provides is that the person who got the cert has some authority over a name that points to the mail client
What other assurance are you looking for? The only point of a signed server certificate, the ONLY point, is to prevent a man-in-the-middle attack where someone who doesn't control the name decrypts the traffic from the server, reads it, and then re-encrypts it with his own self-signed key before sending it to you. If the signature accomplishes that goal, it has done 100% of what it's designed to do.

In theory a signature can mean anything the signing authority defines it to mean. In practice, that also requires special handling from the users... behavior web browser users don't engage in.

As for Google (and anyone else) it escapes me why you would require a signed certificate for any connection that you're willing to also permit completely unencrypted. Encryption stops nearly every purely passive packet capture attack, with or without a signed certificate. Even without a signed cert an encrypted data flow is much more secure than an unencrypted one.

It's not an all-or-nothing deal. Encrypted with a signed or otherwise verified cert is more secure than merely encrypted which is more secure than unencrypted on a switched path which is more secure than unencrypted on a hub. None of these things is wholly insecure and none are 100% secure.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
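Bill's distinction between "encrypted and verified" and "merely encrypted" maps directly onto TLS client settings. A sketch using Python's standard `ssl` module (no particular endpoint assumed; these are just the two context configurations):

```python
import ssl

# Encrypted AND verified: the peer must present a cert chaining to a
# trusted root and matching the hostname. Defeats passive capture and,
# absent a mis-issued cert, active man-in-the-middle as well.
verified = ssl.create_default_context()

# Merely encrypted: the session still negotiates TLS, which stops
# passive sniffing, but ANY certificate is accepted -- including one a
# MITM generated seconds ago. check_hostname must be cleared before
# verify_mode can be set to CERT_NONE.
unverified = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
unverified.check_hostname = False
unverified.verify_mode = ssl.CERT_NONE
```

The second context is strictly better than plaintext against a passive observer, and strictly worse than the first against an active one, which is exactly the gradient described above.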
On Wed, Jan 2, 2013 at 1:08 PM, William Herrin <bill@herrin.us> wrote:
As for Google (and anyone else) it escapes me why you would require a signed certificate for any connection that you're willing to also permit completely unencrypted. Encryption stops nearly every purely
raising the bar for observers is potentially a goal, no? making it simple for people to get 'more secure' email isn't a bad thing. (admittedly, requiring a signed cert now is more painful, though startssl.com makes it less so).
passive packet capture attack, with or without a signed certificate. Even without a signed cert an encrypted data flow is much more secure than an unencrypted one. It's not an all-or-nothing deal. Encrypted with a signed or otherwise verified cert is more secure than merely encrypted which is more secure than unencrypted on a switched path which is more secure than unencrypted on a hub. None of these things is wholly insecure and none are 100% secure.
boiling down the above you mean:

  goodness-scale (goodness to the left)
  signed > self-signed > unsigned

I don't think there's much disagreement about that... the sticky wicket though is 'how much better is 'signed' vs 'self-signed'?' and I think the feeling is that: 'if we can verify that the cert is proper/signed, we have more assurance that the end user meant for this cert to be presented. A self-signed cert could be any intermediary between me/you... we have no way to verify who is presenting the cert.'

-chris

(note the use of 'we' here is the 'royal we', I have no idea what the real reason is, but the above makes some sense to me, at least.)
On Wed, Jan 2, 2013 at 1:39 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
goodness-scale (goodness to the left) signed > self-signed > unsigned
Hi Chris,

Self-signed and unsigned are identical. The "goodness" scale is:

  Encrypted & Verified (signed)
  > Encrypted Unsigned (or self-signed, same difference)
  > Unencrypted but physically protected
  > Unprotected
I don't think there's much disagreement about that... the sticky wicket though is 'how much better is 'signed' vs 'self-signed' ? and I think the feeling is that:
I don't see how "feeling" plays into it. Communications using an unverified public key are trivially vulnerable to a man-in-the-middle attack where the connection is decrypted, captured in its unencrypted form and then undetectably re-encrypted with a different key. Communications using a key signed by a trusted third party suffer such attacks only with extraordinary difficulty on the part of the attacker. It's purely a technical matter.

The information you're trying to protect is either sensitive enough that this risk is unacceptable or it isn't. That's purely a question for the information owner. No one else's opinion matters for squat.

Regards,
Bill Herrin
On Wed, Jan 2, 2013 at 11:36 AM, William Herrin <bill@herrin.us> wrote:
Communications using a key signed by a trusted third party suffer such attacks only with extraordinary difficulty on the part of the attacker. It's purely a technical matter.
While I agree with your general characterization of MITM, the "extraordinary difficulty" here is not supported. As has been demonstrated, the bar for getting certs from some trusted CAs is in some cases low enough that it's not even difficult, much less extraordinarily difficult.

Getting certs for a well-known domain may be somewhat harder; it might be useful to see how far someone could get trying to obtain a "mail.google.com" cert from all the commonly trusted vendors without resorting to illegal penetrations or layer 8+ hacking / social engineering / threats / intimidation / politics. But even if we exclude those threats, the general envelope for not-well-known domains seems risky.

Google is setting a higher bar here, which may be sufficient to deter a lot of bots and script kiddies for the next few years, but it's not enough against nation-state or serious professional level attacks. The advantage of the deterrence it can give may well be worth it anyway, for the near future. Every measure in security that does not involve the off switch is a half-measure, at least in the long term, even very large key crypto, but enough incremental steps form a useful cushion.

--
-george william herbert
george.herbert@gmail.com
On Wed, Jan 2, 2013 at 3:10 PM, George Herbert <george.herbert@gmail.com> wrote:
On Wed, Jan 2, 2013 at 11:36 AM, William Herrin <bill@herrin.us> wrote:
Communications using a key signed by a trusted third party suffer such attacks only with extraordinary difficulty on the part of the attacker. It's purely a technical matter.
While I agree with your general characterization of MITM, the "extraordinary difficulty" here is not supported.
AFAICT someone finds a way to get themselves a certificate for a domain they don't control every couple years or so. The hole is promptly plugged (and the certs revoked) before much actually happens as a result. Has your experience been different?

Are you, at this moment, able to acquire a falsely signed certificate for www.herrin.us that my web browser will accept?

You're right that false certificates have been issued in the past. You're right that false certificates will be issued again in the future. No security apparatus is 100% effective. But if despite your resources you in particular can't make it happen in a timely manner, that's a meaningful barrier to mounting a man-in-the-middle attack against someone using properly signed certificates.

Regards,
Bill Herrin
Are you, at this moment, able to acquire a falsely signed certificate for www.herrin.us that my web browser will accept?
Me, no, although I have read credible reports that otherwise reputable SSL signers have issued MITM certs to governments for their filtering firewalls. Regards, John Levine, johnl@iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. http://jl.ly
On Wed, Jan 2, 2013 at 5:38 PM, John R. Levine <johnl@iecc.com> wrote:
Are you, at this moment, able to acquire a falsely signed certificate for www.herrin.us that my web browser will accept?
Me, no, although I have read credible reports that otherwise reputable SSL signers have issued MITM certs to governments for their filtering firewalls.
The governments in question are watching for exfiltration and they largely use a less risky approach: they issue their own root key and, in most cases, install it in the government employees' browser before handing them the machine.

A "reputable" SSL signer would have to get outed just once issuing a government a resigning cert and they'd be kicked out of all the browsers. They'd be awfully easy to catch.

Regards,
Bill Herrin
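The "issue their own root key" mechanics are easy to reproduce end to end with stock openssl. Every name below is a throwaway example; the point is only that a chain-valid cert for any hostname follows from controlling a root the client trusts:

```shell
# Mint a private root, then use it to sign a cert for a name the
# signer doesn't own. Any client with root.crt in its trust store
# accepts site.crt silently; everyone else rejects it.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout root.key -out root.crt -subj "/CN=Example Filtering Root"

openssl req -newkey rsa:2048 -nodes \
  -keyout site.key -out site.csr -subj "/CN=www.example.com"

openssl x509 -req -in site.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out site.crt -days 30

# Succeeds against root.crt; would fail against the normal store.
openssl verify -CAfile root.crt site.crt
```

Which is why pre-installing the root on the managed machine is the low-risk path: no public CA ever has to be involved.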
On Jan 2, 2013 7:36 PM, "William Herrin" <bill@herrin.us> wrote:
Me, no, although I have read credible reports that otherwise reputable
SSL
signers have issued MITM certs to governments for their filtering firewalls.
That's not the case John is referring to.
The governments in question are watching for exfiltration and they
No, not really. Some are busy tracking "dissidents" among their populations.
largely use a less risky approach: they issue their own root key and, in most cases, install it in the government employees' browser before handing them the machine.
Not just for employees.
A "reputable" SSL signer would have to get outed just once issuing a government a resigning cert and they'd be kicked out of all the browsers. They'd be awfully easy to catch.
Oh! You mean like CyberTrust and Etisalat? Right... That's working just perfectly...
On Wed, Jan 2, 2013 at 8:03 PM, Christopher Morrow <christopher.morrow@gmail.com> wrote:
On Jan 2, 2013 7:36 PM, "William Herrin" <bill@herrin.us> wrote:
Me, no, although I have read credible reports that otherwise reputable SSL signers have issued MITM certs to governments for their filtering firewalls.
That's not the case join is referring to.
The governments in question are watching for exfiltration and they
No, not really. Some are busy tracking "dissidents" among their populations.
largely use a less risky approach: they issue their own root key and, in most cases, install it in the government employees' browser before handing them the machine.
Not just for employees.
A "reputable" SSL signer would have to get outed just once issuing a government a resigning cert and they'd be kicked out of all the browsers. They'd be awfully easy to catch.
Oh! You mean like CyberTrust and Etisalat? Right... That's working just perfectly...
should have included this reference link: <https://www.eff.org/deeplinks/2010/08/open-letter-verizon>
On Wed, Jan 2, 2013 at 8:39 PM, Christopher Morrow <christopher.morrow@gmail.com> wrote:
On Wed, Jan 2, 2013 at 8:03 PM, Christopher Morrow <christopher.morrow@gmail.com> wrote:
On Jan 2, 2013 7:36 PM, "William Herrin" <bill@herrin.us> wrote:
A "reputable" SSL signer would have to get outed just once issuing a government a resigning cert and they'd be kicked out of all the browsers. They'd be awfully easy to catch.
Oh! You mean like CyberTrust and Etisalat? Right... That's working just perfectly...
should have included this reference link: <https://www.eff.org/deeplinks/2010/08/open-letter-verizon>
Hi Christopher,

That was nearly 30 months ago. At the time there were no reports of fake Etisalat certs, merely concern that the UAE's regulatory environment was "institutionally hostile to the existence and use of secure cryptosystems." Has the EFF's SSL Observatory project detected even one case of a fake certificate under Etisalat's trust chain since then?

There's a reason Etisalat's cert is still valid, and it isn't Honest Achmed's.
https://bugzilla.mozilla.org/show_bug.cgi?id=647959

Regards,
Bill Herrin
On Wed, Jan 2, 2013 at 8:51 PM, William Herrin <bill@herrin.us> wrote:
secure cryptosystems." Has the EFF's SSL Observatory project detected even one case of a fake certificate under Etisalat's trust chain since then?
it's possible that the observatory won't see these in the wild, if the observatory is on the wrong side of the connection. According to the code EFF uses:
<https://git.eff.org/?p=observatory.git;a=blob;f=README;h=235117a992ff83b7c04c66ba928bc1907cf76944;hb=HEAD>
it looks like they simply portscanned 0/0 for tcp/443 listeners, then grabbed certs from the respondents.

In the cases we're talking about in this thread, EFF's observatory may never be in the middle of the conversation. In the case of Etisalat (or one use they may have), the scanners may not be behind the piece of Etisalat gear which uses the CA cert in question. "not observed in the wild" isn't really a good test for this particular problem, I think :(

As to why the Etisalat cert isn't yet removed, I wouldn't know... it seems a bit fishy though.

-chris
On Wed, Jan 02, 2013 at 07:35:49PM -0500, William Herrin wrote:
A "reputable" SSL signer would have to get outed just once issuing a government a resigning cert and they'd be kicked out of all the browsers. They'd be awfully easy to catch.
I believe Honest Achmed said it best:

"In any case by the time he's issued enough certificates he'll be regarded as too big to fail by the browser vendors, so an expensive audit doesn't really matter."

as well as

"Achmed's business plan is to sell a sufficiently large number of certificates as quickly as possible in order to become too big to fail"

and

"Achmed guarantees that no certificate will be issued without payment having been received, as per the old latin proverb 'nil certificati sine lucre'."

- Matt
On Wed, Jan 2, 2013 at 2:27 PM, William Herrin <bill@herrin.us> wrote:
On Wed, Jan 2, 2013 at 3:10 PM, George Herbert <george.herbert@gmail.com> wrote:
On Wed, Jan 2, 2013 at 11:36 AM, William Herrin <bill@herrin.us> wrote:
Communications using a key signed by a trusted third party suffer such attacks only with extraordinary difficulty on the part of the attacker. It's purely a technical matter.
While I agree with your general characterization of MITM, the "extraordinary difficulty" here is not supported.
AFAICT someone finds a way to get themselves a certificate for a domain they don't control every couple years or so. The hole is promptly plugged (and the certs revoked) before much actually happens as a result. Has your experience been different?
Are you, at this moment, able to acquire a falsely signed certificate for www.herrin.us that my web browser will accept?
You're right that false certificates have been issued in the past. You're right that false certificates will be issued again in the future. No security apparatus is 100% effective. But if despite your resources you in particular can't make it happen in a timely manner, that's a meaningful barrier to mounting a man-in-the-middle attack against someone using properly signed certificates.
Regards, Bill Herrin
There are three vectors of attack:

One, asking a CA for a cert in someone else's name and having it issued. As you noted, generally discovered pretty quickly and shut down, but there's no robust external verification for the discovery process. Also, the verifications the CAs perform to validate the user could be subverted, as noted earlier in the conversation, so they could receive false assurances that it was the right entity asking for the keys. That subversion could happen via registrar account hacking (a known problem) among other places, along with technical measures to monitor unencrypted validation emails sent to the proper authoritative domain contact addresses.

Two, a CA's keys can go walking (either due to technical penetration or human corruption), and then external parties can issue their own certs as if they were the CA. If identified, the CA can revoke its own key and re-issue all the client certs from a new one, but someone needs to identify that it happened. This is alleged to have happened at least twice, one of which the CA was shut down over; the other became opaque and ambiguous, and therefore untrustworthy.

Three, there may be crypto flaws we don't know about still lingering, or a CA could choose easily factored numbers by bad luck and someone could luck out grinding them. Not a high risk (anyone SHOULD grind their own keys some to check them for that), but nonzero.

Can I go get a key for your site right now? I'm not going to spend the afternoon trying (I'm working for a living) but I am reasonably sure I could do so. Lax checks by CAs are well described elsewhere. If push came to shove and minor legalities were not restraining me, I recall (without checking) your domain's emails come to your home, and your DSL or cable line is sniffable, so any of the CAs who email URL validators out could be trivially temporarily spoofed (until you read your email and responded) by tapping your data lines.
BGP games to snarf your traffic are another avenue, possibly not yet even covered by wiretap laws that I know of, though I'm not currently an ISP in a position to personally do that to you. The same is possible but slightly harder for midsized corporate entities. Still possible but much harder for large ones.

If you're going to argue that that's cheating, that IS the threat envelope...

--
-george william herbert
george.herbert@gmail.com
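George's third vector, bad randomness at key-generation time, is the one anyone can check for themselves: if two RSA moduli share a prime factor, a single gcd recovers it instantly and both private keys fall. A toy illustration with small primes (real moduli are thousands of bits, but the arithmetic is identical):

```python
import math

def shared_factor(n1, n2):
    """Return a common factor of two RSA moduli, or None.

    If two certificates' moduli were generated with poor entropy and
    happen to share a prime, gcd exposes it in microseconds; this is
    the cheap version of the 'grind your own keys' check."""
    g = math.gcd(n1, n2)
    return g if g > 1 else None

# Toy moduli built from small primes; 101 is the reused factor.
assert shared_factor(101 * 103, 101 * 107) == 101
assert shared_factor(101 * 103, 107 * 109) is None
```

Batch-gcd across a large corpus of collected certificates is how researchers have found such shared-factor keys in the wild; the per-pair check above is the building block.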
On Wed, Jan 2, 2013 at 5:43 PM, George Herbert <george.herbert@gmail.com> wrote:
If push came to shove and minor legalities were not restraining me, I recall (without checking) your domain's emails come to your home, and your DSL or cable line is sniffable, so any of the CAs who email URL validators out could be trivially temporarily spoofed (until you read your email and responded) by tapping your data lines. BGP games to snarf your traffic are another avenue, possibly not yet even covered by wiretap laws that I know of, though I'm not currently an ISP in a position to personally do that to you.
And none of this describes an extraordinary effort? The quote you're trying to refute was, "suffer such attacks only with extraordinary difficulty on the part of the attacker."
If you're going to argue that that's cheating, that IS the threat envelope...
You're quite right about the scope of the threat envelope. And it's one to two orders of magnitude more difficult to penetrate than man-in-the-middle with an unverified key.

Regards,
Bill Herrin
Yo William! On Wed, 2 Jan 2013 19:42:16 -0500 William Herrin <bill@herrin.us> wrote:
On Wed, Jan 2, 2013 at 5:43 PM, George Herbert <george.herbert@gmail.com> wrote:
If push came to shove and minor legalities were not restraining me, I recall (without checking) your domain's emails come to your home, and your DSL or cable line is sniffable, so any of the CAs who email URL validators out could be trivially temporarily spoofed (until you read your email and responded) by tapping your data lines. BGP games to snarf your traffic are another avenue, possibly not yet even covered by wiretap laws that I know of, though I'm not currently an ISP in a position to personally do that to you.
And none of this describes an extraordinary effort? The quote you're trying to refute was, "suffer such attacks only with extraordinary difficulty on the part of the attacker."
I would say it is pretty easy, and I have caught people doing it many times. All a hacker needs to do is get a sniffer near your email traffic. Then they can grab any challenge emails sent to any of your domain contacts. Pretty trivial to do in a coffee-shop environment.

RGDS
GARY
---------------------------------------------------------------------------
Gary E. Miller Rellim 109 NW Wilmington Ave., Suite E, Bend, OR 97701
gem@rellim.com Tel:+1(541)382-8588
On Wed, 02 Jan 2013 12:10:55 -0800, George Herbert said:
Google is setting a higher bar here, which may be sufficient to deter a lot of bots and script kiddies for the next few years, but it's not enough against nation-state or serious professional level attacks.
To be fair though - if I was sitting on information of sufficient value that I was a legitimate target for nation-state TLAs and similarly well funded criminal organizations, I'd have to think long and hard whether I wanted to vector my e-mails through Google. It isn't even the certificate management issue - it's because if I was in fact the target of such attention, my threat model had better well include "adversary attempts to use legal and extralegal means to get at my data from within Google's infrastructure". "Operation Aurora".
On Wed, Jan 2, 2013 at 7:31 PM, <Valdis.Kletnieks@vt.edu> wrote:
On Wed, 02 Jan 2013 12:10:55 -0800, George Herbert said:
Google is setting a higher bar here, which may be sufficient to deter a lot of bots and script kiddies for the next few years, but it's not enough against nation-state or serious professional level attacks.
To be fair though - if I was sitting on information of sufficient value that I was a legitimate target for nation-state TLAs and similarly well funded criminal organizations, I'd have to think long and hard whether I wanted to vector my e-mails through Google. It isn't even the certificate management issue - it's because if I was in fact the target of such attention, my threat model had better well include "adversary attempts to use legal and extralegal means to get at my data from within Google's infrastructure".
"Operation Aurora".
I probably fit into that description; while I vector my personal email through Google, the actual sensitive stuff does not touch any wired or wireless network. Because I know. -- -george william herbert george.herbert@gmail.com
On 1/2/2013 10:31 PM, Valdis.Kletnieks@vt.edu wrote:
On Wed, 02 Jan 2013 12:10:55 -0800, George Herbert said:
Google is setting a higher bar here, which may be sufficient to deter a lot of bots and script kiddies for the next few years, but it's not enough against nation-state or serious professional level attacks.
To be fair though - if I was sitting on information of sufficient value that I was a legitimate target for nation-state TLAs and similarly well funded criminal organizations, I'd have to think long and hard whether I wanted to vector my e-mails through Google. It isn't even the certificate management issue - it's because if I was in fact the target of such attention, my threat model had better well include "adversary attempts to use legal and extralegal means to get at my data from within Google's infrastructure". "Operation Aurora".
Well, the "bar" started at something as trivial as FireSheep. And I'm sure many more silly (in retrospect) exploits remain to be discovered in any cloud-based infrastructure (the bigger the cloud, the bigger the target, the greater the potential damages/losses). And a lot of infrastructure remains vulnerable to something as trivial as FireSheep. Jeff
On Wed, Jan 2, 2013 at 7:31 PM, <Valdis.Kletnieks@vt.edu> wrote:
On Wed, 02 Jan 2013 12:10:55 -0800, George Herbert said:
Google is setting a higher bar here, which may be sufficient to deter a lot of bots and script kiddies for the next few years, but it's not enough against nation-state or serious professional level attacks.
To be fair though - if I was sitting on information of sufficient value that I was a legitimate target for nation-state TLAs and similarly well funded criminal organizations, I'd have to think long and hard whether I wanted to vector my e-mails through Google. It isn't even the certificate management issue - it's because if I was in fact the target of such attention, my threat model had better well include "adversary attempts to use legal and extralegal means to get at my data from within Google's infrastructure".
"Operation Aurora".
[Full disclosure: I work at Google, though the opinions stated below are mine alone.] Aurora compromised at least 20 other companies, failed at its assumed objective of seeing user data, and Google was the only organization to notice, let alone have the guts to expose the attack [0]. And you're going to hold that against them? If you're the target of a state-sponsored attacker, Google is by far the best place to host your mail. Good luck finding another provider that enables SSL by default [1], offers 2-factor authentication [2], warns you when you're being targeted by state-sponsored attackers [3], and actually fights overly-broad subpoenas from governments [4]. While I'm writing, I'll also point out that the Diginotar hack which came up in this discussion as an example of why CAs can't be trusted was discovered due to a feature of Google's Chrome browser when a cert was being used to spy on users in Iran [5]. Note that it also provides a good example of the difficulty of getting away with such attacks. [0] http://googleblog.blogspot.com/2010/01/new-approach-to-china.html [1] http://gmailblog.blogspot.com/2010/01/default-https-access-for-gmail.html [2] http://support.google.com/accounts/bin/answer.py?hl=en&answer=180744 [3] http://googleonlinesecurity.blogspot.com/2012/06/security-warnings-for-suspe... [4] http://www.google.com/transparencyreport/userdatarequests/ [5] http://googleonlinesecurity.blogspot.com/2011/08/update-on-attempted-man-in-... Damian
On Wed, 02 Jan 2013 19:59:35 -0800, Damian Menscher said:
Aurora compromised at least 20 other companies, failed at its assumed objective of seeing user data, and Google was the only organization to notice, let alone have the guts to expose the attack [0]. And you're going to hold that against them?
I didn't say that. What I *said* was "one should *expect* a nation-state adversary to go after your mail hosting company via multiple avenues of attack, because it's already been tried before". Google is indeed one of the better actors. But if you're a target, maybe it's time to reconsider whether the phrase "hosting company" should be included in your environment *at all*.
On Wed, Jan 2, 2013 at 8:52 PM, <Valdis.Kletnieks@vt.edu> wrote:
On Wed, 02 Jan 2013 19:59:35 -0800, Damian Menscher said:
Aurora compromised at least 20 other companies, failed at its assumed objective of seeing user data, and Google was the only organization to notice, let alone have the guts to expose the attack [0]. And you're going to hold that against them?
I didn't say that. What I *said* was "one should *expect* a nation-state adversary to go after your mail hosting company via multiple avenues of attack, because it's already been tried before". Google is indeed one of the better actors. But if you're a target, maybe it's time to reconsider whether the phrase "hosting company" should be included in your environment *at all*.
Thanks for clarifying. We're off-topic, but that decision needs to be weighed against the alternatives. If your alternative is running your own mailserver at home, then your risks are:
- They can come into your home and walk off with your machines. Even if your hard drives are encrypted, your backups might not be... or maybe you don't have backups?
- If you browse from the server they can get you with a trojan impacting Flash or Java.
- Even if you don't browse from your mailserver, they can try to compromise it remotely if it's not fully patched. How good are you at keeping your system patched? Does it fall a day or two behind when you're on vacation?
- Speaking of vacation, how do you authenticate to your system? Does it support 2-factor? Or maybe you don't think you need 2-factor because you have an SSL cert. Did you self-sign it and tell your browser to ignore all other CAs (to approximate Chrome's certificate pinning)?
- How does your email arrive/leave? They could be tapping your line... or they could just DoS you off the net.
If you really think you can get all of that right, all the time, then I wish you the best of luck. But remembering that most targets are not cypherpunks, telling them to do it themselves is incredibly bad advice.
Back on topic: encryption without knowing who you're talking to is worse than useless (hence no self-signed certs, which provide a false sense of security), and there are usability difficulties with exposing strong security to the average user (asking users to generate and upload a self-signed cert would be a customer-support disaster, not to mention all the outages that would occur when those certs expired). Real-world security is all about finding a reasonable balance and adapting to the current threats. Damian
On Wed, 02 Jan 2013 21:14:31 -0800, Damian Menscher said:
We're off-topic, but that decision needs to be weighed against the alternatives. If your alternative is running your own mailserver at home, then your risks are:
Let's face it - if a nation-state has you in the crosshairs, digital or real, your life is going to suck. All the rest is implementation details....
On 01/02/2013 09:14 PM, Damian Menscher wrote:
Back on topic: encryption without knowing who you're talking to is worse than useless (hence no self-signed certs which provide a false sense of security),
In fact, it's very useful -- what do you think the initial Diffie-Hellman exchanges are doing with PFS? Encryption without (strong) authentication is still useful for dealing with passive listening. It's a shame, for example, that wifi security doesn't encrypt everything on an open AP to require that attacks be active rather than passive. It's really easy to just scan the airwaves, but I probably don't need to remind you of that. Mike
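Mike's point -- that unauthenticated key agreement still defeats purely passive capture -- is easy to see in a toy Diffie-Hellman exchange. A minimal sketch with demonstration-sized parameters, emphatically not safe for real use:

```python
import secrets

# Toy unauthenticated finite-field Diffie-Hellman. A passive eavesdropper
# who records the exchange sees only g^a and g^b, not the shared secret.
# An ACTIVE man-in-the-middle can still run two exchanges and sit between
# the parties -- which is exactly the gap authentication closes.
p = 2**61 - 1          # a Mersenne prime; far too small for real security
g = 3

def dh_keypair():
    """Pick a private exponent and compute the public value g^x mod p."""
    priv = secrets.randbelow(p - 3) + 2
    return priv, pow(g, priv, p)

a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob

# Each side combines its own private key with the other's public value...
alice_secret = pow(b_pub, a_priv, p)
bob_secret = pow(a_pub, b_priv, p)
assert alice_secret == bob_secret  # ...and both derive the same key
```

In a real TLS handshake the same exchange runs over an elliptic-curve group with ephemeral keys, which is where the forward secrecy comes from.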
On Thu, Jan 3, 2013 at 12:14 AM, Damian Menscher <damian@google.com> wrote:
Back on topic: encryption without knowing who you're talking to is worse than useless (hence no self-signed certs which provide a false sense of security), and there are usability difficulties with exposing strong security to the average user (asking users to generate and upload a self-signed cert would be a customer-support disaster, not to mention all the outages that would occur when those certs expired). Real-world security is all about finding a reasonable balance and adapting to the current threats.
The most recent change to POP3 mail retrieval over SSL is not a reasonable balance. My organization uses Google Apps for mail hosting, but a number of users also have us.army.mil accounts. They used to pull mail from their .mil accounts into Google Apps via POP3. Army servers do not allow unencrypted connections, and their root certificates are not part of the Mozilla Root CA list (and, as you can guess, I have no control over their servers).
Google didn't just block the use of self-signed certs; you broke communication with all servers using perfectly legitimate PKIs that are not part of the Mozilla Root CA list. Thus, instead of "self-signed certs = false sense of security," your argument is really "not on some arbitrary root CA list = false sense of security," which is absolute nonsense.
I talked to Google Apps support a few weeks ago and sent them a link to this discussion, but all they could do is file a feature request. IMHO, this change should never have been allowed to go into production until there was an interface for uploading our own root certificates. Of course, any root (i.e. self-signed) certificate can be used by the POP3 server directly, so this would also solve the problem for people trying to use self-signed certs that are not part of any PKI.
Finally, "asking users to generate and upload a self-signed cert would be a customer-support disaster," so you just block their access completely? Anyone who doesn't know how to generate and upload a certificate would probably avoid encryption altogether, don't you think? And as for "outages that would occur when those certs expired," what do you think people in my organization are dealing with right now? An expired cert can at least be renewed or replaced, whereas our access has been blocked and there is nothing we can do about it.
- Max
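What Maxim is asking for is small at the client level. A sketch using Python's standard ssl module of anchoring POP3S verification to a private root instead of the default Mozilla-derived bundle (the filenames and hostname below are hypothetical placeholders):

```python
import ssl

def pop3s_context(private_root_pem=None):
    """TLS context that still REQUIRES certificate verification, but can
    anchor trust to a private PKI root instead of the system CA bundle."""
    # When cafile is given, create_default_context() loads only that root
    # and skips the default bundle; when None, it uses the system defaults.
    return ssl.create_default_context(cafile=private_root_pem)

# Hypothetical usage -- "dod-root-ca.pem" is a placeholder filename:
# import poplib
# ctx = pop3s_context("dod-root-ca.pem")
# mbox = poplib.POP3_SSL("mail.example.mil", context=ctx)
```

The point of the sketch is that trusting a non-Mozilla root does not mean abandoning verification; the connection is still fully authenticated, just against a different anchor.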
On 1/3/13, Maxim Khitrov <max@mxcrypt.com> wrote:
On Thu, Jan 3, 2013 at 12:14 AM, Damian Menscher <damian@google.com> wrote:
I talked to Google Apps support a few weeks ago, sent them a link to this discussion, but all they could do is file a feature request.
I am not sure why this would be classified as a feature request. If it is impacting you, and you had service before, then it is an Outage/Defect/Bug, full stop. Describing working service for a previously supported scenario as a "feature request" would be beyond ridiculous :)
- Max -- -JH
On 1/3/2013 9:08 PM, Jimmy Hess wrote:
I am not sure why this would be classified as a feature request. If it is impacting you, and you had service before, then it is an Outage/Defect/Bug, full stop. Describing working service for a previously supported scenario as a "feature request" would be beyond ridiculous :)
Clouds in the sky tend to look pretty until the day they dump rain on you and then disappear. "Cloud apps" are kind of like that. ;) Not to say that SaaS doesn't have its place in enterprise architecture, but one of the things that should have a huge, gigantic neon sign on it when you're doing your cost-risk-benefit analysis is that you're being put at the whim of your SaaS provider. If they make a change that breaks functionality that only a subset of their clients use, you'd better hope that one of those clients has enough financial clout with the provider to make that functionality come back, otherwise you've just painted yourself into a corner. - Pete
This email, right here? This is Exhibit 1 in my "not all the tradeoffs of outsourcing your $SERVICE are visible or trivial" list. Thanks. Cheers, -- jra
-- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII St Petersburg FL USA #natog +1 727 647 1274
On Thu, Jan 3, 2013 at 4:59 AM, Damian Menscher <damian@google.com> wrote:
While I'm writing, I'll also point out that the Diginotar hack which came up in this discussion as an example of why CAs can't be trusted was discovered due to a feature of Google's Chrome browser when a cert was
Similar to http://googleonlinesecurity.blogspot.ch/2013/01/enhancing-digital-certificat... -- Matthias
On Jan 3, 2013, at 3:52 PM, Matthias Leisi <matthias@leisi.net> wrote:
On Thu, Jan 3, 2013 at 4:59 AM, Damian Menscher <damian@google.com> wrote:
While I'm writing, I'll also point out that the Diginotar hack which came up in this discussion as an example of why CAs can't be trusted was discovered due to a feature of Google's Chrome browser when a cert was
Similar to http://googleonlinesecurity.blogspot.ch/2013/01/enhancing-digital-certificat...
Thanks; I was just about to post that link to this thread. Certificates don't spread virally, and random browsers don't go looking for whatever interesting certificates they find. They also don't like certs that say "*.google.com" when the user is trying to go somewhere else; that web site would be non-functional unless it was trying to impersonate a Google domain. Taken all together, this sounds to me like deliberate mischief by someone. In fact, were it not for the fact that the blog post says Google learned of this on December 24 and this thread started on December 14, I'd wonder if there was a connection -- was this the incident that made Google reassess its threat model? Of course, this attack was carried out within the official PKI framework... --Steve Bellovin, https://www.cs.columbia.edu/~smb
other relevant links for this: http://krebsonsecurity.com/2013/01/turkish-govt-enabled-phishers-to-spoof-go... http://technet.microsoft.com/en-us/security/advisory/2798897
-- Kyle Creyts Information Assurance Professional BSidesDetroit Organizer
On Wed, Jan 2, 2013 at 2:36 PM, William Herrin <bill@herrin.us> wrote:
On Wed, Jan 2, 2013 at 1:39 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
goodness-scale (goodness to the left) signed > self-signed > unsigned
Hi Chris,
Self-signed and unsigned are identical. The "goodness" scale is:
Encrypted & Verified (signed) > Encrypted Unsigned (or self-signed, same difference) > Unencrypted but physically protected > Unprotected
I don't think there's much disagreement about that... the sticky wicket though is 'how much better is 'signed' vs 'self-signed' ? and I think the feeling is that:
I don't see how "feeling" plays into it.
Communications using an unverified public key are trivially vulnerable to a man-in-the-middle attack where the connection is decrypted, captured in its unencrypted form and then undetectably re-encrypted with a different key. Communications using a key signed by a trusted third party suffer such attacks only with extraordinary difficulty on the part of the attacker. It's purely a technical matter.
The information you're trying to protect is either sensitive enough that this risk is unacceptable or it isn't. That's purely a question for the information owner. No one else's opinion matters for squat.
I think we're talking past eachother :( I also think we're mostly saying the same thing... I think though that the 'a question for the information owner' is great, except that I doubt most of them are equipped with enough information to make the judgement themselves. -chris
On Wed, Jan 2, 2013 at 3:24 PM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
I think though that the 'a question for the information owner' is great, except that I doubt most of them are equipped with enough information to make the judgement themselves.
Much of the evil in the world starts with the presumption that otherwise competent individuals can't make good decisions for themselves. Regards, Bill Herrin
On Sun, 30 Dec 2012 19:25:04 -0600, Jimmy Hess said:
I would say those claiming certificates from a public CA provide no assurance of authentication of server identity greater than that of a self-signed one would have the burden of proof to show that it is no less likely for an attempted forger to be able to obtain a false "bought" certificate from a public trusted CA that has audited certification practices statement, a certificate improperly issued contrary to their CPS, than to have created a self-issued false self-signed certificate.
There's a bit more trust (not much, but a bit) to be attached to a cert signed by a reputable CA over and above that you should attach to a self-signed cert you've never seen before. However, if you trust a CA-signed cert more than you trust a self-signed cert *that you yourself created*, there's probably a problem there someplace. (In other words, you should be able to tell Gmail "yes, you should expect to see a self-signed cert with fingerprint 'foo' - only complain if you see some *other* fingerprint". To the best of my knowledge, there's no currently known attack that allows the forging of a certificate with a pre-specified fingerprint. Though I'm sure Steve Bellovin will correct me if I'm wrong... :)
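The "expect a self-signed cert with fingerprint 'foo'" check Valdis describes is mechanically simple. A minimal sketch in Python, assuming the DER-encoded certificate bytes were already obtained from the session (e.g. via getpeercert(binary_form=True) on an ssl socket) and the pin was configured out of band:

```python
import hashlib
import hmac

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, lowercase hex."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_ok(der_cert: bytes, pinned_fp: str) -> bool:
    """Complain only if the presented cert differs from the pinned one.
    compare_digest gives a constant-time comparison."""
    return hmac.compare_digest(fingerprint(der_cert), pinned_fp.lower())
```

This matches Valdis's observation about forgery: producing a different certificate with the same pinned fingerprint would require a (second) preimage attack on the hash, and none is known for current hash functions.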
On Jan 2, 2013, at 7:53 AM, valdis.kletnieks@vt.edu wrote:
On Sun, 30 Dec 2012 19:25:04 -0600, Jimmy Hess said:
I would say those claiming certificates from a public CA provide no assurance of authentication of server identity greater than that of a self-signed one would have the burden of proof to show that it is no less likely for an attempted forger to be able to obtain a false "bought" certificate from a public trusted CA that has audited certification practices statement, a certificate improperly issued contrary to their CPS, than to have created a self-issued false self-signed certificate.
There's a bit more trust (not much, but a bit) to be attached to a cert signed by a reputable CA over and above that you should attach to a self-signed cert you've never seen before.
However, if you trust a CA-signed cert more than you trust a self-signed cert *that you yourself created*, there's probably a problem there someplace.
(In other words, you should be able to tell Gmail "yes, you should expect to see a self-signed cert with fingerprint 'foo' - only complain if you see some *other* fingerprint". To the best of my knowledge, there's no currently known attack that allows the forging of a certificate with a pre-specified fingerprint. Though I'm sure Steve Bellovin will correct me if I'm wrong... :)
No, you're quite correct. Depending on what you assume, that would take a preimage or second preimage attack. None are known for any current hash functions, even MD5. I think, though, that that isn't the real issue. We're talking about a feature that would be used by about .0001% of gmail users. Apart from code development and database maintenance by Google -- and even for Google, neither is free -- it requires a UI that is comprehensible, robust, and doesn't confuse the 99.9999% of people who think that a certificate is something you hang on the wall. (Aside: do you remember how Netscape displayed certs -- in a frame with a curlicue border? These are *certificates*; they should look the part, right? I'm just glad that the signature wasn't denoted by 3-D shadowing on a "raised" seal....) Furthermore, the UI has to have a gentle way of telling people that the cert has changed, which may be correct. (Recall that for some of these users, they didn't create the cert; it was done by the admin of a site they use.) Do you run Cert Patrol (a Firefox extension) in your browser? It's amazing how much churn there is among certificates used by big sites (including Google itself). Certificate pinning is a great idea for experts, but it requires expert maintenance. I haven't yet seen a scalable, comprehensible version. I wish Google did support this, but I don't think it's unreasonable of them not to. Recall that they've been targeted by governments around the world, precisely the sort of adversary who can launch active attacks. Now, if you want to say that these adversaries can also corrupt CAs, whether they do it technically, procedurally, financially, or by sending around several large visitors who know where the CEO's kids go to school -- well, I won't argue; I certainly remember the Diginotar case. 
There may even be a lesser threat from using self-signed certs, since these large individuals operate on a human time frame, so it's more scalable to hit a few large CAs than a few thousand dissidents or other targets of interest. I think, though, that there are arguments on both sides. (The issue of you yourself accepting your own certs is quite different, of course.) --Steve Bellovin, https://www.cs.columbia.edu/~smb
Do you run Cert Patrol (a Firefox extension) in your browser?
yes, but my main browser is chrome (ff does poorly with nine windows and 60+ tabs). there is some sort of pinning, or at least discussion of it. but it is not clear what is actually provided. and i don't see evidence of churn reporting. randy
On Jan 2, 2013, at 7:15 PM, Randy Bush <randy@psg.com> wrote:
Do you run Cert Patrol (a Firefox extension) in your browser?
yes, but my main browser is chrome (ff does poorly with nine windows and 60+ tabs). there is some sort of pinning, or at least discussion of it. but it is not clear what is actually provided. and i don't see evidence of churn reporting.
Google uses certificate pinning for a very, very few sites. From http://blog.chromium.org/2011/06/new-chromium-security-features-june.html : In addition in Chromium 13, only a very small subset of CAs have the authority to vouch for Gmail (and the Google Accounts login page). You can turn it on for other sites but: Advanced users can enable stronger security for some web sites by visiting the network internals page: chrome://net-internals/#hsts You can now force HTTPS for any domain you want, and even “pin” that domain so that only a more trusted subset of CAs are permitted to identify that domain. _It’s an exciting feature but we’d like to warn that it’s easy to break things! We recommend that only experts experiment with net internals settings._ Emphasis theirs. The only Chrome browser I have lying around right now is on a Nexus 7 tablet; I don't see any way to list the pinned certs from the browser. There is a list at http://www.chromium.org/administrators/policy-list-3, and while I don't know how current it is you'll notice a decided dearth of interesting sites with the exceptions of paypal.com and lastpass.com. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Steven Bellovin writes:
The only Chrome browser I have lying around right now is on a Nexus 7 tablet; I don't see any way to list the pinned certs from the browser. There is a list at http://www.chromium.org/administrators/policy-list-3, and while I don't know how current it is you'll notice a decided dearth of interesting sites with the exceptions of paypal.com and lastpass.com.
You can see the current list of cert pins and HSTS preloads in the Chromium source tree at https://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security... or https://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security... -- Seth David Schoen <schoen@loyalty.org> | No haiku patents http://www.loyalty.org/~schoen/ | means I've no incentive to FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150 | -- Don Marti
On Jan 2, 2013, at 8:25 PM, Seth David Schoen <schoen@loyalty.org> wrote:
Steven Bellovin writes:
The only Chrome browser I have lying around right now is on a Nexus 7 tablet; I don't see any way to list the pinned certs from the browser. There is a list at http://www.chromium.org/administrators/policy-list-3, and while I don't know how current it is you'll notice a decided dearth of interesting sites with the exceptions of paypal.com and lastpass.com.
You can see the current list of cert pins and HSTS preloads in the Chromium source tree at
https://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security...
or
https://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security...
Thanks. The list is longer, but with the exception of Twitter (and possibly intuit -- a subdomain is shown), not a lot more interesting. I don't see major banks, I don't see Facebook or Hotmail, I don't see the big CAs, etc. --Steve Bellovin, https://www.cs.columbia.edu/~smb
On 1/2/13, Steven Bellovin <smb@cs.columbia.edu> wrote: [snip]
It's a shame they've stuck with a hardcoded list of "Acceptable CAs" for certain certificates; that would be very difficult to update. The major banks, Facebook, Hotmail, etc., possibly have not made a promise to anyone that all their future renewal certificates will be from a specific CA. It would be more interesting if the Chrome devs provided a mechanism for making a remote query or receiving a digitally signed "PINned cert list" download that could be updated dynamically, /and/ provided policy and mechanisms to have sites included in the list.
One of the broken things about X.509 is that a certificate can carry only one signature. The trust could be strengthened if there were a mechanism allowing multiple 3rd-party attestations to be made (e.g. PGP-like multiple signatures), or a browser could be configured to only accept a certificate if BOTH
(i) it is signed by a CA, and
(ii) the certificate's information, or the CA information for the cert, is published in a 3rd-party corroborating database that also requires proof of ID/authorization to publish in that DB,
(iii) and the server does the work of querying the 3rd-party databases listed by the client, by sending the CA ID and Certificate ID through a query to some standardized URL format, and returns the timestamped, digitally signed result (query answer, or affirmative proof of no entry in the DB) during authentication, together with the certificate.
Depending on the authenticating browser's config, a domain not found in the 3rd-party corroborating datasources, or listed by the 3rd-party source with an attestation level of "Only domain control validated", might result in the CA's signature being ignored. That is: the browser (or the user) should pick how strong the certificate has to be, depending on the kind of business they will be executing over the SSL channel.
CAs could later be required to check at least two 3rd-party databases, to ensure any prior certificate issued by another CA was actually revoked or expired, before allowing the signing of a new certificate.
Thanks. The list is longer, but with the exception of Twitter (and possibly intuit -- a subdomain is shown), not a lot more interesting. I don't see major banks, I don't see Facebook or Hotmail, I don't see the big CAs, etc.
--Steve Bellovin, https://www.cs.columbia.edu/~smb
-- -JH
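The corroboration policy in Jimmy's (i)-(iii) reduces to a small acceptance rule. A minimal sketch with illustrative names only (real deployments of the same instinct later took the shape of Certificate Transparency's signed log proofs):

```python
def accept_cert(fp, ca_signed, db_answers, quorum=2):
    """Accept a certificate only if a trusted CA signed it AND at least
    `quorum` independent 3rd-party databases report the same fingerprint.
    `db_answers` maps database name -> fingerprint on file for the domain,
    or None for an affirmative "no entry" answer."""
    corroborations = sum(1 for seen in db_answers.values() if seen == fp)
    return ca_signed and corroborations >= quorum

# Illustrative values: two of three hypothetical databases corroborate.
dbs = {"db-a": "abc123", "db-b": "abc123", "db-c": None}
```

The client-side knob Jimmy describes is just `quorum`: raise it for high-value transactions, lower it (or set it to zero) where domain-control validation alone is acceptable.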
participants (25)
- Christopher Morrow
- Christopher Morrow
- Damian Menscher
- Gary E. Miller
- George Herbert
- Jay Ashworth
- Jeff Kell
- Jimmy Hess
- John Levine
- John R. Levine
- Keith Medcalf
- Kyle Creyts
- Masataka Ohta
- Matthew Palmer
- Matthias Leisi
- Maxim Khitrov
- Michael Thomas
- Peter Kristolaitis
- Randy Bush
- Rich Kulawiec
- Scott Howard
- Seth David Schoen
- Steven Bellovin
- Valdis.Kletnieks@vt.edu
- William Herrin