Re: Software router state of the art
On Wed, Jul 23, 2008 at 2:03 PM, Naveen Nathan <naveen@lastninja.net> wrote:
The Endace DAG cards claim they can move 7 gbps over a PCI-X bus from the NIC to main DRAM. They claim a full 10gbps on a PCIE bus.
I wonder, has anyone heard of this used for IDS? I've been looking at building a commodity SNORT solution, and wondering if a powerful network card will help, or would the bottleneck be in processing the packets and overhead from the OS?
The first bottleneck is the interrupts from the NIC. With a generic Intel NIC under Linux, you start to lose a non-trivial number of packets around 700mbps of "normal" traffic because it can't service the interrupts quickly enough. The DAG card can be dropped in to replace the interface used for a libpcap-based application. When I tested the 1gbps PCIE version, I lost no packets up to 1gbps and my capture application's CPU usage dropped to about 1/5th of what it was with the generic NIC. YMMV.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
Date: Wed, 23 Jul 2008 14:17:53 -0400 From: "William Herrin" <herrin-nanog@dirtside.com>
The first bottleneck is the interrupts from the NIC. With a generic Intel NIC under Linux, you start to lose a non-trivial number of packets around 700mbps of "normal" traffic because it can't service the interrupts quickly enough.
Most modern high-performance network cards support MSI (Message Signaled Interrupts), which generate real interrupts only on an intelligent basis and only at a controlled rate. Windows, Solaris and FreeBSD have support for MSI and I think Linux does, too. It requires both hardware and software support. With MSI, TSO, LRO, and PCI-E on hardware that supports them, 9.5 Gbps TCP flows between systems are possible with minimal tuning. That puts the bottleneck back on the software in the CPU to do the forwarding at high rates.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
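To put rough numbers on why those offloads matter: at 10 gbps the host stack otherwise has to touch close to a million full-size frames per second, while aggregation cuts that per-packet work by a factor of about forty. A back-of-the-envelope sketch in Python (the 64 KB TSO/LRO aggregate size is an assumed typical value, not a figure from this thread):

LINK_BPS = 10e9              # 10 gbps link
MTU_BITS = 1500 * 8          # bits per full-size Ethernet frame (payload only)
TSO_BITS = 64 * 1024 * 8     # bits per assumed 64 KB TSO/LRO aggregate

pps_plain = LINK_BPS / MTU_BITS
pps_offload = LINK_BPS / TSO_BITS
print(f"~{pps_plain:,.0f} frames/sec for the stack without offloads")
print(f"~{pps_offload:,.0f} aggregates/sec with 64 KB TSO/LRO")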
On Wed, Jul 23, 2008 at 3:59 PM, Kevin Oberman <oberman@es.net> wrote:
Most modern high-performance network cards support MSI (Message Signaled Interrupts), which generate real interrupts only on an intelligent basis and only at a controlled rate. Windows, Solaris and FreeBSD have support for MSI and I think Linux does, too. It requires both hardware and software support.
"ethtool -c". Thanks Sargun for putting me on to "I/O Coalescing." But cards like the Intel Pro/1000 have 64k of memory for buffering packets, both in and out. Few have very much more than 64k. 64k means 32k to tx and 32k to rx. Means you darn well better generate an interrupt when you get near 16k so that you don't fill the buffer before the 16k you generated the interrupt for has been cleared. Means you're generating an interrupt at least for every 10 or so 1500 byte packets. Regards, Bill -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
Date: Wed, 23 Jul 2008 16:51:50 -0400 From: "William Herrin" <herrin-nanog@dirtside.com> Sender: wherrin@gmail.com
"ethtool -c". Thanks Sargun for putting me on to "I/O Coalescing."
But cards like the Intel Pro/1000 have 64k of memory for buffering packets, both in and out. Few have very much more than 64k. 64k means 32k to tx and 32k to rx. Means you darn well better generate an interrupt when you get near 16k so that you don't fill the buffer before the 16k you generated the interrupt for has been cleared. Means you're generating an interrupt at least for every 10 or so 1500 byte packets.
You have just hit on a huge problem with most (all?) 1G and 10G hardware. The buffers are way too small for optimal performance in any case where the RTT is anything more than half a millisecond: you exhaust the window and stall the stream. I need to move multi-gigabit streams across the country and between the US and Europe. Those are a bit too far apart for those tiny buffers to be of any use at all. This would require 3 GB of buffers. This same problem also makes TCP off-load of no use at all.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
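The mismatch being described is the bandwidth-delay product: the amount of unacknowledged data a sender must keep in flight to fill a long path, which on-card buffers of tens of kilobytes cannot begin to cover. A quick Python sketch; the RTT figures are illustrative assumptions, not measurements from this thread:

def bdp_bytes(rate_bps, rtt_s):
    """Bytes in flight needed to keep a pipe of rate_bps full at RTT rtt_s."""
    return rate_bps * rtt_s / 8

for label, rtt in [("coast-to-coast ~70 ms", 0.070),
                   ("US-Europe ~150 ms", 0.150)]:
    mib = bdp_bytes(10e9, rtt) / 2**20
    print(f"10 gbps, {label}: ~{mib:,.0f} MiB in flight")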
On Wed, 23 Jul 2008, Kevin Oberman wrote:
be of any use at all. This would require 3 GB of buffers. This same problem also make TCP off-load of no use at all.
3 Gigabytes? Why? The newer 40G platforms on the market seem to have abandoned the 600ms buffers typical in the 10G space, in favour of 50-200ms of buffers (I don't remember exactly).

Aren't there TCP implementations that don't use exponential window increase, but instead can do smaller increments? I would have believed that would enable routers to still do well with ~50ms of buffering. High-speed memory is very expensive, and a lot of applications today would prefer to have their packets dropped instead of being queued for hundreds of milliseconds. Finding a good tradeoff level between the demands of different traffic types is quite hard...

Also, DWDM capacity seems to get cheaper all the time, so if you really need to move data at multigigabit speeds, it might make sense to just rent that 10G wave and put your own equipment there that does what you want.

--
Mikael Abrahamsson    email: swmike@swm.pp.se
Re: Exploit for DNS Cache Poisoning - RELEASED

Now, there is an exploit for it.

http://www.caughq.org/exploits/CAU-EX-2008-0002.txt

Robert D. Scott          Robert@ufl.edu
Senior Network Engineer  352-273-0113 Phone
CNS - Network Services   352-392-2061 CNS Receptionist
University of Florida    352-392-9440 FAX
Florida Lambda Rail      352-294-3571 FLR NOC
Gainesville, FL 32611    321-663-0421 Cell
Now, there is an exploit for it.
Maybe I'm missing it, but this looks like a fairly standard DNS exploit. Keep asking questions and sending fake answers until one gets lucky. It certainly matches closely with my memory of discussions of the weaknesses in the DNS protocol from the '90s, with the primary difference being that now networks and hardware may be fast enough to make the flooding (significantly) more effective. I have to assume that one other standard minor enhancement has been omitted (or at least not explicitly mentioned), and will refrain from mentioning it for now.

So, I have to assume that I'm missing some unusual aspect to this attack. I guess I'm getting older, and that's not too shocking. Anybody see it?

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Actually you are not missing anything. It is a brute force attack. I think you had the right concept when you indicated that "networks and hardware may be fast enough". It is not maybe, it is; and every script kiddie on your block has the power in his/her bedroom. Then you add the college crowd sitting on 10Gig pipes to the Internet and the threat is real. But other than just mucking things up, where is the motivation for a poisoning?

Robert D. Scott          Robert@ufl.edu
Senior Network Engineer  352-273-0113 Phone
CNS - Network Services   352-392-2061 CNS Receptionist
University of Florida    352-392-9440 FAX
Florida Lambda Rail      352-294-3571 FLR NOC
Gainesville, FL 32611    321-663-0421 Cell
Hi, On Jul 23, 2008, at 3:51 PM, Robert D. Scott wrote:
Actually you are not missing anything. It is a brute force attack.
I haven't looked at the exploit code, but the vulnerability Kaminsky found is a bit more than a brute force attack. As has been pointed out in various venues, it takes advantage of a couple of flaws in the DNS architecture. No, not simply the fact that the QID space is only 16 bits. That's part of it, but there is more. Really. I'm sure you can find the 'leaked' Matasano Chargen description of the attack on the net somewhere.
But other than just mucking things up, where is the motivation for a poisoning?
Man-in-the-middle attacks directed at ISPs serving end users who want to (say) get to their banks? Regards, -drc
Joe Greco wrote:
So, I have to assume that I'm missing some unusual aspect to this attack. I guess I'm getting older, and that's not too shocking. Anybody see it?
AFAIK, the main novelty is the ease with which bogus NS records can be inserted. It may be hard to get a specific A record (www.victimsbank.com) cached, but if you can shim in the NS records of your ns.poisoner.com authority, then getting the real target A record is trivial since you'll be asked directly for it (and can wait for the legit clients to ask for it for you). Mike
On Jul 23, 2008, at 5:30 PM, Joe Greco wrote:
Maybe I'm missing it, but this looks like a fairly standard DNS exploit.
Keep asking questions and sending fake answers until one gets lucky.
It certainly matches closely with my memory of discussions of the weaknesses in the DNS protocol from the '90s, with the primary difference being that now networks and hardware may be fast enough to make the flooding (significantly) more effective. I have to assume that one other standard minor enhancement has been omitted (or at least not explicitly mentioned), and will refrain from mentioning it for now.
So, I have to assume that I'm missing some unusual aspect to this attack. I guess I'm getting older, and that's not too shocking. Anybody see it?
What's new is the method of how it is being exploited.

Before, if you wanted to poison a cache for www.gmail.com, you get the victim name server to try to look up www.gmail.com and spoof-flood the server, trying to beat the real reply by guessing the correct ID. If you fail, you may need to wait for the victim name server to expire the cache before trying again.

The new way is slightly more sneaky. You get the victim to try to resolve an otherwise invalid and uncached hostname like 00001.gmail.com, and try to beat the real response with spoofed replies. Except this time your reply comes with an additional record containing the IP for www.gmail.com, set to the one you want to redirect it to. If you win the race and the victim accepts your spoof for 00001.gmail.com, it will also accept (and overwrite any cached value) for your additional record for www.gmail.com as well. If you don't win the race, you try again with 00002.gmail.com, and keep going until you finally win one. By making up uncached hostnames, you get as many tries as you want in winning the race. By tacking on an additional reply record to your response packet you can poison the cache for anything the victim believes your name server should be authoritative for.

This means DNS cache poisoning is possible even on very busy servers where normally you wouldn't be able to predict when they were going to expire their cache, and if you fail the first time you can keep trying again and again until you succeed with no wait.

--
Kevin
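The "as many tries as you want" property is what makes the race winnable in practice. A sketch of the odds, assuming the resolver's source port is fixed so only the 16-bit query ID must be guessed, and assuming some number of spoofed replies land before the real answer (both numbers are illustrative):

QID_SPACE = 2**16
SPOOFS_PER_RACE = 100            # assumed spoofed replies landed per race

p_win = SPOOFS_PER_RACE / QID_SPACE
print(f"per-race win probability: ~{p_win:.4f}")     # ~0.0015
print(f"expected races to win:    ~{1 / p_win:.0f}") # ~655 with these assumptions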
Before, if you wanted to poison a cache for www.gmail.com, you get the victim name server to try to look up www.gmail.com and spoof-flood the server, trying to beat the real reply by guessing the correct ID. If you fail, you may need to wait for the victim name server to expire the cache before trying again.
That's why that crummy technique isn't used.
The new way is slightly more sneaky. You get the victim to try to resolve an otherwise invalid and uncached hostname like 00001.gmail.com, and try to beat the real response with spoofed replies.
That's the normal technique, as far as I was aware.
Except this time your reply comes with an additional record containing the IP for www.gmail.com, set to the one you want to redirect it to.
Thought that was the normal technique for cache poisoning. I'm pretty sure that at some point, code was added to BIND to actually implement this whole bailiwick system, rather than just accepting arbitrary out-of-scope data, which it ... used to do (sigh, hi BIND4).
If you win the race and the victim accepts your spoof for 00001.gmail.com, it will also accept (and overwrite any cached value) for your additional record for www.gmail.com as well. If you don't win the race, you try again with 00002.gmail.com, and keep going until you finally win one. By making up uncached hostnames, you get as many tries as you want in winning the race.
Right. To the best of my knowledge, that's neither a new nor a clever technique for generating additional DNS request transactions.
By tacking on an additional reply record to your response packet you can poison the cache for anything the victim believes your name server should be authoritative for.
And that's one standard form of poisoning. Cache poisoning and sending extra data is an interesting topic. I have to admit that my experience with it is somewhat dated; a lot of it is from as much as a decade ago, when I was writing miniature authoritative name servers for application load-balancing purposes. I did in fact run a number of experiments against various implementations to see what sins I could get away with, and I have to say, the protocol is remarkable in that so many broken-seeming things that I deliberately inflicted while playing around can be worked out by server implementations. But, then again, the beauty of DNS is that it hasn't really changed much over time ...
This means DNS cache poisoning is possible even on very busy servers where normally you wouldn't be able to predict when they were going to expire their cache, and if you fail the first time you can keep trying again and again until you succeed with no wait.
This is disappointing, because Vixie specifically stated that this was a new attack, and I'm pretty sure he said it was one where the exploit could not be determined merely by knowing the sort of change that was being made to BIND. Well, the change that was made to BIND was to randomize source ports, which indicated it was a forged-response attack of some kind. Knowing that there were further statements about the weakness of the PRNG for the QID suggested that it was susceptible to a brute-force attack.

I guess there's a vague hint of novelty in it all if you want to be a tad more clever about what you're corrupting. I can think of a few examples, which I will respectfully not discuss, just in case someone hasn't thought of them, but basically it seems to me that this is neither a new nor a novel attack. So I'm not super impressed.

Do I see the need for the patch? Yes. Do I see the need to lie about the nature of the problem? I guess, but this is not any more of a problem today than it was a year (or ten!) ago, except maybe now someone is threatening to finally do something that might cause DNSSEC to get deployed, which seems like it would have been a better way to scare people, IMHO. I think a bunch of us saw the problem with spoofed DNS years ago, and I'm pretty sure a bunch of DNSSEC people have been predicting exactly this state of affairs for quite some time, and knowledge of the general problem predates even that.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
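For what it's worth, the source-port randomization in the patch works by enlarging the space each spoofed packet must match. A sketch of the comparison (the ~60,000 usable ephemeral ports is an assumed figure):

import math

QIDS = 2**16
PORTS = 60000        # assumed usable ephemeral port range after the patch

print(f"fixed source port:      ~{math.log2(QIDS):.0f} bits to guess")
print(f"randomized source port: ~{math.log2(QIDS * PORTS):.1f} bits to guess")
# ~16 vs ~31.9 bits: each spoofed reply is ~60,000 times less likely to match.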
On Wed, Jul 23, 2008 at 9:44 PM, Joe Greco <jgreco@ns.sol.net> wrote:
Thought that was the normal technique for cache poisoning. I'm pretty sure that at some point, code was added to BIND to actually implement this whole bailiwick system, rather than just accepting arbitrary out-of-scope data, which it ... used to do (sigh, hi BIND4).
Joe,

I think that's the beauty of this attack: the data ISN'T out of scope. The resolver is expecting to receive one or more answers to 00001.gmail.com, one or more authority records (gmail.com NS www.gmail.com) and additional records providing addresses for the authority records (www.gmail.com A 127.0.0.1).

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
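For illustration, the shape of that response can be mocked up with the dnspython library (assuming the package is available; the names, TTLs and addresses are placeholders, and the snippet only builds and prints a message, it sends nothing):

import dns.message
import dns.rrset

query = dns.message.make_query("00001.gmail.com.", "A")
resp = dns.message.make_response(query)

# Answer for the throwaway name the resolver actually asked about:
resp.answer.append(
    dns.rrset.from_text("00001.gmail.com.", 120, "IN", "A", "192.0.2.1"))
# Authority section delegating gmail.com to the name under attack:
resp.authority.append(
    dns.rrset.from_text("gmail.com.", 120, "IN", "NS", "www.gmail.com."))
# Glue for that nameserver -- the record that actually poisons the cache:
resp.additional.append(
    dns.rrset.from_text("www.gmail.com.", 120, "IN", "A", "127.0.0.1"))

print(resp)   # dig-style dump: all three sections sit inside gmail.com

Every name is within the gmail.com bailiwick, which is why a bailiwick check alone does not reject it.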
Joe,
I think that's the beauty of this attack: the data ISN'T out of scope. The resolver is expecting to receive one or more answers to 00001.gmail.com, one or more authority records (gmail.com NS www.gmail.com) and additional records providing addresses for the authority records (www.gmail.com A 127.0.0.1).
I think the response to that is best summarized as **YAWN**. One of the basic tenets of attacking security is that it works best to attack the things that you know a remote system will allow. The bailiwick system is *OLD* tech at this point, but is pretty much universally deployed (in whatever forms across various products), so it stands to reason that a successful attack is likely to involve either in-scope data, or a bug in the system. The fact that this was known to be a cross-platform vulnerability would have suggested an in-scope data attack. I thought that part was obvious, sorry for any confusion.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Wed, 23 Jul 2008, Kevin Day wrote:
The new way is slightly more sneaky. You get the victim to try to resolve an otherwise invalid and uncached hostname like 00001.gmail.com, and try to beat the real response with spoofed replies. Except this time your reply comes with an additional record containing the IP for www.gmail.com, set to the one you want to redirect it to. If you win the race and the victim accepts your spoof for 00001.gmail.com, it will also accept (and overwrite any cached value) for your additional record for www.gmail.com as well.
RFC 2181 says the resolver should not overwrite authoritative data with additional data in this manner. I believe the Matasano description is wrong.

Tony.
--
f.anthony.n.finch <dot@dotat.at> http://dotat.at/
FORTIES CROMARTY FORTH TYNE DOGGER: EAST OR SOUTHEAST 3 OR 4, INCREASING 5 OR 6 LATER. SLIGHT OR MODERATE. FOG PATCHES. GOOD, OCCASIONALLY VERY POOR.
On 23 Jul 2008, at 18:30, Joe Greco wrote:
So, I have to assume that I'm missing some unusual aspect to this attack. I guess I'm getting older, and that's not too shocking. Anybody see it?
Perhaps what you're missing can be found in the punchline to the transient post on the Matasano Security blog ("Mallory can conduct this attack in less than 10 seconds on fast Internet link"). Being able to divert users of a particular resolver (who thought they were going to paypal, or their bank, or a government web page to file their taxes, or, or, etc) to the place of your choosing with less than a minute's effort seems like cause for concern to me.

Luckily we have the SSL/CA architecture in place to protect any web page served over SSL. It's a good job users are not conditioned to click "OK" when told "the certificate for this site is invalid".

Joe
On Wed, 2008-07-23 at 21:17 -0400, Joe Abley wrote:
Luckily we have the SSL/CA architecture in place to protect any web page served over SSL. It's a good job users are not conditioned to click "OK" when told "the certificate for this site is invalid".
'course, as well as relying on users not ignoring certificate warnings, SSL as protection against this attack relies on the user explicitly choosing SSL (by manually prefixing the URL with https://), or noticing that the site didn't redirect to SSL. Your average Joe who types www.paypal.com into their browser may very well not notice that they didn't get redirected to https://www.paypal.com/ -Jasper
On Jul 23, 2008, at 9:27 PM, Jasper Bryant-Greene wrote:
'course, as well as relying on users not ignoring certificate warnings, SSL as protection against this attack relies on the user explicitly choosing SSL (by manually prefixing the URL with https://), or noticing that the site didn't redirect to SSL.
Your average Joe who types www.paypal.com into their browser may very well not notice that they didn't get redirected to https://www.paypal.com/
That did not even occur to me.

Anyone have a foolproof way to get grandma to always put "https://" in front of "www"?

Seriously, I was explaining the problem to someone saying "never click 'OK'" when this e-mail came in and I realized how silly I was being.

Help?

--
TTFN,
patrick
On Wed, Jul 23, 2008 at 11:01:11PM -0400, Patrick W. Gilmore wrote:
That did not even occur to me.
Anyone have a foolproof way to get grandma to always put "https://" in front of "www"?
Seriously, I was explaining the problem to someone saying "never click 'OK'" when this e-mail came in and I realized how silly I was being.
The problem is there is no perfect solution to these challenges that the industry faces. The enhanced SSL certs and browser magic, eg: www.paypal.com w/ Firefox 3 gives a nice green "happy" bar. If you don't invest in these, or if there is a lack of user education around these issues, it's just one big pharming pool.

I talked to some govvies today and made what I believe is the clear case when it comes to "Doing the right thing(tm)". My case was that the industry would do the right thing as a whole. There would be stragglers, but the risk of doing nothing is too high. If your nameservers have not been upgraded or you did not enable the proper flags, eg: dnssec-enable and/or dnssec-validation as applicable, I hope you will take another look.

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Patrick W. Gilmore wrote:
Anyone have a foolproof way to get grandma to always put "https://" in front of "www"?
Some tests from my home Comcast connection tonight showed less than desirable results from their resolvers. The first thing I did was to double check that the bookmarks I use when visiting my banking sites all begin https. I was happy to see I had the sense to create them some time ago, by hand, with the https. Even when I receive a notice of a new statement or pending payment on something in the email, and I KNOW it's valid, I'm still visiting it from the bookmark.
Bookmarks or favorites or whatever your browser of choice wishes to call them, for the https URLs. That, or remember to type in the https:// prefix.

- S
Skywing wrote:
Bookmarks or favorites or whatever your browser of choice wishes to call them, for the https URLs. That, or remember to type in the https:// prefix.
- S
Which works great until you run into something like Washington Mutual (of which you have no doubt heard)...

http://www.wamu.com redirects to http://www.wamu.com/personal/default.asp

and https://www.wamu.com *also* redirects to http://www.wamu.com/personal/default.asp (!)

And yes, then you're supposed to trust that the page you've been served up will send the form submit with your username and password to the right place over https. They do now have a link to https://online.wamu.com/IdentityManagement/Logon.aspx on that main page, but you have to look for it. But really, https://www.wamu.com should redirect to *that* in order for it to be safe for the slightly-knowledgeable-about-http-security.

Matthew Kaufman
matthew@eeph.com
http://www.matthew.at
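A quick way to check a site for this behaviour is to start at the https URL, follow the redirects, and look at the scheme you land on. A sketch using only the Python standard library (the hostname is a placeholder):

from urllib.parse import urlparse
from urllib.request import urlopen

resp = urlopen("https://bank.example/")   # urlopen follows redirects itself
final = urlparse(resp.geturl())           # URL after all redirects
if final.scheme != "https":
    print(f"warning: ended up on {final.scheme}://{final.netloc} -- off SSL")
else:
    print(f"still on https at {final.netloc}")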
Patrick W. Gilmore wrote:
Anyone have a foolproof way to get grandma to always put "https://" in front of "www"?
I understand this is a huge can of worms, but maybe it's time to change the default behavior of browsers from http to https...? I'm sure it's doable in FF with a simple plugin, one doesn't have to wait for FF4. (That would work for bookmarks too.)

Robert
On Thu, 24 Jul 2008 09:51:40 +0200 Robert Kisteleki <robert@ripe.net> wrote:
I understand this is a huge can of worms, but maybe it's time to change the default behavior of browsers from http to https...?
I'm sure it's doable in FF with a simple plugin, one doesn't have to wait for FF4. (That would work for bookmarks too.)
Servers won't go along with it -- it's too expensive, both in CPU and round trips. The round trip issue affects latency, which in turn affects perceived responsiveness. This is quite definitely the reason why gmail doesn't always use https (though it, unlike some other web sites, doesn't refuse to use it).

As for CPU time -- remember that most web site visits are very short; this in turn means that you have to amortize the SSL setup expense over very few pages. I talked once with a competent system designer who really wanted to use https but couldn't -- his total system cost would have gone up by a factor of 10.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
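That setup expense is easy to observe. A sketch using Python's standard ssl module that times nothing but the TLS handshake (the hostname is a placeholder); on a short visit this cost is paid once and amortized over only a handful of requests:

import socket
import ssl
import time

HOST = "www.example.com"     # placeholder host
ctx = ssl.create_default_context()

t0 = time.perf_counter()
with socket.create_connection((HOST, 443), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST):   # handshake happens here
        elapsed = time.perf_counter() - t0
print(f"TLS session setup took {elapsed * 1000:.1f} ms")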
Steven M. Bellovin wrote:
As for CPU time -- remember that most web site visits are very short; this in turn means that you have to amortize the SSL setup expense over very few pages. I talked once with a competent system designer who really wanted to use https but couldn't -- his total system cost would have gone up by a factor of 10.
We handle the SSL decryption on the front-end load-balancers (hardware assisted). For financial transactions the load-balancers also maintain long-lived SSL connections to the webservers, that the decrypted data is pipelined into. This avoids the expensive session setup and teardown on the servers. Sam
On Thu, Jul 24, 2008 at 3:05 AM, Steven M. Bellovin <smb@cs.columbia.edu> wrote:
The round trip issue affects latency, which in turn affects perceived responsiveness. This is quite definitely the reason why gmail doesn't always use https (though it, unlike some other web sites, doesn't refuse to use it).
Interestingly enough, Google just added a feature to GMail to force secure connections: http://googlesystem.blogspot.com/2008/07/force-gmail-to-use-secure-connectio... Jeff
On Thu, 24 Jul 2008, Jeffrey Ollie wrote:
Interestingly enough, Google just added a feature to GMail to force secure connections:
http://googlesystem.blogspot.com/2008/07/force-gmail-to-use-secure-connectio...
Jeff
I wish Yahoo and Hotmail even had the ability of *reading* email via https: http://www.interall.co.il/hotmail-yahoo-https.html And then MS doesn't quite understand why people prefer Gmail to Hotmail :-) -Hank
On Thu, Jul 24, 2008 at 11:24 PM, Hank Nussbacher <hank@efes.iucc.ac.il> wrote:
I wish Yahoo and Hotmail even had the ability of *reading* email via https: http://www.interall.co.il/hotmail-yahoo-https.html
Hah! It was only a year ago that Yahoo even added SSL capabilities for login. Six months ago they added POP3S. -Jim P.
On 7/24/08, Hank Nussbacher <hank@efes.iucc.ac.il> wrote:
I wish Yahoo and Hotmail even had the ability of *reading* email via https: http://www.interall.co.il/hotmail-yahoo-https.html
I'm sure when Gmail gets close to the same number of users as Yahoo, they will discover how challenging and painful it is to support that many simultaneous short-lived SSL connections. It's much easier to support CPU intensive tasks like full-time SSL when you have a small user base; as that user base grows, the cost of providing that service continues to grow, often outpacing the revenue benefit it brings.

I *definitely* agree that any paid-for mail service should support full-time SSL connectivity for reading as well as login. For a free service, though, it's hard to afford the CPU resources to handle it as the demand scales up.
And then MS doesn't quite understand why people prefer Gmail to Hotmail :-)
-Hank
The good news is that the more users switch to gmail from hotmail, the less load there is on the server CPUs at hotmail, and the sooner they'll be able to afford to enable full-time SSL for the remaining users. :D So clearly, the goal is to encourage everyone *else* to go use gmail, leaving you to enjoy the very lightly-loaded and highly-responsive platform left behind. ;) Matt
On Fri, Jul 25, 2008 at 5:52 PM, Matthew Petach <mpetach@netflight.com> wrote:
I'm sure when Gmail gets close to the same number of users as Yahoo, they will discover how challenging and painful it is to support that many simultaneous short-lived SSL connections.
True, however GMail has the advantage of seeing Yahoo!'s past problems and working (in advance) around them. SSL is a good thing, and thankfully Yahoo! came into the 21st century. ;-)

Before I switched to GMail I used to VPN my laptop connections to Yahoo! POP3 via a server closer to Yahoo! to avoid sniffing opportunities.

-Jim P.
Matthew Petach wrote:
I'm sure when Gmail gets close to the same number of users as Yahoo, they will discover how challenging and painful it is to support that many simultaneous short-lived SSL connections. It's much easier to support CPU intensive tasks like full-time SSL when you have a small user base; as that user base grows, the cost of providing that service continues to grow, often outpacing the revenue benefit it brings.
You're aware that certain chips, such as the UltraSparc T1 and T2 chips, have on-board SSL acceleration functions that impose virtually zero penalty on SSL encryption (though I suppose that setup/teardown is handled by the main CPU)? --Patrick
On Thu, 2008-07-24 at 09:51 +0200, Robert Kisteleki wrote:
I understand this is a huge can of worms, but maybe it's time to change the default behavior of browsers from http to https...?
I'm sure it's doable in FF with a simple plugin, one doesn't have to wait for FF4. (That would work for bookmarks too.)
It probably wouldn't help. In this case, if I was the attacker, I'd just find a company selling "Domain Validated" certs whose upstream nameserver was vulnerable (there are enough "Domain Validated" certificate pushers now that this shouldn't be hard). Then you spoof the domain from their point of view, obtain a cert, and now HTTPS will work with no error message, almost certainly fooling anyone's grandma.

-Jasper
On Thu, 2008-07-24 at 09:51 +0200, Robert Kisteleki wrote:
I understand this is a huge can of worms, but maybe it's time to change the default behavior of browsers from http to https...?
I'm sure it's doable in FF with a simple plugin, one doesn't have to wait for FF4. (That would work for bookmarks too.)
I don't think anything involving HTTPS is necessarily an answer to this problem. Specifically:

* not all sites do HTTPS
* many organizations use transparent proxies like Microsoft ISA
* certification authorities can in theory be bought off (or otherwise manipulated) to issue bogus certs, making switching to HTTPS worthless

William
Once upon a time, Robert Kisteleki <robert@ripe.net> said:
I understand this is a huge can of worms, but maybe it's time to change the default behavior of browsers from http to https...?
This is a _DNS_ vulnerability. The Internet is more than HTTP(S). Think about email (how many MTAs do TLS and validate the certs?). Even things like BitTorrent require valid DNS (how about MPAA/RIAA poisoning the cache for thepiratebay?).

--
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Robert Kisteleki wrote:
I understand this is a huge can of worms, but maybe it's time to change the default behavior of browsers from http to https...?
I'm sure it's doable in FF with a simple plugin, one doesn't have to wait for FF4. (That would work for bookmarks too.)
Robert
80 != 443

There's nobody listening on 443 for most of the Internet.

Ken
--
Ken Anderson
Pacific.Net
Now, there is an exploit for it.
For anyone looking to use it, you MUST update the framework's libraries. Some of the code it needs only came out ~5 hours ago.

Tuc/TBOH
William Herrin wrote:
"ethtool -c". Thanks Sargun for putting me on to "I/O Coalescing."
But cards like the Intel Pro/1000 have 64k of memory for buffering packets, both in and out. Few have very much more than 64k. 64k means 32k to tx and 32k to rx. Means you darn well better generate an interrupt when you get near 16k so that you don't fill the buffer before the 16k you generated the interrupt for has been cleared. Means you're generating an interrupt at least for every 10 or so 1500 byte packets.
This is not true in the bus-master DMA mode in which the cards are usually used. The mentioned memory is used only as temporary storage until the card can DMA the data into the buffers in main memory. Most Pro/1000 cards have buffering capability for up to 4096 frames.

Pete
On Sat, Jul 26, 2008 at 1:40 PM, Petri Helenius <petri@helenius.fi> wrote:
This is not true in the bus-master DMA mode in which the cards are usually used. The mentioned memory is used only as temporary storage until the card can DMA the data into the buffers in main memory. Most Pro/1000 cards have buffering capability for up to 4096 frames.
Pete,

I'll confess to some ignorance here. We're at the edge of my skill set. The pro/1000 does not need to generate an interrupt in order to start a DMA transfer? Can you refer me to some documents which explain in detail how a card on the bus sets up a DMA transfer?

Thanks,
Bill

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
* William Herrin:
The pro/1000 does not need to generate an interrupt in order to start a DMA transfer? Can you refer me to some documents which explain in detail how a card on the bus sets up a DMA transfer?
Busmaster DMA does not generate an interrupt on the host CPU. The interrupt is used to trigger processing on the host CPU; it can be deferred until several frames have been written.
William Herrin wrote:
The pro/1000 does not need to generate an interrupt in order to start a DMA transfer? Can you refer me to some documents which explain in detail how a card on the bus sets up a DMA transfer?
The driver provides the adapter a ring buffer of memory locations to bus-master DMA the data into (which does not require interrupting the CPU). The interrupts are triggered after the DMA completes, at a moderation rate controllable by the driver. For FreeBSD the default maxes interrupts out at 8000 per second, and on some of the adapters there are firmware optimizations for lowering the latency from the obvious maximum of 125 microseconds. When an interrupt fires, the driver restocks the ring buffer with new addresses to put data into, for one or for 4000 frames, depending on how many were used up.

With IOAT and various offloads this gets somewhat more complicated and more effective.

Pete
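The moderation cap implies how much batching the ring permits; a sketch of the arithmetic (the 1 gbps rate and frame sizes are illustrative, and the packet rates ignore preamble and inter-frame gap):

MAX_IRQ = 8000                    # FreeBSD-style default interrupt cap, per sec
print(f"worst-case added latency: {1e6 / MAX_IRQ:.0f} microseconds")   # 125

for frame_bytes in (1500, 64):    # full-size vs minimum-size frames
    pps = 1e9 / (frame_bytes * 8) # crude 1 gbps frame rate
    print(f"{frame_bytes}-byte frames: ~{pps / MAX_IRQ:.0f} frames per interrupt "
          f"(a 4096-descriptor ring has ample headroom)")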
participants (29)

- Chris Adams
- David Conrad
- Florian Weimer
- Hank Nussbacher
- Jared Mauch
- Jasper Bryant-Greene
- Jeffrey Ollie
- Jim Popovitch
- Joe Abley
- Joe Greco
- Ken A
- Kevin Day
- Kevin Oberman
- Matthew Kaufman
- Matthew Petach
- Mikael Abrahamsson
- Mike Lewinski
- Patrick Giagnocavo
- Patrick W. Gilmore
- Petri Helenius
- Robert D. Scott
- Robert Kisteleki
- Sam Stickland
- Skywing
- Steven M. Bellovin
- Tony Finch
- Tuc at T-B-O-H.NET
- William Herrin
- William Pitcock