Re: Schneier: ISPs should bear security burden
Thing is, protecting them from themselves and their own stupidity is also what most everyone else needs.
Do you really want an internet where everything has to run over ports 80 and 443 because those are all that's left that ISPs don't filter?
They should be filtered, too. For standard bottom-feeder accounts, *everything* should be filtered and transparently proxied. And the accounts should be priced so that they pay for their own upkeep. What will cost money is turning off the filters selectively for certain accounts, and people who want that should be in a position to pay for it.
I'm sorry, but I simply do not share your belief that the educated should be forced to subsidize the ignorant. This belief is at the heart of a number of today's sociological problems, and I, for one, would rather not expand its influence.
How much functionality are we going to destroy before we realize that you can't fix end-node problems in the transit network?
How much of the Internet is going to be destroyed before we realize that the users are too stupid to be trusted to run their end-nodes, and that if the transit network wants to protect itself from the worst offenses it will need to provide only managed services and not let these people out of the corral to begin with?
Strangely, for all the FUD in the above paragraph, I'm just not buying it. The internet, as near as I can tell, is functioning today at least as well as it ever has in my 20+ years of experience working with it. The vast majority of the end-node problems come from one particular software vendor. If that vendor could be held accountable for the problems they have created, things would be much better. The major advantage of the internet is the ability to deploy new applications and protocols quickly and easily.

Transparent proxies, btw, would not prevent most of the harmful stuff available via 443, so I'm not sure what you think that accomplishes. Malware will quickly adapt to any such filtration at the transport layer. As long as you can get some form of undefined content through the internet, malware will have a way to gain transit. It must be addressed at the end node.

Owen
On Wed, 27 Apr 2005, Owen DeLong wrote:
Strangely, for all the FUD in the above paragraph, I'm just not buying it. The internet, as near as I can tell, is functioning today at least as well as it ever has in my 20+ years of experience working with it.
You must not have used it much in those 20 years. I can definitely say worms, trojans, spam, phishing, ddos, and other attacks are up several orders of magnitude in those 20 years. Malicious packets now account for a significant percentage of all IP traffic. Eventually I expect malicious packets will outnumber legitimate packets, just as malicious email outnumbers legitimate email today. As long as the environmental polluter model continues to be championed and promoted on nanog (of all places), the problem will only get worse.

-Dan
On Wed, Apr 27, 2005 at 11:08:42AM -0700, Dan Hollis wrote:
Malicious packets now account for a significant percentage of all IP traffic.
As a data point: an unused, never before used or even just announced /21 currently draws an average of 112pps and 70kbit/s, translating to about 1GB (1 gigabyte!) of traffic per day, or about 30GB per month. In some countries, that translates to real money (I'm hearing INTERESTING price tags on bandwidth in South Africa).

Looking at psmith's weekly routing table report, this would extrapolate (totally non-scientific and ignoring several effects) to at least about 675GB of daily "stray" traffic in the whole Internet, WITHOUT any host answering to the viruses, trojans, whatever.

I hope to find the time to do some capturing and analysis of this traffic. If anyone here has experience with that, I'd be happy to hear from them... don't want to waste time doing something others already did... :-)

Best regards,
Daniel

--
CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
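The back-of-envelope conversion above can be checked directly. A quick sketch (only the 70 kbit/s average comes from the message; everything else is plain unit arithmetic):

```python
# Sanity-check of the darknet traffic figures quoted above.
avg_kbit_per_s = 70  # observed average inbound rate on the unused /21

bytes_per_day = avg_kbit_per_s * 1000 / 8 * 86400  # kbit/s -> bytes/day
gb_per_day = bytes_per_day / 1e9
gb_per_month = gb_per_day * 30

print(f"{gb_per_day:.2f} GB/day, {gb_per_month:.1f} GB/month")
# Prints: 0.76 GB/day, 22.7 GB/month
```

That lands in the same ballpark as the "about 1GB per day / 30GB per month" quoted; the gap is plausibly link-layer overhead and peaks above the average.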
Daniel Roesen wrote:
I hope to find the time to do some capturing and analysis of this traffic. If anyone here has experience with that I'd be happy to hear from them... don't want to waste time doing something others already did... :-)
Sure, what would you like to know? Pete
--On Wednesday, April 27, 2005 11:08 AM -0700 Dan Hollis <goemon@anime.net> wrote:
On Wed, 27 Apr 2005, Owen DeLong wrote:
Strangely, for all the FUD in the above paragraph, I'm just not buying it. The internet, as near as I can tell, is functioning today at least as well as it ever has in my 20+ years of experience working with it.
You must not have used it much in those 20 years. I can definitely say worms, trojans, spam, phishing, ddos, and other attacks are up several orders of magnitude in those 20 years. Malicious packets now account for a significant percentage of all IP traffic. Eventually I expect malicious packets will outnumber legitimate packets, just as malicious email outnumbers legitimate email today.
All of that is true. However, I don't define a functioning internet in terms of the absence of these things. I define it in terms of: when I try to get a connection from my point A to far-end point B, what is the loss and/or failure rate of the desired traffic? From that perspective, in my experience, things are better today than they ever have been.
As long as the environmental polluter model continues to be championed and promoted on nanog (of all places), the problem will only get worse.
I'm not attempting to encourage the environmental polluter model. However, making the guy who owns the pipeline responsible for the chemical plant 200 miles away that is polluting the product provided to him by the water production company still doesn't make sense to me. You have to make the chemical plant responsible, or the problem just keeps getting more expensive. My point is we need to look to solve problems, not symptoms of problems. Transit solutions to end-node problems are costly and progressively less effective over time.

Owen

--
If it wasn't crypto-signed, it probably didn't come from me.
On Wed, 27 Apr 2005, Owen DeLong wrote:
From that perspective, in my experience, things are better today than they ever have been.
The only thing I've seen in the past 20 years which has made any positive impact on overall internet reliability is BGP dampening. In all other cases it's gotten worse, as networks are ground to dust by daily DDOS attacks. You can read daily about site xyz or network xyz being unreachable for hours/days/weeks/months due to DDOS attacks. Compared to 20 years ago I would have to say overall things are worse, not better.

-Dan
The only thing I've seen in the past 20 years which has made any positive impact on overall internet reliability is BGP dampening. In all other cases it's gotten worse, as networks are ground to dust by daily DDOS attacks. You can read daily about site xyz or network xyz being unreachable for hours/days/weeks/months due to DDOS attacks. Compared to 20 years ago I would have to say overall things are worse, not better.
Yes... the news reports more outages today than it reported back then. Of course, part of that is because 20 years ago the media couldn't spell internet, let alone connect to it. However, the huge expansion in overall bandwidth, the increase in the bandwidth-to-subscriber ratio, the proliferation of firewall appliances, and faster and better switching and routing capabilities, packet over SONET, and MPLS have all contributed to a more reliable and more flexible internet.

YMMV, but for me, today, when I try to connect to things on the internet, I have a much higher success rate than I did 20 years ago. My links aren't clogged with DDOS or abuse, even though I'm on a completely unfiltered link. Sure, I see the occasional DDOS, lots of probes, and many, many attempts to use my systems to relay SPAM. The relay attempts are quietly discarded, the DDOS stays down in the noise threshold for the most part, and the other abuse attempts are logged and fail. However, the things I try to do with the internet mostly succeed. Judging by the server logs, people are getting to the web servers I host without difficulty. 20, even 10, heck, even 5 years ago, my success rates were lower than they are today. They've been roughly the same for the last 5 years, but that's pretty good, so I'm generally happy.

I'm not saying we shouldn't make efforts to eliminate abuse. I'm not saying abuse isn't a reliability issue or that it doesn't have a cost. However, eliminating end-node abuse at the transit layer just adds more cost and is, in the long run, an ineffective solution at best, usually with unintended side consequences.

Owen

--
If it wasn't crypto-signed, it probably didn't come from me.
On 27-apr-2005, at 20:08, Dan Hollis wrote:
I can definitely say worms, trojans, spam, phishing, ddos, and other attacks are up several orders of magnitude in those 20 years. Malicious packets now account for a significant percentage of all IP traffic. Eventually I expect malicious packets will outnumber legitimate packets, just as malicious email outnumbers legitimate email today.
As long as the environmental polluter model continues to be championed and promoted on nanog (of all places), the problem will only get worse.
The problem is that the maliciousness of packets or email is largely in the eye of the beholder. How do you propose ISPs determine which packets the receiver wants to receive, and which they don't want to receive? (At Mpps rates, of course.) This whole discussion is a clear example of the fallacy of treating "security" as an independent entity, rather than an aspect of other things. There are many ISPs that do less than they should, though. (Allow spoofed sources, don't do anything against hosts that are reported to send clearly abusive traffic, sometimes even at DoS rates...)
On Thu, 28 Apr 2005, Iljitsch van Beijnum wrote:
The problem is that the maliciousness of packets or email is largely in the eye of the beholder. How do you propose ISPs determine which packets the receiver wants to receive, and which they don't want to receive? (At Mpps rates, of course.)
It's not up to the ISP to determine outbound malicious traffic, but it is up to the ISP to respond in a timely manner to complaints. Many (most?) do not.
There are many ISPs that do less than they should, though. (Allow spoofed sources, don't do anything against hosts that are reported to send clearly abusive traffic, sometimes even at DoS rates...)
This is what I mean by the environmental polluter model: providers who continually spew sewage and do nothing to shut off attackers under their domain despite repeated pleas from victims.

A paper by Jeffrey Race - http://www.camblab.com/nugget/spam_03.pdf - was written about the spam problem, but touches on fraud and other malicious activity. The general attitude in the paper regarding providers' responses to spam complaints also applies to ddos and other attacks. It's also interesting to note where Mr. Ebbers is today. Has the situation gotten better? Maybe at uunet it has since Mr. Ebbers' "departure", but most other places it appears to only have gotten worse[1].

Bigpond let things get so out of hand that their own network began to crumble, which is the only time I can think of in recent history that they've ever taken action to disconnect zombies. You can be certain the victims on the receiving end of Bigpond's zombied customers have little sympathy for Bigpond's situation. Remember, this is the ISP whose abuse@ box auto-deleted complaints for "unacceptable language". When you're so bad that AOL has to block you[2], you should probably consider cleaning up your network.

Sadly, these official policies of 'do nothing' come from the top, so engineers and administrators who are in a position to actually take action against blatant network abuse are explicitly forbidden to take any. So the real question seems to be how to effectively apply a cluebat to CEOs to get a reasonable abuse policy enforced. Nanog can host all the meetings it wants and members can write all the RFCs they want, but until attitudes change at the top, nobody will be allowed to do anything at the bottom.

-Dan

[1] http://sucs.org/~sits/articles/ntl_dont_care/
[2] http://www.smh.com.au/articles/2003/04/29/1051381931239.html?oneclick=true
It's not up to the ISP to determine outbound malicious traffic, but it is up to the ISP to respond in a timely manner to complaints. Many (most?) do not.
If they did, their support costs would explode. The process is: block the customer, educate the customer about why they were blocked, exterminate the customer's PC, unblock the customer. No doubt there'll be a repeat of the same in short order.

Adi
Adi Linden wrote:
It's not up to the ISP to determine outbound malicious traffic, but it is up to the ISP to respond in a timely manner to complaints. Many (most?) do not.
If they did, their support costs would explode. The process is: block the customer, educate the customer about why they were blocked, exterminate the customer's PC, unblock the customer. No doubt there'll be a repeat of the same in short order.
Actually, it's the opposite (though I'm biased). The support costs will decrease, because you'll get fewer complaints inbound and fewer customers complaining about slow connections because their PCs are filling them with junk.

Pete
If they did, their support costs would explode. The process is: block the customer, educate the customer about why they were blocked, exterminate the customer's PC, unblock the customer. No doubt there'll be a repeat of the same in short order.
On a cost basis, it should be:

+ block the customer
+ explain to the customer why they were blocked

The customer should be responsible for getting their PC exterminated, although enterprising ISPs could offer this service for a fee. Finally, it would not be unreasonable to impose a reconnect fee. For that matter, if ISPs wrote contracts appropriately, there could be a disconnect fee for abuse as well.

Owen

--
If it wasn't crypto-signed, it probably didn't come from me.
On April 28, 2005 at 09:09 adil@adis.on.ca (Adi Linden) wrote:
It's not up to the ISP to determine outbound malicious traffic, but it is up to the ISP to respond in a timely manner to complaints. Many (most?) do not.
If they did, their support costs would explode. The process is: block the customer, educate the customer about why they were blocked, exterminate the customer's PC, unblock the customer. No doubt there'll be a repeat of the same in short order.
This mantra is often repeated, but their costs are going to explode anyhow as the defensive blocking of them goes on, world-wide, and their customers want to know why they can no longer send email or browse in random, and ever-growing, chunks of IP space (and, frustrated, find new providers). Only that situation is going to be much more expensive to fix, since it's others' IP space they'll need to get policy changes in, not their own.

-Barry Shein

Software Tool & Die | bzs@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 617-739-0202 | Login: 617-739-WRLD
The World | Public Access Internet | Since 1989 *oo*
participants (7)
- Adi Linden
- Barry Shein
- Dan Hollis
- Daniel Roesen
- Iljitsch van Beijnum
- Owen DeLong
- Petri Helenius