Packet anonymity is the problem?
If you connect a dialup modem to the public switched telephone network, do you rely on Caller ID for security? Or do you configure passwords on the systems to prevent wardialers with blocked CLIDs from accessing your system? Has a generation of firewalls and security practices distracted us from the fundamental problem: insecure systems?

http://www.ecommercetimes.com/perl/story/security/33344.html

Gartner research vice president Richard Stiennon confirmed that packet anonymity is a serious issue for Internet security. [...] "Because of the way TCP/IP works, it's an open network," Keromytis said. "Other network technologies don't have that problem. They have other issues, but only IP is subject to this difficulty with abuse." [...] Bellovin compared the situation to bank robberies. "[S]treets, highways and getaway cars don't cause bank robberies, nor will redesigning them solve the problem. The flaws are in the banks," he said. Similarly, most security problems are due to buggy code, and changing the network will not affect that.
On Sat, 10 Apr 2004, Sean Donelan wrote:

: "Because of the way TCP/IP works, it's an open network," Keromytis
: said. "Other network technologies don't have that problem. They have
: other issues, but only IP is subject to this difficulty with abuse."

If networks properly filtered the source IPs of packets exiting or entering their networks to only the valid delegations for that network, this would be far less of a problem: we could at least get *some* accountability going.

Of course, the still high number of bogon routes illustrates that very few folks (if any) really care.

-- Todd Vierling <tv@duh.org> <tv@pobox.com>
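The check Todd describes is BCP 38-style source-address validation, and it is simple enough to sketch. The following Python illustration uses a hypothetical per-interface prefix list with documentation addresses only; in practice the filter lives in the router's forwarding path (ACLs or unicast RPF), not in application code.

    import ipaddress

    # Hypothetical prefixes delegated to a customer-facing interface
    # (documentation ranges, for illustration only).
    VALID_DELEGATIONS = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/25"),
    ]

    def accept_source(src: str) -> bool:
        """Forward a packet only if its source address falls inside a prefix
        delegated to this interface; anything else is presumed spoofed."""
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in VALID_DELEGATIONS)

    for src in ("192.0.2.17", "203.0.113.5"):
        print(src, "-> forward" if accept_source(src) else "-> drop (spoofed source)")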
On Sat, 10 Apr 2004, Todd Vierling wrote:
Of course, the still high number of bogon routes illustrates that very few folks (if any) really care.
Worse; the registries make it trivial to steal registrations and assignments, but nigh impossible to get them back to the rightful owners. -Dan
: "Because of the way TCP/IP works, it's an open network," Keromytis : said. "Other network technologies don't have that problem. They have : other issues, but only IP is subject to this difficulty with abuse."
If networks properly filtered the source IPs of packets exiting or entering their networks to only the valid delegations for that network, this would be far less of a problem: we could at least get *some* accountability going.
Of course, the still high number of bogon routes illustrates that very few folks (if any) really care.
in another thread tonight i see subjects like "lazy network operators" and at first glance, those are the people you're describing (who don't really care.) however, that's simple-minded. "because of the way tcp/ip works..." is a very good lead-in toward the actual cause of this apparent non-caring / laziness.

because of the way ip works, and because of the way human nature works, many of the things that would have to be done to fix this problem have asymmetric cost/benefit. if a network provider isn't lazy, then everyone except them will benefit from that non-laziness. human nature says that ain't happening. even though i try every day, it probably is too late to redesign human nature.

the asymmetric cost/benefit is an emergent property of fundamental design principles in tcp/ip, so it's no surprise that ipv6 didn't do much about this "weakness". attempting to symmetrize cost/benefit without design changes in either human nature or the tcp/ip protocol suite has had mixed results. (i.e., MAPS.)

so, the article sean quoted is all very entertaining, but says nothing new, which is sad, because i for one would really like to hear something new.

-- Paul Vixie
On Sun, Apr 11, 2004 at 03:36:44AM +0000, Paul Vixie wrote: [snip]
in another thread tonight i see subjects like "lazy network operators" and at first glance, those are the people you're describing (who don't really care.)
however, that's simple-minded. "because of the way tcp/ip works..." is a very good lead-in toward the actual cause of this apparent non-caring / laziness.
because of the way ip works, and because of the way human nature works, many of the things that would have to be done to fix this problem have asymmetric cost/benefit. if a network provider isn't lazy, then everyone except them will benefit from that non-laziness. human nature says that ain't happening.
I have heard the 'asymmetric cost/benefit' rationale for the bad laziness (sloppiness, not the larry wall-esque 'good' laziness of automation) on and off over the last few years. Similarly, I have heard about the tremendous cost of sloppiness and human error as the root cause of networking badness for the past several years. Seems that these items are related...

-- RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
Joe Provo wrote:
I have heard the 'asymmetric cost/benefit' rationale for the bad laziness (sloppiness, not the larry wall-esque 'good' laziness of automation) on and off over the last few years. Similarly, I have heard about the tremendous cost of sloppiness and human error as the root cause of networking badness for the past several years.
Maybe there should be more "neighborhood intelligent" worms which would target resources within the vicinity of the compromised host, such as SMTP, WWW and other services. That way the effects would be most devastating for the lazy.

Pete
Petri Helenius wrote:
Joe Provo wrote:
I have heard the 'asymmetric cost/benefit' rationale for the bad laziness (sloppiness, not the larry wall-esque 'good' laziness of automation) on and off over the last few years. Similarly, I have heard about the tremendous cost of sloppiness and human error as the root cause of networking badness for the past several years.
Maybe there should be more "neighborhood intelligent" worms which would target resources within the vicinity of the compromised host, such as SMTP, WWW and other services. That way the effects would be most devastating for the lazy.
Pete
That raises what some would call an interesting viewpoint (not my own). There will be a worm for X written by "bad" people regardless, and the worse the worm, the faster the "lazy" administrators patch.

Therefore the "good" people should beat the bad people to the punch and write the worm first. Make it render the vulnerable system invulnerable or, if necessary, crash it or disable the port, so that the "lazy" administrators fix it quickly without losing their hard drive contents or taking out the neighborhood.

Such "corrective" behavior as suggested by you might also be implemented in such a "proactive" worm.

How many fewer zombies would there be if this was happening? Clearly the current model is not working.
--On Sunday, April 11, 2004 2:45 PM -0400 Joe Maimon <jmaimon@ttec.com> wrote:
Therefore the "good" people should beat the bad people to the punch and write the worm first. Make it render the vulnerable system invulnerable or if neccessary crash it/disable the port etc..... so that the "lazy" administrators fix it quick without losing their hard drive contents or taking out the neighborhood.
Such "corrective" behavior as suggested by you might also be implemented in such a "proactive" worm.
How many fewer zombies would there be if this was happening?
As I understand it, Netsky is supposed to be such a worm. Doesn't seem to make much of a difference, does it? I thought that Nachi/Welchia was supposed to be such a worm as well, and it ended up doing more harm than good. -J -- Jeff Workman | jworkman@pimpworks.org | http://www.pimpworks.org
Jeff Workman wrote:
--On Sunday, April 11, 2004 2:45 PM -0400 Joe Maimon <jmaimon@ttec.com> wrote:
Therefore the "good" people should beat the bad people to the punch and write the worm first. Make it render the vulnerable system invulnerable or if neccessary crash it/disable the port etc..... so that the "lazy" administrators fix it quick without losing their hard drive contents or taking out the neighborhood.
Such "corrective" behavior as suggested by you might also be implemented in such a "proactive" worm.
How many fewer zombies would there be if this was happening?
As I understand it, Netsky is supposed to be such a worm. Doesn't seem to make much of a difference, does it?
I thought that Nachi/Welchia was supposed to be such a worm as well, and it ended up doing more harm than good.
One could argue that those were implementation issues, probably performed by people who did not know what they were doing.
-J
-- Jeff Workman | jworkman@pimpworks.org | http://www.pimpworks.org
--On Sunday, April 11, 2004 6:03 PM -0400 Joe Maimon <jmaimon@ttec.com> wrote:
Jeff Workman wrote:
As I understand it, Netsky is supposed to be such a worm. Doesn't seem to make much of a difference, does it?
I thought that Nachi/Welchia was supposed to be such a worm as well, and it ended up doing more harm than good.
One could argue that those were implementation issues, probably performed by people who did not know what they were doing.
I would be inclined to agree. However, how do we "verify" such a worm? Do we only allow signed worms to infiltrate our systems? This is flawed because the worms in the wild are obviously penetrating systems without their owners' (or the operating system's) consent. And, even if it were possible to implement such a worm, who is going to assume the liability of signing it?

-J

-- Jeff Workman | jworkman@pimpworks.org | http://www.pimpworks.org
In message <4079C0BB.80509@ttec.com>, Joe Maimon writes:
Jeff Workman wrote:
--On Sunday, April 11, 2004 2:45 PM -0400 Joe Maimon <jmaimon@ttec.com> wrote:
Therefore the "good" people should beat the bad people to the punch and write the worm first. Make it render the vulnerable system invulnerable or if neccessary crash it/disable the port etc..... so that the "lazy" administrators fix it quick without losing their hard drive contents or taking out the neighborhood.
Such "corrective" behavior as suggested by you might also be implemented in such a "proactive" worm.
How many fewer zombies would there be if this was happening?
As I understand it, Netsky is supposed to be such a worm. Doesn't seem to make much of a difference, does it?
I thought that Nachi/Welchia was supposed to be such a worm as well, and it ended up doing more harm than good.
One could argue that those were implementation issues, probably performed by people who did not know what they were doing.
From a perspective of auto-patch, *no* programmers "know what they're doing". The state of the art of software engineering, even for well-designed, well-implemented, well-tested systems, is not good enough to allow arbitrary "correct" patches to be installed blindly on a critical system. Let me put it like this: how many ISPs like to install the latest versions of IOS or JunOS on all of their routers without testing them first?
From a purely legal perspective, even a well-written, benevolent worm is illegal -- the writer is not an "authorized" user of my computer. But I'd never authorize someone to patch my system, even an ordinary desktop PC, without my consent -- there are times when I can't afford to have it unavailable. (Many U.S. residents are in such a state for the next four days, until they get their income tax returns prepared and filed. I don't even like installing virus updates at this time of year.)
Auto-patch is a bad idea that just keeps coming back. Auto-patch by people other than the vendor, who've done far less testing, is far beyond "bad". --Steve Bellovin, http://www.research.att.com/~smb
On 11-apr-04, at 4:48, Sean Donelan wrote:
"Because of the way TCP/IP works, it's an open network," Keromytis said. "Other network technologies don't have that problem. They have other issues, but only IP is subject to this difficulty with abuse."
I don't think so. Non-IP networks such as the phone network, the (snail) mail network and the pizza delivery network are also subject to abuse. The difference is that there are far fewer convenient multipliers around that give an attacker an asymmetric advantage.
Bellovin compared the situation to bank robberies. "[S]treets, highways and getaway cars don't cause bank robberies, nor will redesigning them solve the problem. The flaws are in the banks," he said. Similarly, most security problems are due to buggy code, and changing the network will not affect that.
Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.
On Sun, 11 Apr 2004, Iljitsch van Beijnum wrote:
Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.
It's the other way around in fact: if others were to run (more) secure code, there would be far fewer boxen used as zombies to launch ddos attacks against your infrastructure, to propagate worms, and to be used as spam relays.

While it can sound a bit theoretical (to hope that the "others" will run secure code), as the vast majority of users run OSs from one particular (major) vendor, an amelioration of said family of OSs would certainly benefit all. Just think about all the recent network havoc caused by worms propagating on one OS platform ...

- yann
On 11-apr-04, at 11:51, Yann Berthier wrote:
Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.
It's the other way around in fact: if others were to run (more) secure code, there would be far fewer boxen used as zombies to launch ddos attacks against your infrastructure, to propagate worms, and to be used as spam relays.
You make two assumptions:

1. denial of service requires compromised hosts
2. good code prevents hosts from being compromised

I agree that without zombies, launching a significant DoS is much more difficult, but it can still be done. Also, while many hosts run insecure software, the biggest security vulnerability in most systems is the finger resting on the left mouse button. Also, waiting for others to clean up their act to be safe isn't usually the most fruitful approach.
While it can sound a bit theoretical (to hope that the "others" will run secure code), as the vast majority of users run OSs from one particular (major) vendor, an amelioration of said family of OSs would certainly benefit all. Just think about all the recent network havoc caused by worms propagating on one OS platform ...
I'm not all that interested in plugging individual security holes. (Not saying this isn't important, but to the degree this is solvable things are moving in the right direction.) I'm much more interested in shutting up hosts after they've been compromised. This is something we absolutely, positively need to get a handle on.
On Sun, 11 Apr 2004, Iljitsch van Beijnum wrote:
On 11-apr-04, at 11:51, Yann Berthier wrote:
Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.
It's the other way around in fact: if others were to run (more) secure code, there would be far fewer boxen used as zombies to launch ddos attacks against your infrastructure, to propagate worms, and to be used as spam relays.
You make two assumptions:
1. denial of service requires compromised hosts
I don't remember having made such an assumption :) The assumption I made (and still make) is that compromised hosts *are* used for DoS attacks, as well as for other things with major network impact (worms and spam, that is).
2. good code prevents hosts from being compromised
Yes, I think that good code can reduce the exposure to compromise. And then came the disease: users ...
I agree that without zombies, launching a significant DoS is much more difficult, but it can still be done. Also, while many hosts run insecure software, the biggest security vulnerability in most systems is the finger resting on the left mouse button.
I perfectly agree. But there are technical countermeasures available to limit a user's willingness to help compromise his own box: sandboxing, ingress *and* egress filtering, sensible security defaults and so on. While it would not have been a panacea, I think that no unnecessary open ports on default installs, plus OSs not encouraging their users to run as Administrator, would certainly have been a good thing (tm).

We certainly can't expect everything from the user, but we should be able to expect sensible default settings from OS vendors.
Also, waiting for others to clean up their act to be safe isn't usually the most fruitful approach.
I was not even suggesting something like that :)
While it can sound a bit theoretical (to hope that the "others" will run secure code), as the vast majority of users run OSs from one particular (major) vendor, an amelioration of said family of OSs would certainly benefit all. Just think about all the recent network havoc caused by worms propagating on one OS platform ...
I'm not all that interested in plugging individual security holes. (Not saying this isn't important, but to the degree this is solvable things are moving in the right direction.) I'm much more interested in shutting up hosts after they've been compromised. This is something we absolutely, positively need to get a handle on.
I think we mostly agree on every point; I just wanted to pinpoint the fact that insecure code run by others certainly has repercussions on everyone's network. So now let's let this thread die, because it begins to sound like something we have seen so many times :) I won't add _one_ word to these much too often debated subjects.

Cheers,

- yann
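Yann's point above about unnecessary open ports on default installs is easy to turn into a quick self-check. Below is a rough sketch, assuming Python 3 on the machine being audited; the port list is an arbitrary sample, and a real audit would cover every port on every interface, not just these few on localhost.

    import socket

    COMMON_PORTS = {21: "ftp", 23: "telnet", 25: "smtp", 80: "http",
                    135: "epmap", 139: "netbios-ssn", 445: "microsoft-ds"}

    def is_listening(host: str, port: int, timeout: float = 0.5) -> bool:
        """Return True if a TCP connect() to host:port succeeds."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    host = "127.0.0.1"
    for port, name in COMMON_PORTS.items():
        if is_listening(host, port):
            print(f"{host}:{port} ({name}) is listening -- is it really needed?")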
You make two assumptions:
1. denial of service requires compromised hosts
2. good code prevents hosts from being compromised
I agree that without zombies, launching a significant DoS is much more difficult, but it can still be done. Also, while many hosts run insecure software, the biggest security vulnerability in most systems is the finger resting on the left mouse button.
Prior to Windows I would have agreed with you. However, with the advent of Windows, I think insecure software has surpassed the user as a source of problems. This is not based on a belief that users have gotten any better, but rather that software is significantly worse.
Also, waiting for others to clean up their act to be safe isn't usually the most fruitful approach.
This is very true. However, education and encouragement of others to fix their insecure systems is a worthwhile endeavor, and the reality remains that if we could find a way to solve that issue, it would significantly reduce today's DDoS and spam problem.
While it can sound a bit theoretical (to hope that the "others" will run secure code), as the vast majority of users run OSs from one particular (major) vendor, an amelioration of said family of OSs would certainly benefit all. Just think about all the recent network havoc caused by worms propagating on one OS platform ...
I'm not all that interested in plugging individual security holes. (Not saying this isn't important, but to the degree this is solvable things are moving in the right direction.) I'm much more interested in shutting up hosts after they've been compromised. This is something we absolutely, positively need to get a handle on.
I think both efforts are necessary and worthy. Owen -- If this message was not signed with gpg key 0FE2AA3D, it's probably a forgery.
There are network equipment manufacturers who offer last-mile protection at the chip level which forces authentication or the packets get dropped. This has been around for about 4 years now, and people should seriously look at it as a solution. Fast, changeable FPGA designs can accommodate such issues and can be changed on the fly long before someone has time to effectively reverse engineer them to find out how they work; reverse engineers will always be several years behind and will not have access to source code to be able to hack anything.

Forced identification for people who purchase Cisco reseller equipment, or equipment from any other manufacturer, will put a dent in some of this nonsense also. If there is to be security then you must look at the entire issue, well beyond the ability to hack stuff.

Anyway, my 2 cents for the moment.

-Henry

--- Yann Berthier <yb@sainte-barbe.org> wrote:
On Sun, 11 Apr 2004, Iljitsch van Beijnum wrote:
Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.
It's the other way around in fact: if others were to run (more) secure code, there would be far fewer boxen used as zombies to launch ddos attacks against your infrastructure, to propagate worms, and to be used as spam relays.

While it can sound a bit theoretical (to hope that the "others" will run secure code), as the vast majority of users run OSs from one particular (major) vendor, an amelioration of said family of OSs would certainly benefit all. Just think about all the recent network havoc caused by worms propagating on one OS platform ...
- yann
In message <C7AA377F-8B92-11D8-8702-000A95CD987A@muada.com>, Iljitsch van Beijnum writes:
Bellovin compared the situation to bank robberies. "[S]treets, highways and getaway cars don't cause bank robberies, nor will redesigning them solve the problem. The flaws are in the banks," he said. Similarly, most security problems are due to buggy code, and changing the network will not affect that.
Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.
That's where my analogy breaks down -- but you're being victimized largely because of bugs in code other people run. I stand by my statement: most of the security problems we have on the Internet are due to buggy code. (If you want to stretch the analogy, imagine a bogus newspaper report that stimulates uncritical readers to withdraw their money. It's called a run on the bank, and it's every bit as much a denial of service issue as excess packet floods -- bank runs are transaction rates much greater than what the (financial) system was designed to handle. And when they're triggered by false rumors -- well, you get the picture, and my metaphors are stretched too thin as is.) --Steve Bellovin, http://www.research.att.com/~smb
On Apr 10, 2004, at 10:48 PM, Sean Donelan wrote:
If you connect a dialup modem to the public switched telephone network, do you rely on Caller ID for security? Or do you configure passwords on the systems to prevent wardialers with blocked CLIDs from accessing your system? Has a generation of firewalls and security practices distracted us from the fundamental problem: insecure systems?
http://www.ecommercetimes.com/perl/story/security/33344.html Gartner research vice president Richard Stiennon confirmed that packet anonymity is a serious issue for Internet security. [...] "Because of the way TCP/IP works, it's an open network," Keromytis said. "Other network technologies don't have that problem. They have other issues, but only IP is subject to this difficulty with abuse."
Is IP really more insecure than, say, *nix? Back in the days of open mail relays and telnet and guest accounts and anonymous FTP sites, etc., hosts were at least as insecure as the "network" is today. Filtering source addresses is analogous to turning off telnet or applying TCP wrappers on a host. No one seems to think that securing your host is a bad idea, but securing your network seems to be way too much trouble.

Of course, the analogy only goes so far. Filtering source addresses costs you time & effort, and maybe even hardware if you are running old boxes. Not filtering doesn't really cost you anything until someone launches an attack from your network, and even then you might not notice. Leaving telnet running on your host hurts you directly, so that option is not even considered.

The point is, IP is not "inherently insecure". IP is just a transport mechanism. How you configure it, and what you do with it, is up to you.
[...] Bellovin compared the situation to bank robberies. "[S]treets, highways and getaway cars don't cause bank robberies, nor will redesigning them solve the problem. The flaws are in the banks," he said. Similarly, most security problems are due to buggy code, and changing the network will not affect that.
I've always liked that Bellovin guy. :)

Another note: today's attacks tend not to spoof source addresses. What's a few 10s of 1000s of zombies here or there? Let them be caught; it's not worth the time to put in source spoofing code. Easier to just make them spew massive bits as fast as they can.

Shouldn't we concentrate on the problem (hosts), not the transport?

-- TTFN, patrick
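Patrick's analogy to TCP wrappers can be made concrete. The sketch below mimics the per-connection allow-then-deny decision tcpd makes, using invented rule lists; real TCP wrappers read /etc/hosts.allow and /etc/hosts.deny and support a much richer rule language, so this is only an illustration of the idea, not the actual implementation.

    import ipaddress

    # Invented rule lists standing in for /etc/hosts.allow and /etc/hosts.deny.
    ALLOW = [ipaddress.ip_network("192.0.2.0/24")]   # hypothetical trusted network
    DENY = [ipaddress.ip_network("0.0.0.0/0")]       # everyone else

    def permit(client: str) -> bool:
        """Consult the allow list first, then the deny list, then default-allow,
        mirroring the order tcpd uses."""
        addr = ipaddress.ip_address(client)
        if any(addr in net for net in ALLOW):
            return True
        if any(addr in net for net in DENY):
            return False
        return True

    for client in ("192.0.2.40", "203.0.113.9"):
        print(client, "-> accept" if permit(client) else "-> refuse")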
On Apr 11, 2004, at 12:11 AM, Patrick W.Gilmore wrote: [SNIP] Interesting that I sent this on the 11th and it gets delivered on the 14th. My reverse did not match the forward of that IP, which I just fixed. I figured that the mail was dead, but I guess it queued. Sorry for the ... strange reply timing. -- TTFN, patrick
participants (14)
- Dan Hollis
- Henry Linneweh
- Iljitsch van Beijnum
- Jeff Workman
- Joe Maimon
- Joe Provo
- Owen DeLong
- Patrick W.Gilmore
- Paul Vixie
- Petri Helenius
- Sean Donelan
- Steven M. Bellovin
- Todd Vierling
- Yann Berthier