What do you want your ISP to block today?
Which Microsoft protocols should ISPs break today? Microsoft Exchange? Microsoft file sharing? Microsoft Plug & Play? Microsoft SQL/MSDE? Microsoft IIS? It would be so much easier if worm writers followed the RFCs and set the Evil Bit. China has firewalled the entire country, and they have more infected computers than the US. http://www.vnunet.com/Analysis/1143268
Although companies may have the infrastructure to deal with the current band of worms, Trojans and viruses, there is a line of defence that is currently not in place. "The problem isn't Microsoft's products or the knowledge of the consumer. The problem lies in the ISPs' unwillingness to make this issue disappear or at least reduce it dramatically," said Cooper.
He added that ISPs have the view and ability to prevent en-masse attacks. "All these attacks traverse their networks before they reach you and me. If they would simply stop attack traffic that has been identified and accepted as such, we'd all sleep better," Cooper said.
This is a disturbing viewpoint. Next thing you know we'll be blaming ISPs for file sharing...

-Terry
On Fri, 29 Aug 2003 21:06:24 EDT, Terry Baranski <tbaranski@mail.com> said:
This is a disturbing viewpoint. Next thing you know we'll be blaming ISPs for file sharing...
Well, when one of the largest providers of high-speed internet access is including "download music" as a reason for wanting their service.....
Um... What exactly is wrong with that? There are lots of LEGAL ways to download music. Apple's Music Store and several other licensed commercial services provide music download services, as well as internet radio and other "fair use" applications. This seems like a perfectly legitimate reason to want internet access. As such, it seems like a perfectly reasonable feature to advertise.

The problem _IS_ Micr0$0ft choosing to produce code with vulnerabilities in order to increase market penetration. They have essentially built the information superhighway equivalent of the exploding Pinto, and it's high time they got held accountable if you ask me.

I hesitate to include this here (sorry Susan), but I'm starting to think that all the admins and other people who are suffering impact on their non-Windows systems from these vulnerabilities generating DoS traffic should take Micr0$0ft to small claims court. Let them defend a couple of million tiny lawsuits all over the world. Make them play whack-a-mole the way we've had to on patching their garbage.

Owen

--On Friday, August 29, 2003 21:14 -0400 Valdis.Kletnieks@vt.edu wrote:
On Fri, 29 Aug 2003 21:06:24 EDT, Terry Baranski <tbaranski@mail.com> said:
This is a disturbing viewpoint. Next thing you know we'll be blaming ISPs for file sharing...
Well, when one of the largest providers of high-speed internet access is including "download music" as a reason for wanting their service.....
Hi, NANOGers.

] > He added that ISPs have the view and ability to prevent en-masse
] > attacks. "All these attacks traverse their networks before they reach
] > you and me. If they would simply stop attack traffic that has been
] > identified and accepted as such, we'd all sleep better," Cooper said.

Oh, good gravy! I have a news flash for all of you "security experts" out there: The Internet is not one, big, coordinated firewall with a handy GUI, waiting for you to provide the filtering rules. How many of you "experts" regularly sniff OC-48 and OC-192 backbones for all those naughty packets? Do you really want ISPs to filter the mother of all ports-of-pain, TCP 80?

Filter at the *EDGE* folks. You own your own networks; use and manage them responsibly. If you need assistance, ASK. If you can't take on the task, purchase bandwidth from providers who sell (yes, CHARGE YOU MONEY) a filtering service.

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
On Fri, 29 Aug 2003, Rob Thomas wrote:
Filter at the *EDGE* folks. You own your own networks; use and manage them responsibly. If you need assistance, ASK. If you can't take on the task, purchase bandwidth from providers who sell (yes, CHARGE YOU MONEY) a filtering service.
North Texas charges students $30 if their computer is infected, and needs to be cleaned.

http://www.ntdaily.com/vnews/display.v/ART/2003/08/29/3f4eeca4ac93d

If you don't want to download patches from Microsoft, and don't want to pay McAfee, Symantec, etc. for anti-virus software, should ISPs start charging people clean-up fees when their computers get infected? Would you pay an extra $50/Mb a month for your ISP to operate a firewall and scan your traffic for you?
Hey, Sean.

] North Texas charges students $30 if their computer is infected, and needs
] to be cleaned.

I think this is very reasonable, and a great idea.

] Would you pay an extra $50/Mb a month for your ISP to operate a firewall
] and scan your traffic for you?

No, but I have been sorely tempted to offer up [coffee|beer|cash] to have ISPs manage the network security of their other customers. :)

Folks need to remember that even if they outsource the security facets of their Internet-connected networks, they must still be responsive to abuse complaints and queries. Your managed security services provider might be excellent...or not. In the end it is still YOUR network, and any "CNN moments" will be all YOURS as well. Keep those abuse@ aliases pointed at helpful and clueful folks, and respond as quickly as you would have others respond.

Of course if you aren't responsive, you just might end up as an example in my next presentation. ;)

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
On Fri, Aug 29, 2003 at 11:42:16PM -0400, Sean Donelan wrote:
North Texas charges students $30 if their computer is infected, and needs to be cleaned.
Excellent, perhaps they'll learn early that they have to patch often.
..... don't want to pay McAfee, Symantec, etc. for anti-virus software,
Please show me an anti-virus product for the desktop that protects against such things. I've disinfected at least 30 machines this week that have McAfee VirusShield or Norton Antivirus installed with automatic updates enabled (and yes, I verified they all had the latest virus definitions). They'll happily sit there spewing shit to the world until they're rebooted (a few weeks later, now that Windows will happily kludge along but not completely crash), and then you get a wonderful dialog that says:

    'Warning: $anti-virus-program has found an infected file $FOO but could not delete it'

Why couldn't it delete it? Because the file was set read-only, and the software is too dumb to attrib -r $file

And no, $upstream should not be filtering my connection. If you see activity from my network and I don't respond to a friendly notice, turn off my circuit.

--
Matthew S. Hallacy
FUBAR, LART, BOFH Certified
http://www.poptix.net
GPG public key 0x01938203
On Saturday, Aug 30, 2003, at 01:58 Canada/Eastern, Matthew S. Hallacy wrote:
On Fri, Aug 29, 2003 at 11:42:16PM -0400, Sean Donelan wrote:
North Texas charges students $30 if their computer is infected, and needs to be cleaned.
Excellent, perhaps they'll learn early that they have to patch often.
That won't save them when the time required to download the patch set is an order of magnitude greater than the mean time to infection.

Seems to me that it would be far more effective to simply prohibit connection of machines without acceptable operating systems to the network. That would send a more appropriate message to the vendor, too (better than "don't bother to test before you release, we'll pay to clean up the resulting mess").

Joe
On Sat, 30 Aug 2003 14:09:40 EDT, Joe Abley said:
That won't save them when the time required to download the patch set is an order of magnitude greater than the mean time to infection.
This, in fact, is the single biggest thorn in our side at the moment. It's hard to adopt a pious "patch your broken box" attitude when the user can't get it patched without getting 0wned first...
Seems to me that it would be far more effective to simply prohibit connection of machines without acceptable operating systems to the network. That would send a more appropriate message to the vendor, too (better than "don't bother to test before you release, we'll pay to clean up the resulting mess").
Given the Lion worm that hit Linux boxes, and the fact there's apparently a known remote-root (since fixed) for Apple's OSX, what operating systems would you consider "acceptable"?
On Sat, Aug 30, 2003 at 02:53:46PM -0400, Valdis.Kletnieks@vt.edu wrote:
On Sat, 30 Aug 2003 14:09:40 EDT, Joe Abley said:
That won't save them when the time required to download the patch set is an order of magnitude greater than the mean time to infection.
This, in fact, is the single biggest thorn in our side at the moment. It's hard to adopt a pious "patch your broken box" attitude when the user can't get it patched without getting 0wned first...
How about ACLing them? Upstream from customer:

    permit udp <customer> <ISP's nameservers> port 53
    permit tcp <customer> <windowsupdate range> port 80 (?)

for as much of the Windows Update range as can be found. Since they've recently akamai'zed, this is somewhat predictable. Downstream, you can either set up stateful filtering, or just be lazy and hope that allowing the estab flag is enough...

The ACL can be either templated or genericized for the OS. (Replacing <customer> with any means the customer PVC (assuming DSL) can only hit Microsoft regardless of spoofing.) Similar ACLs can be set up for Solaris, OSX, even various flavors of linux. Being able to at least semi-automate router config changes is a requisite, but not insurmountable (a rough sketch follows below).

This will, no doubt, increase support calls. How much, compared to a pervasive worm, is left as an exercise to the reader.

--
Ray Wong
rayw@rayw.net
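For illustration, here is a minimal Python sketch of the per-OS ACL templating Ray describes. The resolver addresses and update ranges are documentation-prefix placeholders, not real Microsoft or ISP ranges, and pushing the rendered ACL out to routers is left to whatever config tooling is already in place:

    # Render a per-customer "walled garden" ACL from an OS-specific template.
    # All addresses are illustrative documentation prefixes, not real ranges.
    import ipaddress

    NAMESERVERS = ["198.51.100.10", "198.51.100.11"]  # hypothetical ISP resolvers
    UPDATE_RANGES = {
        "windows": ["192.0.2.0/24"],    # stand-in for the Windows Update range
        "linux":   ["203.0.113.0/24"],  # stand-in for a distro mirror range
    }

    def wildcard(cidr: str) -> str:
        """Convert CIDR notation to IOS-style 'address wildcard-mask'."""
        net = ipaddress.ip_network(cidr)
        return f"{net.network_address} {net.hostmask}"

    def build_acl(customer_cidr: str, os_tag: str) -> str:
        lines = [f"ip access-list extended quarantine-{os_tag}"]
        for ns in NAMESERVERS:
            lines.append(f" permit udp {wildcard(customer_cidr)} host {ns} eq 53")
        for dst in UPDATE_RANGES[os_tag]:
            lines.append(f" permit tcp {wildcard(customer_cidr)} {wildcard(dst)} eq 80")
        lines.append(" deny ip any any")  # everything else waits until they patch
        return "\n".join(lines)

    print(build_acl("10.20.30.0/24", "windows"))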
On Saturday, Aug 30, 2003, at 14:53 Canada/Eastern, Valdis.Kletnieks@vt.edu wrote:
Given the Lion worm that hit Linux boxes, and the fact there's apparently a known remote-root (since fixed) for Apple's OSX, what operating systems would you consider "acceptable"?
I'm not aware of any operating system that is invulnerable. But clearly, some operating systems are more vulnerable than others :)
On Sat, Aug 30, 2003 at 02:53:46PM -0400, Valdis.Kletnieks@vt.edu wrote:
This, in fact, is the single biggest thorn in our side at the moment. It's hard to adopt a pious "patch your broken box" attitude when the user can't get it patched without getting 0wned first...
This is where you start forcing users through a captive portal to the update site of their vendor. I think they'll get the idea when every site they try to bring up turns out to be windowsupdate.microsoft.com (one cheap way to rig this up is sketched below). [snip]
Given the Lion worm that hit Linux boxes, and the fact there's apparently a known remote-root (since fixed) for Apple's OSX, what operating systems would you consider "acceptable"?
Anything that's not currently infected, and is patched to the current 'safe' level.

--
Matthew S. Hallacy
FUBAR, LART, BOFH Certified
http://www.poptix.net
GPG public key 0x01938203
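One cheap way to implement the captive-portal idea above is at the resolver: answer every DNS query from a quarantined customer with the portal's address, so any site they try to reach becomes the update page. A toy Python sketch, assuming quarantined users are already forced to this resolver; it handles only plain single-question UDP queries (no EDNS, no TCP), and the portal address is a placeholder:

    # Toy walled-garden DNS responder: every query gets the portal address.
    # Demo only: no EDNS/TCP handling, and QTYPE is ignored (always answers A).
    import socket
    import struct

    PORTAL_IP = "192.0.2.80"  # hypothetical address of the "go patch" web server

    def question_section(query: bytes) -> bytes:
        i = 12                      # DNS header is 12 bytes
        while query[i] != 0:        # walk the QNAME labels
            i += 1 + query[i]
        return query[12:i + 5]      # include root byte + QTYPE/QCLASS

    def build_response(query: bytes) -> bytes:
        # Keep the client's transaction ID; flags 0x8180 = response, RD+RA set.
        header = query[:2] + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
        q = question_section(query)
        # Answer: name is a pointer to offset 12, type A, class IN, TTL 60s.
        answer = struct.pack(">HHHIH", 0xC00C, 1, 1, 60, 4) + socket.inet_aton(PORTAL_IP)
        return header + q + answer

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))      # needs root; point quarantined users here
    while True:
        data, addr = sock.recvfrom(512)
        if len(data) > 12:
            sock.sendto(build_response(data), addr)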
Given the Lion worm that hit Linux boxes, and the fact there's apparently a known remote-root (since fixed) for Apple's OSX, what operating systems would you consider "acceptable"?
This is an old argument and it just doesn't get any better with time. There is a fundamental difference between BUGS, which all software has, and Micr0$0ft's level of engineered-in vulnerabilities and wanton disregard for security in the name of features. If you cannot see that many of the exploited vulnerabilities in Micr0$0ft were DESIGNED into the software instead of accidental bugs, I can't help you. This is not to say that Micr0$0ft has not had more than their fair share of BUGS which created vulnerabilities as well.

BTW, how big was the patch for OSX's remote root? (less than 2MB) How big was the patch for Lion? (don't have that number handy, but I remember it being relatively small) When was the last time you installed a Micr0$0ft security fix that was less than 5MB? (I have yet to see one) Shall we also compare the relative timetables between vulnerability awareness and general patch availability?

Owen
... Micr0$0ft's level of engineered-in vulnerabilities and wanton disregard for security in the name of features. ...
i can't see it. i know folks who write code at microsoft and they worry as much about security bugs as people who work at other places or who do software as a hobby. the problem microsoft has with software quality is that they have no competition, and their marketing people know that ship dates will drive total dollar volume regardless of quality. (when you have competition, you have to worry about quality; when you don't, you don't.)

--
Paul Vixie
When you don't have liability you don't have to worry about quality. What we need is lemon laws for software.

--vadim

On 1 Sep 2003, Paul Vixie wrote:
... Micr0$0ft's level of engineered-in vulnerabilities and wanton disregard for security in the name of features. ...
i can't see it. i know folks who write code at microsoft and they worry as much about security bugs as people who work at other places or who do software as a hobby. the problem microsoft has with software quality is that they have no competition, and their marketing people know that ship dates will drive total dollar volume regardless of quality. (when you have competition, you have to worry about quality; when you don't, you don't.)
When you don't have liability you don't have to worry about quality.
What we need is lemon laws for software.
--vadim
That would destroy the free software community. You could try to exempt free software, but then you would just succeed in destroying the 'low cost' software community. (And, in any event, since free software is not really free, you would have a hard time exempting the free software community. Licensing terms, even if not explicitly in dollars, have a cost associated with them.)

Any agreement two uncoerced people make with full knowledge of the terms is fair by definition. If I don't want to buy software unless the manufacturer takes liability, I am already free to accept only those terms. All you want to do is remove from the buyer the freedom to negotiate away his right to sue for liability in exchange for a lower price.

If you seriously think government regulation to reduce people's software buying choices can produce more reliable software, you're living in a different world from the one that I'm living in. In fact, if all companies were required to accept liability for their software, companies that produce more reliable software couldn't choose to accept liability as a competitive edge. So you'd reduce competition's ability to pressure manufacturers to make reliable software. Manufacturers would simply purchase more expensive liability insurance, raise the prices on their software, and continue to produce software that is no more reliable.

DS
On Mon, 1 Sep 2003, David Schwartz wrote:
When you don't have liability you don't have to worry about quality.
What we need is lemon laws for software.
That would destroy the free software community. You could try to exempt free software, but then you would just succeed in destroying the 'low cost' software community.
This is a somewhat strange argument; gifts are not subject to lemon laws, AFAIK. The whole purpose of those laws is to protect consumers from unscrupulous vendors exploiting the inability of consumers to recognize defects in the products _prior to sale_.

The low-cost, low-quality software community deserves to be destroyed, because it essentially preys on the fact that in most organizations acquisition costs are visible while maintenance costs are hidden. This amounts to a rip-off of unsuspecting customers; and, besides, the drive to lower costs at the expense of quality is central to the whole story of off-shoring and the decline of the better-quality producers. The availability of initially indistinguishable lower-quality stuff means that the market will engage in a "race to the bottom", effectively destroying the industry in the process.
(And, in any event, since free software is not really free, you would have a hard time exempting the free software community. Licensing terms, even if not explicitly in dollars, have a cost associated with them.)
Free software producers make no implied representation of fitness of the product for a particular purpose - any reasonable person understands that a good-faith gift is not meant to make the giver liable. Vendors, however, are commonly held to imply such fitness if they offer a product for sale, because they receive supposedly fair compensation. That is why software companies have to explicitly disclaim this implied claim of fitness and merchantability in their (often "shrink-wrap") licenses.
Any agreement two uncoerced people make with full knowledge of the terms is fair by definition.
A consumer of software cannot reasonably be expected to be able to perform an adequate pre-sale inspection of the offered product, and therefore the vendor has the advantage of much better knowledge. This is hardly fair to consumers. That is why the consumer-protection laws (and professional licensing laws) are here in the first place.
If I don't want to buy software unless the manufacturer takes liability, I am already free to accept only those terms.
There are no vendors of consumer-grade software who would assume any liability in their end-user licensing agreements. They don't have to do that, so they don't, and doing otherwise would put them at the immediate competitive disadvantage.
All you want to do is remove from the buyer the freedom to negotiate away his right to sue for liability in exchange for a lower price.
You can negotiate if you have a choice. There is no freedom to negotiate in practice, so the "choice" is, at best, illusory. Go find a vendor which will sell you the equivalent of Outlook _and_ assume liability.
If you seriously think government regulation to reduce people's software buying choices can produce more reliable software, you're living in a different world from the one that I'm living in.
It definitely helped to stem the rampant quackery in the medical profession, and significantly improved the safety of cars and appliances. I would advise you to read some history of fake medicines and medical devices in the US; some of them, sold as late as the 1950s, were quite dangerous (for example, home water "chargers" containing large quantities of radium).

Regulation is needed to make the bargain more balanced - as it stands now, the consumers are at the mercy of software companies because of grossly unequal knowledge and the inability of consumers to make a reasonable evaluation of the products prior to commencing transactions.

(I am living in a country with an economic system full of regulation, and it is so far the best-performing system around. Are you suggesting that radically changing it will produce better results? As you may know, what you offer as a solution was already tried and rejected by the same country, leaving a lot of romantic, but somewhat obsolete, notions of radical agrarian capitalism lingering around.)
In fact, if all companies were required to accept liability for their software, companies that produce more reliable software couldn't choose to accept liability as a competitive edge. So you'd reduce competition's ability to pressure manufacturers to make reliable software.
I admire your faith in the almighty force of competition. Now would you please explain how a single vendor of rather crappy software came to thoroughly dominate the marketplace? (Hint: there's a thing called network externalities.)

An absolutely free market doesn't work, and that is why there are anti-trust, securities, commercial, and consumer-protection laws - all of which were created to address actual problems after those problems were discovered in previously unregulated markets. "Freedom" is not the same as "fairness", and it is fairness which lets the better-for-consumer players get the upper hand and makes the "invisible hand" work.

For an example of a really free market, go to Russia. Over there businesses are often engaged in such practices (not common in places with better-enforced laws) as killing competitors or freely buying government officials. They have way more actual freedom in choosing their business methods - but I doubt you would want to do business there. Oh, and consumers are also quite free not to buy from the bad guys... if they are willing to go without food or gas or apartments, etc.
Manufacturers would simply purchase more expensive liability insurance, raise the prices on their software, and continue to produce software that is no more reliable.
If vendors need to buy liability insurance at rates which depend on the real quality of their products (the insurance companies are quite able to perform risk analysis, unlike the end-users), this will make improving quality not just a matter of nebulous and fungible repeat-business rates and consumer loyalty, but a hard, immediate, bottom-line-impacting factor.

--vadim
This isn't the best forum for this discussion, so this will be my last reply.
On Mon, 1 Sep 2003, David Schwartz wrote:
When you don't have liability you don't have to worry about quality.
What we need is lemon laws for software.
That would destroy the free software community. You could try to exempt free software, but then you would just succeed in destroying the 'low cost' software community.
This is a somewhat strange argument; gifts are not subject to lemon laws, AFAIK.
Gifts also don't come with licensing agreements. Gifts aren't the result of contracts, but free software is. (U.S. courts treat licensing agreements as contracts. If there's compensation, it's not a gift.)
The whole purpose of those laws is to protect consumers from unscrupulous vendors exploiting the inability of consumers to recognize defects in the products _prior to sale_.
Actually, they protect consumers only against the inability to recognize this inability. So long as consumers are aware of this inability, it poses no threat to them.
The low-cost, low-quality software community deserves to be destroyed, because it essentially preys on the fact that in most organizations acquisition costs are visible while maintenance costs are hidden. This amounts to a rip-off of unsuspecting customers; and, besides, the drive to lower costs at the expense of quality is central to the whole story of off-shoring and the decline of the better-quality producers. The availability of initially indistinguishable lower-quality stuff means that the market will engage in a "race to the bottom", effectively destroying the industry in the process.
Your argument is predicated on the premise that you know better than someone else what they want. You use the phrase "unsuspecting customers" as if there were no other kind. You have yet to state a problem that can't be solved by educating the customers.
(And, in any event, since free software is not really free, you would have a hard time exempting the free software community. Licensing terms, even if not explicitly in dollars, have a cost associated with them.)
Free software producers make no implied representation of fitness of the product for a particular purpose - any reasonable person understands that a good-faith gift is not meant to make the giver liable.
Gifts also don't come with licensing terms. Free software is not a gift, it's a contract, and contracts have compensation on both sides.
Vendors, however, are commonly held to imply such fitness if they offer a product for sale, because they receive supposedly fair compensation.
You say this, but you don't believe it. If the compensation was fair, then there would be no need to provide the consumer with any additional protections since he hasn't paid for them. You can't have it both ways. Whatever offer the vendor makes, the customer may take it or leave it based upon whether it provides the customer with value for his money. There are no "unsuspecting customers" because the terms are not a secret.
That is why software companies have to explicitly disclaim this implied claim of fitness and merchantability in their (often "shrink-wrap") licenses.
Umm, makers of free software have to do this too. Even people who place software in the public domain have to do this. This has nothing to do with compensation and has more to do with nuisance.
Any agreement two uncoerced people make with full knowledge of the terms is fair by definition.
A consumer of software cannot reasonably be expected to be able to perform an adequate pre-sale inspection of the offered product, and therefore the vendor has the advantage of much better knowledge. This is hardly fair to consumers. That is why the consumer-protection laws (and professional licensing laws) are here in the first place.
This lack of pre-sale inspection reduces the value of software to the purchaser, so the vendor is already paying fair compensation for this lack. The consumer hasn't paid for this information and so isn't entitled to it. Again you want to have it both ways.

The consumer could pay for this knowledge if he or she wanted to. Corporate customers, for example, could buy one copy of the software to inspect. Or they could hire outside firms to produce software reviews. Or they could just read printed reviews. There are any number of ways customers could obtain this information if they were willing to pay for it. If we assume, arguendo, that they don't obtain this information, it follows that they weren't willing to pay for it. But you want to force them to pay for it whether they want it or not. And why shouldn't you - you know better than they do, right?
If I don't want to buy software unless the manufacturer takes liability, I am already free to accept only those terms.
There are no vendors of consumer-grade software who would assume any liability in their end-user licensing agreements. They don't have to do that, so they don't, and doing otherwise would put them at the immediate competitive disadvantage.
Right, that's why you, knowing better than everyone else, have to force them to.
All you want to do is remove from the buyer the freedom to negotiate away his right to sue for liability in exchange for a lower price.
You can negotiate if you have a choice. There is no freedom to negotiate in practice, so the "choice" is, at best, illusory. Go find a vendor which will sell you the equivalent of Outlook _and_ assume liability.
They won't, because people aren't willing to pay the costs for it. Again, that's why you, knowing better, must force them. If you really were willing to pay the full cost, I don't think you'd have trouble finding it. Insurance companies will provide you insurance against software failures.
If you seriously think government regulation to reduce people's software buying choices can produce more reliable software, you're living in a different world from the one that I'm living in.
It definitely helped to stem the rampant quackery in the medical profession, and significantly improved the safety of cars and appliances. I would advise you to read some history of fake medicines and medical devices in the US; some of them, sold as late as the 1950s, were quite dangerous (for example, home water "chargers" containing large quantities of radium).
Sure, it saves lives. But it costs lives too. How many people in the United States died while the government dragged its heels approving labetalol? But again, you know better than everyone else, so you can decide which lives are important.
Regulation is needed to make the bargain more balanced - as it stands now, the consumers are at the mercy of software companies because of grossly unequal knowledge and the inability of consumers to make a reasonable evaluation of the products prior to commencing transactions.
Of course you know better than the consumers what's important to them. Strange that they seem to buy software even without this knowledge. Maybe that's because it's not worth the cost.
Manufacturers would simply purchase more expensive liability insurance, raise the prices on their software, and continue to produce software that is no more reliable.
If vendors need to buy liability insurance at rates which depend on the real quality of their products (the insurance companies are quite able to perform risk analysis, unlike the end-users), this will make improving quality not just a matter of nebulous and fungible repeat-business rates and consumer loyalty, but a hard, immediate, bottom-line-impacting factor.
Of course, only the big software businesses could afford this. They'd have the muscle and savvy to defend the liability claims. So, far from being a threat to Microsoft, Microsoft would likely champion a proposal like this. How much of their competition would go away?

DS
On Tue, 2 Sep 2003, David Schwartz wrote:
this will be my last reply.
David, since all your arguments are variations on "You think you know better than anyone else what they need" (whereby you, supposedly, extol the virtues of a system which you don't yourself think is the best one), I do concur that further discussion makes no sense.

--vadim
On Tue, 02 Sep 2003 13:34:10 PDT, David Schwartz said:
Umm, makers of free software have to do this too. Even people who place software in the public domain have to do this. This has nothing to do with compensation and has more to do with nuisance.
Umm.. if you explicitly put it in the public domain, you *can't* do it. You no longer have a way to say "by copying this, you agree not to sue us down to our skivvies". That's why the BSD and X11 distributions had copyrights at all - public domain would probably have served their political goals just fine except for the inability to disclaim liability by hanging it off the copyright (which they wouldn't have if they put it in the public domain).
I just summarized my thoughts on this topic here:
http://www.sans.org/rr/special/isp_blocking.php

Overall: I think there are some ports (135, 137, 139, 445) that a consumer ISP should block as close to the customer as they can. One basic issue is that people discussing this topic on mailing lists like these are not average home users. Most of us here have seen a DOS prompt at some point and know about "Service Packs" and "Hotfixes".

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
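For anyone curious what their own provider actually does with those ports, a simple TCP connect test distinguishes "blocked upstream" (the probe times out) from "closed on the host" (an RST comes back). A rough Python sketch; the target host is a placeholder, and note that 137 is normally UDP NetBIOS name service, so a TCP probe there proves little:

    # Probe the commonly-blocked Microsoft ports from this connection.
    import socket

    CANDIDATES = [135, 137, 139, 445]  # 137 is usually UDP; TCP result is indicative only

    def probe(host: str, port: int, timeout: float = 3.0) -> str:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except socket.timeout:
            return "filtered (no answer; likely dropped in transit)"
        except ConnectionRefusedError:
            return "closed (RST came back, so the path is open)"
        except OSError as exc:
            return f"error: {exc}"

    for port in CANDIDATES:
        print(port, probe("host.example.net", port))  # placeholder target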
On Wed, 3 Sep 2003, Johannes Ullrich wrote:
I just summarized my thoughts on this topic here: http://www.sans.org/rr/special/isp_blocking.php
Overall: I think there are some ports (135, 137, 139, 445) that a consumer ISP should block as close to the customer as they can.
If ISPs had blocked port 119, Sobig could not have been distributed via USENET.

Perhaps unbelievably to people on this mailing list, many people legitimately use 135, 137, 139 and 445 over the open Internet every day. Which protocols do you think are used more on today's Internet: SSH or NetBIOS?

Some businesses have created an entire industry of outsourcing Exchange service which needs all their customers to be able to use those ports.

http://www.mailstreet.net/MS/urgent.asp
http://dmoz.org/Computers/Software/Groupware/Microsoft_Exchange/

If done properly, those ports are no more or less "dangerous" than any other 16-bit port number in a TCP or UDP header. But we need to be careful not to make the mistake of assuming that just because we don't use those ports, the protocols aren't useful to other people.
Some businesses have created an entire industry of outsourcing Exchange service which needs all their customers to be able to use those ports.
So should everyone else be required to keep their doors open so they can offer the service? Who is wrong/right? Millions of vulnerable users that need some basic protection now, or a few businesses?

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
Johannes Ullrich wrote:
So should everyone else be required to keep their doors open so they can offer the service? Who is wrong/right? Millions of vulnerable users that need some basic protection now, or a few businesses?
That depends if you are buying the 100% internet or 99.993% internet service.

Pete
That depends if you are buying the 100% internet or 99.993% internet service.
Well, if '100%' includes all the garbage traffic generated by the worm du jour. On my home cable modem connection, about 80% of the packets hitting my firewall are 'junk'. Maybe I would be able to actually share files unencrypted using MSFT file sharing, if I can manage to inject the necessary traffic between all the Nachi pings and Blaster scans.

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
Johannes Ullrich wrote:
Well, if '100%' includes all the garbage traffic generated by the worm du jour. On my home cable modem connection, about 80% of the packets hitting my firewall are 'junk'. Maybe I would be able to actually share files unencrypted using MSFT file sharing, if I can manage to inject the necessary traffic between all the Nachi pings and Blaster scans.
Once upon a time there was a proposal for a protocol which allowed clients to push a filter configuration to the edge router to both classify traffic and filter unneeded things. For one reason or another, this supposedly ended in the bit bucket?

Pete
Once upon a time there was a proposal for a protocol which allowed clients to push a filter configuration to the edge router to both classify traffic and filter unneeded things.
Nice idea. I am sure clients will figure that out. As quickly as they caught on to 'Windows Update' and 'Setting up a VCR clock'. Let's face it: some things are better left to the "experts".

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
On Wed, 3 Sep 2003, Johannes Ullrich wrote:
Once upon a time there was a proposal for a protocol which allowed clients to push a filter configuration to the edge router to both classify traffic and filter unneeded things.
Nice idea. I am sure clients will figure that out. As quickly as they caught on to 'Windows Update' and 'Setting up a VCR clock'. Let's face it: some things are better left to the "experts".
you mean like 'using a computer' ?
you mean like 'using a computer' ?
Hehe... yes! If you insert the word "securely" at the end.

Case in point: I helped my neighbor last weekend to diagnose a printer issue. Another problem he had was that his computer always "rebooted" and never "shut down". He just never read/understood the shutdown dialog, and it never occurred to him that the radio buttons do anything.

It's hard these days. But I HIGHLY recommend for everyone to get out of your server closets, enjoy the sun, and talk to non-techies once in a while. Or: spend a couple hours answering the front-end customer support calls if you can't remember where you parked your car.

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
Hi, Johannes.

] It's hard these days. But I HIGHLY recommend for everyone to get out of
] your server closets, enjoy the sun, and talk to non-techies once in a
] while. Or: spend a couple hours answering the front-end customer support
] calls if you can't remember where you parked your car.

While non-techies can be a support challenge, I find the greatest challenges and demands come from the very techie customers. These are the same customers that don't want to hear "the outage happened because we put a new filter on the peering router...to protect you from outages caused by worms!"

Although it sounds logical to say "some filters are better than no filters," this presumes that "some filters" have no adverse side effects. We all know better. Bugs aren't restricted only to products from Redmond, typos happen, and the performance hit can be quite painful.

You say that putting these filters in place will reap financial reward? Where is the data to support that theory? Most contracts include credit or refund clauses if the link goes down or if the performance doesn't meet a certain level. Failure to meet these clauses results in credits to the customer, refunds to the customer, or the customer leaving for a competitor. Convincing a business to take a risk - a *fiscal* risk - isn't as easy as saying "this will stop worms."

All of the cost data I've seen related to worms is either clearly overblown or is based on a paucity of data. I'm not saying these things don't have a cost; I am saying that the cost hasn't been realistically quantified.

Of course all of this is hand-waving until the market places security above other requirements, such as increased performance and shiny new features.

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
Rob Thomas <robt@cymru.com> writes:

;; Hi, Johannes.
;;
;; ] It's hard these days. But I HIGHLY recommend for everyone to get out of
;; ] your server closets, enjoy the sun, and talk to non-techies once in a
;; ] while. Or: spend a couple hours answering the front-end customer support
;; ] calls if you can't remember where you parked your car.
;;
;; While non-techies can be a support challenge, I find the greatest
;; challenges and demands come from the very techie customers.

YES! Often it's the case that they A) don't fully understand the problem but B) feel they have the "perfect" solution anyway. "Non-techies" will defer to your judgement; "demi-techies" will require bulletproof reasoning for not doing things their way. I hate when that happens. Especially when the reasoning is indeed suboptimal and not by (my) choice or under my control.

Peace,
Petr
While non-techies can be a support challenge, I find the greatest challenges and demands come from the very techie customers. These are the same customers that don't want to hear "the outage happened because we put a new filter on the peering router...to protect you from outages caused by worms!"
The paper talks about "consumers" defined as "home users or small business without dedicated IT staff". These filters should be clearly stated as part of the subscriber agreement. Many filter problems are the result of inconsistent and rushed implementation.
You say that putting these filters in place will reap financial reward? Where is the data to support that theory?
I admit: I do not have "hard numbers". But all the calls to support about slow connections, and dealing with all the abuse@ complaints, have to cost something.
Most contracts include credit or refund clauses if the link goes down or if the performance doesn't meet a certain level.
Given that (a) the customer knows ahead of time about the blocked port, and (b) blocking the port may actually reduce the impact of the occasional worm, your argument proves that there may be a financial benefit.
All of the cost data I've seen related to worms is either clearly overblown or is based on a paucity of data. I'm not saying these things don't have a cost; I am saying that the cost hasn't been realistically quantified.
Yes. I am not using any of these numbers to support my issue. But answering support calls, handing out refunds, and dealing with abuse email do cost money.
such as increased performance and shiny new features.
Well, performance should, if anything, improve. At this point, my cable modem, which I use for regular web browsing, is seeing about 80% "unsolicited" traffic. Not that the bandwidth impact is huge. But I'd rather use it to speed up my pr0n downloads than waste it on pings/port 135 probes/ARP storms... And someone is paying to move all these packets across the wire. After all, that's what we all agree on: we are paying ISPs to move packets.

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
At 22:30 03/09/2003, Rob Thomas wrote: [snip]
effects. We all know better. Bugs aren't restricted only to products from Redmond, typos happen, and the performance hit can be quite painful.
In my experience more network downtime is caused by configuration errors than by all other causes together. The best diagnostic tool I've ever had is a script I cobbled together over two hours one night. Once an hour, it simply collected all the router configs across the network, did a 'diff' between the current and last config, and if there were changes, emailed them to me, along with a TACACS+ log summary that showed who had logged into which router when.

Experience with this quickly taught me to check these summary change logs whenever a problem was escalated to me. Most times the problem was related to a config change, not an external cause. Further experience taught me to look out for one particular engineer's name in the logs, but that's another story.
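Ian's script is easy to approximate with stock tools. Here is a minimal Python sketch of the diff-and-mail half, assuming a collector such as rancid has already fetched configs into "current" and "previous" directories; the paths and addresses are placeholders, and rotating the directories (and appending the TACACS+ summary) is left out:

    # Hourly config-diff reporter: mail any changes since the last run.
    import difflib
    import glob
    import os
    import smtplib
    from email.message import EmailMessage

    CURRENT = "/var/configs/current"    # hypothetical paths; a collector
    PREVIOUS = "/var/configs/previous"  # such as rancid would populate these
    MAILTO = "noc@example.net"

    def diff_report() -> str:
        chunks = []
        for path in sorted(glob.glob(os.path.join(CURRENT, "*.conf"))):
            name = os.path.basename(path)
            old = os.path.join(PREVIOUS, name)
            before = open(old).readlines() if os.path.exists(old) else []
            after = open(path).readlines()
            delta = difflib.unified_diff(before, after,
                                         f"{name} (previous)", f"{name} (current)")
            chunk = "".join(delta)
            if chunk:
                chunks.append(chunk)
        return "\n".join(chunks)

    report = diff_report()
    if report:  # only mail when something actually changed
        msg = EmailMessage()
        msg["From"] = msg["To"] = MAILTO
        msg["Subject"] = "router config changes in the last hour"
        msg.set_content(report)  # a TACACS+ log summary could be appended here
        smtplib.SMTP("localhost").send_message(msg)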
On Thursday, Sep 4, 2003, at 09:59 Canada/Eastern, Ian Mason wrote:
The best diagnostic tool I've ever had is a script I cobbled together over two hours one night. Once an hour, it simply collected all the router configs across the network, did a 'diff' between the current and last config, and if there were changes, emailed them to me, along with a TACACS+ log summary that showed who had logged into which router when.
There are a couple of tools I know about which will do the first part (the config diffing part). Both are easy to extend if you wanted to include other bits (such as tac-plus log summaries). http://www.shrubbery.net/rancid/ http://buffoon.automagic.org/dist/ciscoconf-1.1.tar.gz I wrote ciscoconf. I would recommend that everybody use rancid instead.
Experience with this quickly taught me to check these summary change logs whenever a problem was escalated to me. Most times the problem was related to a config change, not an external cause. Further experience taught me to look out for one particular engineer's name in the logs, but that's another story.
Amen to all that. Joe
Once upon a time there was a proposal for a protocol which allowed clients to push a filter configuration to the edge router to both classify traffic and filter unneeded things.
Nice idea. I am sure clients will figure that out. As quickly as they caught on to 'Windows Update' and 'Setting up a VCR clock'. Let's face it: some things are better left to the "experts".
If the clients don't figure it out, they get the default, which can be as permissive or as restrictive as makes sense for people who can't figure out how to control filtering.

DS
--On Wednesday, September 3, 2003 3:11 PM -0400 Johannes Ullrich <jullrich@euclidian.com> wrote:
Some businesses have created an entire industry of outsourcing Exchange service which needs all their customers to be able to use those ports.
So should everyone else be required to keep their doors open so they can offer the service? Who is wrong/right? Millions of vulnerable users that need some basic protection now, or a few businesses?
Sorry... "Millions of vulnerable users" are only vulnerable because those users chose to run vulnerable systems. They have the responsibility to do what is necessary to correct the vulnerabilities in the systems they chose to run. I am really tired of the attitude that the rest of the world should bear the consequences of Micr0$0ft's incompetence/arrogance. The people who are Micr0$0ft customers should have the responsibility to resolve these issues with Micr0$0ft. It is nice of ISPs to help when they do.

This is akin to driving a pinto, knowing that it's a bomb, and expecting your local DOT to build explosion-proof freeways.

Owen
Owen, Owen DeLong wrote:
Sorry... "Millions of vulnerable users" are only vulnerable because those users chose to run vulnerable systems. They have the responsibility to do what is necessary to correct the vulnerabilities in the systems they chose to run.
Most of them don't know any better than to run what they've got. Computer users, by and large, are not at all educated in the nature of what they're running, or the potential issues due to running Windows. Who tells them that they shouldn't run Windows?
This is akin to driving a pinto, knowing that it's a bomb, and expecting your local DOT to build explosion-proof freeways.
Your analogy is flawed. The problem is, most people don't realize that: 1) Windows is as flawed as it is, and 2) there are real alternatives.

But, I suspect, this has gone far off the topic of Operations. Take this off-list; there's nothing to be gained from this discussion any further.

ObOperational: Did anybody see some strange latency on UU.Net yesterday in the Chicago area?

Gabriel
--
Gabriel Cain                www.dialupusa.net
Systems Administrator       gabriel@dialupusa.net
Dialup USA, Inc.            888-460-2286 ext 208
PGP Key ID: 2B081C6D
PGP fingerprint: C0B4 C6BF 13F5 69D1 3E6B CD7C D4C8 2EA4 2B08 1C6D
Beware he who would deny you access to information, for in his heart he dreams himself your master.
Sorry... "Millions of vulnerable users" are only vulnerable because those users chose to run vulnerable systems.
no, they chose to run popular/... systems. they do not know what vulnerable means, let alone how to judge it. pinto owners did not make a conscious choice of buying a bomb. randy
Some businesses have created an entire industry of outsourcing Exchange service which needs all their customers to be able to use those ports.
So should everyone else be required to keep their doors open so they can offer the service? Who is wrong/right? Millions of vulnerable users that need some basic protection now, or a few businesses?
If a user needs protection, it is up to the user to get it. It is just like someone who wants to go and screw everyone who walks past: it is up to him/her to make sure that condoms get used, not up to everyone else.

Alex
I would think that any company that outsourced Exchange services to another entity would want either a VPN between their two offices or a direct PtP link. But I also know that the most logical method is not always understandable to the pointy-haired people.

william

----- Original Message -----
From: "Sean Donelan" <sean@donelan.com>
To: "Johannes Ullrich" <jullrich@euclidian.com>
Cc: <nanog@merit.edu>
Sent: Wednesday, September 03, 2003 1:51 PM
Subject: Re: What do you want your ISP to block today?
On Wed, 3 Sep 2003, Johannes Ullrich wrote:
I just summarized my thoughts on this topic here: http://www.sans.org/rr/special/isp_blocking.php
Overall: I think there are some ports (135, 137, 139, 445) that a consumer ISP should block as close to the customer as they can.
If ISPs had blocked port 119, Sobig could not have been distributed via USENET.
Perhaps unbelievably to people on this mailing list, many people legitimately use 135, 137, 139 and 445 over the open Internet every day. Which protocols do you think are used more on today's Internet: SSH or NetBIOS?
Some businesses have created an entire industry of outsourcing Exchange service which needs all their customers to be able to use those ports.
http://www.mailstreet.net/MS/urgent.asp
http://dmoz.org/Computers/Software/Groupware/Microsoft_Exchange/
If done properly, those ports are no more or less "dangerous" than any other 16-bit port number in a TCP or UDP header.
But we need to be careful not to make the mistake of assuming that just because we don't use those ports, the protocols aren't useful to other people.
I just read the paper... Sounds like as an ISP, I should offer a new product "The Internet Minus Four Port Numbers Microsoft Can't Handle." What I can't tell is whether this should cost more or less than "The Internet" Matthew Kaufman
On Behalf Of Johannes Ullrich:
I just summarized my thoughts on this topic here: http://www.sans.org/rr/special/isp_blocking.php
Overall: I think there are some ports (135, 137, 139, 445) that a consumer ISP should block as close to the customer as they can.
On Wed, 2003-09-03 at 14:53, Matthew Kaufman wrote:
I just read the paper... Sounds like as an ISP, I should offer a new product "The Internet Minus Four Port Numbers Microsoft Can't Handle." What I can't tell is whether this should cost more or less than "The Internet"
Charge the same and take your 'abuse' team out for lunch on the change you save by blocking the ports ;-)

--
Johannes Ullrich     jullrich@euclidian.com
pgp key: http://johannes.homepc.org/PGPKEYS
"We regret to inform you that we do not enable any of the security
functions within the routers that we install." support@covad.net
Johannes Ullrich wrote:
Charge the same and take your 'abuse' team out for lunch on the change you save by blocking the ports ;-)
We were looking at blocking port 25 outbound, except to designated servers, for many of our dialup and broadband customers as well. Those with the service get the benefit of not worrying about account suspensions for a majority of the issues (open proxies, viruses, yada yada). You'd be surprised how many customers really don't want to have their system suspended and don't care if they have 30 viruses.

-Jack
On zaterdag, aug 30, 2003, at 05:42 Europe/Amsterdam, Sean Donelan wrote:
If you don't want to download patches from Microsoft, and don't want to pay McAfee, Symantec, etc. for anti-virus software, should ISPs start charging people clean-up fees when their computers get infected?
Only if it impacts the ISP, which it doesn't most of the time unless they buy an unfortunate brand of dial-up concentrators.
Would you pay an extra $50/Mb a month for your ISP to operate a firewall and scan your traffic for you?
No way. They have no business even looking at my traffic, let alone filtering it. What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
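The bookkeeping for such a check is simple if flow records are available. A toy Python sketch, assuming per-flow (customer, remote, direction, packets) tuples from something like NetFlow; the thresholds are arbitrary, and as the follow-ups below point out, asymmetric routing and one-way protocols will trip it:

    # Toy return-traffic check over flow records, per the heuristic above.
    from collections import defaultdict

    out_pkts = defaultdict(int)  # (customer, remote) -> packets sent out
    in_pkts = defaultdict(int)   # (customer, remote) -> packets coming back

    def record(customer: str, remote: str, direction: str, packets: int) -> None:
        key = (customer, remote)
        if direction == "out":
            out_pkts[key] += packets
        else:
            in_pkts[key] += packets

    def suspects(min_out: int = 1000, max_return_ratio: float = 0.25) -> dict:
        """Flag customers sending lots of traffic that gets little or nothing
        back -- the DoS/scan signature described above. Thresholds are
        illustrative; asymmetric routing (as noted below) breaks this."""
        flagged = defaultdict(list)
        for (customer, remote), sent in out_pkts.items():
            if sent >= min_out and in_pkts[(customer, remote)] / sent < max_return_ratio:
                flagged[customer].append(remote)
        return dict(flagged)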
On Sat, 30 Aug 2003, Iljitsch van Beijnum wrote:
What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
No... I have one T1 to Sprint and one T1 to AT&T. I think my AT&T bill will be high this month, so I stop sending OUT via AT&T and only accept traffic IN on that link... So now I push OUT Sprint and IN AT&T. I don't want Sprint to kill my connection just because all traffic to me is entering via AT&T, do I?
Hey, Chris.

] No... I have one T1 to Sprint and one T1 to AT&T. I think my AT&T bill
] will be high this month, so I stop sending OUT via AT&T and only accept...

Yep, this is a very common tactic, for reasons of finance, politics, responsiveness, etc.

Thanks,
Rob.
--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);
On Sat, Aug 30, 2003 at 08:33:54AM +0200, Iljitsch van Beijnum wrote:
What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service
Eh? Have you ever run a mailing list? The majority of subscribers NEVER post. Those who do post do so before the large quantity of traffic originates. I suppose the latter can be accounted for using positronic equipment instead of electronic. =) Legit mailing lists may not be 99% of total traffic, but they're sure a good chunk of legit email.
attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
Sure, and I scan my systems from outside all the time. I'm looking for validation that my system has NOT started listening on ports I don't run services on. It's called external monitoring, and is rather useful in letting me get a good night's sleep. Could I do it locally? Sure, but I'd still need a way to verify my sites can be reached from other places. If you want to know how TCP is working to a destination, you have to use TCP to test it.

When I'm working a half dozen part-time contracts, each of whom has multiple servers scattered around the country, this traffic may well be nearly continuous. My employers will "know" about this (it'll be in some memo that no one read), but I'm not going to find every transit provider I cross to warn them; too much hassle. I'm probably not even going to tell my ISP, as it's none of their business.

Are those patterns common among DoS/DDoS? Sure. You'll need to do more analysis than that to determine if that's, in fact, what you have. Scans by themselves certainly aren't inherently dangerous. Heavy levels of them? Well, who gets to define "heavy"? A cracker might need only 2 or 3 scans to get the info needed to attack a site. I probably need a few hundred a day to verify said cracker hasn't succeeded. A script kiddie might run hundreds, or more, or less.

--
Ray Wong
rayw@rayw.net
On zaterdag, aug 30, 2003, at 09:54 Europe/Amsterdam, Ray Wong wrote:
What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service
Eh? Have you ever run a mailing list?
No, haven't had the pleasure.
The majority of subscribers NEVER post. Those who do post do so before the large quantity of traffic originates.
So? SMTP uses TCP, and TCP generates incoming ACKs for outgoing data, so no problems there. Christopher L. Morrow's mention of asymmetric routing for multihomed customers is more to the point, but if we can solve this for all those single-homed dial, cable and ADSL end-users and not for multihomed networks, I'll be very happy.
attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
Sure, and I scan my systems from outside all the time. I'm looking for validation that my system has NOT started listening on ports I don't run services on. It's called external monitoring, and is rather useful in letting me get a good night's sleep.
So which do you prefer: nobody gets to scan your systems from the outside (including you), or everyone gets to scan your systems from the outside (including you)?
but I'd still need a way to verify my sites can be reached from other places.
They have something for that now. It's called "ping".
If you want to know how TCP is working to a destination, you have to use TCP to test it.
As I mentioned above: this will not impact TCP at all because TCP generates return traffic. I'm sure there are one or two UDP applications out there that don't generate return traffic, but I don't know any. The only problem (except asymmetric routing when multihomed) would be tunnels, but you can simply enable RIP or something else on the tunnel to make sure it's used in both directions. Multicast doesn't generate return traffic so this would only apply to unicast destinations.
Scans by themselves certainly aren't inherently dangerous.
It should be possible to have a host generate special "return traffic" that makes sure that stuff that would otherwise be blocked is allowed through.
On Sat, Aug 30, 2003 at 10:28:11AM +0200, Iljitsch van Beijnum wrote:
On zaterdag, aug 30, 2003, at 09:54 Europe/Amsterdam, Ray Wong wrote: So? SMTP uses TCP, TCP generates incoming ACKs for outgoing data, so no problems there.
Ah, so you're only looking to stop non-TCP attacks. How long do you think it will be before the majority of DoS attacks are TCP-based? SYN floods result in ACKs; they just also result in the server being useless. If an ACK is all you need, you won't catch much of anything.
Christopher L. Morrow's mention of asymmetric routing for multihomed customers is more to the point, but if we can solve this for all those single homed dial, cable and ADSL end-users and not for multihomed networks, I'll be very happy.
Yes, I'd be happy too, but your original point wasn't terribly specific, and doesn't really address typical traffic patterns. Now that it's clear, how about a more obvious one: streaming services are primarily asymmetric, and plenty of them use UDP. There may be a little return traffic, but nothing you're going to predict. I suppose you can call for the end of UDP-based streaming protocols. Good luck. It took long enough for people to get used to moving away from NFSv2.
attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
Sure, and I scan my systems from outside all the time. I'm looking for validation that my system has NOT started listening on ports I don't run services on. It's called external monitoring, and is rather useful in letting me get a good night's sleep.
So which do you prefer: nobody gets to scan your systems from the outside (including you), or everyone gets to scan your systems from the outside (including you)?
So let's see, my choices are: 1) both the cracker and I know if I've been cracked, or 2) the cracker knows I've been hacked, while I have to wait until my server is an active participant in screwing the rest of the Internet, AND I then have to actively inspect the system to see where he's failed to cover his tracks well. Yes, the choice is wonderful. Obscurity has done so much to enhance reliability, security, you name it.
but I'd still need a way to verify my sites can be reached from other places.
They have something for that now. It's called "ping".
Yes, and ICMP echoes are blocked so consistently across the net (not). This line is relevant:
If you want to know how TCP is working to a destination, you have to use TCP to test it.
It's an example. I need to generate traffic to the various ports. Even if I know ping is working, that doesn't mean I know HTTP or SSH or RTSP or SMTP are getting through. Relying on ping to verify outside connectivity is great for providing a ping response server, but not many customers seem interested in paying for that.
As I mentioned above: this will not impact TCP at all because TCP generates return traffic. I'm sure there are one or two UDP applications out there that don't generate return traffic, but I don't know any. The only problem (except asymmetric routing when multihomed)
UDP generates return traffic, but there's nothing to predict any degree of symmetry. Indeed, a likely different last mile, local congestion, et al. virtually guarantee that I can't predict how much return traffic there will be. Look inside, and they all come down to 'push a bunch of UDP out, pray very hard that enough gets to the other side, hope the other side can tell us if not.' ICMP likewise may or may not result in return traffic. At any level, things are almost never completely tit-for-tat.
Scans by themselves certainly aren't inherently dangerous.
It should be possible to have a host generate special "return traffic" that makes sure that stuff that would otherwise be blocked is allowed through.
Sure, and spoofing the special "return traffic" will be obvious only to the other end, not the transits in the middle. -- Ray Wong rayw@rayw.net
On zaterdag, aug 30, 2003, at 10:57 Europe/Amsterdam, Ray wrote:
So? SMTP uses TCP, TCP generates incoming ACKs for outgoing data, so no problems there.
Ah, so you're only looking to stop non-TCP attacks. How long do you think it will be before the majority of DoS attacks are TCP-based? SYN floods result in ACKs; they just also result in the server being useless. If an ACK is all you need, you won't catch much of anything.
A SYN flood will either stay within the resource limits of the (network to the) target host, or it won't; and either the source addresses are legitimate, or they aren't. Only in one of the four combined cases will there be return traffic for most packets. So this should have beneficial effects most of the time. Also, when the target host implements filtering there won't be return traffic, so then it should work even better.
Now that it's clear, how about a more obvious one: streaming services are primarily asymmetric, and plenty of them use UDP. There may be a little return traffic, but nothing you're going to predict.
I did a little test using Quicktime and I see 10 packets per second return traffic. But the port numbers don't match the traffic flowing in the other direction... The amount of return traffic isn't important, as long as there is _some_.
If you want to know how TCP is working to a destination, you have to use TCP to test it.
It's an example. I need to generate traffic to the various ports. Even if I know ping is working, that doesn't mean I know HTTP or SSH or RTSP or SMTP are getting through.
So what's the problem? You open an HTTP, SSH, RTSP or SMTP session and see if you get a response. If you do, no problem. If you don't, the "suspicious traffic" counter increases. If you keep hammering on a non-responsive server, then after a while something is going to happen to your port. I think rate limiting outgoing traffic to very low levels (5 kbps or so) is probably the best automated way to handle this.
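To make the rate-limiting suggestion concrete, here is a minimal token-bucket sketch clamped to roughly 5 kbps. The rate, burst size, and packet sizes are illustrative assumptions only; a real implementation would live in the access router, not on a host.

import time

class TokenBucket:
    """Token bucket: refill at a fixed rate, spend tokens per byte sent."""
    def __init__(self, rate_bytes=5000 // 8, burst=1500):
        self.rate = rate_bytes        # ~5 kbps expressed as bytes/second
        self.capacity = burst         # allow one full-size packet of burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False                  # drop (or queue) the packet

# A port whose "suspicious traffic" counter crossed its threshold might
# be clamped like this:
bucket = TokenBucket()
for pkt_len in (1500, 1500, 1500, 64):
    print(pkt_len, "sent" if bucket.allow(pkt_len) else "dropped")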
Scans by themselves certainly aren't inherently dangerous.
It should be possible to have a host generate special "return traffic" that makes sure that stuff that would otherwise be blocked is allowed through.
Sure, and spoofing the special "return traffic" will be obvious only to the other end, not the transits in the middle.
Hm, good point. Maybe it's easier to set the thresholds such that some limited port scanning doesn't trigger any action. It's not like any of this is going to make targeted portscanning completely impossible anyway; it will mostly make sweeping the net for vulnerable systems too slow to be useful.
Christopher L. Morrow's mention of asymmetric routing for multihomed customers is more to the point, but if we can solve this for all those single homed dial, cable and ADSL end-users and not for multihomed networks, I'll be very happy.
Sorry to throw yet another insect into the topical remedy (fly in the ointment), but I happen to look a lot like a single-homed ADSL end user at certain levels, yet I'm multihomed. I'd be very annoyed if my ISP started blocking things just because my traffic pattern didn't look like what they expect from a single-homed customer.
So which do you prefer: nobody gets to scan your systems from the outside (including you), or everyone gets to scan your systems from the outside (including you)?
I prefer the latter.
If you want to know how TCP is working to a destination, you have to use TCP to test it.
As I mentioned above: this will not impact TCP at all because TCP generates return traffic. I'm sure there are one or two UDP applications out there that don't generate return traffic, but I don't know any. The only problem (except asymmetric routing when multihomed) would be tunnels, but you can simply enable RIP or something else on the tunnel to make sure it's used in both directions. Multicast doesn't generate return traffic so this would only apply to unicast destinations.
But TCP packets to a port that isn't listening (or several ports that aren't listening) _ARE_ what you are talking about blocking. This is not a good idea.
Scans by themselves certainly aren't inherently dangerous.
It should be possible to have a host generate special "return traffic" that makes sure that stuff that would otherwise be blocked is allowed through.
I don't think it's desirable or appropriate to have everyone re-engineer their hosts to allow monitoring and external validation scans to get around your scheme for turning off services ISPs should be providing. Owen
On zaterdag, aug 30, 2003, at 18:54 Europe/Amsterdam, Owen DeLong wrote:
Christopher L. Morrow's mention of asymmetric routing for multihomed customers is more to the point, but if we can solve this for all those single homed dial, cable and ADSL end-users and not for multihomed networks, I'll be very happy.
I happen to look a lot like a single-homed ADSL end user at certain levels, but I'm multihomed. I'd be very annoyed if my ISP started blocking things just because my traffic pattern didn't look like what they expect from a single-homed customer.
I'm sure knife salespeople find it extremely annoying that they can't bring their wares along as carry-on when they fly. Sometimes a few people have to be inconvenienced for the greater good.
But TCP packets to a port that isn't listening (or several ports that aren't listening) _ARE_ what you are talking about blocking. This is not a good idea.
Why not? I think it's a very good idea. TCP doesn't work if you only use it in one direction, so blocking this doesn't break anything legitimate, but it does stop a whole lot of abuse. (Obviously I'm talking about the case where the lack of return traffic can be determined with a modicum of reliability.)
It should be possible to have a host generate special "return traffic" that makes sure that stuff that would otherwise be blocked is allowed through.
I don't think it's desirable or appropriate to have everyone re-engineer their hosts to allow monitoring and external validation scans to get around your scheme for turning off services ISPs should be providing.
But then you don't seem to have any problem with letting denial of service attacks through, so I'm not sure there is any use in even discussing this with you. Today, about half of all mail is spam, and it's only getting worse. If we do nothing, tomorrow half of all network traffic could be worms, scans and DOS. We can't go on sitting on our hands.
--On Saturday, August 30, 2003 8:18 PM +0200 Iljitsch van Beijnum <iljitsch@muada.com> wrote:
On zaterdag, aug 30, 2003, at 18:54 Europe/Amsterdam, Owen DeLong wrote:
Christopher L. Morrow's mention of asymmetric routing for multihomed customers is more to the point, but if we can solve this for all those single homed dial, cable and ADSL end-users and not for multihomed networks, I'll be very happy.
I happen to look a lot like a single-homed ADSL end user at certain levels, but I'm multihomed. I'd be very annoyed if my ISP started blocking things just because my traffic pattern didn't look like what they expect from a single-homed customer.
I'm sure knife salespeople find it extremely annoying that they can't bring their wares along as carry-on when they fly. Sometimes a few people have to be inconvenienced for the greater good.
In my opinion, this is a very unfortunate attitude largely based on FUD and myth. Apologies for the off-topicness of the following example, but, having just been through this level of "greater good," I hope it will serve some positive purpose if people realize how ridiculous it gets if you let this go. Frankly, I think the level of absurdity that the TSA and HSA have taken things to speaks for itself.

From May 21 of this year until August 1, certain interpretations of our newfound greater good would have allowed me to be classified as a terrorist and hauled off to prison. Why? Because on May 21, depending on your interpretation of the statutes, my possession of an until-then perfectly legal 2 pounds of black powder, or my possession of an until-then perfectly legal Aerotech J-350 Ammonium Perchlorate Composite Propellant rocket motor reload, suddenly changed from a perfectly legal hobby to an act of terrorism for anyone who did not possess a Low Explosives User Permit from the USDOJ/BATFE. What changed on August 1? I got my permit (finally), which I applied for in April. The minor inconvenience involved in doing this consisted of:

1. $100 to the feds.
2. Filing an FBI fingerprint card with the BATF, plus $30 to get the fingerprinting done, plus about 3 hours to track down a method of getting the fingerprinting done that actually worked. (The BATF instructions didn't work, and it turned into a name-that-bureaucracy trip through 5 different agencies to find one that would do the fingerprinting; no, the FBI will not.)
3. A federal background check.
4. Essentially signing away my 4th amendment rights and granting the BATFE permission to inspect my home at any time.
5. Getting a letter of agreement for contingency storage from at least one agency with a LEUP and a storage authorization (my LEUP is a non-storage LEUP).
6. Keeping records of all my rocket motor purchases, usages, storages, and other dispositions for 10 years.

The greater good accomplished: any nutcase that wants to can still pay cash for all the ammonium nitrate and diesel fuel he/she wants, with no identification required, no record of the transaction, and no permit required. Did I mention that the Oklahoma City federal building has proven that AN + diesel does explode, while the NH state police explosives lab has proven that APCP DOES NOT EXPLODE?

Sorry... I just don't see a greater good in forcing liability on ISPs for forwarding IP datagrams with valid headers.
But TCP packets to a port that isn't listening (or several ports that aren't listening) _ARE_ what you are talking about blocking. This is not a good idea.
Why not? I think it's a very good idea. TCP doesn't work if you only use it in one direction, so blocking this doesn't break anything legitimate, but it does stop a whole lot of abuse. (Obviously I'm talking about the case where the lack of return traffic can be determined with a modicum of reliability.)
1. Your assumption is false. There are multiple diagnostic things that can be accomplished with what appears to be a single-sided TCP connection.
2. I should be able to probe, portscan, or otherwise attack my own site from any location on the Internet, so long as I do not create a DOS or an AUP violation on someone else's network that I have an agreement with.
3. Fixing the end hosts will stop a lot more abuse than breaking the network will.
It should be possible to have a host generate special "return traffic" that makes sure that stuff that would otherwise be blocked is allowed through.
I don't think it's desirable or appropriate to have everyone re-engineer their hosts to allow monitoring and external validation scans to get around your scheme for turning off services ISPs should be providing.
But then you don't seem to have any problem with letting denial of service attacks through, so I'm not sure there is any use in even discussing this with you. Today, about half of all mail is spam, and it's only getting worse. If we do nothing, tomorrow half of all network traffic could be worms, scans and DOS. We can't go on sitting on our hands.
I don't propose sitting on our hands. I propose fixing the problem where the problem is. What you are proposing makes as much sense as locking up all the yeast producers to cut down on drunk driving. Sure, there are fewer yeast producers than drunk drivers, and they're in business, so they're easier to find. However, just because it's easier doesn't make it correct or even logical. Yes, this is an extreme example, but, other than degree of separation, I don't see a lot of difference in the approaches. Fixing the edge is harder, but it will yield better results. Breaking the core is easier, but it will yield lots of collateral damage and won't necessarily do much more than create smarter worms. Owen
At 07:33 30/08/2003, Iljitsch van Beijnum wrote:
On zaterdag, aug 30, 2003, at 05:42 Europe/Amsterdam, Sean Donelan wrote:
If you don't want to download patches from Microsoft, and don't want to pay McAfee, Symantec, etc for anti-virus software; should ISPs start charging people clean up fees when their computers get infected?
Only if it impacts the ISP, which it doesn't most of the time unless they buy an unfortunate brand of dial-up concentrators.
Would you pay an extra $50/Mb a month for your ISP to operate a firewall and scan your traffic for you?
No way. They have no business even looking at my traffic, let alone filtering it.
What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
This is fine until a customer sends out legitimate multicast traffic, so any such scheme has to ignore multicast traffic. Then the worm and virus writers will just switch to using multicast as a vector. Also, this only works where routing is strictly symmetrical (e.g. edge connections, and then only single-homed edges).

It also has the problem that you have to retain some state (possibly little) for all outbound traffic until you can match it to inbound traffic. Given the paucity of memory in most edge routers this is a problem. Even with a decent amount of memory, it would soon get overrun, even on a slowish circuit like a T1. A DSLAM with several hundred DSL lines would need lots of memory to implement this, and lots of CPU cycles to manage it.

At the layer 3 level, all TCP traffic is bidirectional as it has to send ACKs back, so this scheme can't simply work on "I've seen another packet in the reverse direction, so it's OK". So we have to work on byte counts and see if what goes one way is balanced by what goes the other way. Then it gets worse still: much legitimate traffic is highly asymmetric. In a POP3 session, most traffic is one way and only a small quantity of high-level ACKs go the other way. Ditto SMTP and most HTTP traffic.

So, we've reached the stage where, for this to work, it has to have at least the complexity of a stateful firewall. OK, that is doable, at a cost. But in the process we seem to have lost any characteristic of asymmetry that allows us to distinguish good from bad traffic.
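The memory concern can be made concrete with a back-of-envelope calculation; every number below is an illustrative assumption, not a measurement of any real DSLAM.

# Rough flow-state cost of tracking return traffic on one DSLAM.
FLOW_RECORD_BYTES = 64    # assumed: addresses, ports, proto, counters, timer
FLOWS_PER_LINE    = 200   # assumed concurrent flows per DSL subscriber
LINES_PER_DSLAM   = 500

state_bytes = FLOW_RECORD_BYTES * FLOWS_PER_LINE * LINES_PER_DSLAM
print(f"{state_bytes / 2**20:.1f} MiB of flow state")   # ~6.1 MiB

# That is before counting the CPU cycles needed to create, match, and
# age entries at line rate, which is the real objection above.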
He added that ISPs have the view and ability to prevent en-masse attacks. "All these attacks traverse their networks before they reach you and me. If they would simply stop attack traffic that has been identified and accepted as such, we'd all sleep better," Cooper said.
Frankly, I don't want any of my ISPs filtering any of my traffic. I think we need (especially enterprise administrators like myself) to take some responsibility and place our own filters: filters not only to stop the ingress attack but also to filter our own egress traffic. I have encountered many private administrators who have the mentality that all they need to do is filter the ingress traffic, and who do not place egress filters on their networks. TSK TSK TSK!!!!!

Individuals like Rob Thomas, and countless others, provide frequently updated bogon lists, templates, etc. Apply these to your edge. This is your first layer of filtering. Make sure to apply null routes to the bogons so you block these on egress. Apply prefix lists if you are a BGP speaker (keep that routing table clean), and access lists at your ingress point to block any traffic from a bogon (Bogus!!!) address.

Now you are ready for your next filters. Use a chokepoint, and filter your TCP/UDP ports, or any other protocols you run internally (MS PORTS???). Making an all-inclusive filter is the only way to go here. Now keep yourself informed and modify your filters to mitigate attacks, etc. This might not be the easy way (the easy way would be to say, "Hey ISP, it's on you now... Filter this stuff!!!!"), but it is the only sure way to protect the network you administer (which is your responsibility, not the ISP's).

Frankly, all I want my ISP to do is to maintain my link with them, provide me BGP routes, and accept my advertisements. Your bogons are easily maintained, since once again individuals like Rob Thomas update their templates accordingly (THANKS!!!!!!!) and are nice enough to also inform the list of upcoming changes.

A big letter "L" should be stamped on the forehead of anyone who was allowing ingress traffic on those MS ports (and even more so if they were allowing it to egress also). Microsoft cannot blame the ISP networks for not filtering the ports used by their proprietary protocols. Shame on them, and shame on all those that left these ports open on their networks. Even if ISPs would begin filtering (a thought that doesn't make me too happy), I would never trust their filters because I have no control over them. Yes, I am that paranoid!!!!!!!

Gerardo A. Gregory Manager Network Administration and Security 402-970-1463 (Direct) 402-850-4008 (Cell) ------------------------------------------------ Affinitas - Latin for "Relationship" Helping Businesses Acquire, Retain, and Cultivate Customers Visit us at http://www.affinitas.net
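A minimal sketch of the bogon check described above, using Python's ipaddress module. The prefixes shown are a static snapshot for illustration only; the real bogon list changes over time and should be taken from the Team Cymru templates mentioned, never hardcoded.

import ipaddress

# A few entries in the style of a bogon list (illustrative snapshot).
BOGONS = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
)]

def is_bogon(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGONS)

# Both directions matter: drop bogon sources on ingress, and make sure
# nothing with a bogon source leaves your own network on egress.
for src in ("10.1.2.3", "198.51.100.7"):
    print(src, "bogon" if is_bogon(src) else "ok")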
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Gerardo Gregory
Frankly I dont want any of my ISP's filtering any of my traffic. I think we need (especially enterprise administrators like myself) to take some responsibility, and place our own filters.
That's a popular sentiment which derives its facade of reasonableness from the notion that ISP's ought to provide unencumbered pipes to the Internet core. However, it doesn't bear close scrutiny. Would you say that ISP's should not filter spoofed source addresses? That they should turn off "no ip directed broadcast"? Of course not, because such traffic is clearly pathological with no redeeming social value. The tough part for the ISP is to decide what other traffic types are absolutely illegitimate and should therefore be subject to being Verboten on the net.
Well, I understand why an ISP will filter these. But the things you mentioned are not software vendor vulnerabilities, or vulnerabilities of some proprietary protocol used only by desktop systems. Also, an ISP will filter anything it feels is a threat to its own systems, as that is where its own responsibility lies, and if they don't protect those they don't make any money. An ISP choosing to filter IANA reserved addresses (and I would argue that not all perform this type of filtering; I would think that applying prefix lists and null routes is what an ISP would do, rather than filtering on source address, since I have received packets at my edge with an IANA reserved address as the source), or to turn off IP directed broadcasts, does not compare to applying filters every single time some vendor releases faulty code or has their code exploited. These exploits affect the end-user nodes of the ISP's customers, not (on any grand scale) the ISP itself. The ISP is a business. G. Mark Borchers writes:
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Gerardo Gregory
Frankly I dont want any of my ISP's filtering any of my traffic. I think we need (especially enterprise administrators like myself) to take some responsibility, and place our own filters.
That's a popular sentiment which derives its facade of reasonableness from the notion that ISP's ought to provide unencumbered pipes to the Internet core. However, it doesn't bear close scrutiny.
Would you say that ISP's should not filter spoofed source addresses? That they should turn off "no ip directed broadcast"? Of course not, because such traffic is clearly pathological with no redeeming social value.
The tough part for the ISP is to decide what other traffic types are absolutely illegitimate and should therefore be subject to being Verboten on the net.
Frankly I dont want any of my ISP's filtering any of my traffic. I think we need (especially enterprise administrators like myself) to take some responsibility, and place our own filters.
That's a popular sentiment which derives its facade of reasonableness from the notion that ISP's ought to provide unencumbered pipes to the Internet core. However, it doesn't bear close scrutiny.
I disagree.
Would you say that ISP's should not filter spoofed source addresses?
It depends. If a spoofed source address can be determined with 100% reliability then, generally, yes. However, an ISP would generally only be able to make this determination reliably on some of its own customers' links. As such, that's not my traffic unless I'm already violating an AUP, or one of said ISP's other customers is violating the ISP's AUP. Of course an ISP has the right to block traffic which is in clear violation of its AUP from the customers who presumably signed that AUP as a condition of their service agreement.
That they should turn off "no ip directed broadcast"? Of course not,
I cannot think of a single situation in which the ISP's configuration of no ip directed broadcast would affect my traffic unless I was sending traffic _TO_ the broadcast address of some network within the ISP's backbone. As such, I would, again, figure that falls into the AUP violation category above.
because such traffic is clearly pathological with no redeeming social value.
No. Because such traffic is clearly in violation of the AUP I signed as a customer, and for no other reason. My ISP has the right to block my traffic in any case where I am in violation of the AUP. It has a similar right with any of its other customers. Outside of that, no, an ISP should not, generally, block traffic.
The tough part for the ISP is to decide what other traffic types are absolutely illegitimate and should therefore be subject to being Verboten on the net.
Again, this is a very slippery slope and relies on the fallacy that traffic must have some socially redeeming value in order to be routed. In my eyes, what traffic has value may be radically different from your opinion. Allowing opinion to enter into rulesets is not, generally, a good plan. Owen
On zaterdag, aug 30, 2003, at 14:44 Europe/Amsterdam, Ian Mason wrote:
What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service attack
This is fine until a customer sends out legitimate multicast traffic, so any such scheme has to ignore multicast traffic. Then the worm and virus writers will just switch to using multicast as a vector.
Yes, that would be cool. I'm surprised that Microsoft doesn't send out its updates over multicast yet. That would save them unbelievable amounts of bandwidth: all Windows boxes simply join the Windows Update multicast group so they automatically receive each and every update. But we can safely assume they won't use single-source multicast, so it's only a question of time before some industrious worm builder creates the ultimate worm: one that infects all Windows systems worldwide by sending a single packet to the Windows Update multicast group...

OK, this could happen if:

1. more than five people worldwide had interdomain multicast capability, and
2. anyone with multicast capability could send to any multicast group.

And besides, this will happen if possible, regardless of the utility of unicast for worm propagation.
Also, this only works where routing is strictly symmetrical (e.g. edge connections, and then only single-homed edges).
Yes.
It also has the problem that you have to retain some state (possibly little) for all outbound traffic until you can match it to inbound traffic. Given the paucity of memory in most edge routers this is a problem. Even with a decent amount of memory, it would soon get overrun, even on a slowish circuit like a T1. A DSLAM with several hundred DSL lines would need lots of memory to implement this, and lots of CPU cycles to manage it.
Give implementers a little credit. There is no need to do this for every packet that flows through a box. You can simply sample the traffic at regular intervals and perform the return traffic check for only a small fraction of all traffic. The statistics are on your side here, as with Random Early Detection congestion/queue management, because you automatically see more packets from sources that send out a lot of traffic.
At the layer 3 level, all TCP traffic is bidirectional as it has to send ACKs back, so this scheme can't simply work on "I've seen another packet in the reverse direction, so it's OK".
That's exactly why this works: if the other end sends ACKs, then obviously at _some_ level they're willing to talk. So that would indeed be ok. With DOS and scanning this is very different: for many/most/all packets sent by the attacking system, nothing comes back, except maybe a port unreachable or RST.
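A minimal sketch of the sampled return-traffic check being discussed. The sampling rate and thresholds are illustrative assumptions; as noted above, a real implementation would also have to exclude ICMP unreachables and bare RSTs from counting as return traffic.

import random
from collections import defaultdict

SAMPLE_RATE = 0.01    # inspect ~1% of packets, per the sampling argument
MIN_DESTS   = 100     # don't judge a source until enough is sampled
SCAN_RATIO  = 0.75    # >75% of destinations silent => suspected scan

class ReturnTrafficMonitor:
    """Per-source tally of sampled destinations and which ones answered."""
    def __init__(self):
        self.dests   = defaultdict(set)   # src -> sampled destinations
        self.replied = defaultdict(set)   # src -> dests that sent anything back

    def outbound(self, src, dst):
        if random.random() < SAMPLE_RATE:
            self.dests[src].add(dst)

    def inbound(self, src, dst):
        # A packet from src back to dst, where dst is the original sender.
        # Counting every packet is a simplification; port unreachables and
        # bare RSTs should not count as willingness to talk.
        if src in self.dests.get(dst, ()):
            self.replied[dst].add(src)

    def suspicious(self, src):
        seen = self.dests[src]
        if len(seen) < MIN_DESTS:
            return False
        silent = len(seen) - len(self.replied[src])
        return silent / len(seen) > SCAN_RATIO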
On Sat, 30 Aug 2003 13:44:05 +0100 Ian Mason <nanog@ian.co.uk> wrote:
At 07:33 30/08/2003, Iljitsch van Beijnum wrote:
On zaterdag, aug 30, 2003, at 05:42 Europe/Amsterdam, Sean Donelan wrote:
If you don't want to download patches from Microsoft, and don't want to pay McAfee, Symantec, etc for anti-virus software; should ISPs start charging people clean up fees when their computers get infected?
Only if it impacts the ISP, which it doesn't most of the time unless they buy an unfortunate brand of dial-up concentrators.
Would you pay an extra $50/Mb a month for your ISP to operate a firewall and scan your traffic for you?
No way. They have no business even looking at my traffic, let alone filtering it.
What would be great though is a system where there is an automatic check to see if there is any return traffic for what a customer sends out. If someone keeps sending traffic to the same destination without anything coming back, 99% chance that this is a denial of service attack. If someone sends traffic to very many destinations and in more than 50 or 75 % of the cases nothing comes back or just an ICMP port unreachable or TCP RST, 99% chance that this is a scan of some sort.
This is fine until a customer sends out legitimate multicast traffic, so any such scheme has to ignore multicast traffic. Then the worm and virus writers will just switch to using multicast as a vector.
It's not just UDP multicast. Unicast streaming is moving towards UDP. In Apple Darwin Streaming Server, for example, unicast streaming is UDP by default. Examination of my DSS server logs shows that over 2/3 of our video streaming in the last 2 months was over UDP. In this UDP streaming there is return traffic, but it is highly asymmetric. Regards Marshall Eubanks
Also, this only works where routing is strictly symmetrical (e.g. edge connections, and then only single-homed edges).
It also has the problem that you have to retain some state (possibly little) for all outbound traffic until you can match it to inbound traffic. Given the paucity of memory in most edge routers this is a problem. Even with a decent amount of memory, it would soon get overrun, even on a slowish circuit like a T1. A DSLAM with several hundred DSL lines would need lots of memory to implement this, and lots of CPU cycles to manage it.
At the layer 3 level, all TCP traffic is bidirectional as it has to send ACKs back, so this scheme can't simply work on "I've seen another packet in the reverse direction, so it's OK". So we have to work on byte counts and see if what goes one way is balanced by what goes the other way.
Then it gets worse still: much legitimate traffic is highly asymmetric. In a POP3 session, most traffic is one way and only a small quantity of high-level ACKs go the other way. Ditto SMTP and most HTTP traffic.
So, we've reached the stage where, for this to work, it has to have at least the complexity of a stateful firewall. OK, that is doable, at a cost. But in the process we seem to have lost any characteristic of asymmetry that allows us to distinguish good from bad traffic.
On Sat, 30 Aug 2003, Iljitsch van Beijnum wrote:
Only if it impacts the ISP, which it doesn't most of the time unless they buy an unfortunate brand of dial-up concentrators.
Bits are bits; very few of them actually impact the ISP itself. Most ISPs protect their own infrastructure. Routers are very good at forwarding bits. Routers have problems filtering bits. Whether it is spam, viruses or other attacks, it's mostly customers or end-users that bear the brunt of the impact, not the ISP. The recurring theme is: I don't want my ISP to block anything I do, but ISPs should block other people from doing things I don't think they should do. So how long is it reasonable for an ISP to give a customer to fix an infected computer, when you have cases like Slammer where it takes only a few minutes to infect the entire Internet? Do you wait 72 hours? Until the next business day? Or block the traffic immediately? And some major ISPs seem to have the practice of letting infected computers continue attacking as long as it doesn't hurt their own network.
Bits are bits, very few of them actually impact the ISP itself. Most
Lies! All the bits that pass through the ISP impact the ISP, generally in the fiscal arena. More bits == More cash.
And some major ISPs seem to have the practice of letting infected computers continue attacking as long as it doesn't hurt their own network.
Or for fiscal reasons... -- bill (being cynical in WDC)
--On Saturday, August 30, 2003 1:08 PM -0700 bmanning@karoshi.com wrote:
Bits are bits, very few of them actually impact the ISP itself. Most
Lies! all the bits that pass through the ISP impact the ISP. Generally in the fiscal arena. More bits == More cash.
Actually, there can be some debate on this point. Bits really don't cost anything more until they fill a link and require the purchase of additional bandwidth. Otherwise, generally, for ISPs, the financial impact of an empty link is often more expensive than that of one full of bits. (Often, bits are billable. Idle time is almost never billable.)
And some major ISPs seem to have the practice of letting infected computers continue attacking as long as it doesn't hurt their own network.
Or for fiscal reasons...
-- bill (being cynical in WDC)
Yep. Owen
On Sat, 30 Aug 2003, Sean Donelan wrote:
The recurring theme is: I don't want my ISP to block anything I do, but ISPs should block other people from doing things I don't think they should do.
That's about my position, I guess. <g> There's a difference between naively blocking ports or screwing with packets, though, and blocking known dodgy behaviour (spoofed source addresses, for one). Yes, port 135 is a known vector, and so is 4444 now, but they have their legitimate uses. If you have evidence that someone is doing something dodgy with them, then you should shut them down. But spanking everyone because some people can't/won't take responsibility for their systems reeks of schoolroom justice ("We're all going to sit here until the guilty party owns up").
So how long is reasonable for an ISP to give a customer to fix an infected computer; when you have cases like Slammer where it takes only a few minutes to infect the entire Internet? Do you wait 72 hours? or until the next business day? or block the traffic immediately?
Immediately. The ISP is, IMO, responsible for the traffic of those they connect to the Internet. Maybe I'm just showing my old-fashioned values there, though.
Or some major ISPs seem to have the practice of letting the infected computers continuing attacking as long as it doesn't hurt their network.
"Welcome to my null0, O provider of loose morals". -- ----------------------------------------------------------------------- #include <disclaimer.h> Matthew Palmer, Geek In Residence http://ieee.uow.edu.au/~mjp16
On Sun, 31 Aug 2003, Matthew Palmer wrote:
dodgy behaviour (spoofed source addresses, for one). Yes, port 135 is a known vector, and so is 4444 now, but they have their legitimate uses. If
OK, here's an alternative viewpoint. We're an ISP. I'm blocking 135 and the other NetBIOS ports inbound on my clients' dial-up/DSL lines because, if I didn't, the lines would be useless. Client-side firewalls are great, but by the time they can do anything the traffic is already over the line. It doesn't take much traffic at all to overload a dial-up, and every virus flare-up puts a noticeable impact on DSL lines. I'll unblock for a client that asks. The only one who asked sheepishly asked for the block to be put back less than an hour later; they couldn't do anything with the line. It's all well and good to say how things 'should' be, but reality has a way of not caring how things should be. ========================================================== Chris Candreva -- chris@westnet.com -- (914) 967-7816 WestNet Internet Services of Westchester http://www.westnet.com/
On Sun, 31 Aug 2003, Christopher X. Candreva wrote:
We're an ISP. I'm blocking 135 and the other netbios ports inbound on my clients dial-up/dsl lines because if I didn't, the lines would be useless.
Sunday morning posting. I'm blocking these ports OUTBOUND -- TO our clients. Their lines are being saturated by other infected hosts trying to infect them. ========================================================== Chris Candreva -- chris@westnet.com -- (914) 967-7816 WestNet Internet Services of Westchester http://www.westnet.com/
On zaterdag, aug 30, 2003, at 20:54 Europe/Amsterdam, Sean Donelan wrote:
Only if it impacts the ISP, which it doesn't most of the time unless they buy an unfortunate brand of dial-up concentrators.
Bits are bits, very few of them actually impact the ISP itself. Most ISPs protect their own infrastructure. Routers are very good at forwarding bits. Routers have problems filtering bits. Whether it is spam, viruses or other attacks; its mostly customers or end-users that bear the brunt of the impact, not the ISP.
Impact can be more than ISP equipment getting into trouble. It can also be congestion or excessive bandwidth use because of incoming abusive traffic, or infected customers.
The recurring theme is: I don't want my ISP to block anything I do, but ISPs should block other people from doing things I don't think they should do.
Actually this doesn't have to be the paradox it seems to be. If we can find a way to make sure at the source that the destination welcomes the communication, we can have both.
So how long is reasonable for an ISP to give a customer to fix an infected computer; when you have cases like Slammer where it takes only a few minutes to infect the entire Internet? Do you wait 72 hours? or until the next business day? or block the traffic immediately?
Or some major ISPs seem to have the practice of letting the infected computers continuing attacking as long as it doesn't hurt their network.
Let's first look at the reverse situation: infective traffic comes in. Customers may take the position that it is in their best interest for their ISP to filter this traffic forever, so that they can't get infected, regardless of whether they patch their systems or not. But it isn't realistic to expect ISPs to do this.

First of all, in many cases the vulnerability is in a service that also has legitimate uses. In some cases this isn't much of a problem: for instance, with the Slammer worm, blocking the affected port didn't really impact the SQL service. Or with filtering Blaster, Windows file sharing doesn't work anymore, but this isn't a public service, so the people who need it can run it over a secure tunnel of some kind. However, shutting down port 80 because an HTTP implementation has a vulnerability wouldn't be acceptable because of the collateral damage.

Then there are the issues of ISPs being able to do this effectively in the first place, and of effectiveness. If ISPs were to filter everything forever everywhere, maybe this would be effective, but nearly all equipment takes a performance hit when it has to filter, this usually gets worse as the filters get bigger, and there are limits to the length of filters. On top of that, there is the management issue: with 100k ADSL customers, you need to apply filters to 100k interfaces on hundreds of boxes. So in reality ISPs can only have a limited number of filter rules in a limited number of places.

While this gets rid of most of the infective traffic for as long as the filter is in place, it doesn't really protect customers: when one customer is infected, the infection can still spread to other customers (most worms are optimized for this) unless the ISP has put filters on all customer ports. And we've seen that worms are often carried from location to location in infected laptops. And then, when the filter rules have to go (for instance because there is a new worm du jour), experience shows there is still some infecting traffic, however long after the initial outbreak, so at some point a vulnerable system WILL be infected.

Last but not least: if ISPs filter X worms, and then worm X+1 presents itself and proves unfilterable, things get really bad, because users were depending on ISP action to prevent infection rather than taking their own measures. This could even lead to legal problems for ISPs. Bottom line: unless ISPs explicitly want to take on this responsibility and invest in heavier equipment and very advanced network management, the best they can do is take the edge off by implementing some filtering that allows their users a little more time to patch their systems.

Then there is the other side of the coin: infected customers. I mostly work for content hosters these days, and there the situation is slightly different from the one access ISPs are facing, as the number of customers is much smaller and the bandwidth they have is much larger. So one customer can do much more damage, by either causing congestion in the local network or by driving up the bandwidth use on external connections (which is expensive because of the usual 95th percentile billing). There have been several cases in the past year where my customers shut down ports of infected customers of theirs (sometimes lowering the port speed to 10 Mbps is a good compromise). But since this leads to many phone calls, I can imagine that doing this for every infected customer may be a problem for ISPs with many dial/ADSL/cable customers.
Also, if the bandwidth use isn't too excessive, it may not always be apparent that a customer is infected.
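Since 95th percentile billing keeps coming up, the usual mechanics are: sample the port's usage every 5 minutes for a month, sort the samples, discard the top 5%, and bill at the highest remaining sample. The sketch below uses made-up numbers to show why an infected box flooding for more than about a day and a half out of thirty moves the bill.

def ninety_fifth_percentile(samples_mbps):
    """Discard the top 5% of samples, bill at the highest remaining one."""
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1   # one common indexing convention
    return ordered[max(idx, 0)]

# 30 days of 5-minute samples = 8640 samples; 5% of that is 432 samples,
# i.e. exactly 36 hours. Flood for longer than that and it is billable.
normal = [10] * 8200   # ~28.5 days at a steady 10 Mbps
flood  = [95] * 440    # ~1.5 days of an infected box flooding at 95 Mbps
print(ninety_fifth_percentile(normal + flood), "Mbps billable")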
Sean Donelan wrote:
If you don't want to download patches from Microsoft, and don't want to pay McAfee, Symantec, etc for anti-virus software; should ISPs start charging people clean up fees when their computers get infected?
www.google.com +Free +AntiVirus Now was that so hard? -Jack
Rob Thomas wrote:
Oh, good gravy! I have a news flash for all of you "security experts" out there: The Internet is not one, big, coordinated firewall with a handy GUI, waiting for you to provide the filtering rules. How many of you "experts" regularly sniff OC-48 and OC-192 backbones for all those naughty packets? Do you really want ISPs to filter the mother of all ports-of-pain, TCP 80?
Yes. While I hate to admit it, the one thing worse than not applying filters is applying them incorrectly. A good example would be ICMP rate limits. It's one thing to shut off ICMP, or even to filter 92-byte ICMP. But the second someone rate-limits ICMP echo/reply, they have just destroyed the number one network troubleshooting and performance testing tool. If it were a full block, one would say "it's filtered". Yet with rate limiting, you just see sporadic results: sometimes good, sometimes high latency, sometimes dropped. Filter edges, and if you apply a backbone filter, apply it CORRECTLY! Rate-limiting ICMP is not correct. -Jack
On Fri, 29 Aug 2003, Sean Donelan wrote:
http://www.vnunet.com/Analysis/1143268
Although companies may have the infrastructure to deal with the current band of worms, Trojans and viruses, there is currently a line of defence that is not in place. "The problem isn't Microsoft's products or the knowledge of the consumer. The problem lies in the ISPs' unwillingness to make this issue disappear or at least reduce it dramatically," said Cooper.
This completely overlooks the user as the ultimate infection vector. Even if Microsoft never has another external hole, users can still infect themselves. To paraphrase badly: the most dangerous part of the computer is the nut behind the wheel. Moore's law is on the side of virus writers that spam their viruses to users. As long as users only need to click on email attachments to execute programs, you can expect an increasing amount of virus spam.
He added that ISPs have the view and ability to prevent en-masse attacks. "All these attacks traverse their networks before they reach you and me. If they would simply stop attack traffic that has been identified and accepted as such, we'd all sleep better," Cooper said.
Perhaps paper manufacturers should be held liable until they come out with paper that can't be used to write down bad ideas. Mike. +----------------- H U R R I C A N E - E L E C T R I C -----------------+ | Mike Leber Direct Internet Connections Voice 510 580 4100 | | Hurricane Electric Web Hosting Colocation Fax 510 580 4151 | | mleber@he.net http://www.he.net | +-----------------------------------------------------------------------+
On Fri, 29 Aug 2003 21:36:36 PDT, Mike Leber said:
Perhaps paper manufacturers should be held liable until they come out with paper that can't be used to write down bad ideas.
Know what *really* irks me? I order blank paper, and this damned company keeps sending me paper that's got connect-the-dots pictures of bad ideas all over it. I'd change vendors, but I can't find a copying machine vendor that will service my copier if I use any other brand....
On Fri, 29 Aug 2003, Sean Donelan wrote:
Which Microsoft protocols should ISP's break today? Microsoft Exchange? Microsoft file sharing? Microsoft Plug & Play? Microsoft SQL/MSDE? Microsoft IIS?
All of the above. <g>
He added that ISPs have the view and ability to prevent en-masse attacks. "All these attacks traverse their networks before they reach you and me. If they would simply stop attack traffic that has been identified and accepted as such, we'd all sleep better," Cooper said.
Bwahahaha. Ghod I love a good comedian. Having recently pulped my head against the wall of a "network provider" too clueless to provision decent IP connectivity, the last thing I want is to have the ISP unilaterally decide what they're going to do with my packets. -- ----------------------------------------------------------------------- #include <disclaimer.h> Matthew Palmer, Geek In Residence http://ieee.uow.edu.au/~mjp16
participants (31)
- alex@yuriev.com
- bmanning@karoshi.com
- Christopher L. Morrow
- Christopher X. Candreva
- David Schwartz
- Gabriel
- Gerardo Gregory
- Ian Mason
- Iljitsch van Beijnum
- Jack Bates
- Joe Abley
- Johannes Ullrich
- Mark Borchers
- Marshall Eubanks
- Matthew Kaufman
- Matthew Palmer
- Matthew S. Hallacy
- Mike Leber
- Owen DeLong
- Paul Vixie
- Petr Swedock
- Petri Helenius
- Randy Bush
- Ray Wong
- rayw@rayw.net
- Rob Thomas
- Sean Donelan
- Terry Baranski
- Vadim Antonov
- Valdis.Kletnieks@vt.edu
- William Devine, II