Providers removing blocks on port 135?
Anyone know anything about providers removing ACLs from their routers to allow ports 135/445/4444 back into their network? Curious only because customers are calling in saying that Verizon, Cox, Bellsouth, and DSL.net are doing so and seem to have a big problem with the fact that we're hesitant to follow their lead. Adam
Adam Hall wrote:
Anyone know anything about providers removing ACLs from their routers to allow ports 135/445/4444 back into their network? Curious only because customers are calling in saying that Verizon, Cox, Bellsouth, and DSL.net are doing so and seem to have a big problem with the fact that we're hesitant to follow their lead.
No two networks are the same, nor do they have the same issues. The new RPC exploit worm will be interesting to watch on the above networks if they've dropped their blocks. There's also a question of the layer at which they have done so; for example, blocks may have been removed from central sites in favor of blocks pushed out to the end users. Allowing the various scans out costs other people money. If nothing else, I'll leave 135 in place long enough to ensure that the number of users that are infected is manageable. My transit customers are all telling me the same thing. They are still pushing to get people cleaned up and patched. They want their blocks to remain (so they don't have to pay us more). -Jack
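For reference, the sort of edge ACL being debated in this thread is only a few lines of router configuration. A Cisco-style sketch (the ACL number, interface, and direction are illustrative only, not anyone's actual config):

```
! Illustrative Cisco-style extended ACL blocking the worm ports
! discussed here (TCP 135, 445, and the Blaster backdoor on 4444)
access-list 135 deny   tcp any any eq 135
access-list 135 deny   tcp any any eq 445
access-list 135 deny   tcp any any eq 4444
access-list 135 permit ip  any any
!
interface Serial0/0
 ip access-group 135 in
```

Whether such a filter belongs at the provider core, the customer edge, or nowhere at all is exactly what the rest of the thread argues about.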
FWIW, my opinion is that blocking this at the customer edge per customer request is fine. Any other blocking by an ISP is damage and should be routed around like any other internet damage. Owen
I agree entirely with this. You shouldn't call yourself an ISP unless you can transport the whole Internet, including those "bad Microsoft ports", between the world and your customers. On the other hand, what's a provider to do when their access hardware can't deal with a pathological set of flows or arp entries? There isn't a good business case to forklift out your DSLAMs and every customer's matching CPE when a couple of ACLs will fix the problem. For that matter, there isn't a very good business case for transporting Nachi's ICMP floods across an international backbone network when you can do a bit of rate-limiting and cut your pipe requirements by 10-20%. Matthew Kaufman matthew@eeph.com
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Owen DeLong Sent: Friday, September 19, 2003 10:03 AM To: Jack Bates; Adam Hall Cc: 'nanog@nanog.org' Subject: Re: Providers removing blocks on port 135?
FWIW, my opinion is that blocking this at the customer edge per customer request is fine. Any other blocking by an ISP is damage and should be routed around like any other internet damage.
Owen
OK... Obviously, you need to do what you need to do to keep things running. However, that should be a temporary crisis response. If your equipment is getting DOS'd for more than a few months, we need to find a way to fix the bigger problem. Permanently breaking some service (regardless of what we think of it; personally, I'll be glad to see M$ go down in flames) is _NOT_ the correct answer. Owen --On Friday, September 19, 2003 10:14 AM -0700 Matthew Kaufman <matthew@eeph.com> wrote:
I agree entirely with this. You shouldn't call yourself an ISP unless you can transport the whole Internet, including those "bad Microsoft ports", between the world and your customers.
On the other hand, what's a provider to do when their access hardware can't deal with a pathological set of flows or arp entries? There isn't a good business case to forklift out your DSLAMs and every customer's matching CPE when a couple of ACLs will fix the problem. For that matter, there isn't a very good business case for transporting Nachi's ICMP floods across an international backbone network when you can do a bit of rate-limiting and cut your pipe requirements by 10-20%.
Matthew Kaufman matthew@eeph.com
Well, we could start by having every ISP do what we do, which is to find worm-infected customers inside our network and get them patched or turned off. But that's a lot of work. (Especially when you've got a new worm to track down every week) The scary/unfortunate part to me is that these things never seem to go away... Check your web server's log for the last hit from Code Red, for instance. (6 minutes ago, from 203.59.48.139, on the server I just checked) Matthew Kaufman matthew@eeph.com
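Matthew's log check is easy to script. A minimal sketch, assuming an Apache-style common-log format (the sample lines here are invented; the `default.ida` request path is the well-known Code Red probe signature):

```python
import re

# Code Red probes request /default.ida with a long overflow payload;
# any hit on that path in a web access log is almost certainly the worm.
CODE_RED = re.compile(r'"GET /default\.ida\?')

def code_red_hits(log_lines):
    """Return the source IPs of Code Red-style probes in an access log."""
    hits = []
    for line in log_lines:
        if CODE_RED.search(line):
            hits.append(line.split()[0])  # first field is the client IP
    return hits

sample = [
    '203.59.48.139 - - [19/Sep/2003:10:26:00 -0700] '
    '"GET /default.ida?NNNNNNNN HTTP/1.0" 404 -',
    '10.0.0.1 - - [19/Sep/2003:10:27:00 -0700] '
    '"GET /index.html HTTP/1.0" 200 1024',
]
print(code_red_hits(sample))  # -> ['203.59.48.139']
```

Run that over a couple of years of logs and, as noted above, the last hit is rarely far in the past.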
-----Original Message----- From: Owen DeLong [mailto:owen@delong.com] Sent: Friday, September 19, 2003 10:23 AM To: Matthew Kaufman; 'Jack Bates'; 'Adam Hall' Cc: nanog@nanog.org Subject: RE: Providers removing blocks on port 135?
OK... Obviously, you need to do what you need to do to keep things running. However, that should be a temporary crisis response. If your equipment is getting DOS'd for more than a few months, we need to find a way to fix a bigger problem. Permanently breaking some service (regardless of what we think of it. Personally, I'll be glad to see M$ go down in flames) is _NOT_ the correct answer.
Owen
I agree. In my opinion, at this point, the larger issue we really need to address is how to force a recall on the Exploding Pinto and make real product liability available to the victims. Not just for the moron who bought the Exploding Pinto knowing it would probably explode (who actually should have some culpability), but, more importantly, there should be actual liability on the part of the manufacturer and the reckless operator to the innocent bystander victims (ISPs, web sites, etc.) that are forced to repair or work around the damage, handle the traffic, etc.

The Virus-of-the-week club is not going to go away until somebody gets a multi-million dollar judgement against Micr0$0ft for the damage inflicted by their Exploding Pinto, or until businesses start to see that there is liability for running unpatched Windows systems, to the tune of "If you keep driving your Exploding Pinto, you may have to pay $$$ to the victims of the explosion when it occurs." Case 1 will lead Micr0$0ft to start rethinking some of its less secure design decisions. Case 2 will lead businesses to evaluate whether Windows is _REALLY_ a cost-effective choice or not. Until at least one, preferably both, of these things happen, we are going to be stuck with the consequences of other people choosing to run Whine-Doze, whether we run it, or even want to know that anyone runs it, or not.

Owen --On Friday, September 19, 2003 10:32 AM -0700 Matthew Kaufman <matthew@eeph.com> wrote:
Well, we could start by having every ISP do what we do, which is to find worm-infected customers inside our network and get them patched or turned off. But that's a lot of work. (Especially when you've got a new worm to track down every week)
The scary/unfortunate part to me is that these things never seem to go away... Check your web server's log for the last hit from Code Red, for instance. (6 minutes ago, from 203.59.48.139, on the server I just checked)
Matthew Kaufman matthew@eeph.com
On Fri, 19 Sep 2003, Matthew Kaufman wrote:
I agree entirely with this. You shouldn't call yourself an ISP unless you can transport the whole Internet, including those "bad Microsoft ports", between the world and your customers.
I disagree. In my opinion an NSP shouldn't filter traffic unless one of its customers requests it. However, I strongly believe that an ISP (whose customers are Joe Blow average citizen and Susy Homemaker) should take every reasonable step to protect its users from malicious traffic, and that includes filtering ports. For example, I have no reservation about NATing basic dialup users. I also have no problem with filtering ports for services they shouldn't be running on a dialup connection (HTTP, FTP, DNS) or for services that IMHO have no business on the public internet (including every single Microsoft port I can identify). To not do so is, IMHO, to run a network in an extremely negligent manner.

We do this very thing with email. We filter known malicious messages, attachments, and spam from email. I don't think there's a reasonable person among us who can complain about that. Why not do it to network traffic, then? If we should allow every bit of traffic to pass unmolested by ACLs, then we should allow all email to pass unmolested by anti-virus and spam checks. It's a two-way street.
On the other hand, what's a provider to do when their access hardware can't deal with a pathological set of flows or arp entries? There isn't a good business case to forklift out your DSLAMs and every customer's matching CPE when a couple of ACLs will fix the problem. For that matter, there isn't a very good business case for transporting Nachi's ICMP floods across an international backbone network when you can do a bit of rate-limiting and cut your pipe requirements by 10-20%.
A good point. Justin
I disagree. In my opinion an NSP shouldn't filter traffic unless one of its customers requests it. However, I strongly believe that an ISP (whose customers are Joe Blow average citizen and Susy Homemaker) should take every reasonable step to protect its users from malicious traffic, and that includes filtering ports. For example, I have no reservation about NATing basic dialup users. I also have no problem with filtering ports for services they shouldn't be running on a dialup connection (HTTP, FTP, DNS) or for services that IMHO have no business on the public internet (including every single Microsoft port I can identify). To not do so is, IMHO, to run a network in an extremely negligent manner.
Why do you get to decide that, I can't, from a hotel room, call my ISP and put up a web server on my dialup connection so someone behind a firewall can retrieve a document I desperately need to get to them? Why _SHOULDN'T_ I run a web server to do this over a dialup connection? Why do you get to dictate to _ANYONE_ what things they can and can't do with their most portable internet access? How can you say that it is negligent to refuse to DOS your customers unless they request it? (blocking traffic to me that I want is every bit as much a denial of service as flooding my link).
We do this very thing with email. We filter known malicious messages, attachments, and spam from email. I don't think there's a reasonable person among us that can complain about that. Why not do it to network traffic then? If we should allow every bit of traffic to pass unmolested by ACLs then we should allow all email to pass by unmolested by anti-virus and spam checks. It's a two-way street.
I leave it to the community to decide whether I am a reasonable person or not, but, generally, I tend to think that I am viewed as such. However, I would complain about the practices you describe above if I were your customer. If I ask you to filter SPAM, fine. If I ask you not to filter SPAM, then I should receive every email addressed to me. If I cannot, then I won't be your customer. As far as I'm concerned, if an ISP wants to run anti-virus or spam checks, they should run them as an opt-in value-added service, _NOT_ as a "we're deleting your mail for you whether you like it or not" thing.
On the other hand, what's a provider to do when their access hardware can't deal with a pathological set of flows or arp entries? There isn't [snip]
A good point.
Yes. I responded to this in a previous post. We must do what we must do temporarily to keep things running. However, breaking the net is not a long term solution. We must work to solve the underlying problem or it just becomes an arms-race where eventually, no services are useful. Owen
Why do you get to decide that, I can't, from a hotel room, call my ISP and put up a web server on my dialup connection so someone behind a firewall can retrieve a document I desperately need to get to them? Why _SHOULDN'T_ I run a web server to do this over a dialup connection? Why do you get to dictate to _ANYONE_ what things they can and can't do with their most portable internet access? How can you say that it is negligent to refuse to DOS your customers unless they request it? (blocking traffic to me that I want is every bit as much a denial of service as flooding my link).
The distinction may be blurrier these days, but there *is* a difference between networking and internetworking. Whereas I'd agree that interconnections between networks should be unencumbered to the greatest degree possible, the administrator of a network can be slightly more draconian in order to keep the network running smoothly. This statement applies, IMHO, to any provider who sells service to individual users. It may be a huge wide-area dialup network, but it's still a network, in which the average customer is not a professional network administrator but rather a user of indeterminate knowledge level. Now, if as an ISP you operate an internetwork ("network of networks") and a network of users, the challenge is obviously how you draw the distinction between user/customers and network/customers. I think it's do-able (DHCP being one criterion that comes to mind), but there are a lot of permutations to consider.
Why do you get to decide that, I can't, from a hotel room, call my ISP and put up a web server on my dialup connection so someone behind a firewall can retrieve a document I desperately need to get to them? Why _SHOULDN'T_ I run a web server to do this over a dialup connection? Why do you get [snip]
when scp or ftp over an ssh tunnel are much easier/lighter weight? or you could hand out ASNs and run third-party BGP from your hotel room back to the trusted core... there are lots of ways to get your critical content to the right party, some are more cost effective than others. The name "Rube Goldberg" comes to mind here...
The distinction may be blurrier these days, but there *is* a difference between networking and internetworking.
true enough. --bill
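The scp/tunnel alternatives bill alludes to look something like this (a command sketch only; all hostnames and paths are placeholders, and `-R` is ssh's standard remote port-forward flag):

```
# Copy the document straight over ssh -- no server on the dialup side:
scp report.doc user@colleague-host:/tmp/

# Or, forward a port on a mutually reachable host back to a local server:
ssh -R 8080:localhost:80 user@reachable-host
# ...the recipient then fetches http://reachable-host:8080/
```

Either way, the outbound ssh connection sails through the sort of inbound-port filtering being debated here.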
Has anyone else noticed the flip-flops? To prevent spam, providers should block port 25. If providers block ports, e.g. port 135, they aren't providing access to the "full" Internet. Should any dialup, dsl, cable, wi-fi, or dhcp host be able to use any service at any time? For example, run an SMTP mailer, or leave Network Neighborhood open for others to browse or install software on their computers? Or should ISPs have a "default deny" on all services, with subscribers needing to call for permission if they want to use some new service? Should new services like Voice over IP, or even the World Wide Web, be blocked by default by service providers?

As a HOST requirement, I think all hosts should be "client-only" by default. That includes devices acting as hosts, such as routers, switches, print servers, file servers, and UPSes. If a HOST uses a network protocol for local host processes (e.g. X-Windows, BIFF, Syslog, DCE, RPC), by default it should not accept network connections. It should require some action to enable a service: the user enabling it, a DHCP client enabling it in a profile, clicking things on the LCD display on the front of the printer, etc.

If the device is low-end and only has a network connection (e.g. no console), it may have a single (i.e. telnet or web, but not both) management protocol enabled, provided it includes a default password which cannot be discovered from the network (i.e. not the MAC address), is different for each device (i.e. not predictable), and is only accessible from the "local" network (e.g. the "internal" subnet interface). A "proprietary" protocol is not an adequate substitute. Static passwords are inherently insecure if you get enough guesses, so the device should block use of the default password after N failed attempts until the device is manually reset. I recognize this is a potential denial of service, and for non-default passwords vendors may decide to do something else. But if the user hasn't changed the default password, they probably aren't using it anyway.

SERVICE PROVIDERS do not enforce host requirements.
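Sean's lockout rule (disable the factory-default password after N failed attempts, until a manual reset) is simple to state in code. A sketch only; the class and every name in it are invented here for illustration:

```python
class DefaultPasswordGuard:
    """Lock out the factory-default password after too many failures.

    Per the proposal: after max_failures bad attempts, the default
    password stops working until the device is manually reset.
    """

    def __init__(self, default_password, max_failures=5):
        self.default_password = default_password
        self.max_failures = max_failures
        self.failures = 0
        self.locked = False

    def try_login(self, password):
        if self.locked:
            return False  # only a manual reset re-enables the default
        if password == self.default_password:
            return True
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked = True  # the deliberate denial of service noted above
        return False

    def manual_reset(self):
        self.failures = 0
        self.locked = False

guard = DefaultPasswordGuard("public", max_failures=3)
for _ in range(3):
    guard.try_login("wrong")
print(guard.try_login("public"))  # -> False: locked until reset
guard.manual_reset()
print(guard.try_login("public"))  # -> True
```

As Sean concedes, the lockout is itself a denial of service; the bet is that an unchanged default password means nobody legitimate is using it anyway.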
--On Saturday, September 20, 2003 3:36 PM -0400 Sean Donelan <sean@donelan.com> wrote:
Has anyone else noticed the flip-flops?
To prevent spam providers should block port 25.
I still disagree with this. To prevent SPAM, people shouldn't run open relays and the open relay problem should be solved. Breaking legitimate port 25 traffic is a temporary hack.
If providers block ports, e.g. port 135, they aren't providing access to the "full" Internet.
That would be my position, yes. Even though I personally have no real use for that port (other than possibly a honeypot), I think that is true. Generally, I want my net uncensored by my provider. If I want them to block something, I'll tell them. Otherwise, I expect non-blocking to be the default.
Should any dialup, dsl, cable, wi-fi, dhcp host be able to use any service at any time? For example run an SMTP mailer, or leave Network Neighborhood open for others to browse or install software on their computers?
If the person running the system in question chooses to do so, yes, they should be able to do so.
Or should ISPs have a "default deny" on all services, and subscribers need to call for permission if they want to use some new service? Should new services like Voice over IP, or even the World Wide Web be blocked by default by service providers?
Personally, I'm in the default permit camp with ISPs providing filtration on demand to customer specs.
As a HOST requirement, I think all hosts should be "client-only" by default. That includes devices acting as hosts, such as routers, switches, print servers, file servers, and UPSes. If a HOST uses a network protocol for local host processes (e.g. X-Windows, BIFF, Syslog, DCE, RPC) by default it should not accept network connections.
It should require some action, e.g. the user enabling the service, DHCP-client enabling it in a profile, clicking things on the LCD display on the front of the printer, etc.
I could live with that, although having a printer reject LPD by default doesn't make a lot of sense to me.
If the device is low-end and only has a network connection (e.g. no console), it may have a single (i.e. telnet or web; but not both) management protocol enabled provided it includes a default password which can not be discovered from the network (i.e. not the MAC address), is different for each device (i.e. not predictable), and is only accessible from the "local" network (e.g. the "internal" subnet interface). A "proprietary" protocol is not an adequate substitute. Static passwords are inherently insecure if you get enough guesses, so the device should block use of the default password after N failed attempts until the device is manually reset. I recognize this is a potential denial of service, and for non-default passwords vendors may decide to do something else. But if the user hasn't changed the default password, they probably aren't using it anyway.
I like that idea, although, I don't like saying only one service. I think one CLI and one GUI service is reasonable. I don't want to have to use a web interface to get to the CLI, and I'm sure a lot of other customers don't want to know what a CLI is.
SERVICE PROVIDERS do not enforce host requirements.
I REALLY like this.
Owen
Hi, NANOGers. ] I still disagree with this. To prevent SPAM, people shouldn't run open ] relays and the open relay problem should be solved. Breaking legitimate ] port 25 traffic is a temporary hack. I suspect that most spam avoids open relays. The abuse of proxies, routers, and bots for this purpose is far more in vogue. Watch out for worms such as W32.Sanper, which also provide a built-in spam relay network. Remove all of the open mail relays and you are left with...lots of spam. More at NANOG... ;) Thanks, Rob. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
However, I'm not convinced blocking port 25 on dialups helps much with that. What it does help with is preventing them from connecting to open relays. The real solution in the long run will be two-fold: 1. Internet hosts need to become less penetrable. (or at least one particular brand of software) 2. SMTP AUTH will need to become more widespread and end-to-endish. Owen --On Saturday, September 20, 2003 4:53 PM -0500 Rob Thomas <robt@cymru.com> wrote:
Hi, NANOGers.
] I still disagree with this. To prevent SPAM, people shouldn't run open ] relays and the open relay problem should be solved. Breaking legitimate ] port 25 traffic is a temporary hack.
I suspect that most spam avoids open relays. The abuse of proxies, routers, and bots for this purpose is far more in vogue. Watch out for worms such as W32.Sanper, which also provide a built-in spam relay network. Remove all of the open mail relays and you are left with...lots of spam.
More at NANOG... ;)
Thanks, Rob. -- Rob Thomas http://www.cymru.com ASSERT(coffee != empty);
However, I'm not convinced blocking port 25 on dialups helps much with that. What it does help with is preventing them from connecting to open relays.
We don't stop our dial customers from getting *to* anything. What we do have though are (optional) *inbound* filters that make sure no-one can connect to their privileged ports over TCP/IP, and a mandatory filter that says only our network can deliver to their SMTP service. We don't get problems with open-relays on dialups. We didn't have any problems with MS-Blaster on dialups either... I'm considering adding privileged port filters for UDP/IP too, although again it would be optional so that customers who run their own UDP/IP services can get their responses (i.e. caching DNS, IKE, NTP, etc). Ray
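Filters like Ray's are easy to verify from the far side with a plain TCP connect test. A minimal sketch (the hostname in the comment is a placeholder; probe whatever you actually need to check):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, filtered, timed out, unreachable...
        return False

# e.g. port_open("dialup-customer.example.net", 25)
# A working inbound filter on port 25 should make this return False.
```

Note that a connect test can't distinguish "filtered by the provider" from "nothing listening"; for that you'd compare results from inside and outside the filtering network.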
On Sat, 20 Sep 2003 23:22:34 +0100 "Ray Bellis" <rpb@community.net.uk> wrote:
What we do have though are (optional) *inbound* filters that make sure no-one can connect to their privileged ports over TCP/IP, and a mandatory filter that says only our network can deliver to their SMTP service.
We don't get problems with open-relays on dialups. We didn't have any problems with MS-Blaster on dialups either...
I would suggest instead that you have mandatory sending via your relays, and allow inbound connections to port 25. Sympatico, last I checked, didn't have any restrictions until you tripped off their alarms, at which point you needed to configure your smtpd to send mail via their relays. If they continued spewing copious amounts of spam, cut them off entirely until they fix their configuration. There are a couple of pluses to this type of setup; people like me who have dozens of (required) email addresses can forward them all to their home machine. Some of my family also much prefer this even though they've only got one or two email addresses. It also ensures that they can't send spam directly no matter what the source; blocking inbound connections will certainly stop open relays, but it won't stop trojans and worms and whatnot that are really just spamware. (Note that I consider spamware included in other applications and hidden from the user "trojans.")
I would suggest instead that you have mandatory sending via your relays, and allow inbound connections to port 25.
We're a fairly big provider on the GRIC (global roaming) network. That means that it's not feasible for us to prevent many of our POPs' users from contacting off-net SMTP servers. Running an enforced SMTP service via transparent proxying wouldn't stop the spam problem, it would just shift it and probably get the proxy system black-listed as an open-relay's relay. Anyway, like I said, we don't *have* a spam problem on our dialups. By virtue of our filters we don't have any open relays on dialup. ADSL is a different matter and we do have occasional problems with open relays and/or worms there. Unfortunately the UK incumbent wholesaler(*) doesn't provide a way to filter ADSL traffic within their ATM core. The only way to do it is to put another router between our network and the "BT Central" router that connects their ATM cloud to us. Of course that doesn't provide any inter-customer filtering, since that traffic never reaches our network :( Ray (*) BT - they have a nearly complete monopoly on the local loop.
* rpb@community.net.uk (Ray Bellis) [Sun 21 Sep 2003, 00:25 CEST]:
What we do have though are (optional) *inbound* filters that make sure no-one can connect to their privileged ports over TCP/IP, and a mandatory filter that says only our network can deliver to their SMTP service.
There's an ISP in the Netherlands who do that too for their DSL customers. Unfortunately, their mail servers are not that reliable to begin with and also spool mail only for 4 hours, so if your connection is down for the weekend (happens more often if you work for a company in direct competition with the telco that owns this particular ISP) all your mail bounces instead of getting spooled somewhere and delivered later... -- Niels.
On Sat, 20 Sep 2003 15:05:08 -0700 Owen DeLong <owen@delong.com> wrote: | I'm not convinced blocking port 25 on dialups helps much with that. | What it does help with is preventing them from connecting to open | relays. There are so few open relays now that spammers have moved on. They now use, almost without exception, compromised Windows boxes acting as open proxies, or on which a trojan spam-sender of some sort has been installed - usually by one of the recent stream of viruses/worms. Blocking outbound port 25, other than via a designated smarthost, would at least prevent the direct-to-MX traffic from compromised boxes - which currently seems to be the spammers "method of choice". | The real solution in the long run will be two-fold: | 1. Internet hosts need to become less penetrable. | (or at least one particular brand of software) | | 2. SMTP AUTH will need to become more widespread and end-to-endish. Right on both counts. But "end-to-end" may have to include the senders' fingers: as if bundled mail-client software contains the AUTH password it will be trivial for the spammers to hijack at the client level. And users won't like having to key in their password each time, meaning that trivial, guessable passwords will often be used. In recent weeks one particular spammer seems to have perfected a knack of breaking SMTP AUTH passwords on a widespread basis. Governments on both sides of the Pond may be reluctant to make spam illegal, but the issue is not spam (or we couldn't be discussing it here). This is a matter of system and network security, and if law enforcement had the skills, resources and motivation to deal with what are clear breaches of existing laws, admins' jobs would be significantly easier. Until then, we have to deal with issues as they arise. Networks need to be contactable quickly when compromised sites start to be misused, and to respond immediately. Not just wait until "Monday Morning" in their timezone ... 
if we can't deal with the incidents in real time, how can we expect law enforcement to do anything? Hello Comcast, Skynet, Ireland-onLine, NTL in the UK ... need I go on? Where's Declan McC when we need him? -- Richard
--On Saturday, September 20, 2003 2:46 PM -0700 Owen DeLong <owen@delong.com> wrote:
I still disagree with this. To prevent SPAM, people shouldn't run open relays and the open relay problem should be solved. Breaking legitimate port 25 traffic is a temporary hack.
Very little spam coming off dialups and other dynamically assigned, "residential" type connections has anything to do with open relays. The vast majority of it is related to open proxies (which the machine owners do not realize they are running) and machines that have been compromised by various viruses and exploits. These are machines that should not be running outbound mailservers, and in most cases, the owners neither intend nor believe that their systems are sending mail. Merely stating that people shouldn't run open relays didn't stop spam four years ago and it is less likely to do so now. My guess is that you haven't heard of the current issue with various servers running SMTP AUTH. These MTAs are secure by normal mechanisms, but are being made to relay spam anyway. It's hard enough to get mailservers secured when they are maintained by real sysadmins on static IPs with proper and informative PTR records. When the IP addresses sourcing the spam are moving targets, with "generic" PTR records, and the machines are being operated by end users with no knowledge that their computer is even capable of sending direct to MX mail, the situation is impossible to solve without ISP intervention via Port filtering, etc.
If the person running the system in question chooses to do so, yes, they should be able to do so.
If the person running the system in question wants to run server class services, such as ftp, smtp, etc, then they need to get a compatible connection to the internet. There are residential service providers that allow static IP addressing, will provide rDNS, and allow all the servers you care to run. They generally cost more than dial-ups or typical dynamic residential broadband connections. As a rule, you tend to get what you pay for. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Margie Arbon Mail Abuse Prevention System, LLC margie@mail-abuse.org http://mail-abuse.org
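The "generic PTR record" problem Margie describes is exactly what many mail operators key on when scoring connections. A crude heuristic sketch (the patterns and names here are invented for illustration, not any blocklist's actual rules):

```python
import re

# Crude heuristic: PTR names that embed the IP address, or that carry
# obvious pool/dialup/dsl labels, usually indicate a dynamic consumer host
# rather than a deliberately run mail server.
GENERIC_PATTERNS = [
    re.compile(r'\b(?:dial(?:up)?|dsl|cable|pool|dyn(?:amic)?|ppp)\b', re.I),
    re.compile(r'\d{1,3}[.-]\d{1,3}[.-]\d{1,3}[.-]\d{1,3}'),  # IP baked in
]

def looks_generic(ptr_name):
    """Guess whether a PTR record looks like a dynamic/residential host."""
    return any(p.search(ptr_name) for p in GENERIC_PATTERNS)

print(looks_generic("dsl-203-59-48-139.example.net.au"))  # -> True
print(looks_generic("mail.example.com"))                  # -> False
```

Direct-to-MX mail from names the first check flags is, per Margie's point, far more likely to be a compromised home machine than an intentional mail server.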
On Sat, 20 Sep 2003, Margie wrote:
If the person running the system in question wants to run server class services, such as ftp, smtp, etc, then they need to get a compatible connection to the internet. There are residential service providers that allow static IP addressing, will provide rDNS, and allow all the servers you care to run. They generally cost more than dial-ups or typical dynamic residential broadband connections. As a rule, you tend to get what you pay for.
So if someone wants to run Outlook or Netbios from home, they need to get a "server-class" connection to the Internet? If everyone buys a server-class connection, we end up back where we started. The problem is that many "clients" act as servers for part of the transaction. Remember X-Windows having ports 6000-6099 open on clients? IRC users need to have the identd port 113 open. Microsoft clients sometimes need to receive traffic on ports 135 and 137-139 as well as transmit it, due to how software vendors designed their protocols. Outlook won't receive the "new mail" message, and customers will complain that mail is "slow." And do we really want to discuss peer-to-peer networking, which, as the name suggests, is peer-to-peer?

It costs service providers more (cpu/ram/equipment) to filter a connection, and even more for every exception. Should service providers charge customers with filtering less (even though it costs more), and customers without filtering more (even though it costs less)? If the unfiltered connection were less expensive, wouldn't everyone just buy that, and we would be right back to the current situation?

In the old regulated days of telephony, service providers could get away with charging business customers more for a phone line or charging for "touch-tone" service. But the Internet isn't regulated. There is always someone willing to sell the service for less if you charge more than what it costs.
On Sat, 20 Sep 2003, Sean Donelan wrote:
It costs service providers more (cpu/ram/equipment) to filter a connection. And even more for every exception. Should service providers charge customers with filtering less (even though it costs more), and customers without filtering more (even though it costs less)? If the unfiltered connection was less expensive, wouldn't everyone just buy that; and we would be right back to the current situation?
Absolutely. At least if the customer wants technical support or plans on paying for their bandwidth. It costs *more* resources for an ISP to *not* filter ports, and it costs them *less* to filter known ports that are rarely used by Joe Blow average user but are the cause of 99% of their (our) headaches.

How many people here have ever worked in a helpdesk with hundreds of users calling for help after they've been infected with the latest, greatest Netbios-enabled virus and lost their report, thesis, archived email, pictures of the kids, you name it? I used to work at a university helpdesk. Every single time the mail server hiccuped for whatever reason, or the personal webserver was offline for a few minutes of maintenance in the wee hours of the morning (no matter whether it was 2 minutes or 2 days), people would inundate us with complaints. All the real problems had to be put on hold so we could answer the phones. Technical support costs an ISP many times the necessary CPU and RAM resources on an access server or border router needed to filter malicious ports.

Why don't we just wait until we identify that a user has been infected or compromised (by whatever resource-hog of a method that entails)? Then we can just disable their account and wait for them to call. Those calls are always the most pleasant of the day.

When did proactive security measures become criminal? Was there a memo I missed?

Justin
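The customer-edge filtering being debated here is usually just a short router ACL. The following is a minimal sketch in Cisco IOS-style syntax; the ACL number and interface name are illustrative, and a real deployment would need to decide direction (inbound vs. outbound) and per-customer exceptions:

```
! Sketch: drop the worm-abused Microsoft ports at the customer edge.
! ACL number and interface are examples only.
access-list 115 deny   tcp any any eq 135
access-list 115 deny   udp any any eq 135
access-list 115 deny   tcp any any range 137 139
access-list 115 deny   udp any any range 137 139
access-list 115 deny   tcp any any eq 445
access-list 115 deny   tcp any any eq 4444
access-list 115 permit ip any any
!
interface Serial0/0
 ip access-group 115 in
```

Note that the implicit final rule of this sketch is permit-any, i.e. it blocks only the named ports rather than imposing a "default deny" posture.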
On Sat, 20 Sep 2003, Justin Shore wrote:
Absolutely. At least if the customer wants technical support or plans on paying for their bandwidth. It costs *more* resources for an ISP to *not* filter ports and it costs them *less* resources to filter known ports that are rarely used by Joe Blow average user but the cause of 99% of their
The majority of viruses still spread through port 25 and port 80. I've asked other providers about their experiences. Based on their experiences, the number of incidents for providers that filtered netbios was essentially the same as for providers that didn't. It didn't significantly change the number of calls to their help desks over the long term (e.g. 6 months) either. They were hit with the same number of drop-everything-all-hands-on-deck incidents.

Microsoft Windows has more than enough vulnerabilities. Blocking a few ports doesn't change much. Deleting Outlook might help :-)

I know how people working the help desk feel. But is this a case of "do something" rather than figuring out what the problem is? What data do people have to back up blocking specific ports? What were your control groups? With Trojan proxies appearing on almost any port, blocking anything less than every port will be ineffective.
On Sat, Sep 20, 2003 at 07:01:27PM -0400, Sean Donelan wrote:
The problem is many "clients" act as servers for part of the transaction. [...] And do we really want to discuss peer-to-peer networking, which as the name suggests, peer-to-peer.
The Internet has always consisted of peer-to-peer hosts. It seems odd that people talk about peer-to-peer as a new feature. ...and are there similarities between those basic assumptions and the ones people held about the .com/.net domains? In either case, have you (the collective recipients of this email) thought both long and hard about those two issues? Are your beliefs about each compatible?

John
On Sat, 20 Sep 2003, Margie wrote:
My guess is that you haven't heard of the current issue with various servers running SMTP AUTH. These MTAs are secure by normal mechanisms, but are being made to relay spam anyway.
Would this be a reference to the qmail smtp-auth patch that was recently discovered to allow incorrect relaying when misconfigured? If so, I would say that this was an isolated incident involving a single patch for a specific MTA, and only when it was misconfigured. I'm not sure I would describe that as "secure by normal mechanisms," nor quite the epidemic it was the first week or two.

I'm not necessarily making a statement one way or the other on port 25 filtering, but SMTP AUTH, when properly configured and protected against brute-force attacks, is certainly a useful thing. YMMV of course.

andy

--
PGP Key Available at http://www.tigerteam.net/andy/pgp
--On Saturday, September 20, 2003 6:36 PM -0500 Andy Walden <andy@tigerteam.net> wrote:
Would this be a reference to the qmail-smtp-auth patch that recently was discovered, that if misconfigured, could allow incorrect relays?
No, that was the tip of the iceberg.
If so, I would say that this was an isolated incident for a single patch for a specific MTA and only when it was misconfigured. I'm not sure I would describe that as "secure by normal mechanisms" nor quite the epidemic it was the first week or two.
We've seen the same behavior out of Postfix, QMail, Imail, Mdaemon, Exchange, Sendmail, Mercury, Merak, NTMail, and others that I can't recall off the top of my head. In some cases, the relaying was fixed with the release of a new patch or a conf change. In others, particularly Exchange, the guest account was enabled, allowing open authentication. The big "BUT" is that there is a not-insignificant number of other machines that have either been shown to have been brute-forced, or for which we've yet to determine the mechanism that allows the relay. The problem is not small.
I'm not necessarily making a statement one way or the other on port 25 filtering, but SMTP Auth, when properly configured and protected against brute force attacks is certainly a useful thing. YMMV of course.
Yes, it is a useful thing, but it's not the ultimate answer. A machine that tests secure by any test we are willing to run, that requires fifteen-character passwords with multiple special characters, and that is STILL relaying indicates there is a bad thing happening somewhere.

--
Margie
Andy Walden wrote:
I'm not necessarily making a statement one way or the other on port 25 filtering, but SMTP Auth, when properly configured and protected against brute force attacks is certainly a useful thing. YMMV of course.
Keyloggers are popular in the same viruses that install open proxies. :) -Jack
On Sat, 20 Sep 2003, Margie wrote:
Very little spam coming off dialups and other dynamically assigned, "residential" type connections has anything to do with open relays. The vast majority of it is related to open proxies (which the machine owners do not realize they are running) and machines that have been compromised by various viruses and exploits. These are machines that should not be running outbound mailservers, and in most cases, the owners neither intend nor believe that their systems are sending mail. Merely stating that people shouldn't run open relays didn't stop spam four years ago and it is less likely to do so now.
This veers off the original topic; of course, I don't think any of us recall what that was anyway...

I remember back when I first started using the DUL. Of all the DNSBLs I used at the time, it blocked the most spam of any of them, and I mean by a long shot. About the time the DUL and other MAPS lists went commercial is about the same time I noticed fewer and fewer hits on the DUL. We still pay for an AXFR (IXFR) of it, but it doesn't block nearly as much as it used to.

The open proxy lists block an unbelievable amount of spam. In theory the DUL would take care of this if it also listed residential dynamically assigned cable/dsl lines (if it doesn't already, hmmm...). Still, the open-proxy DNSBLs seem to be more effective now. Bottom line: use every DNSBL you possibly can, and don't be afraid to pay for them. I strongly recommend redirecting SMTP traffic for this same class of user as well.

Now I'm going to get even more off-topic. It occurs to me that a major change to a protocol, such as SMTP getting AUTH, should justify utilizing a different TCP port. Think about it like this: if authenticated forms of SMTP used a different TCP port, we netadmins could justify leaving that port open on these same dynamically assigned netblocks, on the theory that they can only connect to other authenticated SMTP services. Doesn't that seem logical?

Justin
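For readers who haven't wired one up: the DNSBL mechanism discussed above is just a DNS A-record lookup of the connecting client's reversed IP under the list's zone. A minimal Python sketch follows; the zone name is illustrative, not a real list:

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL query name: reverse the IPv4 octets, append the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """True if the address resolves under the DNSBL zone, i.e. is listed."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        # NXDOMAIN (or no answer) means the address is not on the list.
        return False

# Example: checking 192.0.2.99 against a hypothetical zone queries
# 99.2.0.192.dnsbl.example.org
```

An MTA typically runs this check at SMTP connect time and rejects or tags mail from listed sources; stacking several zones, as suggested above, is just a loop over `is_listed` calls.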
On Sat, 20 Sep 2003, Justin Shore wrote:
This veers off the original topic; of course, I don't think any of us recall what that was anyway... I remember back when I first started using the DUL. Of all the DNSBLs I used at the time, it blocked the most spam of any of them, and I mean by a long shot. About the time the DUL and other MAPS lists went commercial is about the same time I noticed fewer and fewer hits on the DUL. We still pay for an AXFR (IXFR) of it, but it doesn't block nearly as much as it used to.
At one time, signing up for "throwaway dial-up accounts" was a common spammer MO. We got hit a couple times, and they were like a plague of vermin [the spammers]. They'd sign up giving us bogus contact info and a freshly stolen (active) credit card. When the account was activated, they'd dial in using half a dozen or so lines and pump out as much spam (direct-to-MX) as they could. The really annoying bit is, we'd terminate them, they'd call right back and sign up again, giving different bogus info and card numbers. We'd block them by ANI, and they'd block caller-ID when calling us. I ended up being forced to block access to some of our dial-up numbers both by ANI and when there was no ANI at all, and then had to set up exceptions for a few customers in those areas for whom we never got ANI. When I tried getting police in their area code to investigate, they had no interest/were too busy...even though I could give them the phone numbers the accounts were used from and the stolen credit cards.

To put a little operational spin in here: how many of you run dial-up networks where you refuse logins unless you get ANI? And if you do this, do you also maintain an ANI blacklist?

Anyway...they moved on to proxy abuse, then outright theft by creating their own proxies on compromised MS Windows boxes. Both methods have the advantages of totally hiding the spammer from the recipients and of bandwidth amplification. I imagine you could utilize multiple spam proxies on broadband connections pumping out your spam while connected via dial-up yourself.

If you look at the numbers at http://njabl.org/stats, about 5% of the hosts that have ever been checked are currently open relays (or nobody's bothered to remove them). IIRC, at one point, this was nearly 20%. 13.6% are open proxies...and the disparity is definitely still growing, with about 10x as many open proxies as relays being detected daily.
Unfortunately, the new breed of purpose-built spam proxies are generally not remotely detectable, so the proxy percentage would be even higher if it included the newer spam proxies.

----------------------------------------------------------------------
Jon Lewis *jlewis@lewis.org* |  I route
Senior Network Engineer      |  therefore you are
Atlantic Net                 |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
My guess is that you haven't heard of the current issue with various servers running SMTP AUTH. These MTAs are secure by normal mechanisms, but are being made to relay spam anyway.
You're right. It's been a while since I was last on the front lines of this issue.
It's hard enough to get mailservers secured when they are maintained by real sysadmins on static IPs with proper and informative PTR records. When the IP addresses sourcing the spam are moving targets, with "generic" PTR records, and the machines are being operated by end users with no knowledge that their computer is even capable of sending direct to MX mail, the situation is impossible to solve without ISP intervention via Port filtering, etc.
So, what you're saying is that a large number of easily compromised hosts are the Root Cause. While blocking port 25 traffic from these systems is a convenient patch, it's not a solution to the root cause. The solution is to make the hosts less vulnerable. One step towards doing that will be to put real product liability on the vendor of the software and the corporations running fleets of compromised systems. Right now, Windows owns the world and the hackers own Windows. The only corporate wake-up call that seems to get understood is one that comes from the legal department.
If the person running the system in question chooses to do so, yes, they should be able to do so.
If the person running the system in question wants to run server class services, such as ftp, smtp, etc, then they need to get a compatible connection to the internet. There are residential service providers that allow static IP addressing, will provide rDNS, and allow all the servers you care to run. They generally cost more than dial-ups or typical dynamic residential broadband connections. As a rule, you tend to get what you pay for.
There are lots of different scenarios available. The bottom line is still that, while an effective workaround, blocking internet ports is not a solution to the root cause of the problem. When we decide that workarounds are solutions, we only invite an arms race of escalating denial of service. My concern is that we seem to have reached a place where we take the immutable vulnerability of systems for granted and, therefore, don't seek to solve the problem but instead move from one workaround to the next. I agree the workarounds are necessary for now, but that doesn't mean we should accept them as permanent solutions. We should work to solve the root cause of the problem as well.

Owen
On zaterdag, sep 20, 2003, at 21:36 Europe/Amsterdam, Sean Donelan wrote:
Should any dialup, dsl, cable, wi-fi, dhcp host be able to use any service at any time? For example run an SMTP mailer, or leave Network Neighborhood open for others to browse or install software on their computers?
As someone who has been using IP for a while now, I would very much like to be able to use any service at any time.
Or should ISPs have a "default deny" on all services, with subscribers needing to call for permission if they want to use some new service? Should new services like Voice over IP, or even the World Wide Web, be blocked by default by service providers?
Obviously not. Blocking services that are known to be bad or vulnerable wouldn't be entirely unreasonable, though. But who gets to decide which services should be blocked? Some services are very dangerous and not very useful, so blocking them is a no-brainer. Other services are only slightly risky and very useful. Where do we draw the line? Who draws the line?
As a HOST requirement, I think all hosts should be "client-only" by default. That includes things acting as hosts, such as routers, switches, print servers, file servers, and UPSes. If a HOST uses a network protocol for local host processes (e.g. X-Windows, biff, syslog, DCE, RPC), by default it should not accept network connections.
It should require some action to enable a service, e.g. the user enabling it, a DHCP client enabling it in a profile, clicking things on the LCD display on the front of the printer, etc.
Get yourself a Mac. :-) I think it would be useful to set aside a block of port numbers for local use. These would be easy to filter at the edges of networks, but plug and play would still be possible.
SERVICE PROVIDERS do not enforce host requirements.
But someone has to. The trouble is that access to the network has never been considered a liability, except for local ports under 1024. (Have a look at Java, for example.) I believe that the only way to solve all this nonsense is to have a mechanism, preferably outside the host, or at least deep enough inside the system to be protected against application holes and user stupidity, which controls applications' access to the network. This must be based not only on application type and user rights (user www gets to run a web server that listens on port 80) but also on application version, so that when a vulnerability is found, the vulnerable version of the application is automatically blocked. I don't see something like this popping up overnight, though.
Iljitsch van Beijnum wrote:
But someone has to. The trouble is that access to the network has never been considered a liability, except for local ports under 1024. (Have a look at Java, for example.) I believe that the only way to solve all this nonsense is to have a mechanism, preferably outside the host, or at least deep enough inside the system to be protected against application holes and user stupidity, which controls applications' access to the network. This must be based not only on application type and user rights (user www gets to run a web server that listens on port 80) but also on application version, so that when a vulnerability is found, the vulnerable version of the application is automatically blocked.
Go and count the Pintos on US101 or I-880. :-)
I don't see something like this popping up over night, though.
For this to be really effective, there needs to be an unbroken chain of authentication for code from the author to your PC, and additionally the operating system needs to change to get rid of the notion of "superuser". As has been said multiple times on this and other lists, most consumer users expect their stuff to "just work," and unfortunately Microsoft translated this requirement to "always local administrator," which has catastrophic security consequences. The chain above does not have to mean that there is a central authority enabling the code to run on your box; it can just as well give that right to you, or to some place in the organization where it makes sense.

Pete
Owen DeLong wrote:
Yes. I responded to this in a previous post. We must do what we must do temporarily to keep things running. However, breaking the net is not a long term solution. We must work to solve the underlying problem or it just becomes an arms-race where eventually, no services are useful.
I agree, and as a point of fact, many ISPs allow their users to opt out of spam filtering. The ability to opt out of port filtering is a little more difficult, but it is not impossible. Most authentication methods are designed with support for telling connection equipment what security lists to use and how to treat a specific user. Some systems, like mine, do not run authentication models that support this, but I consider it very wise to change.

In my case, I will maintain a filter anywhere in the network that it is required in order to help protect the network and the users who rely upon it. Currently, estimates show that removing the port 135 block at this juncture would allow the current Blaster-infected users to become infected with Nachi/Welchia, which has more network impact. Some segments, despite the blocks, have already had small outbreaks which we had to eradicate. In addition, dialups have very little bandwidth to begin with; the amount of traffic generated on ICMP and port 135 is currently high enough to severely cripple connectivity on an unprotected dialup account.

I do agree that it is a temporary measure. Yet one must remember that each network has its own definitions of temporary, drastic, and appropriate. I now return you to contacting those infected users in your network. :)

-Jack
On Fri, 19 Sep 2003, Adam Hall wrote:
Anyone know anything about providers removing ACLs from their routers to allow ports 135/445/4444 back into their network? Curious only because customers are calling in saying that Verizon, Cox, Bellsouth, and DSL.net are doing so, and they seem to have a big problem with the fact that we're hesitant to follow their lead.
Well, first you would have to find providers willing to say they had ACLs, then willing to say the ACLs that didn't exist are being removed. Although 135, 139, 445, etc ACLs still seem to be very wide-spread, they are not network or service provider wide. It may vary by region, provider, wholesale arrangement, etc. A provider may have some ACLs in Atlanta, but not in Boston. Or even in the same city, some circuits may go through different wholesale arrangements resulting in different ACLs.
participants (19)

- Adam Hall
- Andy Walden
- bmanning@karoshi.com
- David B Harris
- Iljitsch van Beijnum
- Jack Bates
- jlewis@lewis.org
- John Kristoff
- Justin Shore
- Margie
- Mark Borchers
- Matthew Kaufman
- Niels Bakker
- Owen DeLong
- Petri Helenius
- Ray Bellis
- Richard Cox
- Rob Thomas
- Sean Donelan