I wanted to run this past you to see what you thought of it and to get some feedback on the pros and cons of this type of system. I have been thinking recently about the ever-increasing amount of spam that is flooding the internet, clogging mail servers, and in general pissing us all off. I think it's time to do something about it. Very few systems are effective at blocking spam at the server level, and the ones that exist have a less than stellar reputation and are not very effective on top of that.

95% of spam comes through relays, and its headers are forged; tracking back an e-mail you've received is becoming next to impossible. It's also very time consuming, and why waste your time on scumbags?

My idea: a DC network that actively scans for active relays and tests them. It compiles a daily list of compromised IP addresses (or even addresses that are willingly allowing the relay), making this list freely available to ISPs via a secure and tracked site.

To test a relay you actually have to send mail through it, and I have a solution for this as well: the clients are set to e-mail a certain address that changes daily, and the e-mails are signed with a crypto key to verify authenticity (that way spammers can't abuse the address; if a message doesn't have the key, it gets canned).

Work with ISPs to correct issues on their networks; help completely blacklist IPs on their networks that are operating as open relays, and redirect them to a page that alerts the owners to the compromise and offers solutions to fix the problem. The only way people are going to become aware of security issues like this is if something happens that wakes them up; if they can't access a percentage of the web, it would hopefully clue them in.

Because these scans only need to take place once per IP per day, and with a large distribution of computers performing the tests, I don't see network load becoming a big issue, no bigger than it currently is.

The only way to fight spammers is to squeeze them out of hiding, and that's what I hope this system would be designed to do.

I do not have the coding knowledge to do this, so I will need coders, but I do have the PR skills to work with ISPs. I am also working with my congresswoman to pave the way for legal clearance for this program.

I would greatly appreciate your input on this and anything I may have overlooked. I would also like to know whether this is a DC program you would run. A lot of people question the practical applications of DC. Although we know differently, this project would show them what DC can do for them and wake them up to other DC projects.
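The daily-changing address and crypto-signed probes described above could work roughly as follows. This is a hypothetical sketch only: the secret, address format, and signature scheme are all assumptions, not anything specified in the proposal.

```python
import hmac
import hashlib
from datetime import date

SECRET = b"shared-project-secret"  # placeholder; real clients would receive this securely

def probe_address(day: date) -> str:
    """Derive the day's collection address from the date (hypothetical scheme)."""
    tag = hashlib.sha256(day.isoformat().encode()).hexdigest()[:12]
    return "probe-" + tag + "@example.net"

def sign_probe(day: date, relay_ip: str) -> str:
    """Signature a test message would carry in a header, binding it to the day and relay."""
    msg = (day.isoformat() + "|" + relay_ip).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_probe(day: date, relay_ip: str, signature: str) -> bool:
    """Collector side: anything whose signature doesn't verify 'gets canned'."""
    return hmac.compare_digest(sign_probe(day, relay_ip), signature)
```

Because the signature covers both the date and the relay under test, a spammer who learns today's collection address still cannot produce mail that the collector will accept.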
On Sat, 14 Feb 2004 00:30:30 PST, Tim Thorpe <tim@cleanyourdirt.com> said:
My idea: a DC network that actively scans for active relays and tests them. It compiles a daily list of compromised IP addresses (or even
How many IP addresses are there? What percentage of them are on DHCP? And will you be able to do a scan in under a week, by which time the info will be very stale indeed? (Hint: how long does the ISC 'Internet Domain Survey' take to run?) Also, read up on where this sort of thing got the ORBS project. I'll overlook the fact that, in general, you don't know what port the spammer backdoor malware is listening on, so you'll have to scan multiple ports. That's not going to make you very popular. Other than that, go for it. :)
There are several groups working on identifying open relays, proxies, etc. and creating lists of such IPs for active blocking. For example, see http://www.spamhaus.org/xbl/index.lasso The problem is not so much actual open relays (which are now rare and almost universally blocked) but open proxies; these come in all shapes and sizes, and the same tools cannot be used for testing them (i.e. just sending email, as you propose). There are similar growing issues with zombie PCs that have been infected by special viruses which turn them into open proxies requiring certain access codes. While the actual virus-set code may be known and can be tested for, that code can be reset by the first person who gains access to the PC; spammers do exactly that, and after that normal testing methods may not work.

On Sat, 14 Feb 2004, Tim Thorpe wrote:
I wanted to run this past you to see what you thought of it and get some feedback on the pros and cons of this type of system. ...
Tim Thorpe wrote:
95% of spam comes through relays, and its headers are forged; tracking back an e-mail you've received is becoming next to impossible. It's also very time consuming, and why waste your time on scumbags?
I don't think open relays are that big a part of the picture anymore. The rest of that paragraph is pretty close, though. Open proxies, insecure forms, and asymmetrical routing are where it is at, and remote-control trojans installed by viruses and worms are where it is going.
My idea: a DC network that actively scans for active relays and tests them. It compiles a daily list of compromised IP addresses (or even addresses that are willingly allowing the relay), making this list freely available to ISPs via a secure and tracked site.
I don't know what a "DC Network" is.
To test a relay you actually have to send mail through it, and I have a solution for this as well: the clients are set to e-mail a certain address that changes daily, and the e-mails are signed with a crypto key to verify authenticity (that way spammers can't abuse the address; if a message doesn't have the key, it gets canned).
As they sometimes say--"It won't scale." And for people on small pipes or metered connections, that will be more abusive than the current problem is.
Work with ISPs to correct issues on their networks; help completely blacklist IPs on their networks that are operating as open relays, and redirect them to a page that alerts the owners to the compromise and offers solutions to fix the problem. The only way people are going to become aware of security issues like this is if something happens that wakes them up; if they can't access a percentage of the web, it would hopefully clue them in.
Ingress filtering at the edges, dropping packets whose source addresses have to be fraudulent, scales better, but I'm not sure that matters much anymore. And if we couldn't get that done, how will we get this handled?
Because these scans only need to take place once per IP per day, and with a large distribution of computers performing the tests, I don't see network load becoming a big issue, no bigger than it currently is.
I think you need to check your arithmetic.
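The arithmetic objection can be made concrete. Every number below is an assumption picked only to show the order of magnitude, not a measurement:

```python
# Rough arithmetic for the "once per IP per day" plan; all inputs are assumptions.
addresses = 2**31          # suppose roughly half the IPv4 space is worth probing
ports_per_host = 5         # proxies/backdoors listen on arbitrary ports; 5 is optimistic
probe_bytes = 2000         # one SMTP session plus a signed test message, ballpark

probes_per_day = addresses * ports_per_host
terabytes_per_day = probes_per_day * probe_bytes / 1e12
print(probes_per_day, round(terabytes_per_day, 1))  # ~1.07e10 probes, ~21.5 TB per day
```

Over ten billion probes a day, tens of terabytes of deliberately relayed test mail, every day, forever; and that is before retries, multi-stage proxy chains, or the spread of malware listening on genuinely random ports.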
It just doesn't work. :( A few years ago I developed a sendmail milter system that would perform an open relay test on all new IPs that attempted to send mail to or through our server. If the test failed (open relay), the mail was rejected before it was even accepted. If the test passed, the mail was allowed through. Once this test was performed, the status of the IP address was recorded for 90 days, after which it was deleted, and the test would be performed again the next time that IP attempted to access our mail server. The tests themselves took under 20 seconds on average. Within 2 weeks we had a list of over 250,000 open relays. The total cut in spam: somewhere around 10%. Sadly, it turned out that zombies, trojaned machines, and proxies are the reason. Not that much spam is open relay anymore.

Mike Wiacek
IRoot.Net

On Sat, 14 Feb 2004, Tim Thorpe wrote:
I wanted to run this past you to see what you thought of it and get some feedback on the pros and cons of this type of system. ...
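The open-relay test at the core of both the proposal and Wiacek's milter boils down to offering the server a recipient in a foreign domain and classifying its RCPT TO reply. The sketch below shows only that classification step; the network plumbing (smtplib could drive a real session, against hosts you are authorized to test) is deliberately omitted, and the verdict names are made up for illustration:

```python
def classify_rcpt_reply(code: int) -> str:
    """Classify the SMTP reply to a RCPT TO naming a foreign-domain recipient."""
    if 200 <= code < 300:
        return "open-relay"   # server accepted relaying for an outside recipient
    if 400 <= code < 500:
        return "retry"        # temporary failure; repeat the test later
    if 500 <= code < 600:
        return "closed"       # relaying refused
    return "unknown"
```

Wiacek's 90-day cache then amounts to remembering the verdict per IP so the 20-second test doesn't have to run on every connection.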
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Michael Wiacek
Sent: Saturday, February 14, 2004 9:12 AM
To: Tim Thorpe
Cc: nanog@merit.edu
Subject: Re: Anti-spam System Idea
It just doesn't work. :( A few years ago I developed a sendmail milter system that would perform an open relay test on all new IPs that attempted to send mail to or through our ...
I can look at virus code, see how it's written and what it does to the machines etc., "crack" their entry points, and scan those as well. I think the system could be adapted to scan for and pre-emptively block potentially hostile hosts. To have password / port-knocking schemes you have to code them, and all you have to do to break them is read the code ;). (It's not THAT simple, but I think it covers the point.)
On Sat, 14 Feb 2004, Tim Thorpe wrote:
I wanted to run this past you to see what you thought of it and get some feedback on the pros and cons of this type of system.
I have been thinking recently about the ever-increasing amount of spam that is flooding the internet, clogging mail servers, and in general pissing us all off.
I think it's time to do something about it. Very few systems are effective at blocking spam at the server level, and the ones that exist have a less than stellar reputation and are not very effective on top of that.
I used to agree with this, until I tried amavisd-new with SpamAssassin. Yes, you have to throw a little hardware at it, but it really is an effective solution. For my mail, it's more than 99% effective. The only falsely tagged messages (I've never had a message reach the "bounce" threshold as a false positive) are mailing list mails from people who are on blacklists.

Because amavisd-new has support for querying MySQL maps, it's trivial to create multiple filtering policies, allowing users to select their own through your online account management interface. Along with that come per-recipient sender whitelists (and blacklists). And since amavisd-new has support for most virus scanners (clamav is nice and free), it really provides a complete solution. Note, however, that amavisd-new works best with postfix (according to the developer); I'm not sure how well it works with the others.

It was nice going back to getting around 1 spam per day in my inbox (over 200 are tagged or rejected every day). This solution makes the old paradigm of rejecting based purely on status in an RBL look antiquated. I encourage everybody who runs a mailserver to read http://www.flakshack.com/anti-spam/

Andy

---
Andy Dills
Xecunet, Inc.
www.xecu.net
301-682-9972
---
On Sat, 14 Feb 2004, Tim Thorpe wrote:
95% of spam comes through relays, and its headers are forged; tracking back an e-mail you've received is becoming next to impossible. It's also very time consuming, and why waste your time on scumbags?
s/relays/proxies/ The proxies are tough to find since they can run on any port. Some of them even pick random ports, then "phone home" to tell the spammer which IP/port was just created as one of their open proxies.
My idea: a DC network that actively scans for active relays and tests them. It compiles a daily list of compromised IP addresses (or even addresses that are willingly allowing the relay), making this list freely available to ISPs via a secure and tracked site.
You're a few years late. See http://dsbl.org. For a non-DC version, see http://njabl.org.

----------------------------------------------------------------------
 Jon Lewis *jlewis@lewis.org*  |  I route
 Senior Network Engineer       |  therefore you are
 Atlantic Net                  |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key _________
If these exist, then why are we still having problems? Why do we let customers who have been infected flood the networks with traffic as they do? Should they not also be responsible for the security of their computers? Do we not do enough to educate?

... addresses that are willingly allowing the relay) making this list freely available to ISPs via a secure and tracked site.

You're a few years late. See http://dsbl.org. For a non-DC version, see http://njabl.org.
On Sat, Feb 14, 2004 at 03:55:40PM -0800, Tim Thorpe wrote:
If these exist then why are we still having problems?
See my reply to the thread "SMTP relaying policies for Commercial ISP customers...?" -- we have problems because the spammers are a lot smarter than any of us and can bounce from one infected host to another, in an attempt to evade network-specific traps, and few ISPs do anything at all to stop them.
Why do we let customers who have been infected flood the networks with traffic as they do?
Very good question.
Should they not also be responsible for the security of their computers? Do we not do enough to educate?
Yes, and no.

--
hesketh.com/inc. v: (919) 834-2552 f: (919) 834-2554 w: http://hesketh.com
Book publishing is second only to furniture delivery in slowness. -b. schneier
On Sat, 14 Feb 2004, Tim Thorpe wrote:
If these exist then why are we still having problems?
Because the spammers are creating proxies faster than any of the anti-spam people can find them. Evidence suggests that on the order of 10,000 new spam proxies are created and used every day by spackers (spammer/hackers). The relative insecurity of Windows and the ignorance of the average internet user have created an incredibly target-rich environment for the spackers.
Why do we let customers who have been infected flood the networks with traffic as they do? Should they not also be responsible for the security of their computers? Do we not do enough to educate?
Economics, and convenience outweighing security. We're big, and slow to change. They're small and mobile.

----------------------------------------------------------------------
 Jon Lewis *jlewis@lewis.org*  |  I route
 Senior Network Engineer       |  therefore you are
 Atlantic Net                  |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key _________
On Sat, 14 Feb 2004 jlewis@lewis.org wrote:
On Sat, 14 Feb 2004, Tim Thorpe wrote:
If these exist then why are we still having problems?
Because the spammers are creating proxies faster than any of the anti-spam people can find them. Evidence suggests that on the order of 10,000 new spam proxies are created and used every day by spackers (spammer/hackers).
Why do we let customers who have been infected flood the networks with traffic as they do? Should they not also be responsible for the security of their computers? Do we not do enough to educate? Just completely blocking access to those users seems overly aggressive ...
Add to that (or as part of that number) the fact that many DSL and cable providers use DHCP to assign IP addresses to their customers for short periods of time. Typically, whenever a system is reset, a new IP is assigned. A few of the zombie viruses being installed on user systems cause them to become unstable (especially if the machine is trying to send email, cannot, and keeps retrying after its IP lands on a blacklist), so those users begin rebooting their computers trying to get them to work properly. The result is that those computers get new IP addresses, which again fall outside the blacklist's punishment (this has actually produced quite a few angry users who left their DSL providers). Some providers deal with this by blocking port 25 or redirecting it to their own SMTP server; some even do it on their networks for all customers, whether they have received any reports or not, as a preventative measure. While many techs don't like this practice, it does seem that this solution effectively stops a PC from being used as a source of spam even if it becomes a zombie.

--
William Leibzon
Elan Networks
william@elan.net
jlewis@lewis.org wrote:
On Sat, 14 Feb 2004, Tim Thorpe wrote:
If these exist then why are we still having problems?
Because the spammers are creating proxies faster than any of the anti-spam people can find them. Evidence suggests that on the order of 10,000 new spam proxies are created and used every day by spackers (spammer/hackers).
The relative insecurity of Windows and the ignorance of the average internet user have created an incredibly target-rich environment for the spackers.
Why do we let customers who have been infected flood the networks with traffic as they do? Should they not also be responsible for the security of their computers? Do we not do enough to educate?
Economics, and convenience outweighing security. We're big, and slow to change. They're small and mobile.
The Internet's spam load could easily be cut by 50% or more. All it would take is the cooperation of most major ISPs and academic institutions.

As this discussion thread has indicated, most spam originates from systems infected with spamiruses or from open proxy servers. How do we shut down all such malware? Simple: apply egress-filtering ACLs to all border routers to prohibit outgoing port 25 connections from DHCP addresses.

We find that at least 85% of all spam originates from DHCP addresses. Thus, if a significant number of ISPs were to perform port 25 egress filtering, I believe that it would significantly reduce spam and force criminal spammers to develop completely new spamming technologies. If ISPs were to go further and require their customers with static IPs to perform port 25 egress filtering, blocking such connections from all systems except the customer's legitimate MTA, we could virtually eliminate spam originating from hijacked systems.

OK, I can hear the objections now... "ACLs slow down our routers and thus reduce throughput." Well, that may be true in the purest sense of the argument, but can you demonstrate that a few ACLs will have a SIGNIFICANT impact on throughput? I would be willing to bet that any throughput reduction caused by ACLs would, in the long run, be more than compensated for by the corresponding reduction in spam traffic passing through the router. Also, if filtering were to occur at the point closest to the source, rather than at an aggregation point, the impact of any ACLs would be distributed across the network in such a manner as to probably have no observable impact on network throughput. (If anyone has any hard statistics on the impact of ACLs on network throughput, I would sure like to see those studies!)

Just my $0.02 worth...

Jon R. Kibler
Chief Technical Officer
A.S.E.T., Inc.
Charleston, SC USA
(843) 849-8214
==================================================
Filtered by: TRUSTEM.COM's Email Filtering Service
http://www.trustem.com/
No Spam. No Viruses. Just Good Clean Email.
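The egress policy Kibler describes would really live in router ACLs, but the decision it encodes is simple enough to model. The pools below are documentation prefixes standing in for an ISP's real dynamic ranges; this is an illustration of the rule, not a deployable filter:

```python
import ipaddress

# Hypothetical dynamic (DHCP) pools; a real ISP would list its own ranges.
DHCP_POOLS = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24")]

def permit_outbound(src_ip: str, dst_port: int) -> bool:
    """Return True if the egress ACL lets the connection through."""
    if dst_port != 25:
        return True                       # the policy only concerns SMTP
    src = ipaddress.ip_address(src_ip)
    return not any(src in pool for pool in DHCP_POOLS)
```

A customer with a static IP and a legitimate MTA is unaffected; a zombie on a DHCP address simply cannot open port 25 to the outside world.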
On Sun, 15 Feb 2004, Jon R. Kibler wrote:
We find that at least 85% of all spam originates from DHCP addresses. Thus, if a significant number of ISPs would perform port 25 egress filtering, I believe that it would significantly reduce spam, and force criminal spammers to develop completely new spamming technologies.
DialUp List (DUL) DNS block lists permit you to ignore e-mail from many dynamic IP addresses. You can configure your mail server to do this today, without waiting for ISPs to do anything. But like most other "simple" solutions, how effective is it?
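For readers unfamiliar with how a mail server consults a DUL (or any DNSBL): it reverses the client's IPv4 octets, appends the list's zone, and looks that name up; an A-record answer means "listed". The zone name below is a placeholder, and the actual DNS query is left out:

```python
def dnsbl_query_name(ip: str, zone: str = "dul.example.org") -> str:
    """Build the DNS name whose existence (an A record) means 'listed'."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("IPv4 dotted quad expected")
    return ".".join(reversed(octets)) + "." + zone
```

So a connection from 192.0.2.99 triggers a lookup of 99.2.0.192.dul.example.org, and the MTA rejects or accepts based on whether the name resolves.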
On Sun, 15 Feb 2004 16:40:40 EST, Sean Donelan said:
DialUp List (DUL) DNS block lists permit you to ignore e-mail from many dynamic IP addresses. You can configure your mail server to do this today, without waiting for ISPs to do anything.
If we advertise the DHCP pools for AS1312 in a DUL, we solve the problem for those sites that use the DUL we list them in. If we block outbound port 25 SYN packets from origin addresses in the DHCP address blocks, we solve the problem for everybody.
On Sun, 15 Feb 2004 Valdis.Kletnieks@vt.edu wrote:
If we advertise the DHCP pools for AS1312 in a DUL, we solve the problem for those sites that use the DUL we list them in.
If we block outbound port 25 SYN packets from origin addresses in the DHCP address blocks, we solve the problem for everybody.
No... you just speed up the migration (which has already begun) to spam proxies that use the local ISP's mail servers as smart hosts. Then you have to come up with a way to rate-limit customer outbound SMTP traffic.

BTW... who brought SARS (or, more likely, just flu) to NANOG 30? I drove (so I didn't catch it on the plane), and the symptoms (sore throat, congestion, very high fever) started Thursday. I've spent most of the weekend in bed waiting to die.

----------------------------------------------------------------------
 Jon Lewis *jlewis@lewis.org*  |  I route
 Senior Network Engineer       |  therefore you are
 Atlantic Net                  |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key _________
jlewis@lewis.org wrote:
On Sun, 15 Feb 2004 Valdis.Kletnieks@vt.edu wrote:
<snip!>
If we block outbound port 25 SYN packets from origin addresses in the DHCP address blocks, we solve the problem for everybody.
EXACTLY correct!
No...you just speed up the migration (which has already begun) to spam proxies that use the local ISP's mail servers as smart hosts. Then you have to come up with a way to rate-limit customer outbound SMTP traffic.
I agree that proxies using the local ISP's mail servers as smart hosts are a growing problem. However, it is a problem that is far more manageable than our current situation.

First, if spam is forced through a centralized set of outgoing servers, and those servers do adequate logging, then a compromised system can be detected in a matter of minutes and blocked. Next, requiring users to use SMTP AUTH to authenticate to the mail server, even when on the ISP's own network, would throw another hurdle into the spammer's path to the ISP's mail server, and thus block the ability of spamware to route mail in this manner. Ultimately, if all local networks, including ISP customers, required that MUAs submit mail through MSAs (instead of through MTAs), and required that the MUAs use STARTTLS to connect to the MSA, it would become very difficult for spammers to hijack an ISP's MTA. (Yes, this means that ISPs would have to run their own PKI, but I can easily see the day when this will be SOP.)

Bottom line... I believe that it is much easier to control spammer traffic routed through central mail servers than it is to control spammers using thousands of hijacked systems with their own SMTP engines dumping mail onto the net.

--
Jon R. Kibler
Chief Technical Officer
A.S.E.T., Inc.
Charleston, SC USA
(843) 849-8214
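The "detected in a matter of minutes" claim rests on simple log analysis once all customer mail flows through the ISP's own servers. A minimal sketch of that detection, with an arbitrary illustrative threshold:

```python
from collections import Counter

def flag_runaway_senders(events, max_per_window=500):
    """events: one customer identifier per message accepted in the current
    time window. Returns the customers exceeding the per-window threshold,
    i.e. the likely compromised (or spamming) accounts."""
    counts = Counter(events)
    return sorted(cust for cust, n in counts.items() if n > max_per_window)
```

A normal user sending a handful of messages never trips the threshold; a zombie pushing thousands of messages through the smarthost stands out within one window and can be rate-limited or cut off.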
On 2004-02-15T20:43-0500, Jon R. Kibler wrote:
) jlewis@lewis.org wrote:
) > On Sun, 15 Feb 2004 Valdis.Kletnieks@vt.edu wrote:
) > > If we block outbound port 25 SYN packets from origin addresses in the DHCP
) > > address blocks, we solve the problem for everybody.
) EXACTLY correct!

Not quite exactly, no; perhaps adding the blocking of incoming ACKs with a source port of 25 would be closer to solving the problem "for everybody". But this is part of my point: there is no simple answer, because spammers will eventually [hire someone to] come up with a technical workaround for whatever you do.

--
Daniel Reed <n@ml.org> http://naim-users.org/nmlorg/ http://naim.n.ml.org/
"True nobility lies not in being superior to another man, but in being superior to one's previous self."
On Sun, 15 Feb 2004 Valdis.Kletnieks@vt.edu wrote:
DialUp List (DUL) DNS block lists permit you to ignore e-mail from many dynamic IP addresses. You can configure your mail server to do this today, without waiting for ISPs to do anything.
If we advertise the DHCP pools for AS1312 in a DUL, we solve the problem for those sites that use the DUL we list them in.
What if I told you there was a method to identify the type of connection for every IP address in our DNS? You wouldn't need to rely on third-party DUL lists. Blocking is a binary decision; if, instead, you have better information about the connection's source, you can make different decisions about how to handle the message.
If we block outbound port 25 SYN packets from origin addresses in the DHCP address blocks, we solve the problem for everybody.
Including the people who don't want you to solve it for them. People want to use outbound port 25 from dynamic address blocks. Why block it between people who want to use it just because some people run open servers?

Block 119: you must use your ISP's NNTP server.
Block 6667: you must use your ISP's IRC server.
Block 80: you must use your ISP's HTTP proxy.
Block N: you must use your ISP's whatever server.

Enterprises already do this; the equipment exists. Why do we want ISPs doing this?
On Sun, 15 Feb 2004 17:46:05 EST, Sean Donelan said:
What if I told you there was a method to identify the type of connection for every IP address in our DNS? You wouldn't need to rely on third-party DUL lists.
Hmm.. color me dubious, but keep talking. Best bet here would probably be some interesting abuse of PTR records?
On Sun, 15 Feb 2004 Valdis.Kletnieks@vt.edu wrote:
On Sun, 15 Feb 2004 17:46:05 EST, Sean Donelan said:
What if I told you there was a method to identify the type of connection for every IP address in our DNS? You wouldn't need to rely on third-party DUL lists.
Hmm.. color me dubious, but keep talking. Best bet here would probably be some interesting abuse of PTR records?
You wouldn't be too far off. It depends on whether you consider the ISP a cooperative partner or a hostile participant. Not only are third-party block lists often out of date and difficult to update, the public has a hard time understanding the difference between an ISP voluntarily listing its IP addresses in a DUL and being labelled a "spam haven" because its IP addresses are in a block list.

If you assume the ISP wants to help (which you also have to assume for port 25 blocks to work), how can an ISP provide first-party information about the status of an IP address, on demand, to anyone? My idea is to follow the RFC 1101 example. PTR records already have other uses and requirements, so I suggest using another record type which doesn't have a current meaning in the reverse DNS; for instance, something like a HINFO record:

1.0.168.192.in-addr.arpa.  in ptr   some1.example.net
                           in hinfo Dynamic Dialup
2.0.168.192.in-addr.arpa.  in ptr   some2.example.net
                           in hinfo Static xDSL

The ISP (or, really, the network administrator for the network block) is in the best position to know how the IP addresses are managed. The netadmin can keep the HINFO records up to date, or correct them if they are incorrect. You don't need to guess which DUL maintainer has records for various networks, or worry about a DoS attack on a few DNS servers affecting mail service globally. You always query the network administrator's DNS servers when you receive a connection from an IP address for information about that IP address.
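On the receiving side, a mail server consuming this first-party convention would look up the HINFO text on the connection's in-addr.arpa name and map it to a local policy. The sketch below models only that mapping; the DNS query itself, and the policy names, are illustrative assumptions:

```python
# Hypothetical local policies keyed on the first HINFO token the netadmin publishes.
POLICY = {
    "dynamic": "greylist-or-reject",  # dynamically assigned source address
    "static": "accept",               # static assignment; plausibly a real MTA
}

def policy_for_hinfo(hinfo_text: str) -> str:
    """Map an HINFO string such as 'Dynamic Dialup' or 'Static xDSL' to a policy."""
    tokens = hinfo_text.split()
    first = tokens[0].lower() if tokens else ""
    return POLICY.get(first, "accept")   # no record or unknown value: fail open
```

Failing open when there is no record keeps the scheme voluntary: networks that publish nothing are treated exactly as they are today, which is the opposite of a third-party block list's default.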
Sean Donelan wrote:
DialUp List (DUL) DNS block lists permit you to ignore e-mail from many dynamic IP addresses. You can configure your mail server to do this today, without waiting for ISPs to do anything.
Like most other "simple" solutions, how effective is it?
We block known dialup netblocks. It catches < 5% of spam. Why? Because the real culprits are xDSL, cable, and other systems with broadband connections. These account for about 80% of the spam attempts we observe.

The idea here is not just to prevent the receipt of spam (which is what DNSBLs can accomplish); rather, it is to prevent the generation of the spam that accounts for such a growing amount of everyone's network traffic. If you block the ability of non-legitimate MTAs (such as open proxies and spamiruses) to send spam, you reduce the network bandwidth that spam is wasting. (As a side effect, you would also reduce the spread of viruses by email.)

--
Jon R. Kibler
Chief Technical Officer
A.S.E.T., Inc.
Charleston, SC USA
(843) 849-8214
On Sun, 15 Feb 2004, Jon R. Kibler wrote:
DialUp List (DUL) DNS block lists permit you to ignore e-mail from many dynamic IP addresses. You can configure your mail server to do this today without waiting for ISPs to do anything.
Like most other "simple" solutions, how effective is it?
We block known dialup netblks. Catches < 5% of spam. Why? Because the real culprits are xDSL, CABLE and other systems with broadband connections. These account for about 80% of the spam attempts we observe.
Why don't you block "known" dynamic netblks, including xDSL, Cable, and other broadband connections using dynamic addresses, such as WiFi in Starbucks? Most of the existing public DULs include dynamic IP addresses from all network technologies, not just dialup.
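For reference, pointing a mail server at one of the public DULs is indeed something you can configure today. In sendmail's m4 configuration it is roughly one line; the zone name below is the old MAPS dialup list and is only illustrative, so check the list's current policies before depending on it:

```
FEATURE(`dnsbl', `dialups.mail-abuse.org',
        `"550 Mail from dialup/dynamic address " $&{client_addr} " refused"')dnl
```

Rebuild sendmail.cf from the .mc file, and the server will reject SMTP connections from any address listed in that zone.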
The idea here is not just to prevent the receipt of spam (which is what DNSBLs can accomplish); rather, it is to prevent the generation of spam, which accounts for such a growing amount of everyone's network traffic.
All mail traffic (legitimate and illegitimate) is a very small percentage of network traffic. Besides, connections blocked at receipt use a very small amount of bandwidth. When the ISP blocks the traffic, you lose the capability to make an exception when you decide to.
If you block the ability of non-legitimate MTAs (such as open proxies and spamiruses) to send spam, you reduce the network bandwidth waste that spam is consuming. (As a side effect, you would also reduce the spread of viruses by email.)
Blocking port 25 blocks the ability of all MTAs to send any type of mail. "Non-legitimate" is a determination best made by the two parties involved in the communication.
Sean Donelan wrote:
On Sun, 15 Feb 2004, Jon R. Kibler wrote:
We block known dialup netblks. Catches < 5% of spam. Why? Because the real culprits are xDSL, CABLE and other systems with broadband connections. These account for about 80% of the spam attempts we observe.
Why don't you block "known" dynamic netblks, including xDSL, Cable, and other broadband connections using dynamic addresses, such as WiFi in Starbucks? Most of the existing public DULs include dynamic IP addresses from all network technologies, not just dialup.
OK, I was sloppy in my wording... I should have said that we block published dynamic netblks, including dial, cable, xDSL, and wireless. That still catches something less than 5% of the spam originating from DHCP connections. Also, the AUPs of most ISPs (at least those that serve the SE U.S.) prohibit running any type of server on a DHCP connection. I know of at least one that regularly drops service to any system found running web, mail, IRC, proxy, ftp, telnet, or any of a dozen other servers on a DHCP connection.
Blocking port 25 blocks the ability of all MTA's to send any type of mail. "Non-legitimate" is a determination best made by the two parties involved in the communication.
Why should hundreds of thousands of MTAs each have to make the determination that a given system wishing to make a connection is running spamware on a hacked system, when that user's ISP could simply block that user and save everyone else the grief? To me, the approach you advocate is something like saying "do away with any centralized law enforcement, force everyone to carry guns, and if anyone suspects that someone else is committing a crime, they are obliged to shoot them." I believe that blocking spam at its source is far easier than blocking it at every possible destination. The fewer parties involved in blocking the spam, the higher the probability that the spam will be successfully blocked.

-- 
Jon R. Kibler
Chief Technical Officer, A.S.E.T., Inc.
On Sun, 15 Feb 2004, Jon R. Kibler wrote:
To me, the approach you advocate is something like saying "do away with any centralized law enforcement, force everyone to carry guns, and if anyone suspects that someone else is committing a crime, they are obliged to shoot them." I believe that blocking
So, what Sean is proposing, and what you (mostly) accurately describe here, is how the Internet is intended to be run... minus the 'and the people running the systems should be "smart" or "careful" or "considerate"' part, of course. There was never any central control/enforcement for the Internet, and time and again governments have been shown that it's next to impossible to BE that central enforcer... with the possible exception of China, though one could successfully argue that their firewall isn't working so well if hundreds of thousands of hosts on their networks can get compromised and flood out spoofed IP datagrams, eh?
"Christopher L. Morrow" wrote: <SNIP!>
There was never any central control/enforcement for the Internet, and time and again governments have been shown that it's next to impossible to BE that central enforcer...
<SNIP!> I am NOT advocating government regulation or policing of the Internet. Rather, my point is that ISPs should proactively enforce their AUPs, instead of most ISPs' current policy of reactive enforcement. Proactive enforcement of AUPs would save everyone a lot of time, money, and grief.

-- 
Jon R. Kibler
Chief Technical Officer, A.S.E.T., Inc.
On Mon, 16 Feb 2004, Jon R. Kibler wrote:
"Christopher L. Morrow" wrote: <SNIP!>
There was never any central control/enforcement for the Internet, and time and again governments have been shown that it's next to impossible to BE that central enforcer...
<SNIP!>
I am NOT advocating government regulation or policing of the Internet. Rather, my point is that ISPs should proactively enforce their AUPs, instead of most
Quite a few ISPs actually do proactively enforce them; there are some limits to what is feasible, though. I think you are seeing these limits.
At 02:11 PM 2/16/2004 -0500, Jon R. Kibler wrote:
"Christopher L. Morrow" wrote: <SNIP!>
There was never any central control/enforcement for the Internet, and time and again governments have been shown that it's next to impossible to BE that central enforcer...
<SNIP!>
I am NOT advocating government regulation or policing of the Internet. Rather, my point is that ISPs should proactively enforce their AUPs, instead of most ISPs' current policy of reactive enforcement. Proactive enforcement of AUPs would save everyone a lot of time, money, and grief.
This is like expecting the police to be proactive and prevent crimes.
I've spent many years in the industry... It comes down to this:

a) Being proactive costs money. Whether it be in the form of additional engineering/operations time or beefier routers doesn't matter. No management type will *ALLOW* the technical folks to expend resources unless there is either 1) a certifiable return on the investment or 2) a legal requirement that *ALL* service providers do the same exact thing.

b) Action by one provider will mostly benefit other providers; this provides a negative inducement.

c) Being proactive without some form of legal/legislative backing signifies risk to the management types... Who's going to complain, who's going to sue, etc.

There will *never* be concerted action by all service providers to filter ingress/egress on abused ports unless there is a legal requirement to do so. Think 'level playing field'...

Tim McKee
Timothy R. McKee wrote:
There will *never* be a concerted action by all service providers to filter ingress/egress on abused ports unless there is a legal requirement to do so. Think 'level playing field'...
Hasn't it been stated enough times previously that blindly blocking ports is irresponsible? There are ways to achieve similar, if not more accurate, results without resorting to shooting everything that moves.

Pete
Personally I don't see how ingress filters that only allow registered SMTP servers to initiate TCP connections on port 25 are irresponsible. Any user sophisticated enough to legitimately require a running SMTP server should also have the sophistication to create a DNS entry and register it with his upstream in whatever manner is required. There will never be a painless or easy solution to this problem, only a choice where we select the lesser of all evils.

Tim

-----Original Message-----
From: Petri Helenius [mailto:pete@he.iki.fi]
Sent: Monday, February 16, 2004 16:06
To: Timothy R. McKee
Cc: 'J Bacher'; nanog@merit.edu
Subject: Re: Anti-spam System Idea

Timothy R. McKee wrote:
There will *never* be a concerted action by all service providers to filter ingress/egress on abused ports unless there is a legal requirement to do so. Think 'level playing field'...
Hasn't it been stated enough times previously that blindly blocking ports is irresponsible? There are ways to achieve similar, if not more accurate, results without resorting to shooting everything that moves.

Pete
We do block port 25, as suggested earlier in the thread. Now the problem is that the spambots use our smarthost(s) to spew their garbage, and the smarthosts get blocked. There is an easy if somewhat impractical answer ;~}

  access-list network-egress deny ip any any log

Think of all the bandwidth charges this would save... Seriously though, if anyone on the list has any solutions for rate limiting SMTP in a sendmail environment, please reply off list.

Scott C. McGrath

On Mon, 16 Feb 2004, Timothy R. McKee wrote:
Personally I don't see how ingress filters that only allow registered SMTP servers to initiate TCP connections on port 25 are irresponsible.
Any user sophisticated enough to legitimately require a running SMTP server should also have the sophistication to create a dns entry and register it with his upstream in whatever manner is required.
There will never be a painless or easy solution to this problem, only a choice where we select the lesser of all evils.
Tim
-----Original Message-----
From: Petri Helenius [mailto:pete@he.iki.fi]
Sent: Monday, February 16, 2004 16:06
To: Timothy R. McKee
Cc: 'J Bacher'; nanog@merit.edu
Subject: Re: Anti-spam System Idea
Timothy R. McKee wrote:
There will *never* be a concerted action by all service providers to filter ingress/egress on abused ports unless there is a legal requirement to do so. Think 'level playing field'...
Hasn't it been stated enough times previously that blindly blocking ports is irresponsible?
There are ways to achieve similar, if not more accurate, results without resorting to shooting everything that moves.
Pete
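Concretely, the ingress filter Tim describes might look something like this on a Cisco access router; the addresses are hypothetical, with 192.0.2.25 standing in for the customer's registered MTA:

```
! Permit only the registered MTA to originate outbound SMTP;
! all other SMTP from the customer segment is logged and dropped.
access-list 125 permit tcp host 192.0.2.25 any eq smtp
access-list 125 deny   tcp any any eq smtp log
access-list 125 permit ip any any
!
interface FastEthernet0/0
 description customer-facing segment
 ip access-group 125 in
```

Note this only stops direct-to-MX spamware; mail laundered through the ISP's own smarthost still needs rate limiting or filtering there.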
Most of the responses to the anti-spam thread, and the comments on Itojun's IAB presentation in Miami about filtering, show that this community has been thoroughly infiltrated and is now as CLUELESS as the PSTN providers, and just as power hungry. The current ISPs have the opportunity to turn the Internet into the PSTN, where customers can have any service they want as long as it uses an audio interface and a rotary dial for signaling. ;)

Seriously, filtering is about attempting to prevent the customer from using their target application. Central registration is no better, as its only purpose is exercising power through extortion of additional funds for 'allowing' that application.

What people seem to be refusing to hear is the comment Phil Karn made at the mic: if you insist on restricting the service to a small set of 'approved' applications, people will simply encapsulate what they really want to do in the approved service, and you will lose visibility. For any who doubt this, revisit how the Internet was deployed and grew despite the limitations of the PSTN interface & offerings.

The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models.

Tony
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Timothy R. McKee
Sent: Monday, February 16, 2004 1:19 PM
To: 'Petri Helenius'
Cc: 'J Bacher'; nanog@merit.edu
Subject: RE: Anti-spam System Idea
Personally I don't see where ingress filters that only allow registered SMTP servers to initiate TCP connections on port 25 is irresponsible.
Any user sophisticated enough to legitimately require a running SMTP server should also have the sophistication to create a dns entry and register it with his upstream in whatever manner is required.
There will never be a painless or easy solution to this problem, only a choice where we select the lesser of all evils.
Tim
-----Original Message-----
From: Petri Helenius [mailto:pete@he.iki.fi]
Sent: Monday, February 16, 2004 16:06
To: Timothy R. McKee
Cc: 'J Bacher'; nanog@merit.edu
Subject: Re: Anti-spam System Idea
Timothy R. McKee wrote:
There will *never* be a concerted action by all service providers to filter ingress/egress on abused ports unless there is a legal requirement to do so. Think 'level playing field'...
Hasn't it been stated enough times previously that blindly blocking ports is irresponsible?
There are ways to achieve similar, if not more accurate, results without resorting to shooting everything that moves.
Pete
In message <20040217201751.5B25F5DDEA@segue.merit.edu>, "Tony Hain" writes:
The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models.
Thank you. You've got it exactly right. --Steve Bellovin, http://www.research.att.com/~smb
In message <20040217201751.5B25F5DDEA@segue.merit.edu>, "Tony Hain" writes:
The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models.
Thank you. You've got it exactly right.
Thank you Tony, i was trying to comment but you made the exact point i was trying to make. itojun@IAB hat on
In message <20040217201751.5B25F5DDEA@segue.merit.edu>, "Tony Hain" writes:
The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models.
Thank you. You've got it exactly right.
--Steve Bellovin, http://www.research.att.com/~smb
I also agree. The RFC for mail was very well designed. If people had simply stuck to the original RFC (~800 something) and managed more of their own small systems, then this spam thing just wouldn't be the problem that it has become... would it?

Cheers
Don
On Wed, 18 Feb 2004 10:08:25 +1300, Don Gould <don@bowenvale.co.nz> said:
The RFC for mail was very well designed. If people had simply stuck to the original RFC (~800 something) and managed more of their own small systems, then this spam thing just wouldn't be the problem that it has become... would it?
The problem is that literally 95% of the end hosts on the internet are running software that's not capable of being properly managed by the people who are allegedly managing them. (Yes, that was carefully phrased that way, because there are *multiple* failures in the basic model. Lots of blame to go around on this one.) Any real solution is going to have to deal with the fact that properly administered systems are in the distinct minority.
Steven M. Bellovin wrote:
In message <20040217201751.5B25F5DDEA@segue.merit.edu>, "Tony Hain" writes:
The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models.
Thank you. You've got it exactly right.
Does that then mean that there is in fact a requirement to allow into the Internet, from its connecting networks, every single packet offered? Or is it OK, just as a mind game, to deny access to packets that can clearly have no possible (not "no conceivable", "no possible") productive use--such as packets that claim to be sourced from a place distant from the network they arrived from, for example?

Aviation is another world on which I have been soured for a number of years (largely because of regulations that make no sense to me, but that screed is for another time and another place), but there used to be a useful segregation--I no longer remember the proper names. There were classes of aircraft (and of aircrews) that were pretty rigidly regulated, inspected, measured, and tested, which were allowed to be used to carry large amounts of human flesh or of other people's property for hire. Another, less stringently governed group was allowed to do things for hire (read: also in production service) not included in my first group, but generally they could go the same places and do the same things. And so on--down to an "Experimental" group (I'm pretty sure that is the right name for it) that was restricted to certain airspace, certain air fields, and certain uses, all intended to assure that the production folk, and folk who did not want to be involved at all, were not at undue risk, while allowing innovation and other really good things.
The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models.
bingo! and, if you want to see a particularly broken example, buy "internet service" from t-mobile gprs in the states, port 22 blocked, no smtp relay, ... "walled garden" mentality from the get go. randy
Randy Bush <randy@psg.com> writes:
and, if you want to see a particularly broken example, buy "internet service" from t-mobile gprs in the states, port 22 blocked, no smtp relay, ... "walled garden" mentality from the get go.
Strangely enough, the only complaints I've heard about t-mob GPRS (aside from whininess about the 800ms latency typical of GPRS) involved a protracted effort, eventually successful, to get them to understand that having *something*, *anything* in the in-addr.arpa for their address pool was a Good Plan... and that was a year or two ago. The ssh client for the Danger Sidekick is extremely popular, and I don't think it would be if the scenario you mention above were true. ---Rob
In the immortal words of Robert E. Seastrom (rs@seastrom.com):
Randy Bush <randy@psg.com> writes:
and, if you want to see a particularly broken example, buy "internet service" from t-mobile gprs in the states, port 22 blocked, no smtp relay, ... "walled garden" mentality from the get go.
The ssh client for the Danger Sidekick is extremely popular, and I don't think it would be if the scenario you mention above were true.
I suspect that Randy is referring to their "T-Zones Internet" plan, which is a bit different from their normal/real GPRS service. -n ------------------------------------------------------------<memory@blank.org> Being a Unix system administrator is like being a tech in a biological warfare laboratory, except that none of the substances are labeled consistently, any of the compounds are just as likely to kill you by themselves as they are when mixed with one another, and it is never clear what distinction is made between a catastrophic failure in the lab and a successful test in the field. (--M. Tiemann) <http://blank.org/memory/>----------------------------------------------------
and, if you want to see a particularly broken example, buy "internet service" from t-mobile gprs in the states, port 22 blocked, no smtp relay, ... "walled garden" mentality from the get go. The ssh client for the Danger Sidekick is extremely popular, and I don't think it would be if the scenario you mention above were true. I suspect that Randy is referring to their "T-Zones Internet" plan, which is a bit different from their normal/real GPRS service.
your suspicion is incorrect randy
On 17 Feb 2004, Robert E. Seastrom wrote:
Randy Bush <randy@psg.com> writes:
and, if you want to see a particularly broken example, buy "internet service" from t-mobile gprs in the states, port 22 blocked, no smtp relay, ... "walled garden" mentality from the get go.
Strangely enough, the only complaints I've heard about t-mob GPRS (aside from whininess about the 800ms latency typical of GPRS) involved a protracted effort, eventually successful, to get them to understand that having *something*, *anything* in the in-addr.arpa for their address pool was a Good Plan... and that was a year or two ago.
whilst in miami i found roaming onto t-mobile to be less than useless (att was barely any better altho i could dial on att). gprs on both networks was slow and kicked me after a couple minutes of being connected, however i could ssh albeit briefly! Steve
On Tue, 17 Feb 2004, Stephen J. Wilcox wrote:
On 17 Feb 2004, Robert E. Seastrom wrote:
Randy Bush <randy@psg.com> writes:
and, if you want to see a particularly broken example, buy "internet service" from t-mobile gprs in the states, port 22 blocked, no smtp relay, ... "walled garden" mentality from the get go.
Strangely enough, the only complaints I've heard about t-mob GPRS (aside from whininess about the 800ms latency typical of GPRS) involved a protracted effort, eventually successful, to get them to understand that having *something*, *anything* in the in-addr.arpa for their address pool was a Good Plan... and that was a year or two ago.
whilst in miami i found roaming onto t-mobile to be less than useless (att was barely any better altho i could dial on att). gprs on both networks was slow and kicked me after a couple minutes of being connected, however i could ssh albeit briefly!
dialing *99# from my laptop gets me this. from my laptop 10.153.102.41 to my work desktop:

  [joelja@joelja-vaio joelja]$ ssh -l joelja twin.uoregon.edu
  ssh: connect to host twin.uoregon.edu port 22: Network is unreachable

ok, how about smtp:

  [joelja@joelja-vaio joelja]$ telnet twin.uoregon.edu 25
  Trying 128.223.214.27...
  Connected to twin.uoregon.edu.
  Escape character is '^]'.
  220 twin.uoregon.edu ESMTP Sendmail 8.12.10/8.12.5; Tue, 17 Feb 2004 17:55:03 -0800

that works. port 80:

  [joelja@joelja-vaio joelja]$ telnet twin.uoregon.edu 80
  Trying 128.223.214.27...
  Connected to twin.uoregon.edu.
  Escape character is '^]'.

ok, another host on another network (my home):

  [joelja@joelja-vaio joelja]$ ssh -l joelja blotto.ath.cx
  ssh: connect to host blotto.ath.cx port 22: Connection timed out

I haven't had any joy working through t-mobile on why they do this, but it sucks and it makes my phone (nokia 3650) a lot less useful. Yes, blotto has an ssh on an alternate port, but that means hauling everything back through there instead of being able to just connect back to whatever host I want.

joelja

--------------------------------------------------------------------------
Joel Jaeggli                              Unix Consulting
joelja@darkwing.uoregon.edu
GPG Key Fingerprint: 5C6E 0104 BAF0 40B0 5BD3 C38B F000 35AB B67F 56B2
--On 17 February 2004 12:17 -0800 Tony Hain <alh-ietf@tndh.net> wrote: [with apologies for rearrangement]
The Internet has value because it allows arbitrary interactions where new applications can be developed and fostered. The centrally controlled model would have prevented IM, web, sip applications, etc. from ever being deployed. If there are any operators out there who still understand the value in allowing the next generation of applications to incubate, you need to push back on this tendency to limit the Internet to an 'approved' list of ports and service models. ... Seriously, filtering is about attempting to prevent the customer from using their target application. Central registration is no better, as its only purpose is exercising power through extortion of additional funds for 'allowing' that application.
Quite right in general. However:

a) Some forms of filtering, which do occasionally prevent the customer from using their target application, are in general good, as the operational (see, on topic) impact of *not* applying them tends to be worse than the disruption of applying them. Examples: source IP filtering on ingress, and BGP route filtering. Both of these are known to break harmless applications; I would suggest both are good things.

b) The real problem here is that there are TWO problems which interact. It is a specific case of the following general problem:

  * a desire for any-to-any, end-to-end connectivity using the protocol concerned => a filter-free internet
  * no authentication scheme

Applying filters based on IP address & protocol (whether it's by filtering or RBL) is in effect attempting to do authentication by IP address. We know this is not a good model. People do, however, use it because there is currently no realistic, widely deployed alternative available. Those that are currently available (e.g. SPF) are not widely deployed, and in any case are far from perfect. Whilst we have no hammer, people will keep using the screwdriver to drive in nails, and who can blame them?

Alex
In message <451737404.1077054498@[192.168.100.25]>, Alex Bligh writes:
b) The real problem here is that there are TWO problems which interact. It is a specific case of the following general problem: * A desire for any to any end to end connectivity using the protocol concerned => filter free internet * No authentication scheme
Applying filters based on IP address & protocol (whether it's by filtering or RBL) is in effect attempting to do authentication by IP address. We know this is not a good model. People do, however, use it because there currently is no realistic widely deployed alternative available. Those that are currently available (e.g. SPF) are not widely deployed, and in any case are far from perfect. Whilst we have no hammer, people will keep using the screwdriver to drive in nails, and who can blame them?
Apart from the general undesirability of using IP addresses for authentication -- and I've been writing about that for 15 years -- the problem of authentication for anti-spam is ill-defined. In fact, posing it as an authentication problem misses the point entirely. In almost all circumstances, authentication is useful for one of two things: authorization or retribution. But who says you need "authorization" to send email? Authorized by whom? On what criteria? Attempts to define "official" ISPs leads very quickly to the walled garden model -- you have to be part of the club to be able to send mail to its members, but the members themselves have to enforce good behavior by their subscribers. Retribution doesn't work very well, either -- new identities are very easy to come by, and since most spammers are already committing other illegal acts (ranging from the "products" they advertise to the systems and address blocks they hijack) they're not easily dissuaded by laws. Reasoning like this leads me to schemes that involve imposing cost. It may be financial, it may be CPU cycles, it may be any of a number of things. But it can't be identity based, except for recipient-based whitelists, and they have their own disadvantages. --Steve Bellovin, http://www.research.att.com/~smb
Reasoning like this leads me to schemes that involve imposing cost. It may be financial, it may be CPU cycles, it may be any of a number of things. But it can't be identity based, except for recipient-based whitelists, and they have their own disadvantages.
cost is good. the problem is convincing a reasonable user of smtp that everything would work much better only if everything (for instance) took longer (or cost more) to deliver. can you imagine the joy of debugging a problem-solving challenge/response system? or better yet, getting *everyone* to switch out their client? (and who would actively support "phasing out" smtp clients as they stand as long as it all still worked? keep in mind that uucp is still alive and well because it actually works.) only when the average joe hates spam enough that the cost to him justifies the effort to him can it happen. right now, i think it's mostly tech geeks who get really amped up about it. and of course, they view the cost to them as disproportionately high... s.
Steve, --On 17 February 2004 17:28 -0500 "Steven M. Bellovin" <smb@research.att.com> wrote:
In almost all circumstances, authentication is useful for one of two things: authorization or retribution. But who says you need "authorization" to send email? Authorized by whom? On what criteria?
Authorized by the recipient or some delegee thereof, using whatever algorithms and heuristics they choose. But based on data whose authenticity they can determine, which is not trivially forgeable, and which is not mixed up with the transport protocol -- i.e. in much the same way as, say, PGP or BGP.
Attempts to define "official" ISPs leads very quickly to the walled garden model -- you have to be part of the club to be able to send mail to its members, but the members themselves have to enforce good behavior by their subscribers.
I never said anything about "official" ISPs. I am attempting to draw an analogy (and note the difference) between SMTP as currently deployed and the way this same problem has been solved many times for other well-known protocols. We do not have an official BGP authorization repository, or an official PGP authorization repository. We just have people we choose to trust, and people they in turn choose to trust.

Take BGP (by which I mean eBGP) as the case in point: it seems to be the generally held opinion that a one-and-only canonical central repository for routes does not work well. The trust relationship is important, and we expect some transitivity (no pun intended) in the trust relationships to apply. Many end-users in the BGP case - i.e. stub networks - choose to "outsource" their trust to their upstream; when they don't like how their upstream manages their routes, they move provider. BGP allows me (in commonly deployed form) to run a relatively secure protocol between peers and deploy (almost) universal end-to-end connectivity for IP packets, in a manner that does not necessarily require end users to know anything about it bar "if the routing doesn't work, I move providers"; and IP packets do not flow "through" BGP, they flow in manners prescribed by BGP.

Replace BGP with "a mail authorization protocol" and "IP packets" with "emails" in the foregoing; if the statement still holds, we are getting there (without reverting to bangpaths & pathalias). Oh, and people keep mentioning settlement and how it might fix everything - people said the same about BGP (i.e. IP peering) - maybe, maybe not - the market seems to have come up with all sorts of ingenious solutions for BGP.

Alex
Alex Bligh wrote:
Steve,
--On 17 February 2004 17:28 -0500 "Steven M. Bellovin" <smb@research.att.com> wrote:
In almost all circumstances, authentication is useful for one of two things: authorization or retribution. But who says you need "authorization" to send email? Authorized by whom? On what criteria?
Authorized by the recipient or some delegate thereof, using whatever algorithms and heuristics they choose. But based on data the authenticity of which they can determine without it being trivially forgeable, and without it being mixed up with the transport protocol. I.e. in much the same way as, say, PGP or BGP.
Attempts to define "official" ISPs lead very quickly to the walled garden model -- you have to be part of the club to be able to send mail to its members, but the members themselves have to enforce good behavior by their subscribers.
I never said anything about "official" ISPs. I am attempting to draw an analogy (and note the difference) between SMTP as currently deployed, and the way this same problem has been solved many times for other well known protocols.
No it hasn't, and your comparison to BGP is very much about 'official ISPs'. For starters your examples are not anywhere close to the same scale as the SMTP 'problem', and are restricted to 'IN' players. The closest they get is the blatant attempt to restrict SMTP to the privileged club of BGP speakers.
We do not have an official BGP authorization repository. Or an official PGP authorization repository. We just have people we choose to trust, and people they in turn choose to trust.
Where they specifically form a club and agree to preclude the basement multi-homed site from participating through prefix length filters. This is exactly like the thread comments about preventing consumers from running independent servers by forced filtering and routing through the ISP server. This is not scaled trust; it is a plain and simple power grab. Central censorship is what you are promoting, but you are trying to pass it off as spam control through a provider based transitive trust structure. Either you are clueless about where you are headed, or you think the consumers won't care when you take their rights away. Either way this path is not good news for the future Internet. Tony
Take BGP (by which I mean eBGP) as the case in point: it seems to be the generally held opinion that the one-and-only canonical central repository for routes does not work well. The trust relationship is important, and we expect some transitivity (no pun intended) in the trust relationship to apply. And many end-users in the BGP case - i.e. stub networks - choose to "outsource" their trust to their upstream; when they don't like how their upstream manages their routes, they move provider. BGP allows me (in commonly deployed form) to run a relatively secure protocol between peers, and deploy (almost) universal end-to-end connectivity for IP packets in a manner that does not necessarily involve end users in needing to know anything about it bar "if the routing doesn't work, I move providers"; and IP packets do not flow "through" BGP, they flow in manners prescribed by BGP. Replace BGP by "a mail authorization protocol" and "IP packets" by "emails" in the foregoing; if the statement still holds, we are getting there (without reverting to bangpaths & pathalias). Oh, and people keep mentioning settlement and how it might fix everything - people said the same about BGP (i.e. IP peering) - maybe, maybe not - the market seems to have come up with all sorts of ingenious solutions for BGP.
Alex
--On 17 February 2004 16:19 -0800 Tony Hain <alh-ietf@tndh.net> wrote:
Where they specifically form a club and agree to preclude the basement multi-homed site from participating through prefix length filters. This is exactly like the thread comments about preventing consumers from running independent servers by forced filtering and routing through the ISP server. This is not scaled trust; it is a plain and simple power grab. Central censorship is what you are promoting, but you are trying to pass it off as spam control through a provider based transitive trust structure. Either you are clueless about where you are headed, or you think the consumers won't care when you take their rights away. Either way this path is not good news for the future Internet.
Now there was me thinking that I was in general agreeing with you. I am not promoting any sort of censorship, central or otherwise. I believe you have a perfect right to open a port 25 connection to any server, and I have a perfect right to accept or deny it. And of course vice-versa. What I am saying is that I would like, in determining whether to accept or reject your connection, to know who you are and that you act responsibly, or failing that, to know someone who is prepared to vouch for you; failing that, maybe I'll accept your email anyway, maybe I won't. I do not care what upstream either you or I have. For the avoidance of doubt, I am not talking about forcing people to send mail through their upstreams, or even suggesting that the graph of any web of trust should follow the BGP topology. Indeed the entire point I made about separating the web of trust's topology from IP addresses etc. was rather to enable end users to CHOOSE how they accept/reject mail in a manner that might have nothing to do with network topology. Personally I would be far more happy accepting mail from someone who'd been vouched for by (say) someone on this list I knew, than vouched for by their quite possibly clueless DSL provider. Of course some people will want to use their "ISP", many won't. Just like Joe User can use their upstream's DNS service, but doesn't necessarily need to. Maybe PGP would have been a better analogy as far as the scale bit goes. I think you are assigning motives to the BGP basement-multihoming problems where in general the main motive is not getting return on cost of hardware; however, I don't think the same scale constraints need apply as it is unnecessary to hold a complete table in-core at once. Alex
Clearly I misinterpreted your comments; sorry for reading other parts of the thread into your intent. The bottom line is the lack of a -scalable- trust infrastructure. You are arguing here that the technically inclined could select from a list of partial trust options and achieve 'close enough'. While that is true, Joe-sixpack wouldn't bother even if he could figure out how. Whatever trust infrastructure comes into existence for the mass market has to appear to be seamless, even if it is technically constructed from multiple parts. Steve Bellovin suggested earlier that identity-based approaches wouldn't work. While I agree having the identity won't solve the problems by itself, it does provide a key that the rest of the legal system can start to work with. False identities are common in the real world, so their existence in the electronic world is not really any different. I guess I am looking at this from the opposite side from the two of you: rather than requiring authorization to send, irrefutable identity should be used to deny receipt after proven abuse. Tony
-----Original Message----- From: Alex Bligh [mailto:alex@alex.org.uk] Sent: Tuesday, February 17, 2004 4:48 PM To: Tony Hain; 'Steven M. Bellovin' Cc: nanog@merit.edu; Alex Bligh Subject: RE: Clueless service restrictions (was RE: Anti-spam System Idea)
--On 17 February 2004 16:19 -0800 Tony Hain <alh-ietf@tndh.net> wrote:
Where they specifically form a club and agree to preclude the basement multi-homed site from participating through prefix length filters. This is exactly like the thread comments about preventing consumers from running independent servers by forced filtering and routing through the ISP server. This is not scaled trust; it is a plain and simple power grab. Central censorship is what you are promoting, but you are trying to pass it off as spam control through a provider based transitive trust structure. Either you are clueless about where you are headed, or you think the consumers won't care when you take their rights away. Either way this path is not good news for the future Internet.
Now there was me thinking that I was in general agreeing with you. I am not promoting any sort of censorship, central or otherwise. I believe you have a perfect right to open a port 25 connection to any server, and I have a perfect right to accept or deny it. And of course vice-versa. What I am saying is that I would like, in determining whether to accept or reject your connection, to know who you are and that you act responsibly, or failing that, to know someone who is prepared to vouch for you; failing that, maybe I'll accept your email anyway, maybe I won't. I do not care what upstream either you or I have. For the avoidance of doubt, I am not talking about forcing people to send mail through their upstreams, or even suggesting that the graph of any web of trust should follow the BGP topology. Indeed the entire point I made about separating the web of trust's topology from IP addresses etc. was rather to enable end users to CHOOSE how they accept/reject mail in a manner that might have nothing to do with network topology. Personally I would be far more happy accepting mail from someone who'd been vouched for by (say) someone on this list I knew, than vouched for by their quite possibly clueless DSL provider. Of course some people will want to use their "ISP", many won't. Just like Joe User can use their upstream's DNS service, but doesn't necessarily need to.
Maybe PGP would have been a better analogy as far as the scale bit goes. I think you are assigning motives to the BGP basement-multihoming problems where in general the main motive is not getting return on cost of hardware; however, I don't think the same scale constraints need apply as it is unnecessary to hold a complete table in-core at once.
Alex
On Tue, 17 Feb 2004, Alex Bligh wrote:
they in turn chose to trust. Take BGP (by which I mean eBGP) as the case in point: [...] The trust relationship is important, [...]. BGP allows me (in commonly deployed form) to run a relatively secure protocol between peers, and deploy (almost) universal end-to-end connectivity for IP packets in a manner that does not necessarily involve end users in needing to know anything about it bar "if the routing doesn't work, I move providers";
Right, but: - The world of BGP peers is a rarified one; there are, what, <20k ASes out there? Nearly all are medium-sized enterprises, institutions or organisations, or bigger. - With BGP's peer-to-peer trust relationships, prefixes get hijacked and rogue ASes collude with spammers. So, despite the small number of players, it still doesn't work, and people are working on adding stronger forms of verification of announcements to BGP. And you want to try to scale this to the millions and millions of SMTP hosts? :)
Alex
regards, -- Paul Jakma paul@clubi.ie paul@jakma.org Key ID: 64A2FF6A warning: do not ever send email to spam@dishone.st Fortune: "You shouldn't make my toaster angry." -- Household security explained in "Johnny Quest"
On Tue, 17 Feb 2004 21:48:18 +0000 Alex Bligh <alex@alex.org.uk> wrote:
a) Some forms of filtering, which do occasionally prevent the customer from using their target application, are in general good, as the operational (see, on topic) impact of *not* applying them tends to be worse than the disruption of applying them. Examples: source IP filtering on ingress, BGP route filtering. Both of these are known to break harmless applications. I would suggest both are good things.
There are some potential applications that these can break also. For example, a distributed application that sends out probes might wish to use the source IP address of a remote collector that is used to measure time delay or network path information. If Lumeta could have tunnels to a bunch of hosts, send traceroutes to various Internet places through those tunnels and have the tunneled hosts use Lumeta's IP as the source IP, they could build a pretty cool distributed peacock map. It is of course difficult to find a way to use these legitimate types of apps today without the infrastructure succumbing to attacks such as the source spoofed DoS traffic floods. John
On Tue, 17 Feb 2004, Tony Hain wrote: : Most of the responses to the anti-spam thread, and the comments to Itojun's : IAB presentation in Miami about filtering, show that this community has been : thoroughly infiltrated and is now as CLUELESS as the PSTN providers, and : just as power hungry. The current ISPs have the opportunity to turn the : Internet into the PSTN, where customers can have any service they want as : long as it uses an audio interface and a rotary dial for signaling. ;) Filtering, however, tends to be used to stave off real problems with the use of the service. POTS lines on modern switches will drop voltage to a trickle, for instance, if a device on the customer's end causes intermittent partial shorts or rapidly cycles through off-hook state. So, in making the PSTN analogy, I'd have to say that filtering an application -- based on a trigger -- would be perfectly acceptable for an ISP to do. Mind you, this assessment does not have any relevance to blanket blocking. It only addresses filtering on a trigger basis, which is performed by some residential service providers today. (I personally feel that blanket blocking particularly vulnerable things like NetBIOS, because we can't get the vendor in question to fix the glaring problems on a timely basis and work to prevent future ones, is a benefit to the Internet as a whole. But protocols that don't involve completely broken and security-risking service implementations, such as SMTP itself, don't warrant blanket blocking in my opinion.) -- -- Todd Vierling <tv@duh.org> <tv@pobox.com>
Folks, TH> If you insist on restricting the service to a small set of 'approved' TH> applications, people will simply encapsulate what they really want to do in TH> the approved service and you will lose visibility. A small elaboration: You will make life intolerable for the average user -- ie, the folks not likely to have the skills for working around artificial barriers -- but the non-average user -- ie, including the bad guys -- will encapsulate. DG> The RFC for mail was very well designed. If people simply stuck to the DG> original RFC (~800 something) and managed more of their own small systems DG> then this spam thing just wouldn't be the problem that it has become... DG> would it? Well, yes, but no. (I'm finding that a popular response today, because email and spam are so messy.) Email does what it was intended to do pretty well. As with multiaddressing (multihoming and mobility) there is a basic question about the architectural choice for making major changes. I keep wanting the enhancements to stay out of the core. For both areas of work. So, the original Internet mail service does nothing to prevent spoofing or spamming. I think most folks thought that content signing (eg, pgp or s/mime) would be a sufficient solution for authentication and I'd guess we just plain missed the likelihood of spamming. However I still tend to believe that having seen the problem earlier should not necessarily have made the core mail facilities different. In all likelihood, some form of "message" authentication is needed, possibly along the lines of DomainKeys or Teos. My sense is that the technology for this is quite tractable and requires no changes to the email infrastructure. (On the other hand, we need to pay very close attention to the failure to cross the chasm, for pgp and s/mime.) I think that the "registration" oriented authentication mechanisms (spf, rmx, lmap, etc.)
can be useful only when the authenticator is the hosting network provider, rather than a message author. SMB> In almost all circumstances, authentication is useful for one of two SMB> things: authorization or retribution. But who says you need SMB> "authorization" to send email? Authorized by whom? On what criteria? This certainly goes to a core set of issues. The fact that something is authenticated does not mean it is not spam. On the other hand, authentication is a good thing unto itself. On the other hand, making it a pre-requisite for _all_ email activity is very much a _bad_ thing. Authenticating mail will help deal with two problems: forgery and accountability. Forgery of the From field is now a major problem. It has always been a major threat. So, finding a tractable way of preventing or detecting forgery is a worthy goal unto itself. It does not "solve" spam. Rather I think of joe-jobbing and phishing as doing a really spectacular job of market segment development. It has made clear to target customers why they need a solution. Accountability just means that there is a good basis for tracking down problematic sources. That, too, is a good thing. d/ -- Dave Crocker <dcrocker-at-brandenburg-dot-com> Brandenburg InternetWorking <www.brandenburg.com> Sunnyvale, CA USA <tel:+1.408.246.8253>
Dave Crocker wrote:
Folks,
TH> If you insist on restricting the service to a small set of 'approved' TH> applications, people will simply encapsulate what they really want to do in TH> the approved service and you will lose visibility.
A small elaboration:
You will make life intolerable for the average user -- ie, the folks not likely to have the skills for working around artificial barriers -- but the non-average user -- ie, including the bad guys -- will encapsulate.
The bad guys will know they are encapsulating. Joe-sixpack will just realize that application xyz doesn't work unless he installs the 'Internet-fixer'(R) tool that his buddy told him about. He has no idea what it does, and he doesn't really care as long as the app works.
DG> The RFC for mail was very well designed. If people simply stuck to the
DG> original RFC (~800 something) and managed more of their own small systems
DG> then this spam thing just wouldn't be the problem that it has become...
DG> would it?
Well, yes, but no.
(I'm finding that a popular response today, because email and spam are so messy.)
Email does what it was intended to do pretty well. As with multiaddressing (multihoming and mobility) there is a basic question about the architectural choice for making major changes. I keep wanting the enhancements to stay out of the core. For both areas of work.
So, the original Internet mail service does nothing to prevent spoofing or spamming. I think most folks thought that content signing (eg, pgp or s/mime) would be a sufficient solution for authentication and I'd guess we just plain missed the likelihood of spamming. However I still tend to believe that having seen the problem earlier should not necessarily have made the core mail facilities different.
In all likelihood, some form of "message" authentication is needed, possibly along the lines of DomainKeys or Teos. My sense is that the technology for this is quite tractable and requires no changes to the email infrastructure. (On the other hand, we need to pay very close attention to the failure to cross the chasm, for pgp and s/mime.)
The oversight is not recognizing the leap from technical space to political space. All the engineering in the world will not solve fundamental political trust issues. At one point in my past, the motto for my group was 'technical solutions to political problems r us', knowing full well there are no technical solutions to political problems. The best the technical side can do is window dress so the politicians can claim victory.
I think that the "registration" oriented authentication mechanisms (spf, rmx, lmap, etc.) can be useful only when the authenticator is the hosting network provider, rather than a message author.
SMB> In almost all circumstances, authentication is useful for one of two SMB> things: authorization or retribution. But who says you need SMB> "authorization" to send email? Authorized by whom? On what criteria?
This certainly goes to a core set of issues. The fact that something is authenticated does not mean it is not spam. On the other hand, authentication is a good thing unto itself. On the other hand, making it a pre-requisite for _all_ email activity is very much a _bad_ thing.
I disagree that full authentication is a bad thing. What possible down side is there to being able to track the source of every message?
Authenticating mail will help deal with two problems: forgery and accountability. Forgery of the From field is now a major problem. It has always been a major threat. So, finding a tractable way of preventing or detecting forgery is a worthy goal unto itself. It does not "solve" spam. Rather I think of joe-jobbing and phishing as doing a really spectacular job of market segment development. It has made clear to target customers why they need a solution.
Accountability just means that there is a good basis for tracking down problematic sources. That, too, is a good thing.
So why not make it mandatory for all messages? Just to be clear, I am of the mindset that a new, parallel messaging system is needed. It does not have to be a complete ground-up redesign, but requiring a separate MUA & port for transport would allow people to opt out of the legacy system at their own pace. Assuming a completely separate system, why not make auth mandatory? Tony
d/
-- Dave Crocker <dcrocker-at-brandenburg-dot-com> Brandenburg InternetWorking <www.brandenburg.com> Sunnyvale, CA USA <tel:+1.408.246.8253>
I think that the "registration" oriented authentication mechanisms (spf, rmx, lmap, etc.) can be useful only when the authenticator is the hosting network provider, rather than a message author.
I think widespread use of SPF will gut the major sources of spam. The problem with spam proxies will be diminished to next to nil. Then, of course, the spammers will find other ways... But the use of domain "authentication" such as SPF will/can be a very powerful tool. Rgds, -GSH
Guðbjörn,
I think that the "registration" oriented authentication mechanisms (spf, rmx, lmap, etc.) can be useful only when the authenticator is the hosting network provider, rather than a message author.
GSH> I think widespread use of SPF will gut the major sources of spam. Well, it will gut a great deal of email mobility and third-party services. It will probably have no meaningful effect on actual spam. For example, as you note: GSH> Then, of course, the spammers will find other ways... That means that _at best_ MTA author registration schemes, like SPF, are tactical responses. The problem is that they cause a _strategic_ change to the email semantic model; and the scaling effect of its administration is really quite terrible. Pretty massive effect, for such a short-term benefit. Not to mention that, on the Internet, it is never possible to deploy anything in a short-term time-frame. And, oh by the way, all SPF tries to do is to authenticate the From field. Forgive me for not being reassured that wide use of SPF will merely mean that the spam I get will have a valid From field. d/ -- Dave Crocker <dcrocker-at-brandenburg-dot-com> Brandenburg InternetWorking <www.brandenburg.com> Sunnyvale, CA USA <tel:+1.408.246.8253>
I think that the "registration" oriented authentication mechanisms (spf, rmx, lmap, etc.) can be useful only when the authenticator is the hosting network provider, rather than a message author.
GSH> I think widespread use of SPF will gut the major sources of spam.
Well, it will gut a great deal of email mobility and third-party services.
It will mean you can no longer use just about any SMTP server you like. But why can't you use your ISP's submission server using SMTP AUTH? I do not see that this adjustment to roaming users is serious; there are plenty of ways your organization/ISP can continue to provide email to its users and use SPF.
It will probably have no meaningful effect on actual spam.
Oh, it will.
For example, as you note: GSH> Then, of course, the spammers will find other ways...
And we will deal with those ways as well. If not, then let's roll over right now.
That means that _at best_ MTA author registration schemes, like SPF, are tactical responses.
There are forums for discussing SMTP replacement; SPF is not meant to be a replacement for SMTP but an augmentation. Yes, that's tactical.
The problem is that they cause a _strategic_ change to the email semantic model; and the scaling effect of its administration is really quite terrible.
I don't see that. This is really no different from when just about everybody had to secure their open relays or stop using email, or secure their proxies or go under, or... It's not strategic in and of itself. Its effect on mail server management and efficiency is probably more than using blacklists (depends on how many you use today); it will mean some DNS administration, but hey! we are in the IT business. This is to be expected; we don't expect stagnation, do we?
Pretty massive effect, for such a short-term benefit.
It's pretty straightforward. There are details to it, especially on the DNS records, but other than that, it's probably less massive than blacklists.
Not to mention that, on the Internet, it is never possible to deploy anything in a short-term time-frame.
Not everywhere. It will take some more time than closing open relays, perhaps.
And, oh by the way, all SPF tries to do is to authenticate the From field.
Not quite. It only "authenticates" the domain part of the From field.
Forgive me for not being reassured that wide use of SPF will merely mean that the spam I get will have a valid From field.
There are estimates that 40-70% of spam today is from spam proxies. If a spam proxy sends mail to a SPF enabled MTA with a MAIL FROM where the domain has SPF records then the MTA can easily slice and dice at will. That's pretty drastic. If it only puts spammers back to the drawing board for a while then it's quite worth it, because their old techniques are becoming very inefficient. Rgds, -GSH
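To make the mechanics under debate concrete, here is a toy sketch of an SPF-style check. The domain, networks, and result names below are invented stand-ins for what SPF publishes in DNS; a real check would query TXT records and handle the full record syntax:

```python
import ipaddress

# Hypothetical published policy for example.com: the networks allowed to
# send mail with a MAIL FROM in that domain (stands in for DNS TXT data).
PUBLISHED_SENDERS = {
    "example.com": ["192.0.2.0/24", "198.51.100.17/32"],
}

def mail_from_domain(mail_from: str) -> str:
    # SPF-style checks act only on the domain part of MAIL FROM.
    return mail_from.rsplit("@", 1)[-1].lower()

def check_sender(connecting_ip: str, mail_from: str) -> str:
    """Return 'pass', 'fail', or 'none' (no policy published)."""
    nets = PUBLISHED_SENDERS.get(mail_from_domain(mail_from))
    if nets is None:
        return "none"
    ip = ipaddress.ip_address(connecting_ip)
    if any(ip in ipaddress.ip_network(net) for net in nets):
        return "pass"
    return "fail"  # e.g. a spam proxy forging the domain
```

A spam proxy at 203.0.113.9 forging MAIL FROM user@example.com gets "fail", while a domain that publishes nothing is unaffected; this also illustrates why such a check "authenticates" only the domain part of the address, not the user.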
On Sun, 15 Feb 2004, Jon R. Kibler wrote:
OK, I was sloppy in my wording... I should have said that we block published dynamic netblks, including dial, cable, xDSL, and wireless. That still catches something less than 5% of spam originating from DHCP connections.
Then it sounds like you have an incomplete list of dynamic network blocks. Why do you think you will be any more successful convincing more than 5% of ISPs to block ports, when you haven't been successful convincing them to give you more than 5% of their dynamic address ranges?
Also, the AUPs of most ISPs (at least those that serve the SE U.S.) prohibit the running of any type of server on a DHCP connection. I know of at least one that regularly drops service to any system found running web, mail, IRC, proxy, ftp, telnet, or any of a dozen other different servers on any DHCP connection.
"Most" ISPs prohibit any type of server on a DHCP connection? Some cable providers do this due to some limitations in their network architecture, but I would be surprised if "most" (i.e. more than 50%) ISPs prohibit servers. Why do you think DynDNS type services are so popular? So people can run servers on DHCP addresses. Peer-to-Peer is a very popular server used on mostly dynamic addresses. Do you really want a read-only Internet, where only the Fortune 1000 are permitted to operate servers and everyone else must be a client?
Blocking port 25 blocks the ability of all MTA's to send any type of mail. "Non-legitimate" is a determination best made by the two parties involved in the communication.
Why should hundreds of thousands of MTAs each have to make the determination that a given system wishing to make a connection is running spamware on a hacked system when that user's ISP could simply block that user and save everyone else the grief?
How should an ISP decide whether or not it is "legitimate" for the user to run an MTA? If they pay an extra $10 a month, they can legitimately run a server? Or are you are proposing blocking all access, regardless of its legitimacy? The fact of the matter is system admins need to protect their own systems because you never know if the remote system making the connection has been hacked regardless how the IP address was assigned. Blocking dynamic IP addresses doesn't make you safer if you fail to protect your own computers.
To me, the approach you advocate is something like saying "do away with any centralized law enforcement, force everyone to carry guns, and if anyone suspects that someone else is committing a crime, they are obliged to shoot them." I believe that blocking spam at its source is far easier than blocking it at every possible destination. The fewer parties involved in blocking the spam, the higher the probability that the spam will be successfully blocked.
In reality there are fewer destinations than sources. Then let's centralize it completely. The FCC will license ISPs and set the regulations they must enforce. Ma Bell will be reformed as the single telecommunications provider. Everyone must use the MTA's operated by Ma Bell. Will that stop spam?
On Sun, 15 Feb 2004, Sean Donelan wrote:
"Most" ISPs prohibit any type of server on a DHCP connection?
Some cable providers do this due to some limitations in their network architecture, but I would be surprised if "most" (i.e. more than 50%) ISPs prohibit servers. Why do you think DynDNS type services are so popular? So people can run servers on DHCP addresses. Peer-to-Peer is a very popular server used on mostly dynamic addresses.
Just because they're using our services doesn't mean their AUP doesn't say they're not supposed to. Charter and Comcast, two pretty good-sized cable MSOs, at least up here in the northeast, both prohibit not only any type of server, but the connection of any LAN/WAN that they don't operate. I'm pretty sure Verizon DSL prohibits any servers, though I don't think they explicitly ban LANs. (I guess that means I've violated the AUP of every provider I've used at home. Whoops.) Forget about servers being prohibited, their AUPs even prohibit the use of those ever-so-popular NAT routers Linksys, D-Link, Netgear, and friends like to spew out. Does that stop people from buying and using them, though? Hell no. I think the statement that most ISPs, oriented towards home use, anyway, prohibit servers is accurate. However, it isn't necessarily /relevant/, because I don't think many of them actively enforce that policy. Tim Wilde -- Tim Wilde twilde@dyndns.org Systems Administrator Dynamic Network Services, Inc. http://www.dyndns.org/
This topic has been consistently ruled off-topic for NANOG by Merit's staff. Please respect those of us who don't want to hear about spam here. For those interested, the IRTF's ASRG is actively studying anti-spam techniques and I'm sure they'd be interested in hearing all of your ideas (after you verify they haven't been tried before). http://www.irtf.org/charters/asrg.html S Stephen Sprunk "Stupid people surround themselves with smart CCIE #3723 people. Smart people surround themselves with K5SSS smart people who disagree with them." --Aaron Sorkin ----- Original Message ----- From: "Tim Thorpe" <tim@cleanyourdirt.com> To: <nanog@merit.edu> Sent: Saturday, 14 February, 2004 02:30 Subject: Anti-spam System Idea
I wanted to run this past you to see what you thought of it and get some feedback on the pros and cons of this type of system.
I have been thinking recently about the ever-increasing amount of spam that is flooding the internet, clogging mail servers, and in general pissing us all off.
I think it's time to do something about it. Very few systems are effective at blocking spam at the server level, and the ones that exist have a less than stellar reputation and are not very effective on top of that.
95% of spam comes through relays, and its headers are forged; tracking back an e-mail that you've received is becoming next to impossible. It's also very time-consuming, and why waste your time on scumbags?
my idea; a DC network that actively scans for active relays and tests them, it compiles a list on a daily basis of compromised IP addresses (or even addresses that are willingly allowing the relay) making this list freely available to ISPs via a secure and tracked site.
To test a relay you actually have to send mail through it. I have a solution for this as well: the clients are set to e-mail a certain address that changes daily, and the e-mails are signed with a cryptographic key to verify authenticity. That way spammers can't abuse the address; if a message doesn't carry the key, it gets canned.
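A minimal sketch of what that signing scheme could look like, assuming a shared secret distributed to the trusted DC clients (the secret value, address domain, and function names here are all hypothetical illustrations, not part of the proposal):

```python
import datetime
import hashlib
import hmac

# Hypothetical shared secret held by the DC clients and the collector;
# a real deployment would need secure key distribution and rotation.
SECRET = b"example-network-secret"

def daily_address(secret: bytes, day: datetime.date) -> str:
    """Derive the day's probe address from the secret and the date,
    so the target address changes every day."""
    digest = hmac.new(secret, day.isoformat().encode(), hashlib.sha256).hexdigest()
    return f"probe-{digest[:16]}@example.net"

def sign_probe(secret: bytes, body: bytes) -> str:
    """Client side: sign the probe body before relaying it."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_probe(secret: bytes, body: bytes, signature: str) -> bool:
    """Collector side: any mail whose signature fails gets canned."""
    expected = sign_probe(secret, body)
    return hmac.compare_digest(expected, signature)
```

A spammer who discovers the day's address still can't get mail accepted without the secret, and yesterday's address is worthless tomorrow.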
We would work with ISPs to correct issues on their networks: help them blacklist IPs on their network that are operating as open relays and redirect those users to a page that alerts them to the compromise and offers solutions to fix the problem. The only way people are going to become aware of security issues such as this is if something happens that wakes them up; if they can't access a percentage of the web, it would hopefully clue them in.

Because these scans only need to take place once per IP per day, and the tests are spread over a large distribution of computers, I don't see network load becoming a big issue -- no bigger than it currently is.
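One way the once-per-IP-per-day property could be enforced is to assign each address deterministically to a single client for a given day; this is only a sketch, and the client count and function name are assumptions for illustration:

```python
import datetime
import hashlib
import ipaddress

# Hypothetical client count; in practice this would come from the
# coordination server's roster of active DC clients.
NUM_CLIENTS = 1000

def assigned_client(ip: str, day: datetime.date,
                    num_clients: int = NUM_CLIENTS) -> int:
    """Map each IP to exactly one client per day. Every client can
    compute the assignment independently, so each address is tested
    once daily and the scan load spreads evenly across the network."""
    packed = ipaddress.ip_address(ip).packed + day.isoformat().encode()
    digest = hashlib.sha256(packed).digest()
    return int.from_bytes(digest[:4], "big") % num_clients
```

Because the mapping is a pure function of the IP and the date, no central scheduler has to hand out work item by item, and duplicate tests of the same relay on the same day are avoided by construction.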
The only way to fight spammers is to squeeze them out of hiding, and that's what I hope this system would be designed to do.

I do not have the coding knowledge to do this, so I will need coders; I do have the PR skills to work with ISPs. I am also working with my congresswoman to pave the way for legal clearance for this program.

I would greatly appreciate your input on this and anything I may have overlooked. I would also like to know if this is a DC program you would run. A lot of people question the practical applications of DC; although we know differently, this project would show them what DC can do for them and perhaps wake them up to other DC projects.
On Sun, 15 Feb 2004 22:00:08 CST, Stephen Sprunk said:
For those interested, the IRTF's ASRG is actively studying anti-spam techniques and I'm sure they'd be interested in hearing all of your ideas (after you verify they haven't been tried before). http://www.irtf.org/charters/asrg.html
Also read: http://www.rhyolite.com/anti-spam/you-might-be.html

It's quite vicious but true: if you have re-invented one of the schemes mentioned in there, it probably won't be well received unless you include with it *both* of the following:

a) An indication that you've read and understood the literature describing why the idea was shot down the last time it was suggested.

b) A *new* way of dealing with the issue that eliminates the difficulty.
Seeing as this system would directly impact network operators (the NO in naNOg), I must disagree. If Merit's staff feels otherwise, then I sincerely apologize and will of course move the discussion; I will, however, keep the out-of-context chatter to a minimum.

Tim Thorpe
opusnet
Tim Thorpe wrote:
Seeing as this system would directly impact network operators (the NO in naNOg) I must disagree.
Go right ahead and disagree, however: http://www.nanog.org/listfaq.html
If Merit's staff feels otherwise then I sincerely apologize and will of course move the discussion, I will limit the out of context chatter to a minimum however.
Merit's staff DOES feel otherwise; it's just been the weekend and all, or you'd have heard from Susan by now. Oh, and PUH-LEEZE -- trim your posts. I deleted a bazillion lines of unnecessary cruft from this.
On Sun, 15 Feb 2004 22:00:08 -0600 "Stephen Sprunk" <stephen@sprunk.org> wrote:
For those interested, the IRTF's ASRG is actively studying anti-spam techniques and I'm sure they'd be interested in hearing all of your ideas (after you verify they haven't been tried before). http://www.irtf.org/charters/asrg.html
A few other good places for this sort of discussion:

Spam-L                         http://www.claws-and-paws.com/spam-l/
Spamtools                      http://www.abuse.net/spamtools.html
RIPE anti-spam working group   http://www.ripe.net/ripe/wg/anti-spam/

Mark A Jones
Systems Administrator
netINS, Inc. http://netins.net
participants (36)
- Alex Bligh
- Andy Dills
- Christopher L. Morrow
- Daniel Reed
- Dave Crocker
- Don Gould
- Etaoin Shrdlu
- Guðbjörn S. Hreinsson
- itojun@itojun.org
- J Bacher
- jlewis@lewis.org
- Joel Jaeggli
- John Kristoff
- Jon R. Kibler
- Laurence F. Sheldon, Jr.
- Mark Jones
- Michael Wiacek
- Nathan J. Mehl
- Paul Jakma
- Petri Helenius
- Randy Bush
- Robert E. Seastrom
- Scott McGrath
- Sean Donelan
- Stephen J. Wilcox
- Stephen Sprunk
- Steve Uurtamo
- Steven Champeon
- Steven M. Bellovin
- Tim Thorpe
- Tim Wilde
- Timothy R. McKee
- Todd Vierling
- Tony Hain
- Valdis.Kletnieks@vt.edu
- william(at)elan.net