On April 13, 2008 at 14:24 jgreco@ns.sol.net (Joe Greco) wrote:
> I would have thought it was obvious, but to see this sort of enlightened ignorance(*) suggests that it isn't: The current methods of spam filtering require a certain level of opaqueness.
Indeed, that must be the problem. But then you proceed to suggest:
> So, on one hand, we have the "filtering by heuristics," which require a level of opaqueness, because if you respond "567 BODY contained www.sex.com, mail blocked" to their mail, you have given the spammer feedback to get around the spam.
Giving the spammer feedback? In the first place, I think s/he/it knows what domain they're using if they're following bounces at all. Perhaps they have to guess whether it was the sender address, a body string, or the sending MTA that triggered the block, but really that's about it. With the sender often randomly generated and the sending MTA deducible by checking whether multiple sources were blocked on the same email, my arithmetic says you're down to about two candidates, plus or minus.

But even that is naive, since spammers of the sort anyone should bother worrying about use massive bot armies numbering O(million) and, generally and of necessity, use fire-and-forget sending techniques. Perhaps you have no conception of the amount of spam the major offenders send out. It's on the order of 100B/day, at least. That's why you and your Aunt Bessie and everyone on this list get the exact same spam: it's being sent out in the hundreds of billions. Per day.

Now, on what exactly do you base your interesting theory that spammers analyze return codes to improve their techniques for getting through your own specific (not general) mail blocks? Sure, they do some Bayesian scrambling and so forth, but that's general and will work on zillions of sites running SpamAssassin or similar, so it's worthwhile to them. But on what, exactly, do you base your theory that if a site returned "567 BODY contained www.sex.com", spammers in general, and in numbers worthy of concern, would use that information to tune their efforts? An existence proof won't do; one example is not sufficient. It has to be evidence worthy of concern given O(100 billion) spams per day, overwhelmingly sent by botnets, which are the actual core of the actual problem. I say you're guessing, and not very convincingly either.
> So you have two opaque components to filtering. And senders are deliberately left guessing - is the problem REALLY that a mailbox is full, or am I getting greylisted in some odd manner?
Except that most sites return some indication that a mailbox is full. It's just, unfortunately, in the realm of heuristics. Look into popular mailing list packages (Mailman, Majordomo) and you'll find modules for classifying bounces heuristically and removing addresses automatically (or not, if the failure looks temporary, e.g., mailbox full).
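To make the heuristic flavor of this concrete, here is a minimal sketch of the kind of bounce classification such list managers do. The patterns and the unsubscribe threshold are illustrative assumptions of mine, not any package's actual rule set:

```python
import re

# Illustrative patterns only; real list managers ship far larger sets.
TEMPORARY = [
    r"mailbox (is )?full",
    r"quota exceeded",
    r"try(ing)? again later",
]
PERMANENT = [
    r"user unknown",
    r"no such user",
    r"account (has been )?disabled",
]

def classify_bounce(text: str) -> str:
    """Return 'temporary', 'permanent', or 'unknown' for a bounce body."""
    lowered = text.lower()
    for pat in TEMPORARY:
        if re.search(pat, lowered):
            return "temporary"
    for pat in PERMANENT:
        if re.search(pat, lowered):
            return "permanent"
    # An embedded SMTP reply code is another clue: 4xx soft, 5xx hard.
    m = re.search(r"\b([45])\d\d\b", lowered)
    if m:
        return "temporary" if m.group(1) == "4" else "permanent"
    return "unknown"

def should_unsubscribe(history: list, limit: int = 3) -> bool:
    """Drop a subscriber only after repeated permanent failures;
    'mailbox full' merely accumulates as a soft failure."""
    return history.count("permanent") >= limit
```

The point is precisely the one above: absent standardized codes, the receiver of a bounce is reduced to pattern-matching free-form text and guessing.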
> Filtering stinks. It is resource-intensive, time-consuming, error-prone, and pretty much an example of something that is desperately flagging "the current e-mail system is failing."
And standardized return codes (for example) will make this worse, how?
> You want to define standards? Let's define some standard for establishing permission to mail. If we could solve the permission problem, then the filtering wouldn't be such a problem, because there wouldn't need to be as much (or maybe even any). As a user, I want a way to unambiguously allow a specific sender to send me things, "spam" filtering be damned. I also want a way to retract that permission, and have the mail flow from that sender (or any of their "affiliates") to stop.
Sure, but this is pie in the sky. For starters you'd have to get the spammers to conform, which would almost certainly take a design that was very difficult not to conform to; it would have to be technologically involuntary. Whitelists are the closest thing I can think of, but they haven't been very popular, and for good reasons. Anyhow, the entire planet awaits your design. A set of standardized return codes was carefully chosen by me as something which could be adopted (other than the standards process itself) practically overnight and with virtually zero backwards-compatibility problems (oh, there'll always be an exception).
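There is even a partial precedent already on the books: RFC 3463 enhanced status codes, where a reply like "552 5.2.2 Mailbox full" carries a machine-readable class (2/4/5), subject, and detail alongside the free-form text. A sketch of what a receiver gains from that structure (the subject table here is a small assumed subset of the registry, for illustration):

```python
import re

# Assumed subset of RFC 3463 subject categories, for illustration only.
SUBJECTS = {"1": "addressing", "2": "mailbox", "4": "network/routing",
            "7": "security/policy"}

def parse_reply(reply: str):
    """Split an SMTP reply carrying an enhanced status code into its
    standardized pieces; return None if no such code is present."""
    m = re.match(r"(\d{3})[ -]([245])\.(\d+)\.(\d+)\s*(.*)", reply)
    if not m:
        return None
    basic, cls, subject, detail, text = m.groups()
    return {
        "basic_code": basic,                        # classic 3-digit code
        "transient": cls == "4",                    # 4.x.x = try again later
        "subject": SUBJECTS.get(subject, "other"),  # standardized category
        "text": text,                               # free-form, for humans
    }

info = parse_reply("552 5.2.2 Mailbox full")  # subject "mailbox", not transient
```

No heuristics, no guessing whether a mailbox is full or you've been greylisted: the code says so. The problem is adoption and consistency, not design, which is why it could happen "practically overnight."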
> Right now I've got a solution that allows me to do that, but it requires a significant paradigm change, away from single-e-mail-address.
There's nothing new in disposable, single-use addresses (or credit card numbers, for that matter, in a different realm), if that's what you mean; but if you have something more clever, the world (i.e., the big round thing you see when you look down) is your oyster.
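For reference, the standard disposable-address trick can be done statelessly with an HMAC tag per sender. Everything here (the secret, the domain, the revocation set) is a hypothetical sketch of the well-known scheme, not a description of anyone's deployed system:

```python
import hmac
import hashlib

SECRET = b"site-local secret"   # hypothetical; kept by the receiving site
REVOKED = set()                 # tags whose permission has been retracted

def grant(user: str, sender: str) -> str:
    """Mint an address that only this sender is expected to use."""
    tag = hmac.new(SECRET, f"{user}:{sender}".encode(),
                   hashlib.sha256).hexdigest()[:10]
    return f"{user}+{tag}@example.com"

def accept(rcpt: str, sender: str) -> bool:
    """Accept mail only if the tag matches this sender and isn't revoked."""
    local = rcpt.split("@")[0]
    if "+" not in local:
        return False
    user, tag = local.split("+", 1)
    expected = hmac.new(SECRET, f"{user}:{sender}".encode(),
                        hashlib.sha256).hexdigest()[:10]
    return hmac.compare_digest(tag, expected) and tag not in REVOKED

addr = grant("barry", "list@example.org")
assert accept(addr, "list@example.org")        # granted sender gets through
assert not accept(addr, "spammer@example.net") # tag doesn't verify for others
```

Retracting permission is just adding the tag to the revoked set; no per-sender state is needed until the moment of revocation. Which is exactly why this is old news: the hard part was never the mechanism, it's getting senders, and spammers, to play along.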
> Addressing "standards" of the sort you suggest is relatively meaningless in the bigger picture, I think. Nice, but not that important.
Well, first you'd have to show that you actually have a view of the problem which supports such a judgment. At any rate, you're quibbling with the example, as I forewarned. Standardizing receiving-MTA failure codes is, I suspect, more useful than you give it credit for. It would be some progress at little to no cost in the large. It deals less with spam filtering and more with effective MTA-to-MTA operation. At least it sticks to the realm of improving standards in a way that can actually be accomplished. I don't see how I could have given a better example without a lot of hand-waving and vagueness.

--
        -Barry Shein

The World              | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD        | Login: Nationwide
Software Tool & Die    | Public Access Internet     | SINCE 1989     *oo*