Once you get into that small club, it's just as hard to get kicked out, and unfortunately that means that if abuse, UCE, etc. is coming from those hosts, it has an even higher chance of hitting your inbox. So while in theory it might work the way you're thinking, in practice it hasn't, because once you are in that club, a lot of the financial motivation to prevent abuse of your service - that is, inbox deliverability for your client base - goes away.
+1
Likewise, we're at a point now where if a criminal phish or virus comes from one of the largest few email hosters, and you provide them the offending emails with full headers, the accounts do NOT get shut down. They literally don't think this is their problem. Similarly, data storage sites from the largest providers (Google Drive, OneDrive, etc.) will often host malware for weeks or months without it being taken down - or the malware at least persists for many days after being reported. The same is often true for their redirectors.
What is frustrating is that the long-standing industry standard of "you're responsible for both what you send and what you host - even if the malware wasn't intended" - seems to be lost.
Likewise, back in the spring months of 2018, Google's "goo[.]gl" shortener went crazy for a few months. It was being MASSIVELY abused by spammers and used as an end run around URI DNSBLs (SURBL, URIBL, ivmURI, DBL). I collected 15K examples of abused shorteners that were still "live" and sent those to Google. At the time I sent them, only about 500 of that 15K had been shut down. What was infuriating was that 80% of these 15K shorteners were pointing to only 12 spammers' domains. These should have been trivial to prevent!
The OTHER infuriating thing was that the INITIAL response from my contacts at Google was (I paraphrase) "other spam filters should just follow the redirect, and block these spams based on the URI it redirects to" - WOW! I sent them a very stern email about that. (And for comparison, abused Bitly shorteners were mostly getting shut down within 2 hours - so "everyone does it" was NOT a decent excuse!)
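For anyone curious what Google was suggesting the rest of us do, here is a minimal sketch of the "follow the redirect, then check the landing domain against a URI DNSBL" approach. This is purely illustrative: it uses only Python's standard library, it uses SURBL's public "multi" zone and its 127.0.0.x listing convention as the example list, and the goo.gl link shown is hypothetical. A real filter would normalize to the registered (base) domain and respect the DNSBL's query limits - and none of this excuses the shortener operator from policing the abuse on their own end.

#!/usr/bin/env python3
# Minimal sketch of "follow the redirect, then check the destination
# against a URI DNSBL". Uses only the standard library. The SURBL
# "multi" zone is used as the example list; swap in whatever URI
# DNSBL (or local mirror) you actually query.
import socket
import urllib.parse
import urllib.request

def final_url(short_url, timeout=5):
    """Follow HTTP redirects and return the final landing URL."""
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.geturl()

def listed_in_uribl(domain, zone="multi.surbl.org"):
    """True if <domain>.<zone> resolves, i.e. the domain is listed."""
    try:
        socket.gethostbyname(domain + "." + zone)
        return True   # any answer (127.0.0.x) means "listed"
    except socket.gaierror:
        return False  # NXDOMAIN means "not listed"

if __name__ == "__main__":
    # Hypothetical shortener link - goo.gl no longer accepts new ones.
    landing = final_url("https://goo.gl/EXAMPLE")
    # For simplicity this checks the full hostname; production filters
    # normalize to the registered (base) domain before querying.
    host = urllib.parse.urlparse(landing).hostname
    print(host, "-> listed" if listed_in_uribl(host) else "-> not listed")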
Like I said - the long-standing industry standard of "you're responsible for both what you send and what you host - even if the malware wasn't intended" - seems to be lost on some of these large providers.
Thankfully, this had a happy ending. After some "tough love", Google replied back and said (I paraphrase), "we were planning on shutting that down - or at least shutting down the ability to add new ones - and due to your feedback, we're going to push that up a few months" - and soon afterwards, they finally did terminate those 15K shorteners and stopped allowing new ones. So this is to Google's credit - but the problem had persisted for months, and a lot of the cultural/industry standards of the Internet security world seemed lost on them.
Sadly, while this situation had a good ending, similar problems with the largest providers persist. At the same time, they sure can be draconian in how they block smaller providers who had a rare and short-lived security incident. The hypocrisy is incredible. For example, Microsoft will sometimes *permanently* block a small email hoster over a short one-or-two-hour compromised-account incident that caused spam to be sent from that small hoster - even though it was quickly fixed, and even if that hoster sends MUCH legit email. It almost FEELS like extortion - since many of the IT people running those small-ish servers sometimes get frustrated, move their email to the cloud - and then guess who OFTEN gets their email hosting business?
-- Rob McEwen, invaluement