
On Friday 2019-10-25 01:22, Rich Kulawiec wrote:
> On Thu, Oct 24, 2019 at 01:21:12PM -0700, Mark Milhollan wrote:
> > My experience says that their system has learned that your system(s) continued to send messages that their user (yes you, but they don't know that) did not want, and that nothing has influenced their AI otherwise; it makes mistakes and will never correct them unless fed correcting information.
>
> It is a worst practice in mail systems engineering to allow input from putative users into any decision-making process *absent* manual review of each piece of data by very clueful humans.
Their system, their rules. Their size makes it all but a certainty that plenty of automation will be involved and their modus vivendi is that they be AIs, no matter what you or I might think. Further they do have humans in the loop, their user individually and their users collectively which want messages from others not directly in the loop it just is not quite how we might like it to be when things subjectively "are wrong", as in this case. /mark