Joe Greco wrote on 29/03/2020 21:46:
On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
That's so hideously wrong. It's like claiming web forums don't work because IP packet delivery isn't reliable.
Really, it's nothing like that.
Sure it is. At a certain point you can get web forums to stop working with a DDoS. You can't guarantee reliable interaction with a web site if that happens.
this is failure caused by external agency, not failure caused by inherent protocol limitations.
Usenet message delivery at higher levels works just fine, except that on the public backbone, it is generally implemented as "best effort" rather than a concerted effort to deliver reliably.
If you can explain the bit of the protocol that guarantees that all nodes have received all postings, then let's discuss it.
There isn't, just like there isn't a bit of the protocol that guarantees that an IP packet is received by its intended recipient. No magic.
tcp vs udp.
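To make the "best effort" part concrete: below is a rough sketch of what a single hop looks like on the wire with the classic IHAVE exchange (RFC 3977). The peer name and article are placeholders and this isn't modelled on any particular feeder; the point is just that TCP makes the hop reliable, while a declined or deferred offer is the end of the protocol's obligations, and anything beyond that is site policy.

  # Illustrative only: one hop of an IHAVE offer against a placeholder peer.
  # TCP makes this hop reliable; nothing in NNTP guarantees the article
  # eventually reaches every node in the flood mesh.
  import socket

  PEER = ("news.example.net", 119)          # hypothetical peer
  MSG_ID = "<example.1@host.example>"
  ARTICLE = (
      "Path: host.example!not-for-mail\r\n"
      "From: user@example.invalid\r\n"
      "Newsgroups: example.test\r\n"
      "Subject: test\r\n"
      f"Message-ID: {MSG_ID}\r\n"
      "\r\n"
      "body\r\n"
      ".\r\n"                               # dot-terminated article
  )

  with socket.create_connection(PEER) as s:
      f = s.makefile("rwb")
      f.readline()                          # 200/201 greeting
      f.write(f"IHAVE {MSG_ID}\r\n".encode()); f.flush()
      code = f.readline().decode().strip()
      if code.startswith("335"):            # 335 = peer wants the article
          f.write(ARTICLE.encode()); f.flush()
          print(f.readline().decode().strip())   # 235 = accepted
      else:
          # 435 = not wanted, 436 = transient failure; a best-effort feed
          # just notes it and moves on (or re-queues, if the site bothers).
          print("peer declined:", code)
      f.write(b"QUIT\r\n"); f.flush()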
Flooding often works fine until you attempt to scale it. Then it breaks, just like Bjørn admitted. Flooding is inherently problematic at scale.
For... what, exactly? General Usenet?
yes, this is what we're talking about. It couldn't scale to general usenet levels.
Perhaps, but mainly because you do not have a mutual agreement on traffic levels and a bunch of other factors. Flooding works just fine within private hierarchies, and since I thought this was a discussion of "free collaborative tools" rather than "random newbie trying to masochistically keep up with a full backbone Usenet feed", it definitely should work fine for a private hierarchy and collaborative use.
Then we're in violent agreement on this point. Great!
delivered it. TAKETHIS managed to sweep these problems under the carpet, but it's a horrible, awful protocol hack.
It's basically cheap pipelining.
no, TAKETHIS is unrestrained flooding, not cheap pipelining.
If you want to call pipelining in general a horrible, awful protocol hack, then that's probably got some validity.
you could characterise pipelining as a necessary reaction to the fact that the speed of light is so damned slow.
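For anyone who hasn't stared at a feed in a while, here's roughly what the streaming variant (MODE STREAM / CHECK / TAKETHIS, RFC 4644) looks like; again a sketch against a placeholder peer, not a real feeder. It shows both readings above: the CHECK offers pipeline nicely, so a batch costs about one round trip instead of one per article, while a feeder that skips CHECK and just fires TAKETHIS pushes full articles whether or not the peer wants them.

  # Illustrative only: pipelined CHECK offers, per RFC 4644.
  import socket

  PEER = ("news.example.net", 119)          # hypothetical peer
  pending = ["<a.1@host.example>", "<a.2@host.example>", "<a.3@host.example>"]

  with socket.create_connection(PEER) as s:
      f = s.makefile("rwb")
      f.readline()                          # greeting
      f.write(b"MODE STREAM\r\n"); f.flush()
      f.readline()                          # 203 = streaming permitted

      # All offers go out back-to-back before reading any reply, so the
      # whole batch costs roughly one round trip rather than one per article.
      for mid in pending:
          f.write(f"CHECK {mid}\r\n".encode())
      f.flush()

      wanted = []
      for _ in pending:
          code, mid = f.readline().decode().split()[:2]
          if code == "238":                 # 238 = send it via TAKETHIS
              wanted.append(mid)
          # 438 = not wanted, 431 = try again later

      # The TAKETHIS transfers themselves are omitted here; a feeder that
      # skipped CHECK entirely would be sending every article at this point.
      print("peer asked for", len(wanted), "of", len(pending), "articles")
      f.write(b"QUIT\r\n"); f.flush()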
which is mostly because there are so few large binary sites these days, i.e. a limited distribution model.
No, there are so few large binary sites these days because of consolidation and buyouts.
20 years ago, lots of places hosted binaries. They stopped because it was pointless and wasteful, not because of consolidation.
Right, so you've put your finger on the other major problem with flooding, besides the distribution synchronisation / optimisation problem: all sites get all posts for all groups they're configured to carry. This is a profound waste of resources, and it doesn't scale in any meaningful way.
So if you don't like that everyone gets everything they are configured to get, you are suggesting that they... what, exactly? Shouldn't get everything they want?
The default distribution model of the 1990s was *. These days, only a tiny handful of sites handle everything, because the overheads of flooding are so awful. To be clear, this awfulness is resource related, and the knock-on effect is that the resource cost becomes untenable. Usenet, like other systems, can be reduced to an engineering / economics management problem: if the cost of making it operate correctly can't be justified, then it's non-viable.
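To put rough, illustrative numbers on that resource argument (the feed sizes below are assumptions for the sake of arithmetic, not measurements): a full feed including binaries runs to tens of TB/day, the text-only groups are a few GB/day, and with flooding every participating site eats the whole volume of whatever it carries.

  # Back-of-the-envelope only; the feed sizes below are assumed figures.
  FULL_FEED_TB_PER_DAY = 60     # assumed: full feed incl. binaries
  TEXT_FEED_GB_PER_DAY = 5      # assumed: text-only hierarchies
  RETENTION_DAYS = 365

  full_storage_pb = FULL_FEED_TB_PER_DAY * RETENTION_DAYS / 1000
  text_storage_tb = TEXT_FEED_GB_PER_DAY * RETENTION_DAYS / 1000

  print(f"Full feed, 1 year retention: ~{full_storage_pb:.0f} PB per site")
  print(f"Text only, 1 year retention: ~{text_storage_tb:.1f} TB per site")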
None of this changes that it's a robust, mature protocol that was originally designed for handling non-binaries and is actually pretty good in that role. Having the content delivered to each site means that there is no dependence on long-distance interactive IP connections and that each participating site can keep the content for however long they deem useful. Usenet keeps hummin' along under conditions that would break more modern things like web forums.
It's a complete crock of a protocol with robust and mature implementations. Diablo is one of them, and for that we have people like Matt and you to thank.

Nick