On Tue, Mar 31, 2020 at 01:46:09PM +0100, Nick Hilliard wrote:
Joe Greco wrote on 29/03/2020 23:14:
Flooding often works fine until you attempt to scale it. Then it breaks, just like Bjørn admitted. Flooding is inherently problematic at scale.
For... what, exactly? General Usenet?
yes, this is what we're talking about. It couldn't scale to general Usenet levels.
The scale issue wasn't flooding, it was bandwidth and storage.
the bandwidth and storage problems happened because of flooding. Short of cutting off content, there's no way to restrict bandwidth usage, but cutting off content restricts the functionality of the ecosystem. You can work around this with remote readers and manually distributed feeds, but there's still a fundamental scaling issue here: the model of flooding all posts in all groups to all nodes has terrible scaling characteristics, because it requires every core node to scale its individual resourcing linearly with the overall load of the entire system. You can manually configure load splitting to work around some of these limitations, but that doesn't make the design problem go away.
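to put rough numbers on the linear-scaling point, here's a back-of-envelope sketch; every figure below is assumed for illustration, not measured:

    # flood_scaling.py - back-of-envelope; all numbers are hypothetical
    DAILY_FEED_TB = 60       # assumed full-feed volume per day
    RETENTION_DAYS = 30      # assumed retention target
    NUM_CORE_NODES = 10

    # under flooding, each core node's transit and storage track the
    # TOTAL system feed, not that node's local readership
    per_node_daily_tb = DAILY_FEED_TB
    per_node_storage_tb = DAILY_FEED_TB * RETENTION_DAYS

    print(f"each of {NUM_CORE_NODES} nodes carries {per_node_daily_tb} TB/day"
          f" and {per_node_storage_tb} TB of spool")
    # double the system-wide posting volume and every node's bill doubles

adding a node adds capacity for readers, but it does nothing to reduce what any existing node must carry.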
There's a strange disconnect here. The concept behind Usenet is to have a distributed messaging platform. It isn't clear how this would work without ... well, distribution. The choice is between flood fill and perhaps something a little smarter; options for the latter were proposed and designed but never really caught on. Without the distribution mechanism (flooding), you don't have Usenet, you have something else entirely.
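For anyone who hasn't run a news server, the flooding mechanism itself is dead simple, which is part of why it has stuck around. A minimal sketch in Python, as an illustration of the idea only, not real NNTP; actual servers do this with IHAVE/CHECK offers and a history database keyed on message-ID:

    # flood.py - toy flood-fill between news nodes; illustrative only
    class Node:
        def __init__(self, name):
            self.name = name
            self.seen = set()    # message-IDs already accepted ("history")
            self.peers = []      # neighboring nodes we feed

        def receive(self, msg_id, body, from_node=None):
            if msg_id in self.seen:
                return           # duplicate offer: refuse it, flood stops here
            self.seen.add(msg_id)
            # forward to every peer except the one that handed it to us
            for peer in self.peers:
                if peer is not from_node:
                    peer.receive(msg_id, body, from_node=self)

    # wire three nodes into a triangle and inject one article at "a"
    a, b, c = Node("a"), Node("b"), Node("c")
    a.peers = [b, c]; b.peers = [a, c]; c.peers = [a, b]
    a.receive("<123@example.invalid>", "hello")
    print(all("<123@example.invalid>" in n.seen for n in (a, b, c)))  # True

The message-ID check is what keeps the flood from looping; the cost is exactly what you're complaining about: every article lands on every node, wanted or not.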
[...]
The Usenet "backbone" with binaries isn't going to be viable without a really large capex investment and significant ongoing opex. This isn't a failure of the technology.
We may need to agree to disagree on this, then. Reasonable engineering entails being able to build workable solutions within a feasible budget. If you can't do that, then there's a problem with the technology at the design level.
Kinda like how there's a problem with the technology of the Internet because if I wanna be a massive network or a tier 1 or whatever, I gotta have a massive investment in routers and 100G circuits and all that? Why can't we just build an Internet out of 10 megabit Ethernet and T1s? Isn't this just another example of your "problem with the technology at the design level"?

See, here's the thing. Twenty-six years ago, one of the local NSPs here spent some modest thousands of dollars on a few routers, a switch, and a circuit down to Chicago, and set up shop on a folding table (really). That was not an unreasonable outlay of cash to get bootstrapped in those days. Within just a few years, though, the amount of cash you'd need to invest to get started as an NSP had exploded dramatically. Usage grows.

I used to run Usenet on a 24MB Sun 3/60 with a pile of disks and Telebits. Now I'm blowing gigabits through massive machines. This isn't a poorly designed technology. It's a technology that has scaled well past what anyone would have expected.
Usenet is a great technology for collaboration over low-bandwidth and lossy connections.
For small, constrained quantities of traffic, it works fine.
It seems like that was the point of this thread...

... JG

--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its
way through our political and cultural life, nurtured by the false notion
that democracy means that 'my ignorance is just as good as your
knowledge.'" - Asimov