"Stephen J. Wilcox" <steve@opaltelecom.co.uk> writes: [...]
The question is, and apologies if I am behind the times, I'm not an expert on news... how is it possible to reduce the bandwidth occupied by news:
We had pretty good luck (modulo some crashing software) with caching news servers instead of traditional news feeds. We had a master cache in the center of our network, and satellite caches on the edge which connected back to the master.

We found that the vast majority of groups never got read, and the ones that did were read consistently, so it was possible to prefetch those groups during off-hours.

We convinced a commercial newsfeed to charge us by bandwidth instead of simultaneous readers, had pretty good service (more points of failure, so more downtime, but still pretty good...), and saved a lot of bandwidth, disk space, engineer time, and money. If we had had time to get the bugs worked out (the software crashed multiple times per day, then our ISP was purchased), it would have been a perfect system.

-------ScottG.
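The prefetch policy Scott describes (pull only the groups that actually get read, during off-hours) can be sketched as a simple read-log threshold. The log format, function names, and the five-read cutoff below are assumptions for illustration, not the software his ISP ran:

```python
# Sketch of the satellite-cache prefetch policy: groups read often
# enough in the recent log get prefetched from the master cache during
# off-hours; everything else is fetched on demand at read time.
from collections import Counter

def groups_to_prefetch(read_log, min_reads=5):
    """read_log: iterable of (group, reader) tuples from recent history.
    Returns the set of groups read often enough to justify prefetching."""
    counts = Counter(group for group, _reader in read_log)
    return {group for group, n in counts.items() if n >= min_reads}

log = [("comp.lang.c", "alice")] * 7 + [("alt.test", "bob")] * 2
print(sorted(groups_to_prefetch(log)))  # only comp.lang.c qualifies
```

The point of the threshold is that consistently-read groups stay warm in the edge caches while the long tail of never-read groups costs no transit bandwidth at all.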
USENET is by its nature a commons for sharing rivalrous resources (e.g. bandwidth and storage capacity). Elementary reasoning (if anyone's interested, Lawrence Lessig's "The Future of Ideas" contains as detailed a discussion as anyone can bear :) leads to the conclusion that, lacking usage-control mechanisms, such commons are doomed to disintegrate. No exceptions have been found so far. (Such control mechanisms do not have to be market-based to be effective and successful; public policy or code may embed usage controls as well - TCP's cooperative congestion control is an excellent example.)

When the Internet was small, personal reputation was a strong enough limiter on abuse of shared resources. That only works when the community is smallish and elitist. This is clearly no longer the case.

In other words - USENET cannot be fixed with technological improvements as long as the root problem (admission control) is not solved. Improving transmission or storage systems would only let spammers send more spam for free. Therefore, i'd say it is time to declare USENET defunct. It was fun while it lasted.

--vadim

PS. Talking about commons... A lot of network and computing resources are quite under-utilized. Many owners of those resources would be quite willing to donate underused capacity to the community - provided that such donation will not have any noticeable negative impact on the resources' performance of their primary functions. While most modern OSes have mostly adequate prioritization mechanisms, a lot of would-be donors are turned away by the effective inability to protect their primary network capacity. Therefore, i would like to ask ISPs to be civic-minded and standardize on an IP TOS for "community" traffic, giving normal IP traffic an absolute queueing and drop-policy preference over packets with the community TOS.
Correspondingly, though most backbone (and some access) IP routing equipment already has everything needed to implement a community TOS, vendors could be similarly civic-minded by turning such preference on by default and by improving per-TOS utilization data collection, so NMSes won't cry wolf on seeing links highly utilized by low-priority traffic. Another area which needs improvement is making L2 switches similarly aware of the community TOS in IP packets.
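From the application side, marking traffic as "community" class is a one-line socket option. The codepoint 0x20 below (IP precedence 1, the common scavenger-class marking) is an assumption for illustration; the whole point of Vadim's proposal is that ISPs would first have to agree on one value:

```python
# Mark a socket's outgoing traffic with a low-priority "community" TOS
# byte. Routers honoring the scheme would queue and drop these packets
# strictly after normal-TOS traffic. Codepoint 0x20 is an assumed value.
import socket

def open_community_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x20)
    return s

s = open_community_socket()
print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
```

Nothing else changes for the donor application; the network-side preference (strict priority for normal traffic) is configured in the routers, not the host.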
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Vadim Antonov
Sent: Monday, February 04, 2002 7:18 PM
To: nanog@merit.edu
Subject: Re: Reducing Usenet Bandwidth

[deleted, lots of good ideas and suggestions. I agree with essentially all of it]

Another area which needs improvement is making L2 switches similarly aware of community TOS in IP packets.

---

Why would an L2 switch need or even want to be aware of TOS? Wouldn't this be a Class 3/4..7 issue? If community TOS is supported, those guys that would benefit from awareness of it in their internal network would address this at a higher-level switch, I'd think.

I took your suggestion as a "best effort" below "normal" effort for "community" TOS. I could be mistaken.

Deepak Jain
AiNET
On Fri, 8 Feb 2002, Deepak Jain wrote:
Another area which needs improvement is making L2 switches similarly aware of community TOS in IP packets.
Why would an L2 switch need or even want to be aware of TOS? Wouldn't this be a Class 3/4..7 issue? If community TOS is supported, those guys that would benefit from awareness of it in their internal network would address this at a higher level switch I'd think.
Well, maybe, but L2 switches do queueing and drops as well, so there should be some way to indicate which packets to drop and which to keep. This means that they should be able to look deeper inside frames to extract information for frame classification. (Of course, a cleaner architecture would simply map L3 TOS into some L2 TOS bits at the originating hosts, but this just didn't happen...)

One may argue that L2 switches typically are not bottlenecks, and that Internet access circuits effectively limit Ethernet utilization for exterior traffic. However, there's a potential class of applications in clustered community computing (quite a lot of scientific simulations, actually) which can generate very high levels of intra-cluster traffic.
I took your suggestion as a "best effort" below "normal" effort for "community" TOS. I could be mistaken.
That is exactly what i had in mind. --vadim
Thus spake "Vadim Antonov" <avg@exigengroup.com>
Well, maybe, but L2 switches do queueing and drops as well, so there should be some way to indicate which packets to drop, and which to keep. This means that they should be able to look deeper inside frames to extract information for frame classification.
See also: IEEE 802.1p
(Of course, a cleaner architecture would simply map L3 TOS into some L2 TOS bits at the originating hosts, but this just didn't happen...)
This would not work, as the L2 TOS information would be discarded at the first L3 hop. Some vendors' routers map L3 TOS into L2 TOS at each hop (if you enable that functionality) for media which supports L2 TOS. PTP L2 media don't really have this problem, as TOS-based L3 forwarding is capable of prioritizing packets itself, and L2 TOS would be redundant.
One may argue that L2 switches typically are not bottlenecks, and the Internet access circuits effectively limit Ethernet utilization for the exterior traffic. However, there's a potential class of applications in clustered community computing (quite a lot of scientific simulations, actually) which can generate very high levels of intra-cluster traffic.
Imagine an L2 switch with a phone and a PC on a single 10/100 port. It is trivial to find applications which can temporarily saturate the port, introducing unacceptable jitter into the phone's media streams. There are several million such ports in active use today, and without some attempt at L2 TOS it doesn't take more than a few minutes to discover how prevalent this particular problem is.
I took your suggestion as a "best effort" below "normal" effort for "community" TOS. I could be mistaken.
That is exactly what i had in mind.
That is also what IEEE 802 had in mind. S
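The L3-to-L2 mapping Stephen describes (routers copying IP TOS into the 802.1p priority field of the 802.1Q tag at each hop) is, in the common vendor default, just the top three bits of the TOS byte. A minimal sketch of that assumed mapping:

```python
# Default-style mapping of the IP TOS/DSCP byte to an 802.1p CoS value:
# the 3-bit IP precedence field (top three bits of the TOS byte) is
# copied into the 3-bit priority field of the 802.1Q tag.

def tos_to_8021p(tos_byte):
    """Map an IP TOS byte (0-255) to an 802.1p CoS value (0-7)."""
    return (tos_byte >> 5) & 0x7  # IP precedence = top 3 bits

print(tos_to_8021p(0x20))  # "community" TOS 0x20 -> CoS 1
print(tos_to_8021p(0xB8))  # EF (DSCP 46, TOS byte 0xB8) -> CoS 5
```

With such a mapping in place, an 802.1p-aware L2 switch can prefer the phone's CoS-5 frames over scavenger CoS-1 traffic without parsing IP headers at all.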
avg@exigengroup.com (Vadim Antonov) writes:
... In other words - USENET cannot be fixed with technological improvements as long as the root problem (admission control) is not solved. Improving transmission or storage systems would only let spammers to send more spam for free.
...which is why my proposal didn't involve multicast and assumed that each newsgroup would be authoritatively sourced by a well known server or mirrored cluster of servers. spam or offtopic postings, to be deleted, would only need to be deleted in that one place. then the hierarchical nntpcache graph would simply "not find" the trash rather than needing to be told to remove it as is done today with "full nntp" servers. the other motive here is implicit moderation: no newsgroup which could not find a sponsoring server or mirrored cluster could continue to exist; any such server or cluster which was known to never remove trash would eventually become uninteresting to discerning parties (e.g., me) while still being completely usable by undiscerning parties.
Therefore, i'd say it is time to declare USENET defunct. It was fun while it lasted.
"death of usenet predicted. film at 11."
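Paul's authoritative-source model above can be illustrated with a toy cache hierarchy: spam deleted once at the origin is simply "not found" by every cache on the next miss, with no cancel-message flooding. All class and message-ID names here are invented for the sketch:

```python
# Toy of the "authoritative source + nntpcache hierarchy" model: one
# deletion at the origin, and the caches simply stop finding the trash.

class OriginServer:
    def __init__(self):
        self.articles = {}
    def post(self, msgid, body):
        self.articles[msgid] = body
    def delete(self, msgid):           # delete once, in one place
        self.articles.pop(msgid, None)
    def fetch(self, msgid):
        return self.articles.get(msgid)

class Cache:
    def __init__(self, upstream):
        self.upstream, self.store = upstream, {}
    def fetch(self, msgid):
        if msgid not in self.store:
            art = self.upstream.fetch(msgid)
            if art is None:
                return None            # trash is simply "not found"
            self.store[msgid] = art
        return self.store[msgid]

origin = OriginServer()
origin.post("<1@example>", "on-topic article")
origin.post("<2@example>", "spam")
cache = Cache(Cache(origin))           # two-level hierarchy
origin.delete("<2@example>")
print(cache.fetch("<1@example>"))      # prints: on-topic article
print(cache.fetch("<2@example>"))      # prints: None
```

Contrast this with full-feed NNTP, where every server that already stored the spam must be told (via cancels) to remove it.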
Quoting Paul Vixie (vixie@as.vix.com):
...which is why my proposal didn't involve multicast and assumed that each
What are your concerns with mcntp? http://sourceforge.net/projects/mcntp/ jonas
Thus spake "Jonas M Luster" <jluster@d-fensive.com>
Quoting Paul Vixie (vixie@as.vix.com):
...which is why my proposal didn't involve multicast and assumed that each
What are your concerns with mcntp? http://sourceforge.net/projects/mcntp/
Or, even better, Newscaster: http://www.newscaster.org S
On 8 Feb 2002, Paul Vixie wrote:
In other words - USENET cannot be fixed with technological improvements as long as the root problem (admission control) is not solved. Improving transmission or storage systems would only let spammers to send more spam for free.
...which is why my proposal didn't involve multicast and assumed that each newsgroup would be authoritatively sourced by a well known server or mirrored cluster of servers. spam or offtopic postings, to be deleted, would only need to be deleted in that one place. then the hierarchical nntpcache graph would simply "not find" the trash rather than needing to be told to remove it as is done today with "full nntp" servers.
Paul -- you know, i was advocating caching for a long time. But the problem with USENET is not in transmission technology. Fundamentally, it is the inability to keep the S/N ratio high enough by keeping away trash generators.

If someone posts an article, it has to show up in some kind of directory; otherwise people won't be able to find and pull it. USENET-style directories (listing subject lines, timestamp, and originator's pseudonym) do not carry enough information to detect spam. The net result: even if someone is using spam filters at his end, he still has to pull all articles. No gain whatsoever. (Pruning newsgroup distribution trees is another issue altogether - it can be done with little modification to the existing software.)

What USENET needs is a distributed system for collection of per-article and per-sender ratings, and for filtration based on those ratings. That would be useful for other applications, as well :)

--vadim
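A minimal sketch of the rating-and-filtration idea: readers submit per-article and per-sender scores, and each client filters on the aggregate before pulling bodies. The score range, averaging, and threshold below are all assumptions, since the message proposes the system rather than specifying it:

```python
# Sketch of distributed per-article / per-sender rating collection and
# client-side filtration. Scores are in [-1.0, 1.0]; unrated items pass.
from collections import defaultdict

class RatingStore:
    def __init__(self):
        self.article = defaultdict(list)
        self.sender = defaultdict(list)
    def rate(self, msgid, sender, score):
        self.article[msgid].append(score)
        self.sender[sender].append(score)
    def _avg(self, scores):
        return sum(scores) / len(scores) if scores else 0.0
    def keep(self, msgid, sender, threshold=-0.5):
        combined = (self._avg(self.article[msgid]) +
                    self._avg(self.sender[sender])) / 2
        return combined > threshold

store = RatingStore()
store.rate("<spam@x>", "spammer@x", -1.0)
store.rate("<spam@x>", "spammer@x", -1.0)
print(store.keep("<spam@x>", "spammer@x"))  # prints: False
print(store.keep("<new@y>", "newbie@y"))    # unrated -> prints: True
```

The per-sender score is what gives the scheme teeth: once a sender is rated down, even their *new* articles are filtered before anyone pulls them.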
What USENET needs is a distributed system for collection of per-article and per-sender ratings, and for filtration based on those ratings. That would be useful for other applications, as well :)
I would argue that what USENET needs is a way for the cost of publication to be incurred by the publisher; storing the data in your own repository (or repositories) while pointers get flooded through the USENET distribution system would give publishers an incentive to do garbage collection that they do not have today. It would almost be like gluing a USENET distribution front-end onto a collection of Napster back-ends. Stephen
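Stephen's "flood pointers, publisher stores the data" model can be sketched in a few lines: only a small pointer record propagates through the distribution system, while the article body lives (and is garbage-collected) in the publisher's repository. The record layout and names are invented for illustration:

```python
# Sketch of publisher-pays distribution: USENET floods only tiny
# pointer records; readers dereference them against the publisher's
# repository, which the publisher has an incentive to garbage-collect.

publisher_repo = {"<a1@pub>": "the actual article body"}

def make_pointer(msgid, repo_host):
    return {"msgid": msgid, "where": repo_host}  # small, cheap to flood

def read(pointer):
    # If the publisher has garbage-collected the article, the
    # dereference simply fails -- no cancel flooding required.
    return publisher_repo.get(pointer["msgid"])

p = make_pointer("<a1@pub>", "news-repo.example.net")
print(read(p))                    # prints: the actual article body
del publisher_repo["<a1@pub>"]    # publisher garbage-collects
print(read(p))                    # prints: None
```

The storage cost thus falls on whoever wants the article to remain readable, which is exactly the incentive the current flood-everything model lacks.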
Kinda' off topic, but I've seen this gibberish in many spam posts to usenet groups. What purpose does it serve?

Thanks,
--Michael

"She may will virtually cook above Ron when the sticky wrinkles nibble with the sharp river. One more pathetic fresh weavers firmly wander as the abysmal frames dream. Why will you reject the sweet deep tags before Priscilla does? Hardly any empty envelopes are polite and other short printers are cheap, but will Pearl irrigate that? She'd rather cover eventually than scold with Ophelia's fat dose. Are you kind, I mean, arriving without handsome candles? He might judge the difficult sauce and explain it in front of its light. Every proud dryers dye Jay, and they wistfully receive Elisa too. Get your easily climbing ball inside my mirror. You won't help me recommending outside your smart monument. Some forks fear, excuse, and taste. Others angrily sow."

[40 more lines snipped]

----- Original Message -----
From: "Stephen Stuart" <stuart@tech.org>
To: "Vadim Antonov" <avg@exigengroup.com>
Cc: <nanog@merit.edu>
Sent: Friday, February 08, 2002 1:45 PM
Subject: Re: Reducing Usenet Bandwidth
What USENET needs is a distributed system for collection of per-article and per-sender ratings, and for filtration based on those ratings. That would be useful for other applications, as well :)
I would argue that what USENET needs is a way for the cost of publication to be incurred by the publisher; storing the data in your own repository (or repositories) while pointers get flooded through the USENET distribution system would give publishers an incentive to do garbage collection that they do not have today.
It would almost be like gluing a USENET distribution front-end onto a collection of Napster back-ends.
Stephen
Michael Painter wrote:
Kinda' off topic, but I've seen this gibberish in many spam posts to usenet groups. What purpose does it serve?
Randomly generated, (somewhat) grammatically correct text, designed to sneak the message past various filters (like anti-binary and minimum-length). I have no idea if it works or not.

-- David
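The technique David describes amounts to filling a tiny template grammar with random words, so every copy of the spam differs while still looking like prose to a filter. A rough illustration (the word lists are invented; real spamware presumably used much larger lexicons):

```python
# Word-salad generation: random subject/verb/object choices produce
# grammatical-looking nonsense that varies per message, defeating
# duplicate-detection and minimum-length filters.
import random

SUBJECTS = ["the sticky wrinkle", "one pathetic weaver", "Pearl"]
VERBS = ["will nibble", "firmly wanders past", "may irrigate"]
OBJECTS = ["the abysmal frame", "a sweet deep tag", "my mirror"]

def salad_sentence(rng):
    words = [rng.choice(SUBJECTS), rng.choice(VERBS), rng.choice(OBJECTS)]
    return " ".join(words).capitalize() + "."

rng = random.Random(11)
print(" ".join(salad_sentence(rng) for _ in range(3)))
```

Because each message is unique, hash-based duplicate filters never see the same body twice, which is presumably the point.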
I would argue that what USENET needs is a way for the cost of publication to be incurred by the publisher; storing the data in your own repository (or repositories) while pointers get flooded through the USENET distribution system would give publishers an incentive to do garbage collection that they do not have today.
Like many Internet settlement schemes, this one seems not to make much sense. If a person reads USENET for many years, enjoying all of its wisdom, why should he get a free ride? And why should the people who supply that wisdom have to pay to do so? A USENET transaction is presumed to benefit both parties, or else they wouldn't have configured their computers to make that transaction.

Does it make sense for the New York Times to pay me to read it? But perhaps it does for the Weekly Advertiser.

The reason that automated schemes such as "publisher pays" will fail is that determining who "should" pay is too complex for automated schemes. You will just push around who takes advantage of whom. If you ask a question, you should pay. If I provide you with useful help, you should pay. If I suggest a commercial solution to your problem, who should pay? If I harass you for not knowing the answer to the question, I should pay.

DS
participants (10)
- David Charlap
- David Schwartz
- Deepak Jain
- Jonas M Luster
- Michael Painter
- Paul Vixie
- Scott Gifford
- Stephen Sprunk
- Stephen Stuart
- Vadim Antonov