Reducing Usenet Bandwidth
Hi all, as we all know Usenet traffic is always increasing; a large number of people take full feeds, which on my servers is about 35Mb/s of continuous bandwidth in/out. That produces about 300GB per day, of which only a small fraction ever gets downloaded.

The question is, and apologies if I am behind the times, I'm not an expert on news... how is it possible to reduce the bandwidth occupied by news:

a) Internally to a network: if I site multiple peer servers at exchange and peering points, then they all exchange traffic and all inter- and intra-site circuits are filled to the 35Mb/s level above.

b) Externally, such as at public peering exchange points: if there are 100 networks at an exchange point and half exchange a full feed, that's 35 x 50 x 2 = 3500Mb/s of traffic flowing across the exchange peering LAN.

For the peering point question I'm thinking of some kind of multicast scheme; internally I've no suggestions other than perhaps exchanging only message-IDs between peer servers, hence giving back a partial feed to the local box's external peers.

Any thoughts?

TIA

Steve

--
Stephen J. Wilcox, IP Services Manager, Opal Telecom
http://www.opaltelecom.co.uk/
Tel: 0161 222 2000  Fax: 0161 222 2008
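A minimal sketch of the message-ID exchange Steve suggests, as a toy in-memory model (the class and function names are invented for illustration): a peer offers only message-IDs and transfers a full article only when the other side does not already hold it, which is essentially what NNTP streaming mode (CHECK/TAKETHIS) provides.

```python
# Sketch: offer message-IDs before articles so peers only transfer what they lack.
# This mirrors what NNTP streaming mode (CHECK/TAKETHIS) does; the classes and
# names here are hypothetical, not a real server implementation.

class PeerServer:
    """A news peer that remembers which message-IDs it already holds."""

    def __init__(self, name):
        self.name = name
        self.articles = {}          # message-ID -> article body

    def wants(self, message_id):
        """CHECK step: True only if we do not already have this article."""
        return message_id not in self.articles

    def take(self, message_id, body):
        """TAKETHIS step: accept the full article."""
        self.articles[message_id] = body


def feed(sender_articles, receiver):
    """Offer IDs first; send full bodies only for articles the peer lacks."""
    sent = 0
    for message_id, body in sender_articles.items():
        if receiver.wants(message_id):
            receiver.take(message_id, body)
            sent += 1
    return sent


if __name__ == "__main__":
    peer = PeerServer("news-b")
    peer.take("<1@example.net>", "already here")
    outgoing = {"<1@example.net>": "already here", "<2@example.net>": "new post"}
    print("articles actually transferred:", feed(outgoing, peer))   # -> 1
```

When most articles are already held on both sides, only the short IDs cross the link, which is where the bandwidth saving would come from.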
There were people who did multicast injection of Usenet: the receiving end de-encapsulated the news and fed it to rnews. What happened is that a number of people migrated to Cyclone/Typhoon and other news transport software that did not allow this (easily).

(Most) major providers have multicast available to customers and internally. The people running the news servers just need to create a delivery method that allows the articles to be passed around that way, and all will be taken care of (a rough sketch follows below).

The disadvantage is that it would potentially allow spammers to inject massive amounts of articles, and servers would have to reject them based on some filtering criteria, or just get the multicast access removed for such a customer. I actually don't see them being that bright, so I wouldn't worry too much about that.

- Jared

On Sat, Feb 02, 2002 at 08:20:59PM +0000, Stephen J. Wilcox wrote:
Hi all, as we all know Usenet traffic is always increasing; a large number of people take full feeds, which on my servers is about 35Mb/s of continuous bandwidth in/out. That produces about 300GB per day, of which only a small fraction ever gets downloaded.

The question is, and apologies if I am behind the times, I'm not an expert on news... how is it possible to reduce the bandwidth occupied by news:

a) Internally to a network: if I site multiple peer servers at exchange and peering points, then they all exchange traffic and all inter- and intra-site circuits are filled to the 35Mb/s level above.

b) Externally, such as at public peering exchange points: if there are 100 networks at an exchange point and half exchange a full feed, that's 35 x 50 x 2 = 3500Mb/s of traffic flowing across the exchange peering LAN.

For the peering point question I'm thinking of some kind of multicast scheme; internally I've no suggestions other than perhaps exchanging only message-IDs between peer servers, hence giving back a partial feed to the local box's external peers.
Any thoughts?
TIA
Steve
--
Jared Mauch  | pgp key available via finger from jared@puck.nether.net
clue++;      | http://puck.nether.net/~jared/  My statements are only mine.
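A rough sketch of the multicast delivery Jared describes, assuming plain IPv4 UDP multicast; the group address, port, and one-article-per-datagram framing are arbitrary choices for illustration, and a real receiver would hand each article to its transport (e.g. pipe it into rnews) rather than just yield it.

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 11900   # arbitrary administratively-scoped group/port

def send_article(article: bytes) -> None:
    """Injector side: push one article (one datagram) to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    sock.sendto(article, (GROUP, PORT))
    sock.close()

def receive_articles():
    """Receiver side: join the group and yield articles as they arrive.
    A real site would pipe each article into rnews or its transport's spool."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _addr = sock.recvfrom(65535)
        yield data
```

Note that one article per UDP datagram caps the article size at roughly 64KB, which is in the same ballpark as the limit Marco raises in the next message; the chunking sketch further down the thread addresses that.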
On Feb 02, Jared Mauch <jared@puck.Nether.net> wrote:
The disadvantage is that it would potentially allow spammers to inject massive amounts of articles and servers would [...]

It would not, because in the proposed protocol articles were signed by the sender site.
The major drawback of that protocol is that it limits article size to 64KB, so it does not reduce binary traffic, which is the largest part of a newsfeed.

--
ciao, Marco
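Marco mentions that in the proposed protocol articles were signed by the sending site. The thread does not say which signature scheme was used, so the sketch below stands in an HMAC over the article with a per-site shared key: a receiver drops any multicast injection that does not verify against a known peer's key.

```python
import hmac
import hashlib

# Hypothetical per-site keys; a real deployment would want proper key
# distribution (or public-key signatures) rather than shared secrets.
SITE_KEYS = {"news.example.net": b"s3cret-key-for-example"}

def sign(site: str, article: bytes) -> bytes:
    """Sending site attaches a MAC over the article body."""
    return hmac.new(SITE_KEYS[site], article, hashlib.sha256).digest()

def accept(site: str, article: bytes, signature: bytes) -> bool:
    """Reject articles claiming to come from an unknown site or with a bad MAC."""
    key = SITE_KEYS.get(site)
    if key is None:
        return False
    expected = hmac.new(key, article, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    art = b"Path: news.example.net!not-for-mail\r\n\r\nhello"
    sig = sign("news.example.net", art)
    print(accept("news.example.net", art, sig))        # True
    print(accept("news.example.net", art, b"forged"))  # False
```

Public-key signatures would avoid having to share secrets between every pair of sites, but the shared-key version keeps the sketch short.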
On Sun, 3 Feb 2002, Marco d'Itri wrote:
The major drawback of that protocol is that it limits article size to 64KB, so it does not reduce binary traffic, which is the largest part of a newsfeed.
In which case you just modify the protocol (or roll your own) to have articles spread across multiple packets. It's not that hard; our guys did this, and I expect it's been done (at least) half a dozen times by other people (a rough sketch of that kind of chunking follows below).

Anyway, the multicasting thing doesn't really solve the problem: you still have to transport 30Mb/s (plus) from outside your network to inside it, put it onto a (relatively expensive) server and give it to the customers.

The economics of the whole exercise are very interesting once you get past discussion groups (worth doing for anybody) and picture groups (worth doing for all but the smallest ISP). Committing to a good supply for multi-part binary groups probably should involve sitting down with one of the company accountants (if you only have one accountant then your company is probably too small).

--
Simon Lyall.                 | Newsmaster   | Work: simon.lyall@ihug.co.nz
Senior Network/System Admin  | Postmaster   | Home: simon@darkmere.gen.nz
ihug, Auckland, NZ           | Asst Doorman | Web: http://www.darkmere.gen.nz
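A sketch of the roll-your-own chunking Simon describes, under the assumption that each fragment carries the message-ID, a sequence number, and the total fragment count; loss handling and retransmission are deliberately left out, and none of the field names correspond to a deployed protocol.

```python
# Hypothetical fragmentation scheme for pushing large (binary) articles over a
# datagram feed: split each article into numbered chunks keyed by message-ID
# and reassemble once every chunk has arrived.

CHUNK_SIZE = 60_000   # stay under the 64KB datagram ceiling mentioned above

def fragment(message_id: str, article: bytes):
    chunks = [article[i:i + CHUNK_SIZE] for i in range(0, len(article), CHUNK_SIZE)] or [b""]
    total = len(chunks)
    for seq, chunk in enumerate(chunks):
        yield {"id": message_id, "seq": seq, "total": total, "data": chunk}

class Reassembler:
    def __init__(self):
        self.partial = {}   # message-ID -> {seq: data}

    def add(self, pkt):
        """Return the complete article once all chunks are present, else None."""
        parts = self.partial.setdefault(pkt["id"], {})
        parts[pkt["seq"]] = pkt["data"]
        if len(parts) == pkt["total"]:
            del self.partial[pkt["id"]]
            return b"".join(parts[i] for i in range(pkt["total"]))
        return None

if __name__ == "__main__":
    big_article = b"x" * 150_000                 # ~150KB binary post
    r = Reassembler()
    for pkt in fragment("<big@example.net>", big_article):
        whole = r.add(pkt)
    assert whole == big_article
```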
On Sat, 2 Feb 2002, Stephen J. Wilcox wrote:
bandwidth in/out. That produces about 300GB per day, of which only a small fraction ever gets downloaded.

The question is, and apologies if I am behind the times, I'm not an expert on news... how is it possible to reduce the bandwidth occupied by news:
Abolish all binary posts / binaries groups. IMO, posting binaries to Usenet is a throwback to the days of low-speed networks, when few people had the capability to put up their own FTP/HTTP-accessible files. These days, it's just abused as a free warez network and free porn hosting/advertising. How else can you send MS Office or 100MB of porn jpegs or mpegs to thousands of people for basically free [1]?

[1] At least it's free to the poster... not to the networks that keep having to build ridiculously larger news servers. The first news server I ran for an ISP had 4GB... and that was for the OS and articles. The most recent has 18GB for the OS and 288GB for articles... and it's obviously obsolete if others are accepting 300GB/day. When are the operators going to draw the line and say "no more" to binaries?

----------------------------------------------------------------------
Jon Lewis *jlewis@lewis.org* | I route
System Administrator         | therefore you are
Atlantic Net                 |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
[1] At least it's free to the poster... not to the networks that keep having to build ridiculously larger news servers. The first news server I ran for an ISP had 4GB... and that was for the OS and articles. The most recent has 18GB for the OS and 288GB for articles... and it's obviously obsolete if others are accepting 300GB/day. When are the operators going to draw the line and say "no more" to binaries?
Perhaps the same time the end-users (whose fees go towards paying for the hardware, presumably) say "we don't want binaries anymore."

-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
On Sat, 2 Feb 2002, Alex Rubenstein wrote:
Perhaps the same time the end-users (whose fees go towards paying for the hardware, presumably) say "we don't want binaries anymore."
It would be interesting to survey the customers and see:

A) how many would care if we didn't provide usenet
B) how many would care if we didn't carry any binaries
C) how many have never heard of usenet

I suspect A and B would be small, and C large.

----------------------------------------------------------------------
Jon Lewis *jlewis@lewis.org* | I route
System Administrator         | therefore you are
Atlantic Net                 |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
On Sat, 2 Feb 2002 jlewis@lewis.org wrote:
On Sat, 2 Feb 2002, Alex Rubenstein wrote:
Perhaps the same time the end-users (whose fees go towards paying for the hardware, presumably) say "we don't want binaries anymore."
It would be interesting to survey the customers and see:
A) how many would care if we didn't provide usenet
B) how many would care if we didn't carry any binaries
C) how many have never heard of usenet
I suspect A and B would be small, and C large.
Actually, C would be huge, but the remainder would *all* want binaries. The general solution seems to be to keep porn no matter what, keep all text (a mere drip in the pool anyway), kill "monitor" and other mp3 groups, and take down wanadoo / microsoft / other huge but useless hierarchies. *Every* news site I am intimately familiar with needs porn to keep the users quiet...

Our solution for a long time was Cidera in, with hole-filling for text only (binaries that were missed were just too bad), but this was a lot of expense that just didn't pay. Now I outsource the porn ;-)

--
Yours,
J.A. Terranson
sysadmin@mfn.org

If Governments really want us to behave like civilized human beings, they should give serious consideration towards setting a better example: ruling by force rather than consensus; the unrestrained application of unjust laws (which the victim-populations were never allowed input on in the first place); the State policy of justice only for the rich and elected; the intentional abuse and occasional destruction of entire populations merely to distract an already apathetic and numb electorate... This type of demagoguery must surely wipe out the fascist United States as surely as it wiped out the fascist Union of Soviet Socialist Republics.

The views expressed here are mine, and NOT those of my employers, associates, or others. Besides, if it *were* the opinion of all of those people, I doubt there would be a problem to bitch about in the first place...
--------------------------------------------------------------------
Have you considered the Cidera solution? See <http://www.cidera.com/services/usenet_news/index.shtml> (basically the idea is satellite broadcast direct to the edge networks).

--
Rafi
Hi all, as we all know Usenet traffic is always increasing; a large number of people take full feeds, which on my servers is about 35Mb/s of continuous bandwidth in/out. That produces about 300GB per day, of which only a small fraction ever gets downloaded.

The question is, and apologies if I am behind the times, I'm not an expert on news... how is it possible to reduce the bandwidth occupied by news:

a) Internally to a network: if I site multiple peer servers at exchange and peering points, then they all exchange traffic and all inter- and intra-site circuits are filled to the 35Mb/s level above.

b) Externally, such as at public peering exchange points: if there are 100 networks at an exchange point and half exchange a full feed, that's 35 x 50 x 2 = 3500Mb/s of traffic flowing across the exchange peering LAN.

For the peering point question I'm thinking of some kind of multicast scheme; internally I've no suggestions other than perhaps exchanging only message-IDs between peer servers, hence giving back a partial feed to the local box's external peers.
Any thoughts?
TIA
Steve
Quoting Stephen J. Wilcox (steve@opaltelecom.co.uk):
For the peering point question I'm thinking of some kind of multicast scheme; internally I've no suggestions other than perhaps exchanging only message-IDs between peer servers, hence giving back a partial feed to the local box's external peers.
Any thoughts?
Ask Google about "drinking from the firehose USENET multicast news", or have a look in ftp.uu.net:/networking/news/muse. I've no idea what became of this project. I first read the paper when it came out and I was running/hacking on news servers. I clearly thought it interesting enough to keep a copy of it.

James
steve@opaltelecom.co.uk ("Stephen J. Wilcox") writes:
as we all know Usenet traffic is always increasing; a large number of people take full feeds, which on my servers is about 35Mb/s of continuous bandwidth in/out. That produces about 300GB per day, of which only a small fraction ever gets downloaded.

The question is, and apologies if I am behind the times, I'm not an expert on news... how is it possible to reduce the bandwidth occupied by news:
Pull it, rather than pushing it. nntpcache is a localized example of how to transfer only the groups and articles that somebody on your end of a link actually wants to read. A more systemic example ought to be developed whereby every group has a well-mirrored home and an nntpcache hierarchy similar to what Squid proposed for web data, and every news reader pulls only what it needs. Posting an article should mean getting it into the well-mirrored home of that group. Removing spam should mean deleting articles from the well-mirrored home of that group.

Pushing netnews, with or without multicast, with or without binaries, is just unthinkable at today's volumes, but we do it anyway. The effects of increased volume have decreased the utilization of netnews as a medium amongst my various friends. Pushing netnews after another three or four doublings is so far beyond the sane/insane boundary that I just know it won't happen, Moore or not.

It's well and truly past time to pull it rather than push it.
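A toy illustration of the pull model, with a hypothetical fetch_upstream callable standing in for a real NNTP client issuing ARTICLE <message-id> against the group's well-mirrored home: an article crosses the transit link at most once, and only if a local reader actually asks for it.

```python
# A toy pull-through cache in the spirit of nntpcache: the first reader to ask
# for an article causes one upstream fetch; later readers are served locally.

class PullThroughCache:
    def __init__(self, fetch_upstream):
        self.fetch_upstream = fetch_upstream   # callable: message-ID -> article
        self.store = {}                        # local spool, keyed by message-ID
        self.upstream_fetches = 0

    def article(self, message_id):
        if message_id not in self.store:
            self.store[message_id] = self.fetch_upstream(message_id)
            self.upstream_fetches += 1
        return self.store[message_id]

if __name__ == "__main__":
    cache = PullThroughCache(lambda mid: f"body of {mid}".encode())
    for _ in range(1000):                       # a thousand readers of one post
        cache.article("<popular@example.net>")
    print("upstream fetches:", cache.upstream_fetches)   # -> 1
```

The economics are the same as for a web cache: transit cost scales with what is actually read, not with what is posted.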
On 3 Feb 2002, Paul Vixie wrote:
Pull it, rather than pushing it. nntpcache is a localized example of how [...]
Proposed by someone every couple of months for the last 10 years (at least). The current software (Diablo especially) even supports it to a good extent; however, nobody is doing it for some reason.
Pushing netnews, with or without multicast, with or without binaries, is just unthinkable at today's volumes, but we do it anyway. The effects of increased volume have decreased the utilization of netnews as a medium amongst my various friends.
Totally wrong on the non-binaries feed bit. A non-binaries feed is around 1-2GB per day, or 100-200kb/s, which is below the noise level for anyone on this list. Even on the semi-3rd-world wages I make, I could afford a non-binaries feed to my house and archive it for less than I spend on lunches.

Binaries, on the other hand, are completely different: most people can't afford them, and we are moving to a centralized model with the Supernews-type companies being the only ones with full feeds out there.

I am really surprised that the RIAA and similar groups haven't "gone after" usenet to any great degree yet. I can't really see how binaries newsgroups differ to any great extent (from the copyright angle) from your random p2p network. Once a few lawsuits are issued (does the ISC count as a distributor?) against the dozen or so top news providers, things could be quite interesting.

--
Simon Lyall.                 | Newsmaster   | Work: simon.lyall@ihug.co.nz
Senior Network/System Admin  | Postmaster   | Home: simon@darkmere.gen.nz
ihug, Auckland, NZ           | Asst Doorman | Web: http://www.darkmere.gen.nz
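For reference, a quick conversion of the daily-volume figures in this thread into sustained bit rates (plain unit arithmetic, not data from the thread):

```python
def gb_per_day_to_kbps(gb_per_day: float) -> float:
    """Convert a daily transfer volume in gigabytes to a sustained rate in kilobits/s."""
    return gb_per_day * 1e9 * 8 / 86_400 / 1e3

for gb in (1, 2, 300):
    print(f"{gb:>3} GB/day ~= {gb_per_day_to_kbps(gb):7.0f} kb/s")
# 1-2 GB/day works out to roughly 93-185 kb/s, consistent with the 100-200kb/s
# quoted above for a text-only feed; 300 GB/day is about 28 Mb/s, close to the
# full-feed figure discussed at the top of the thread.
```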
On Mon, 04 Feb 2002 11:47:24 +1300, Simon Lyall <simon.lyall@ihug.co.nz> said:
I am really surprised that the RIAA and similar groups haven't "gone after" usenet to any great degree yet. I can't really see how binaries newsgroups differ to any great extent (from the copyright angle) from your random p2p network.

Once a few lawsuits are issued (does the ISC count as a distributor?) against the dozen or so top news providers, things could be quite interesting.
At least in the US, the RIAA couldn't bring much of a suit against the ISP, for the same reason they don't usually bring suits against the ISPs on the p2p side of things: 17 USC 512 protects the ISP from liability for behavior on the part of their users *if* the ISP is cooperative in dealing with the problem when notified of infringing material. http://www4.law.cornell.edu/uscode/17/512.html

We see a fair number of form-letter complaints from the RIAA regarding p2p and other file-sharing by our users. Every once in a great while we see a complaint about netnews, and our standard reply is usually "Hmm.. it was in an alt.* newsgroup? It's been over 48 hours? It's been over-written already..."

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
participants (11):

- Alex Rubenstein
- James Fidell
- Jared Mauch
- jlewis@lewis.org
- Marco d'Itri
- measl@mfn.org
- Paul Vixie
- Rafi Sadowsky
- Simon Lyall
- Stephen J. Wilcox
- Valdis.Kletnieks@vt.edu