Dear Andrew,

On Jan 6, 2007, at 10:09 AM, Andrew Odlyzko wrote:
A remark and a question:
<snip>
2. The question I don't understand is, why stream? In these days, when a terabyte disk for consumer PCs is about to be introduced, why bother with streaming? It is so much simpler to download (at faster than real-time rates, if possible) and play it back.
I can answer that very simply for myself: we are now making a profit with streaming, from advertising. To answer what I suspect is your deeper question: broadcast is a push model, and it will not go away. In fact, I think that the Internet will revitalize the "long tail" in video content, and broadcast will be a crucial part of that. It has, after all, been making money for over a century now. Download appears to be very similar, but it is really not the same business model at all, IMHO. That doesn't mean it's bad or worse; it may even be better, but it's different. And as long as you can make a profit from broadcasting / streaming...
Andrew
Regards Marshall
On Sat, 6 Jan 2007, Marshall Eubanks wrote:
Note that 220 MB per hour (ugly units) is 489 Kbps, slightly less than our current usage.
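For anyone checking that conversion, it is just unit arithmetic (a quick sketch in Python; decimal megabytes assumed):

    mb_per_hour = 220
    kbps = mb_per_hour * 8_000_000 / 3600 / 1000   # MB/hour -> kilobits/second
    print(round(kbps))                             # -> 489, matching the figure above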
The more popular the content is, the more sources it can be pulled from and the less redundant data we send, and that number can be as low as 220MB per hour viewed. (Actually, I find this a tough thing to explain to people in general; it's really counterintuitive to see that more peers == less bandwidth. I'm still searching for a useful user-facing metaphor; anyone got any ideas?)
Why not just say: the more peers, the more efficient it becomes, as it approaches the bandwidth floor set by the chosen streaming rate?
Regards Marshall
On Jan 6, 2007, at 9:07 AM, Colm MacCarthaigh wrote:
On Sat, Jan 06, 2007 at 03:18:03AM -0500, Robert Boyle wrote:
At 01:52 AM 1/6/2007, Thomas Leavitt <thomas@thomasleavitt.org> wrote:
If this application takes off, I have to presume that everyone's baseline network usage metrics can be tossed out the window...
That's a strong possibility :-)
I'm currently the network person for The Venice Project, and busy building out our network, but also involved in the design and planning work and a bunch of other things.
I'll try to answer any questions I can. I may be a little restricted in revealing details of forthcoming developments and so on, so please forgive me if there's something I can't answer later; for now I'll try to answer any of the technicalities. Our philosophy is to be pretty open about how we work and what we do.
We're actually working on more general-purpose explanations of all this, which we'll be putting on-line soon. I'm not from our PR dept, or a spokesperson; just a long-time NANOG reader and occasional poster answering technical stuff here, so please don't just post the archive link to digg/slashdot or whatever.
The Venice Project will affect network operators, and we're working on a range of different things which may help out there. We've designed our traffic to be easily categorisable (I wish we could mark a DSCP, but the levels of access needed on some platforms are just too restrictive; see the sketch below) and we know how the real internet works. Already we have aggregate per-AS usage statistics, and have some primitive network-proximity clustering. AS-level clustering is planned.
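For concreteness, here is what marking a DSCP from an application looks like on most Unix-like stacks. This is an illustrative sketch, not Venice Project code, and the AF11 class is an arbitrary choice; the catch alluded to above is that some platforms silently ignore the option or require administrative rights to set it.

    import socket

    AF11 = 0x0A  # example DSCP class; any six-bit codepoint is set the same way
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The DSCP occupies the top six bits of the old IPv4 TOS byte,
    # hence the shift by two.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, AF11 << 2)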
This will reduce transit costs, but there's not much we can do for other infrastructural, L2, or last-mile costs. We're L3 and above only. Additionally, we predict a healthy chunk of usage will go to our "Long tail servers", which are explained a bit here:
http://www.vipeers.com/vipeers/2007/01/venice_project_.html
and in the next 6 months or so, we hope to turn up at IX's and arrange private peerings to defray the transit cost of that traffic too. Right now, our main transit provider is BT (AS5400) who are at some well-known IX's.
Interesting. Why does it send so much data?
It's full-screen, TV-quality video :-) After adding all the overhead for the p2p protocol and stream resilience, we still only use a maximum of 320MB per viewing hour.
The more popular the content is, the more sources it can be pulled from and the less redundant data we send, and that number can be as low as 220MB per hour viewed. (Actually, I find this a tough thing to explain to people in general; it's really counterintuitive to see that more peers == less bandwidth. I'm still searching for a useful user-facing metaphor; anyone got any ideas?)
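One way to make the intuition concrete is a toy model (the 1/n falloff is an assumption for illustration, not the real protocol): treat the stream itself as a fixed 220MB/hour floor and split the redundancy overhead across however many peers hold distinct pieces.

    # Toy model only: overhead assumed to fall off as 1/peers.
    STREAM_MB_PER_HOUR = 220        # the floor set by the chosen stream rate
    MAX_OVERHEAD = 320 / 220 - 1    # ~45% overhead in the single-source worst case

    def mb_per_viewing_hour(peers):
        return STREAM_MB_PER_HOUR * (1 + MAX_OVERHEAD / peers)

    for n in (1, 2, 5, 20):
        print(n, round(mb_per_viewing_hour(n)))   # 320, 270, 240, 225

With one source you pay the full 320MB/hour; by twenty peers you are within a few percent of the floor, which is the "bandwidth floor" framing Marshall suggests above.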
To put that in context: a 45-minute episode grabbed from a file-sharing network will generally eat 350MB on disk; slightly more is used on the wire once you account for even the ~2% TCP/IP overhead and p2p protocol headers. And it will usually take longer than 45 minutes to get there.
Compressed digital television works out at between 900MB and 3GB an hour viewed (raw is in the tens of gigabytes). DVD is of the same order. YouTube works out at about 80MB to 230MB per hour, for a mini-screen (though I'm open to correction on that; I've just multiplied the bitrates out).
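The "multiplying the bitrates out" step is easy to reproduce (the sample bitrates below are illustrative guesses, not measurements):

    def mb_per_hour(kbps):
        # kilobits/second -> megabytes per viewing hour, decimal units
        return kbps * 1000 / 8 * 3600 / 1_000_000

    print(round(mb_per_hour(200)))    # ~90MB/hour: low-end small-screen video
    print(round(mb_per_hour(500)))    # ~225MB/hour: high-end small-screen video
    print(round(mb_per_hour(2000)))   # ~900MB/hour: low end of compressed digital TV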
Is it a peer-to-peer type of system, where it redistributes a portion of the stream to other users as you are viewing it?
Yes, though not necessarily as you are viewing it. A proportion of what you have viewed previously is cached and can be made available to other peers.
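A minimal sketch of that idea, with names and structure that are mine rather than the actual client's: keep a bounded cache of recently viewed chunks and serve them back on request.

    from collections import OrderedDict

    class ChunkCache:
        # Bounded store of recently viewed chunks, re-served to peers later.
        def __init__(self, max_chunks):
            self.chunks = OrderedDict()
            self.max_chunks = max_chunks

        def store(self, chunk_id, data):
            self.chunks[chunk_id] = data
            self.chunks.move_to_end(chunk_id)     # treat re-stores as fresh
            if len(self.chunks) > self.max_chunks:
                self.chunks.popitem(last=False)   # evict the oldest chunk

        def serve(self, chunk_id):
            return self.chunks.get(chunk_id)      # None if never viewed or evicted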
--
Colm MacCárthaigh    Public Key: colm+pgp@stdlib.net