Re: Network end users to pull down 2 gigabytes a day, continuously?
Increased bandwidth consumption does not necessarily cost money on most ISP infrastructure. At my home I have a fairly typical ISP service using BT's DSL. If I use a P2P network to download files from other BT DSL users, then it doesn't cost me a penny more than the basic DSL service. It also doesn't cost BT any more
It does cost BT; as I said, someone pays even if it's not obvious to the user
The only time that costs increase is when I download data from outside of BT's network, because the increased traffic requires larger circuits or more circuits, etc.
Incorrect; DSLAM backhaul costs money regardless of where the traffic comes from. ISPs pay for that, and it costs more than transit
The real problem with P2P networks is that they don't generally make download decisions based on network architecture.
Indeed, that's what I said. Until then ISPs can only fix it with P2P-aware caches; if the protocols did it then they wouldn't need the caches, though P2P efficiency may go down. It'll be interesting to see how Akamai & co. counter this trend. At the moment they can say it's better to use a local Akamai cluster than have P2P taking content from anywhere on the planet. Once it's mostly local traffic then it's pretty much equivalent to Akamai. It's still moving routing/TE up the stack though, so it will affect the ISPs' network ops.
I have to admit that I have no idea how BT charges ISPs for wholesale ADSL.
Hence your first assertion is unfounded
If there is indeed some kind of metered charging then Internet video will be a big problem for the business model.
It is; BT Wholesale don't give it away (whether usage- or capacity-based, ISPs still have to pay)
The difference with P2P is that caching is built-in to the model, therefore 100% of users participate in caching.
But those caches are in the wrong place; they'd be better at the ISP end of the ADSL
With HTTP, caches are far from universal, especially to non-business users.
Doesn't matter; if they're buying P2P caches they could buy HTTP ones if they don't have one already. The point is they're buying something.

brandon
On Jan 7, 2007, at 3:17 PM, Brandon Butterworth wrote:
The real problem with P2P networks is that they don't generally make download decisions based on network architecture.
Indeed, that's what I said. Until then ISPs can only fix it with P2P aware caches, if the protocols did it then they wouldn't need the caches though P2P efficiency may go down
It'll be interesting to see how Akamai & co. counter this trend. At the moment they can say it's better to use a local Akamai cluster than have P2P taking content from anywhere on the planet. Once it's mostly local traffic then it's pretty much equivalent to Akamai. It's still moving routing/TE up the stack though so will affect the ISPs network ops.
ISPs don't pay Akamai, content owners do. Content owners are usually not concerned with the same things an ISP's network ops are. (I'm not saying that's a good thing, I'm just saying that is reality. Life might be much better all around if the two groups interacted more. Although one could say that Akamai fills that gap as well. :)

Anyway, a content provider is going to do what's best for their content, not what's best for the ISP. It's a difficult argument to make to a content provider that putting their content on millions of end-user HDs, and depending on grandma to provide good quality streaming to Joe Smith down the street, is a good idea. At least in my experience.

--
TTFN, patrick
On 7-Jan-2007, at 15:17, Brandon Butterworth wrote:
The only time that costs increase is when I download data from outside of BT's network, because the increased traffic requires larger circuits or more circuits, etc.
Incorrect, DSLAM backhaul costs regardless of where the traffic comes from. ISPs pay for that, it costs more than transit
Setting aside the issue of what particular ISPs today have to pay, the real cost of sending data best-effort over an existing network which has spare capacity, and which is already supported and managed, is surely zero.

If I acquire content while I'm sleeping, during a low dip in my ISP's usage profile, the chances are good that nobody incurs more costs that month than if I had decided not to acquire it. (For example, you might imagine an RSS feed with BitTorrent enclosures, which requires no human presence to trigger the downloads.)

If I acquire content at the same time as many other people, because what I'm watching is some coordinated, streaming event, then it seems far more likely that the popularity of the content will lead to network congestion, or push up a peak on an interface somewhere which will lead to a requirement for a circuit upgrade, or affect a 95%ile transit cost, or something.

If asynchronous delivery of content is as free as I think it is, and synchronous delivery of content is as expensive as I suspect it might be, it follows that there ought to be more of the former than the latter going on. If it turned out that there was several orders of magnitude more content being shifted around the Internet in a "download when you are able; watch later" fashion than there is content being streamed to viewers in real-time, I would be thoroughly unsurprised.

Joe
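A minimal sketch of the kind of unattended, off-peak fetcher described above. The feed URL and the 01:00-06:00 "low dip" window are illustrative assumptions, not details from the thread, and a real setup would hand .torrent enclosures to a BitTorrent client rather than fetching them directly:

#!/usr/bin/env python3
# Poll an RSS feed for enclosure URLs and fetch them only during an
# assumed off-peak window, so no human needs to be awake to trigger it.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.net/shows.rss"   # hypothetical feed
OFF_PEAK_HOURS = range(1, 6)                # 01:00-05:59 local time (assumed)
seen = set()                                # enclosures already fetched

def enclosure_urls(feed_xml):
    # Return the url attribute of every <enclosure> element in the feed.
    root = ET.fromstring(feed_xml)
    return [e.get("url") for e in root.iter("enclosure") if e.get("url")]

def fetch(url):
    name = url.rsplit("/", 1)[-1] or "download.bin"
    urllib.request.urlretrieve(url, name)

while True:
    if time.localtime().tm_hour in OFF_PEAK_HOURS:
        feed = urllib.request.urlopen(FEED_URL).read()
        for url in enclosure_urls(feed):
            if url not in seen:
                fetch(url)
                seen.add(url)
    time.sleep(30 * 60)                     # check again in half an hour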
I may have missed it in previous posts, but I think an important point is being missed in much of this discussion: take rate. An assumption being made is one of widespread, long-duration usage. I would argue consumers have little interest in viewing content for more than a few hundred seconds on their PC. Further, existing solutions for media extension to the television are gaining very little foothold outside of technophiles. They tend to be more complex for the average user than many vendors seemingly realize. While Apple may help in this arena, there are many other obstacles to widespread usage of streaming video outside of media extension.

In entertainment, content is king. More specifically, new release content is king. While internet distribution may help breathe life into the long tail market, it is hard to imagine any major shift from existing distribution methods. People simply like the latest TV shows and the latest movies.

So, this leaves us with little more than what is already offered by the MSOs: linear TV and VoD. This is where things become complex. The studios will never (not any time soon) allow for a subscription-based VoD on new content. They would instantly be sued by Time Warner (HBO). This leaves us with a non-subscription VoD option, which still requires an agreement with each of the major studios, and would likely cost a fortune to obtain. CinemaNow and MovieLink have done this successfully, and use a PushVoD model to distribute their content. CinemaNow allows DVD burning for some of their content, but both companies are otherwise tied to the PC (without a media extender). Furthermore, the download wait is a pain. Their content is good quality 1200-1500 kbps VC-1 *wince*. It is really hard to say when and if either of these will take off as a service. It is a good service, with a great product, and almost no market at the moment. Get it on the TV and things may change dramatically.

This leaves us with linear TV, which is another acquisition nightmare. It is very difficult to acquire pass-through/distribution rights for linear television, especially via IP. Without deep pockets, a company might be spinning their wheels trying to get popular channels onto their lineup. And good luck trying to acquire the rights to push linear TV outside of a closed network. The studios will hear none of it.

I guess where I am going with all this is simply that it is very hard to make this work from a business and marketing side. The network constraints are, likely, a minor issue for some time to come. Interest is low in the public at large for primary (or even major secondary) video service on the PC. By the time interest in the product swells and content providers ease some of their more stringent rules for content distribution, a better solution for multicasting the content will have presented itself. I would argue streaming video across the Internet to a large audience, direct to subscribers, is probably 4+ years away at best. I am not saying we throw in the towel on this problem, but I do think unicast streaming has a limited scope and short life-span for prime content. IPv6 multicast is the real long-term solution for Internet video to a wide audience.

Of course, there is the other argument. The ILECs and MSOs will keep it from ever getting beyond a unicast model. Why let the competition in, right? *sniff* I smell lobbyists and legislation. :-)

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Gian Constantine Sent: Sunday, January 07, 2007 7:18 PM To: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
In entertainment, content is king. More specifically, new release content is king. While internet distribution may help breathe life into the long tail market, it is hard to imagine any major shift from existing distribution methods. People simply like the latest TV shows and the latest movies.
What's new to you is very different from what's new to me. I am very happy watching 1 year old episodes of Top Gear, whereas if you are located in the UK, you may consider this old news. The story here is about the cost of storing the video content (which is asymptotically zero) and the cost of distributing it (which is also asymptotically approaching zero, despite the ire of the SPs).
So, this leaves us with little more than what is already offered by the MSOs: linear TV and VoD. This is where things become complex.
The studios will never (not any time soon) allow for a subscription based VoD on new content. They would instantly be sued by Time Warner (HBO).
This is a very US-centric view of the world. I am sure there are hundreds of TV stations from India, Turkey, Greece, etc that would love to put their content online and make money off the long tail.
I guess where I am going with all this is simply it is very hard to make this work from a business and marketing side. The network constraints are, likely, a minor issue for some time to come. Interest is low in the public at large for primary (or even major secondary) video service on the PC.
Again, your views are very US-centric, and are mono-cultural. If you open your horizons, I think there is a world of content out there that the content owners would be happy to license and sell at < 10 cents a pop. To them it is dead content, but it turns out that it is worth something to someone out there. This is what iTunes and Rhapsody are doing with music. And the day of the video is coming.

Bora

-- Off to raise some venture funds now. (Just kidding ;)
Well, yes. My view on this subject is U.S.-centric. In fairness to me, this is NANOG, not AFNOG or EuroNOG or SANOG.

I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.

You are correct on the long tail nature of music. But music is not consumed in a similar manner as TV and movies. Television and movies involve a little more commitment and attention. Music is more for the moment and the mood. There is an immediacy with music consumption. Movies and television require a slight degree more patience from the consumer. The freshness (debatable :-) ) of new release movies and TV can often command the required patience from the consumer. Older content rarely has the same pull.

I agree there is a market for ethnic and niche content, but it is not the broad market many companies look for. The investment becomes much more of a gamble than marketing the latest and greatest (again debatable :-) ) to the larger market of...well...everyone.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
Office: 404-748-6207
Cell: 404-808-4651
Internal Ext: x22007
constantinegi@corp.earthlink.net

On Jan 8, 2007, at 5:15 PM, Bora Akyol wrote:
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Gian Constantine Sent: Sunday, January 07, 2007 7:18 PM To: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
In entertainment, content is king. More specifically, new release content is king. While internet distribution may help breathe life into the long tail market, it is hard to imagine any major shift from existing distribution methods. People simply like the latest TV shows and the latest movies.
What's new to you is very different from what's new to me?
I am very happy watching 1 year old episodes of Top Gear whereas if you are located in the UK, you may consider this as old news.
The story here is about the cost of storing the video content (which is asymptotically zero) and the cost of distributing it (which is also asymptotically approaching zero, despite the ire of the SPs).
So, this leaves us with little more than what is already offered by the MSOs: linear TV and VoD. This is where things become complex.
The studios will never (not any time soon) allow for a subscription based VoD on new content. They would instantly be sued by Time Warner (HBO).
This is a very US-centric view of the world. I am sure there are hundreds of TV stations from India, Turkey, Greece, etc that would love to put their content online and make money off the long tail.
I guess where I am going with all this is simply it is very hard to make this work from a business and marketing side. The network constraints are, likely, a minor issue for some time to come. Interest is low in the public at large for primary (or even major secondary) video service on the PC.
Again, your views are very US centric, and are mono-cultural.
If you open your horizons, I think there is a world of content out there that the content owners would be happy to license and sell at < 10 cents a pop. To them it is dead content, but it turns out that they are worth something to someone out there. This is what iTunes, and Rhapsody are doing with music. And the day of the video is coming.
Bora
-- Off to raise some venture funds now. (Just kidding ;)
Please see my comments inline:
-----Original Message----- From: Gian Constantine [mailto:constantinegi@corp.earthlink.net] Sent: Monday, January 08, 2007 4:27 PM To: Bora Akyol Cc: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
To the end user, there is no cost to downloading videos when they are sleeping. I would argue that other than sports (and some news) events, there is pretty much no content that needs to be real time. What the downloading (possibly 24x7) does is stress the ISP network to its max, since the assumptions of statistical multiplexing go out the window. Think of a Tivo that downloads content off the Internet 24x7. The user is still paying only what they pay each month, and this is "network neutrality 2.0" all over again.
You are correct on the long tail nature of music. But music is not consumed in a similar manner as TV and movies. Television and movies involve a little more commitment and attention. Music is more for the moment and the mood. There is an immediacy with music consumption. Movies and television require a slight degree more patience from the consumer. The freshness (debatable :-) ) of new release movies and TV can often command the required patience from the consumer. Older content rarely has the same pull.
I would argue against your distinction between visual and auditory content. There is a lot of content out there that a lot of people watch and the content is 20-40+ years old. Think Brady Bunch, Bonanza, or archived games from the NFL, MLB, etc. What about the Smurfs (for those of us with kids)? This is only the beginning. If I can get a 500GB box and download MP4 content, that's a lot of essentially free storage.

Coming back to NANOG content, I think video (not streamed but multi-path distributed video) is going to bring the networks down not by sheer bandwidth alone but by challenging the assumptions behind the engineering of the network. I don't think you need huge SANs per se to store the content either; since it is multi-source/multi-sink, the reliability is built in. The SPs like Verizon & AT&T, moving fiber to the home hoping to get in on the "value add" action, are in for an awakening IMHO.

Regards
Bora

ps. I apologize for the tone of my previous email. That sounded grumpier than I usually am.
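A rough back-of-the-envelope on the statistical-multiplexing point above. Every number here (subscriber count, access rate, busy-hour factors, trickle rate) is an invented illustration, not something from the thread; the point is only that a constant 24x7 trickle per subscriber can dwarf a busy-hour estimate that assumes most lines are idle most of the time:

# Toy dimensioning example: one aggregation link serving DSL subscribers.
# All inputs are assumptions chosen for illustration.
subs = 10_000                  # subscribers behind the link
access_mbps = 3.0              # sync rate per line

# Classic stat-mux sizing: only a fraction of lines are active at the
# busy hour, and active lines rarely run flat out.
peak_active_fraction = 0.10
utilisation_when_active = 0.5
statmux_gbps = subs * access_mbps * peak_active_fraction * utilisation_when_active / 1000

# A Tivo-like box trickling content around the clock adds load that is
# still there at the busy hour, on every line.
trickle_kbps = 500
constant_gbps = subs * trickle_kbps / 1_000_000

print(f"busy-hour estimate with stat-mux assumptions: {statmux_gbps:.1f} Gbit/s")
print(f"constant 24x7 trickle on top of that:         {constant_gbps:.1f} Gbit/s")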
There may have been a disconnect on my part, or at least, a failure to disclose my position. I am looking at things from a provider standpoint, whether as an ISP or a strict video service provider.

I agree with you. From a consumer standpoint, a trickle or off-peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download: good 720p at ~5500kbps works out to roughly 30 gigabits, or about a 3.7GB file, for a 90 minute title.

Service providers wishing to provide this service to their customers may see some success where they control the access medium (copper loop, coax, FTTH). Offering such a service to customers outside of this scope would prove very expensive, and likely, would never see a return on the investment without extensive peering arrangements. Even then, distribution rights would be very difficult to attain without very deep pockets and crippling revenue sharing. The studios really dislike the idea of transmission outside of a closed network. Don't forget: even the titles you mentioned are still owned by very large companies interested in squeezing every possible dime from their assets. They would not be cheap to acquire.

Further, torrent-like distribution is a long, long way away from sign-off by the content providers. They see torrents as the number one tool of content piracy. This is a major reason I see the discussion of tripping upstream usage limits through content distribution as moot.

I am with you on the vision of massive content libraries at the fingertips of all, but I see many roadblocks in the way. And, almost none of them are technical in nature.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
Office: 404-748-6207
Cell: 404-808-4651
Internal Ext: x22007
constantinegi@corp.earthlink.net

On Jan 8, 2007, at 7:51 PM, Bora Akyol wrote:
Please see my comments inline:
-----Original Message----- From: Gian Constantine [mailto:constantinegi@corp.earthlink.net] Sent: Monday, January 08, 2007 4:27 PM To: Bora Akyol Cc: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
To the end user, there is no cost to downloading videos when they are sleeping. I would argue that other than sports (and some news) events, there is pretty much no content that needs to be real time. What the downloading (possibly 24x7) does is to stress the ISP network to its max since the assumptions of statistical multiplexing goes out the window. Think of a Tivo that downloads content off the Internet 24x7.
The user is still paying for only what they pay each month, and this is "network neutrality 2.0" all over again.
You are correct on the long tail nature of music. But music is not consumed in a similar manner as TV and movies. Television and movies involve a little more commitment and attention. Music is more for the moment and the mood. There is an immediacy with music consumption. Movies and television require a slight degree more patience from the consumer. The freshness (debatable :-) ) of new release movies and TV can often command the required patience from the consumer. Older content rarely has the same pull.
I would argue against your distinction between visual and auditory content. There is a lot of content out there that a lot of people watch and the content is 20-40+ years old. Think Brady Bunch, Bonanza, or archived games from NFL, MLB etc. What about Smurfs (for those of us with kids)?
This is only the beginning.
If I can get a 500GB box and download MP4 content, that's a lot of essentially free storage.
Coming back to NANOG content, I think video (not streamed but multi- path distributed video) is going to bring the networks down not by sheer bandwidth alone but by challenging the assumptions behind the engineering of the network. I don't think you need huge SANs per se to store the content either, since it is multi-source/multi-sink, the reliability is built-in.
The SPs like Verizon & ATT moving fiber to the home hoping to get in on the "value add" action are in for an awakening IMHO.
Regards
Bora ps. I apologize for the tone of my previous email. That sounded grumpier than I usually am.
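For reference, the SD and HD file sizes Gian gives above follow directly from bitrate times duration; a quick sanity check, assuming nothing beyond the bitrates already quoted:

def file_size_gb(kbps, minutes):
    # Gigabytes of a stream at `kbps` kilobits/second for `minutes` minutes.
    return kbps * 1000 * minutes * 60 / 8 / 1e9

print(file_size_gb(1350, 90))   # ~0.9 GB for SD at ~1200-1500 kbps
print(file_size_gb(5500, 90))   # ~3.7 GB (about 30 gigabits) for 720p at ~5500 kbps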
So, kind of back to the original question: what is going to be the reaction of your average service provider to the presence of an increasing number of people sucking down massive amounts of video and spitting it back out again... nothing? throttling all traffic of a certain type? shutting down customers who exceed certain thresholds, or just throttling their traffic? massive upgrades of internal network hardware?

Is it your contention that there's no economic model, given the architecture of current networks, which would generate enough revenue to offset the cost of traffic generated by P2P video?

Thomas

Gian Constantine wrote:
There may have been a disconnect on my part, or at least, a failure to disclose my position. I am looking at things from a provider standpoint, whether as an ISP or a strict video service provider.
I agree with you. From a consumer standpoint, a trickle or off-peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download: good 720p at ~5500kbps works out to roughly 30 gigabits, or about a 3.7GB file, for a 90 minute title.
Service providers wishing to provide this service to their customers may see some success where they control the access medium (copper loop, coax, FTTH). Offering such a service to customers outside of this scope would prove very expensive, and likely, would never see a return on the investment without extensive peering arrangements. Even then, distribution rights would be very difficult to attain without very deep pockets and crippling revenue sharing. The studios really dislike the idea of transmission outside of a closed network. Don't forget. Even the titles you mentioned are still owned by very large companies interested in squeezing every possible dime from their assets. They would not be cheap to acquire.
Further, torrent-like distribution is a long long way away from sign off by the content providers. They see torrents as the number one tool of content piracy. This is a major reason I see the discussion of tripping upstream usage limits through content distribution as moot.
I am with you on the vision of massive content libraries at the fingertips of all, but I see many roadblocks in the way. And, almost none of them are technical in nature.
Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net <mailto:constantinegi@corp.earthlink.net>
On Jan 8, 2007, at 7:51 PM, Bora Akyol wrote:
Please see my comments inline:
-----Original Message----- From: Gian Constantine [mailto:constantinegi@corp.earthlink.net] Sent: Monday, January 08, 2007 4:27 PM To: Bora Akyol Cc: nanog@merit.edu <mailto:nanog@merit.edu> Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
To the end user, there is no cost to downloading videos when they are sleeping. I would argue that other than sports (and some news) events, there is pretty much no content that needs to be real time. What the downloading (possibly 24x7) does is to stress the ISP network to its max since the assumptions of statistical multiplexing goes out the window. Think of a Tivo that downloads content off the Internet 24x7.
The user is still paying for only what they pay each month, and this is "network neutrality 2.0" all over again.
You are correct on the long tail nature of music. But music is not consumed in a similar manner as TV and movies. Television and movies involve a little more commitment and attention. Music is more for the moment and the mood. There is an immediacy with music consumption. Movies and television require a slight degree more patience from the consumer. The freshness (debatable :-) ) of new release movies and TV can often command the required patience from the consumer. Older content rarely has the same pull.
I would argue against your distinction between visual and auditory content. There is a lot of content out there that a lot of people watch and the content is 20-40+ years old. Think Brady Bunch, Bonanza, or archived games from NFL, MLB etc. What about Smurfs (for those of us with kids)?
This is only the beginning.
If I can get a 500GB box and download MP4 content, that's a lot of essentially free storage.
Coming back to NANOG content, I think video (not streamed but multi-path distributed video) is going to bring the networks down not by sheer bandwidth alone but by challenging the assumptions behind the engineering of the network. I don't think you need huge SANs per se to store the content either, since it is multi-source/multi-sink, the reliability is built-in.
The SPs like Verizon & ATT moving fiber to the home hoping to get in on the "value add" action are in for an awakening IMHO.
Regards
Bora ps. I apologize for the tone of my previous email. That sounded grumpier than I usually am.
-- Thomas Leavitt - thomas@thomasleavitt.org - 831-295-3917 (cell) *** Independent Systems and Network Consultant, Santa Cruz, CA ***
My contention is simple. The content providers will not allow P2P video as a legal commercial service anytime in the near future. Furthermore, most ISPs are going to side with the content providers on this one. Therefore, discussing it at this point in time is purely academic, or more so, diversionary.

Personally, I am not one for throttling high-use subscribers. Outside of the fine print, which no one reads, they were sold a service of X kbps down and Y kbps up. I could not care less how, when, or how often they use it. If you paid for it, burn it up.

I have questions as to whether or not P2P video is really a smart distribution method for a service provider who controls the access medium. Outside of being a service provider, I think the economic model is weak, when there can be little expectation of a large scale take rate.

Ultimately, my answer is: we're not there yet. The infrastructure isn't there. The content providers aren't there. The market isn't there. The product needs a motivator. This discussion has been putting the cart before the horse. A lot of big picture pieces are completely overlooked. We fail to question whether or not P2P sharing is a good method of delivering the product. There are a lot of factors which play into this. Unfortunately, more interest has been paid to the details of this delivery method than has been paid to whether or not the method is even worthwhile.

From a big picture standpoint, I would say P2P distribution is a non-starter: too many reluctant parties to appease. From a detail standpoint, I would say P2P distribution faces too many hurdles in existing network infrastructure to be justified. Simply reference the discussion of upstream bandwidth caps and you will have a wonderful example of those hurdles.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 8, 2007, at 9:49 PM, Thomas Leavitt wrote:
So, kind of back to the original question: what is going to be the reaction of your average service provider to the presence of an increasing number of people sucking down massive amounts of video and spitting it back out again... nothing? throttling all traffic of a certain type? shutting down customers who exceed certain thresholds? or just throttling their traffic? massive upgrades of internal network hardware?
Is it your contention that there's no economic model, given the architecture of current networks, which would generate enough revenue to offset the cost of traffic generated by P2P video?
Thomas
Gian Constantine wrote:
There may have been a disconnect on my part, or at least, a failure to disclose my position. I am looking at things from a provider standpoint, whether as an ISP or a strict video service provider.
I agree with you. From a consumer standpoint, a trickle or off-peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download: good 720p at ~5500kbps works out to roughly 30 gigabits, or about a 3.7GB file, for a 90 minute title.
Service providers wishing to provide this service to their customers may see some success where they control the access medium (copper loop, coax, FTTH). Offering such a service to customers outside of this scope would prove very expensive, and likely, would never see a return on the investment without extensive peering arrangements. Even then, distribution rights would be very difficult to attain without very deep pockets and crippling revenue sharing. The studios really dislike the idea of transmission outside of a closed network. Don't forget. Even the titles you mentioned are still owned by very large companies interested in squeezing every possible dime from their assets. They would not be cheap to acquire.
Further, torrent-like distribution is a long long way away from sign off by the content providers. They see torrents as the number one tool of content piracy. This is a major reason I see the discussion of tripping upstream usage limits through content distribution as moot.
I am with you on the vision of massive content libraries at the fingertips of all, but I see many roadblocks in the way. And, almost none of them are technical in nature.
Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net <mailto:constantinegi@corp.earthlink.net>
On Jan 8, 2007, at 7:51 PM, Bora Akyol wrote:
Please see my comments inline:
-----Original Message----- From: Gian Constantine [mailto:constantinegi@corp.earthlink.net] Sent: Monday, January 08, 2007 4:27 PM To: Bora Akyol Cc: nanog@merit.edu <mailto:nanog@merit.edu> Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
To the end user, there is no cost to downloading videos when they are sleeping. I would argue that other than sports (and some news) events, there is pretty much no content that needs to be real time. What the downloading (possibly 24x7) does is to stress the ISP network to its max since the assumptions of statistical multiplexing goes out the window. Think of a Tivo that downloads content off the Internet 24x7. The user is still paying for only what they pay each month, and this is "network neutrality 2.0" all over again.
You are correct on the long tail nature of music. But music is not consumed in a similar manner as TV and movies. Television and movies involve a little more commitment and attention. Music is more for the moment and the mood. There is an immediacy with music consumption. Movies and television require a slight degree more patience from the consumer. The freshness (debatable :-) ) of new release movies and TV can often command the required patience from the consumer. Older content rarely has the same pull.
I would argue against your distinction between visual and auditory content. There is a lot of content out there that a lot of people watch and the content is 20-40+ years old. Think Brady Bunch, Bonanza, or archived games from NFL, MLB etc. What about Smurfs (for those of us with kids)?
This is only the beginning.
If I can get a 500GB box and download MP4 content, that's a lot of essentially free storage.
Coming back to NANOG content, I think video (not streamed but multi-path distributed video) is going to bring the networks down not by sheer bandwidth alone but by challenging the assumptions behind the engineering of the network. I don't think you need huge SANs per se to store the content either, since it is multi-source/multi-sink, the reliability is built-in.
The SPs like Verizon & ATT moving fiber to the home hoping to get in on the "value add" action are in for an awakening IMHO.
Regards
Bora ps. I apologize for the tone of my previous email. That sounded grumpier than I usually am.
-- Thomas Leavitt - thomas@thomasleavitt.org - 831-295-3917 (cell)
*** Independent Systems and Network Consultant, Santa Cruz, CA ***
On 8-Jan-2007, at 22:26, Gian Constantine wrote:
My contention is simple. The content providers will not allow P2P video as a legal commercial service anytime in the near future. Furthermore, most ISPs are going to side with the content providers on this one. Therefore, discussing it at this point in time is purely academic, or more so, diversionary.
There are some ISPs in North America who tell me that something like 80% of their traffic *today* is BitTorrent. I don't know how accurate their numbers are, or whether those ISPs form a representative sample, but it certainly seems possible that the traffic exists regardless of the legality of the distribution. If the traffic is real, and growing, the question is neither academic nor diversionary. However, if we close our eyes and accept for a minute that P2P video isn't happening, and all growth in video over the Internet will be in real-time streaming, then I think the future looks a lot more scary. When TSN.CA streamed the World Junior Hockey Championship final via Akamai last Friday, there were several ISPs in Toronto who saw their transit traffic *double* during the game. Joe
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably. If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net On Jan 9, 2007, at 11:01 AM, Joe Abley wrote:
On 8-Jan-2007, at 22:26, Gian Constantine wrote:
My contention is simple. The content providers will not allow P2P video as a legal commercial service anytime in the near future. Furthermore, most ISPs are going to side with the content providers on this one. Therefore, discussing it at this point in time is purely academic, or more so, diversionary.
There are some ISPs in North America who tell me that something like 80% of their traffic *today* is BitTorrent. I don't know how accurate their numbers are, or whether those ISPs form a representative sample, but it certainly seems possible that the traffic exists regardless of the legality of the distribution.
If the traffic is real, and growing, the question is neither academic nor diversionary.
However, if we close our eyes and accept for a minute that P2P video isn't happening, and all growth in video over the Internet will be in real-time streaming, then I think the future looks a lot more scary. When TSN.CA streamed the World Junior Hockey Championship final via Akamai last Friday, there were several ISPs in Toronto who saw their transit traffic *double* during the game.
Joe
On 9-Jan-2007, at 11:29, Gian Constantine wrote:
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
The act of regulating p2p traffic is a bit like playing whack-a-mole. At what point does it cost more to play that game than it costs to build out to carry the traffic?
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
The trouble with IP multicast is that it doesn't exist, in a wide-scale, deployed, inter-provider sense.

Joe
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change. One of my previous assertions was the possibility of streaming video as the major motivator of IPv6 migration. Without it, video streaming to a large market, outside of multicasting in a closed network, is not scalable, and therefore, not feasible. Unicast streaming is a short-term bandwidth-hogging solution without a future at high take rates. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net On Jan 9, 2007, at 11:47 AM, Joe Abley wrote:
On 9-Jan-2007, at 11:29, Gian Constantine wrote:
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
The act of regulating p2p traffic is a bit like playing whack-a-mole. At what point does it cost more to play that game than it costs to build out to carry the traffic?
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
The trouble with IP multicast is that it doesn't exist, in a wide-scale, deployed, inter-provider sense.
Joe
On 9-Jan-2007, at 13:04, Gian Constantine wrote:
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change. One of my previous assertions was the possibility of streaming video as the major motivator of IPv6 migration. Without it, video streaming to a large market, outside of multicasting in a closed network, is not scalable, and therefore, not feasible. Unicast streaming is a short-term bandwidth-hogging solution without a future at high take rates.
So you are of the opinion that inter-domain multicast doesn't exist today for technical reasons, and those technical reasons are fixed in IPv6? Joe
The available address space for multicast in IPv4 is limited. IPv6 vastly expands this space. And here, I may have been guilty of putting the cart before the horse. Inter-AS multicast does not exist today because the motivators are not there. It is absolutely possible, but providers have to want to do it. Consumers need to see some benefit from it. Again, the benefit needs to be seen by a large market. Providers make decisions in the interest of their bottom line. A niche service is not a motivator for inter-AS multicast. If demand for variety in service provider selection grows with the proliferation of IPTV, we may see the required motivation for inter-AS multicast, which puts us in a position to move to the large multicast space available in IPv6.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 9, 2007, at 1:09 PM, Joe Abley wrote:
On 9-Jan-2007, at 13:04, Gian Constantine wrote:
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change. One of my previous assertions was the possibility of streaming video as the major motivator of IPv6 migration. Without it, video streaming to a large market, outside of multicasting in a closed network, is not scalable, and therefore, not feasible. Unicast streaming is a short-term bandwidth-hogging solution without a future at high take rates.
So you are of the opinion that inter-domain multicast doesn't exist today for technical reasons, and those technical reasons are fixed in IPv6?
Joe
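For scale, the address-space part of the claim above is easy to put numbers on. The prefixes below are the standard IANA allocations (224/4 for IPv4 multicast, 232/8 for IPv4 SSM, ff00::/8 for IPv6 multicast); the comparison itself is just arithmetic:

# How many multicast group addresses each scheme offers.
ipv4_any_source = 2 ** 28        # 224.0.0.0/4: 28 bits of group space
ipv4_ssm_per_source = 2 ** 24    # 232.0.0.0/8: 24-bit group ID, reusable per source
ipv6_group_ids = 2 ** 112        # ff00::/8: 112-bit group ID field

print(f"IPv4 ASM groups:        {ipv4_any_source:,}")
print(f"IPv4 SSM groups/source: {ipv4_ssm_per_source:,}")
print(f"IPv6 group IDs:         {float(ipv6_group_ids):.3e}")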
On Jan 9, 2007, at 1:04 PM, Gian Constantine wrote:
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change.
I am curious. Why do you think that ? Regards Marshall
One of my previous assertions was the possibility of streaming video as the major motivator of IPv6 migration. Without it, video streaming to a large market, outside of multicasting in a closed network, is not scalable, and therefore, not feasible. Unicast streaming is a short-term bandwidth-hogging solution without a future at high take rates.
Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net
On Jan 9, 2007, at 11:47 AM, Joe Abley wrote:
On 9-Jan-2007, at 11:29, Gian Constantine wrote:
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
The act of regulating p2p traffic is a bit like playing whack-a-mole. At what point does it cost more to play that game than it costs to build out to carry the traffic?
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
The trouble with IP multicast is that it doesn't exist, in a wide-scale, deployed, inter-provider sense.
Joe
This is a little presumptuous on my part, but what other reason would motivate a migration to IPv6? I fail to see us running out of unicast addresses any time soon. I have been hearing that IPv6 is coming for many years now. I think video service is really the only motivation for migrating. I am wrong on plenty of things. This may very well be one of them. :-)

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.

On Jan 9, 2007, at 1:21 PM, Marshall Eubanks wrote:
On Jan 9, 2007, at 1:04 PM, Gian Constantine wrote:
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change.
I am curious. Why do you think that ?
Regards Marshall
One of my previous assertions was the possibility of streaming video as the major motivator of IPv6 migration. Without it, video streaming to a large market, outside of multicasting in a closed network, is not scalable, and therefore, not feasible. Unicast streaming is a short-term bandwidth-hogging solution without a future at high take rates.
Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net
On Jan 9, 2007, at 11:47 AM, Joe Abley wrote:
On 9-Jan-2007, at 11:29, Gian Constantine wrote:
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
The act of regulating p2p traffic is a bit like playing whack-a-mole. At what point does it cost more to play that game than it costs to build out to carry the traffic?
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
The trouble with IP multicast is that it doesn't exist, in a wide-scale, deployed, inter-provider sense.
Joe
On Tue, 09 Jan 2007 13:55:47 EST, Gian Constantine said:
This is a little presumptuous on my part, but what other reason would motivate a migration to IPv6. I fail to see us running out of unicast addresses any time soon.
That's OK, I don't see us running out of multicast addresses any time soon either. :)
On Tue, 9 Jan 2007 13:21:38 -0500 Marshall Eubanks <tme@multicasttech.com> wrote:
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change.
I am curious. Why do you think that ?
I could have said the same thing, but with the opposite meaning. You take one 10+ year old technology with minimal deployment, put it on top of another 10+ year old technology also far from being widely deployed, and you end up with something quickly approaching zero deployment, instantly. :-)

John
Fair enough. :-) Nearly everything has a time and place, though. Pretty much everything on this thread is speculative. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net On Jan 9, 2007, at 2:13 PM, John Kristoff wrote:
On Tue, 9 Jan 2007 13:21:38 -0500 Marshall Eubanks <tme@multicasttech.com> wrote:
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change.
I am curious. Why do you think that ?
I could have said the same thing, but with an opposite end meaning. You take one 10+ year technology with minimal deployment and put it on top of another 10+ year technology also far from being widely deployed and you end up with something quickly approaching zero deployment, instantly. :-)
John
On Tue, 9 Jan 2007, Gian Constantine wrote:
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
Which is why ISPs will see all of the above. There will be store-and-forward video, streaming video, on demand video, real-time interactive video, and probably 10 other types I can't think of.

The concern for university or ISP networks isn't that some traffic uses 70% of their network, it's that 5% of the users are using 70%, 80%, 90%, 100% of their network, regardless of what that traffic is. It isn't "background" traffic using "excess" capacity; it peaks at the same time as other peak traffic times. P2P congestion isn't constrained to a single transit bottleneck; it causes bottlenecks in every path, local and transit. Local congestion is often more of a concern than transit.

The big question is whether the 5% of the users will continue to pay for 5% of the network, or if they use 70% of the network will they pay for 70% of the network? Will 95% of the users see their prices fall and 5% of the users see their prices rise?
On Tue, 09 Jan 2007 11:29:32 EST, Gian Constantine said:
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
Multicast streaming may be a big win when you're only streaming the top 5 or 10 networks (for some value of 5 or 10). What's the performance characteristics if you have 300K customers, and at any given time, 10% are watching something from the "long tail" - what's the difference between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
On Tue, 9 Jan 2007, Valdis.Kletnieks@vt.edu wrote:
Multicast streaming may be a big win when you're only streaming the top 5 or 10 networks (for some value of 5 or 10). What's the performance characteristics if you have 300K customers, and at any given time, 10% are watching something from the "long tail" - what's the difference between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
1/2, 1/3, etc. the bandwidth for each additional viewer of the same stream? The worst case for a multicast stream is the same as the unicast stream, but the unicast stream is always the worst case.

Multicast doesn't have to be real-time. If you collect interested subscribers over a longer time period, e.g. scheduled downloads over the next hour, day, week, month, you can aggregate more multicast receivers through the same stream. TiVo collects its content using a broadcast schedule.

A "long tail" distribution includes not only the tail, but also the head. 30K unicast streams may be the same as 30K multicast streams, but 30K multicast streams is a lot better than 300,000 unicast streams. Although the long tail streams may have 1, 2, 3 receivers of a stream, the Pareto curve also has 1, 2, 3 streams with 50K, 25K, 12K receivers.

With Source-Specific Multicast addressing there isn't a shortage of multicast addresses for the typical broadcast usage. At least not until we also run out of IPv4 unicast addresses.

There is rarely only one way to solve a problem. There will be multiple ways to distribute data, video, voice, etc.
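A toy illustration of the head-versus-tail point above. The viewer counts are invented (a few streams from the head of the curve plus a long tail of one- and two-viewer groups), but they show where the multicast savings actually come from:

# Invented audience: three popular streams plus 30K barely-watched ones.
head = [50_000, 25_000, 12_000]          # receivers on the popular streams
tail = [2] * 10_000 + [1] * 20_000       # 30K tail streams with 1-2 receivers each

audiences = head + tail
unicast_streams = sum(audiences)         # one copy per receiver
multicast_streams = len(audiences)       # one copy per group, regardless of fan-out

print(f"unicast streams:   {unicast_streams:,}")    # 127,000
print(f"multicast streams: {multicast_streams:,}")  # 30,003
# Nearly all of the difference comes from the head; the one-viewer tail
# groups cost the same either way.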
There you go. SSM would be a great solution. Who the hell supports it, though? We still get back to the issue of large scale market acceptance. High take rate will be limited to the more popular channels, which are run by large media conglomerates, who are reluctant to let streams out of a closed network. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net On Jan 10, 2007, at 12:08 AM, Sean Donelan wrote:
On Tue, 9 Jan 2007, Valdis.Kletnieks@vt.edu wrote:
Multicast streaming may be a big win when you're only streaming the top 5 or 10 networks (for some value of 5 or 10). What's the performance characteristics if you have 300K customers, and at any given time, 10% are watching something from the "long tail" - what's the difference between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
1/2, 1/3, etc the bandwidth for each additional viewer of the same stream? The worst case for a multicast stream is the same as the unicast stream, but the unicast stream is always the worst case.
Multicast doesn't have to be real-time. If you collect interested subscribers over a longer time period, e.g. scheduled downloads over the next hour, day, week, month, you can aggregate more multicast receivers through the same stream. TiVo collects its content using a broadcast schedule.
A "long tail" distribution includes not only the tail, but also the head. 30K unicast streams may be the same as 30K multicast streams, but 30K multicast streams is a lot better than 300,000 unicast streams. Although the long tail steams may have 1, 2, 3 receivers of a stream, the Parato curve also has 1, 2, 3 streams with 50K, 25K, 12K receivers.
With Source-Specific Multicast addressing there isn't a shortage of multicast addresses for the typical broadcast usage. At least not until we also run out of IPv4 unicast addresses.
There is rarely only one way to solve a problem. There will be multiple ways to distribute data, video, voice, etc.
Sean Donelan wrote:
1/2, 1/3, etc the bandwidth for each additional viewer of the same stream? The worst case for a multicast stream is the same as the unicast stream, but the unicast stream is always the worst case.
However, a unicast stream does not require state in the intermediate boxes (unless they are intentionally keeping some), while even a single-receiver multicast stream generates state all along the path. This will quite quickly limit the number of feasible groups.

Pete
Dear Valdis; On Jan 9, 2007, at 10:02 PM, Valdis.Kletnieks@vt.edu wrote:
On Tue, 09 Jan 2007 11:29:32 EST, Gian Constantine said:
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
Multicast streaming may be a big win when you're only streaming the top 5 or 10 networks (for some value of 5 or 10). What's the performance characteristics if you have 300K customers, and at any given time, 10% are watching something from the "long tail" - what's the difference between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
This is a very good point. It is very reasonable to expect viewing choices to follow a Pareto distribution (such as Zipf's law or the 80-20 rule). That plus some reasonable economic assumptions make 30K commercial channels not an unreasonable assumption in a few years. But that also implies that it is _not_ realistic to have "30K multicast streams that each have only one or at most 2-3 viewers." You may have 30K streams, most may have only a few viewers, and still have fairly large savings.

To flesh out your example, if you have 1 million viewers on your network, and if you assume 30K channels and the same Pareto distribution as web sites:

- the largest channel has 1.8% of the audience
- 50% of the audience is in the largest 2700 channels
- the least watched channel has ~10 simultaneous viewers
- the multicast bandwidth usage would be 3% of the unicast.

These same models IMHO make cell phone RF multicast not incredibly compelling. Because there is less feedback on a multicast RF, the power has to be greater for a multicast channel, and in that case the bandwidth (or RF power) savings are small or even negative, because you only have maybe 100 people on a cell watching.

Regards
Marshall
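Marshall's four bullet figures drop straight out of a plain Zipf-style popularity model, and the sketch below reproduces them. The exponent (~0.73) is an assumption fitted to his numbers, not something stated in his post, and the bandwidth ratio assumes every channel runs at the same bitrate:

from itertools import accumulate

viewers = 1_000_000
channels = 30_000
s = 0.73   # Zipf exponent: assumed, chosen so the results match Marshall's figures

weights = [k ** -s for k in range(1, channels + 1)]
total = sum(weights)
shares = [w / total for w in weights]          # audience share of channel ranked k
cumulative = list(accumulate(shares))

top_share = shares[0]                          # comes out near 1.8%
half = next(i + 1 for i, c in enumerate(cumulative) if c >= 0.5)   # ~2700-2800 channels
least = shares[-1] * viewers                   # ~10 simultaneous viewers
stream_ratio = channels / viewers              # one multicast stream per channel vs
                                               # one unicast stream per viewer: 3%

print(f"largest channel's share of audience:  {top_share:.1%}")
print(f"channels holding half the audience:   {half}")
print(f"viewers on the least watched channel: {least:.0f}")
print(f"multicast/unicast bandwidth ratio:    {stream_ratio:.0%}")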
On Jan 10, 2007, at 1:49 AM, Marshall Eubanks wrote:
These same models IMHO makes cell phone RF multicast not incredibly compelling. Because there is less feedback on a multicast RF, the power has to be greater for a multicast channel, and in that case the bandwidth (or RF power) savings are small or even negative, because you only have maybe 100 people on a cell watching.
s/watching/watching content from your 30K channels/
Regards Marshall
Hi Marshall,
- the largest channel has 1.8% of the audience - 50% of the audience is in the largest 2700 channels - the least watched channel has ~ 10 simultaneous viewers - the multicast bandwidth usage would be 3% of the unicast.
I'm a bit skeptical about the future of channels. To make money from the long tail, you have to adapt your distribution to the user's needs. It is not only format, codec ... but also time frame. You can organise your programs in channels, but they will not run simultaneously for all the users. I want to control my TV; I don't want my TV to jockey my life. For the distribution, you as content owner have to help the ISP find the right way to distribute your content. For example: having a distribution center in a Tier 1 ISP's network will make money from the Tier 2 ISPs connected directly to that Tier 1. Probably, having a CDN (your own or paying for a service) will be the only way to do large-scale non-synchronous programming. Regards Michal
If we're becoming a VOD world, does multicast play any practical role in video distribution? Frank
I am pretty sure we are not becoming a VoD world. Linear programming is much better for advertisers. I do not think content providers, nor consumers, would prefer a VoD only service. A handful of consumers would love it, but many would not. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. On Jan 12, 2007, at 10:05 AM, Frank Bulk wrote:
If we're becoming a VOD world, does multicast play any practical role in video distribution?
Frank
Dear Gian, from my perspective (central Europe) it looks like linear programming is used only in TV/radio channels. But this is only a part of the media industry. Cinema, DVD and other forms of content distribution aren't linear. I don't like to waste Internet capacity with URLs to large VoD community servers. I don't have enough speaking power to write any strict statements, but I think the world of the media industry will use every existing channel of revenue. The question isn't if, but when. Some people prefer having their "eleven button remote", but some want to consume the content they have chosen at the time they have chosen. Maybe I'm wrong, but I don't know anybody from the teen generation who likes to be TV-channel driven (maybe I'm in a bad country :-)). Regards Michal
On Fri, 12 Jan 2007, Gian Constantine wrote:
I am pretty sure we are not becoming a VoD world. Linear programming is much better for advertisers. I do not think content providers, nor consumers, would prefer a VoD only service. A handful of consumers would love it, but many would not.
My experience is that when you show people VoD, they like it. A lot of people won't abandon linear programming because it's easy to just watch whatever is "on", but if you give them the possibility of watching VoD (DVD sales of TV series for instance) some will definitely start doing both. Same thing with HDTV: until you show it to people they couldn't care less, but when you've shown them they do start to get interested.
I have been trying to find out the advertising ARPU for the cable companies for a prime time TV show in the US, i.e. how much would I need to pay them to get the same content but without the advertising, and then add the cost of VoD delivery. This is purely theoretical, but it would give a rough indication of what a VoD distribution model might cost the end user if we were to add that distribution channel. Does anyone know any rough figures for advertising ARPU per hour on primetime? I'd love to hear it.
-- Mikael Abrahamsson email: swmike@swm.pp.se
On Sat, 13 Jan 2007, Mikael Abrahamsson wrote:
My experience is that when you show people VoD, they like it.
I have to admit the wow factor is there. But I already have access to VoD through my cable company and its set-top boxes. TV over IP brings my family exactly zero additional benefits. -- Steve Sobol, Professional Geek ** Java/VB/VC/PHP/Perl ** Linux/*BSD/Windows Victorville, California PGP:0xE3AE35ED It's all fun and games until someone starts a bonfire in the living room.
It's the Energizer Bunny thread of 2007 ... 135 messages so far and still going strong. Steve Sobol wrote:
On Sat, 13 Jan 2007, Mikael Abrahamsson wrote:
My experience is that when you show people VoD, they like it.
I have to admit the wow factor is there. But I already have access to VoD through my cable company and its set-top boxes. TV over IP brings my family exactly zero additional benefits.
Steve,
That's mostly because the DVR boxes given out by the cable companies (mine is a Moto from Comcast) are terrible. The UI is just plain unusable, especially for the on-demand portion of the DVR guide.
I have caught up with the thread this morning and I have to say, I don't understand why people think of video distribution via the Internet as "channels." The only reason channels exist is the medium TV started on. I expect the next generation of video to be a lot like GooTube or iTunes: most of it is pushed while you are sleeping, plus a few (<200) mcast streams for live content like news, etc.
The question I asked earlier was whether the last-mile SP networks can handle 24x7 100% link utilization for all of their customers. I don't think they can. And frankly, I don't know how they are going to get revenue from the content distributors to upgrade the networks. Does Apple reimburse Comcast (my SP) when I download a song? I don't think so. What about a movie? Again, I don't think so.
You see where I am going with this.
Bora
Bora Akyol wrote:
The question I asked earlier was, whether the last-mile SP networks can handle 24x7 100% link utilization for all of their customers. I don't think they can. And frankly, I don't know how they are going to get revenue from the content distributors to upgrade the networks. Does Apple reimburse Comcast (my SP) when I download a song? I don't think so? What about a movie? Again, I don't think so.
You see where I am going with this.
The past solution to repetitive requests for the same content has been caching, either reactive (web caching) or proactive (Akamaizing). I think it is the latter we will see; service providers will push reasonably cheap servers close to the edge where they aren't too oversubscribed, and stuff their content there. A cluster of servers with terabytes of disk at a regional POP will cost a lot less than upgrading the upstream links. And even if the SPs do not want to invest in developing this product platform for themselves, the price will likely be paid by the content providers who need performance to keep subscribers.
I think the biggest stumbling block isn't technical. It is a question of getting enough content to attract viewers, or alternately, getting enough viewers to attract content. Plus, you're going to a format where the ability to fast-forward commercials is a fact, not a risk, and you'll have to find a way to get advertisers' products in front of the viewer to move past pay-per-view. It's all economics and politics now.
-Dave
Thus spake "Dave Israel" <davei@otd.com>
The past solution to repetitive requests for the same content has been caching, either reactive (webcaching) or proactive (Akamaizing.) I think it is the latter we will see; service providers will push reasonably cheap servers close to the edge where they aren't too oversubscribed, and stuff their content there. A cluster of servers with terabytes of disk at a regional POP will cost a lot less than upgrading the upstream links. And even if the SPs do not want to invest in developing this product platform for themselves, the price will likely be paid by the content providers who need performance to keep subscribers.
Caching per se doesn't apply to P2P networks, since they already do that as part of their normal operation. The key is getting users to contact peers who are topologically closer, limiting the bits * distance product. It's ridiculous that I often get better transfer rates with peers in Europe than with ones a few miles away. The key to making things more efficient is not to limit the bandwidth to/from the customer premise, but limit it leaving the POP and between ISPs. If I can transfer at 100kB/s from my neighbors but only 10kB/s from another continent, my opportunistic client will naturally do what my ISP wants as a side effect.
The second step, after you've relocated the rate limiting points, is for ISPs to add their own peers in each POP. Edge devices would passively detect when more than N customers have accessed the same torrent, and they'd signal the ISP's peer to add them to its list. That peer would then download the content, and those N customers would get it from the ISP's peer. Creative use of rate limits and access control could make it even more efficient, but they're not strictly necessary.
The third step is for content producers to directly add their torrents to the ISP peers before releasing the torrent directly to the public. This gets "official" content pre-positioned for efficient distribution, making it perform better (from a user's perspective) than pirated content.
The two great things about this are (a) it doesn't require _any_ changes to existing clients or protocols since it exploits existing behavior, and (b) it doesn't need to cover 100% of the content or be 100% reliable, since if a local peer isn't found with the torrent, the clients will fall back to their existing behavior (albeit with lower performance).
One thing that _does_ potentially break existing clients is forcing all of the tracker (including DHT) requests through an ISP server. The ISP could then collect torrent popularity data in one place, but more importantly it could (a) forward the request upstream, replacing the IP with its own peer, and (b) only inform clients of other peers (including the ISP one) using the same intercept point. This looks a lot more like a traditional transparent cache, with the attendant reliability and capacity concerns, but I wouldn't be surprised if this were the first mechanism to make it to market.
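As a rough illustration of the "second step" above, an edge box could count how many distinct local customers it has seen announcing the same info-hash and flag the torrent for the ISP-run peer once a threshold is crossed. This is only a sketch under assumed names and a made-up threshold (saw_announce, tell_isp_peer and TRIGGER_N are all hypothetical), not any real product's interface.

    /* Hypothetical sketch of the "edge detects N customers on one torrent" idea.
       All names, sizes and the threshold are illustrative assumptions. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_TORRENTS  1024
    #define MAX_CUSTOMERS 64
    #define TRIGGER_N     5            /* assumed threshold */

    struct torrent_stat {
        unsigned char info_hash[20];   /* BitTorrent info-hash */
        unsigned int  customers[MAX_CUSTOMERS];
        int           n_customers;
        int           flagged;
    };

    static struct torrent_stat table[MAX_TORRENTS];
    static int n_torrents;

    /* stub: a real system would ask the ISP-run peer to join this swarm */
    static void tell_isp_peer(const unsigned char *hash, int count)
    {
        printf("torrent %02x%02x... seen from %d local customers; seed it locally\n",
               hash[0], hash[1], count);
    }

    static void saw_announce(const unsigned char *hash, unsigned int customer_ip)
    {
        struct torrent_stat *t = NULL;
        int i;

        for (i = 0; i < n_torrents; i++)
            if (memcmp(table[i].info_hash, hash, 20) == 0) { t = &table[i]; break; }
        if (!t) {
            if (n_torrents == MAX_TORRENTS) return;      /* table full; ignore */
            t = &table[n_torrents++];
            memcpy(t->info_hash, hash, 20);
        }
        for (i = 0; i < t->n_customers; i++)
            if (t->customers[i] == customer_ip) return;  /* already counted */
        if (t->n_customers < MAX_CUSTOMERS)
            t->customers[t->n_customers++] = customer_ip;
        if (!t->flagged && t->n_customers >= TRIGGER_N) {
            t->flagged = 1;
            tell_isp_peer(t->info_hash, t->n_customers);
        }
    }

    int main(void)
    {
        unsigned char hash[20] = { 0xab, 0xcd };         /* made-up info-hash */
        unsigned int ip;

        for (ip = 1; ip <= 6; ip++)                      /* six distinct customers announce */
            saw_announce(hash, ip);
        return 0;
    }

The point of the sketch is only that the detection side is cheap bookkeeping; the actual fetch-and-seed step is whatever BitTorrent client the ISP chooses to run.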
I think the biggest stumbling block isn't technical. It is a question of getting enough content to attract viewers, or alternately, getting enough viewers to attract content. Plus, you're going to a format where the ability to fast-forward commercials is a fact, not a risk, and you'll have to find a way to get advertisers' products in front of the viewer to move past pay-per-view. It's all economics and politics now.
I think BitTorrent Inc's recent move is the wave of the short-term future: distribute files freely (and at low cost) via P2P, but DRM-protect the files so that people have to acquire a license to open the files. I can see a variety of subscription models that could pay for content effectively under that scheme.
However, it's going to be competing with a deeply-entrenched pirate culture, so the key will be attracting new users who aren't technical enough to use the existing tools, via an easy-to-use interface. Not surprisingly, the same folks are working on deals to integrate BT (the protocol) into STBs, routers, etc. so that users won't even know what's going on beneath the surface -- they'll just see a TiVo-like interface and pay a monthly fee like with cable.
S
Stephen Sprunk "God does not play dice." --Albert Einstein CCIE #3723 "God is an inveterate gambler, and He throws the K5SSS dice at every possible opportunity." --Stephen Hawking
Said Sprunk: Caching per se doesn't apply to P2P networks, since they already do that
as part of their normal operation. The key is getting users to contact peers who are topologically closer, limiting the bits * distance product. It's ridiculous that I often get better transfer rates with peers in Europe than with ones a few miles away. The key to making things more efficient is not to limit the bandwidth to/from the customer premise, but limit it leaving the POP and between ISPs. If I can transfer at 100kB/s from my neighbors but only 10kB/s from another continent, my opportunistic client will naturally do what my ISP wants as a side effect.
The second step, after you've relocated the rate limiting points, is for ISPs to add their own peers in each POP. Edge devices would passively detect when more than N customers have accessed the same torrent, and they'd signal the ISP's peer to add them to its list. That peer would then download the content, and those N customers would get it from the ISP's peer. Creative use of rate limits and acess control could make it even more efficient, but they're not strictly necessary.
Good thinking. Where do I sign? Regarding your first point, it's really surprising that existing P2P applications don't include topology awareness. After all, the underlying TCP already has mechanisms to perceive the relative nearness of a network entity - counting hops or round-trip latency. Imagine a BT-like client that searches for available torrents, and records the round-trip time to each host it contacts. These it places in a lookup table and picks the fastest responders to initiate the data transfer. Those are likely to be the closest, if not in distance then topologically, and the ones with the most bandwidth. Further, imagine that it caches the search - so when you next seek a file, it checks for it first on the hosts nearest to it in its "routing table", stepping down progressively if it's not there. It's a form of local-pref. The third step is for content producers to directly add their torrents
to the ISP peers before releasing the torrent directly to the public. This gets "official" content pre-positioned for efficient distribution, making it perform better (from a user's perspective) than pirated content.
The two great things about this are (a) it doesn't require _any_ changes to existing clients or protocols since it exploits existing behavior, and (b) it doesn't need to cover 100% of the content or be 100% reliable, since if a local peer isn't found with the torrent, the clients will fall back to their existing behavior (albeit with lower performance).
Importantly, this option makes it perform better without making everyone else's perform worse, a big difference to a lot of proposed QOS schemes. This non-evilness is much to be preferred. Further, it also makes use of the Zipf behaviour discussed upthread - if 20 per cent of the content and 20 per cent of the users eat 80 per cent of the bandwidth, forward-deploying that 20 per cent of the content will save 80 per cent of the inter-provider bandwidth (which is what we care about, right, 'cos we're paying for it).
One thing that _does_ potentially break existing clients is forcing all of the tracker (including DHT) requests through an ISP server. The ISP could then collect torrent popularity data in one place, but more importantly it could (a) forward the request upstream, replacing the IP with its own peer, and (b) only inform clients of other peers (including the ISP one) using the same intercept point. This looks a lot more like a traditional transparent cache, with the attendant reliability and capacity concerns, but I wouldn't be surprised if this were the first mechanism to make it to market.
It's a nice idea to collect popularity data at the ISP level, because the decision on what to load into the local torrent servers could be automated. Once torrent X reaches a certain trigger level of popularity, the local server grabs it and begins serving, and the local-pref function on the clients finds it. Meanwhile, we drink coffee. However, it's a potential DOS magnet - after all, P2P is really a botnet with a badge. And the point of a topology-aware P2P client is that it seeks the nearest host, so if you constrain it to the ISP local server only, you're losing part of the point of P2P for no great saving in peering/transit. However, it's going to be competing with a deeply-entrenched pirate
culture, so the key will be attractive new users who aren't technical enough to use the existing tools via an easy-to-use interface. Not surprisingly, the same folks are working on deals to integrate BT (the protocol) into STBs, routers, etc. so that users won't even know what's going on beneath the surface -- they'll just see a TiVo-like interface and pay a monthly fee like with cable.
As long as they don't interfere with the user's right to choose someone else's content, fine. Alex
On 21-Jan-2007, at 07:14, Alexander Harrowell wrote:
Regarding your first point, it's really surprising that existing P2P applications don't include topology awareness. After all, the underlying TCP already has mechanisms to perceive the relative nearness of a network entity - counting hops or round-trip latency. Imagine a BT-like client that searches for available torrents, and records the round-trip time to each host it contacts. These it places in a lookup table and picks the fastest responders to initiate the data transfer. Those are likely to be the closest, if not in distance then topologically, and the ones with the most bandwidth. Further, imagine that it caches the search - so when you next seek a file, it checks for it first on the hosts nearest to it in its "routing table", stepping down progressively if it's not there. It's a form of local-pref.
Remember though that the dynamics of the system need to assume that individual clients will be selfish, and even though it might be in the interests of the network as a whole to choose local peers, if you can get faster *throughput* (not round-trip response) from a remote peer, it's a necessary assumption that the peer will do so. Protocols need to be designed such that a client is rewarded in faster downloads for uploading in a fashion that best benefits the swarm.
The third step is for content producers to directly add their torrents to the ISP peers before releasing the torrent directly to the public. This gets "official" content pre-positioned for efficient distribution, making it perform better (from a user's perspective) than pirated content.
If there was a big fast server in every ISP with a monstrous pile of disk which retrieved torrents automatically from a selection of popular RSS feeds, which kept seeding torrents for as long as there was interest and/or disk, and which had some rate shaping installed on the host such that traffic that wasn't on-net (e.g. to/from customers) or free (e.g. to/from peers) was rate-crippled, how far would that go to emulating this behaviour with existing live torrents? Speaking from a technical perspective only, and ignoring the legal minefield. If anybody has tried this, I'd be interested to hear whether on-net clients actually take advantage of the local monster seed, or whether they persist in pulling data from elsewhere. Joe
There are other developments as well... Simple Minds and Motorpyscho live. Mashed Up. Still need to get a better grip on what the new world of Mashup business models <http://www.capgemini.com/ctoblog/2006/11/mashup_corporations_the_shape.php> really is leading to?
Have a look at this new mashup service of Fabchannel <http://www.fabchannel.com/>: until now 'just' an award-winning website which gave its members access to videos of rock concerts in Amsterdam's famous Paradiso <http://www..paradiso.nl/> concert hall. Not any more. Today Fabchannel launched a new, unique service <http://fabchannel.blogspot.com/2007/01/fabchannel-releases-unique-embedded.html> which enables music fans to create their own, custom made concert videos and then share them with others through their blogs, community profiles, websites or any other application.
So suppose you have this weird music taste, which sort of urges you to create an ideal concert featuring the Simple Minds, Motorpsycho, The Fun Loving Criminals, Ojos de Brujo and Bauer & the Metropole Orchestra. *Just suppose it's true*. The only thing you need to do is click this concert together at Fabchannel's site, choosing from the many hundreds of videos available, customize it with your own tags, image and description, and then have Fabchannel automatically create the few lines of html code that you need to embed this tailor-made concert in whatever web application you want.
As Fabchannel put it in their announcement, "this makes live concerts available to fans all over the world. Not centralised in one place, but where the fans gather online". And this is precisely the major concept behind the Mashup Corporation <http://www.mashupcorporations.com/>: supply the outside world with simple, embeddable services; support and facilitate the community that starts to use them; and watch growth and innovation take place in many unexpected ways.
Fabchannel expects to attract many more fans than they currently do. Not by having more hits at their website, but rather through the potentially thousands and thousands of blogs, myspace pages, websites, forums and desktop widgets that all could reach their own niche group of music fans, mashing up the Fabplayer service with many other services that the Fabchannel crew, no matter how creative, would have never thought of.
Maximise your growth, attract fewer people to your site. Sounds like a paradox. But not in a Mashup world.
By all means view my customised concert, underneath. I'm particularly fond of the Barcelonan band Ojos de Brujo, with their very special mix of classic flamenco, hip hop and funk. Mashup music indeed. In all respects.
http://www.capgemini.com/ctoblog/2007/01/simple_minds_and_motorpyscho_l.php
-- Evolution favors speed and that's why bacteria rule and we're just baggage living off their ecology. --Bob Frankston
On Sun, Jan 21, 2007 at 06:15:52PM +0100, D.H. van der Woude wrote:
Simple Minds and Motorpyscho live. Mashed Up. Still need to get a better grip on what the new world of Mashup business models
Are mashups like: http://www.popmodernism.org/scrambledhackz/ -- ``Unthinking respect for authority is the greatest enemy of truth.'' -- Albert Einstein -><- <URL:http://www.subspacefield.org/~travis/>
Joe Abley wrote:
If anybody has tried this, I'd be interested to hear whether on-net clients actually take advantage of the local monster seed, or whether they persist in pulling data from elsewhere.
The local seed would serve the bulk of the data, because as soon as a piece is served from it the client issues a new request, and if the latency and bandwidth are there, as is the case for ADSL/cable clients, usually 80% of a file is served "locally". I don't think additional optimization is done, nor needed, in the client.
Pete
Thus spake "Joe Abley" <jabley@ca.afilias.info>
If there was a big fast server in every ISP with a monstrous pile of disk which retrieved torrents automatically from a selection of popular RSS feeds, which kept seeding torrents for as long as there was interest and/or disk, and which had some rate shaping installed on the host such that traffic that wasn't on-net (e.g. to/from customers) or free (e.g. to/from peers) was rate-crippled, how far would that go to emulating this behaviour with existing live torrents?
Every torrent indexing site I'm aware of has RSS feeds for newly-added torrents, categorized many different ways. Any ISP that wanted to set up such a service could do so _today_ with _existing_ tools. All that's missing is the budget and a go-ahead from the lawyers.
Speaking from a technical perspective only, and ignoring the legal minefield.
Aside from that, Mrs. Lincoln, how was the play?
If anybody has tried this, I'd be interested to hear whether on-net clients actually take advantage of the local monster seed, or whether they persist in pulling data from elsewhere.
Clients pull data from everywhere that'll send it to them. The important thing is what percentage of the bits come from where. If I can reach local peers at 90kB/s and remote peers at 10kB/s, then local peers will end up accounting for 90% of the bits I download. Unfortunately, due to asymmetric connections, rate limiting, etc. it frequently turns out that remote peers perform better than local ones in today's consumer networks.
Uploading doesn't work exactly the same way, but it's similar. During the leeching phase, clients will upload to a handful of peers that they get the best download rates from. However, the "optimistic unchoke" algorithm will lead to some bits heading off to poorer-performing peers. During the seeding phase, clients will upload to a handful of peers that they get the best _upload_ rates to, plus a few bits off to "optimistic unchoke" peers.
Do I have hard data? No. Is there any reason to think real-world behavior doesn't match theory? No. I frequently stare at the "Peer" stats window on my BT client and it's doing exactly what Bram's original paper says it should be doing. That I get better transfer rates with people in Malaysia and Poland than with my next-door neighbor is the ISPs' fault, not Bram's.
S
Stephen Sprunk "God does not play dice." --Albert Einstein CCIE #3723 "God is an inveterate gambler, and He throws the K5SSS dice at every possible opportunity." --Stephen Hawking
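The leeching-phase behavior described above, upload slots going to the peers currently giving the best download rates plus an occasional "optimistic unchoke" of a random peer, has roughly the following shape. This is a heavily simplified sketch of the published algorithm's idea, not any client's actual code; the slot count and the sample rates are assumptions.

    /* Simplified sketch of BitTorrent-style choking during the leeching phase:
       keep upload slots open to the peers we currently download fastest from,
       plus one randomly chosen "optimistic unchoke".  Slot count and rates
       below are assumptions for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NPEERS        8
    #define REGULAR_SLOTS 3

    struct peer {
        const char *name;
        double download_kbps;   /* measured rate we get FROM this peer */
        int unchoked;
    };

    static int by_rate_desc(const void *a, const void *b)
    {
        const struct peer *pa = a, *pb = b;
        return (pb->download_kbps > pa->download_kbps) -
               (pb->download_kbps < pa->download_kbps);
    }

    int main(void)
    {
        struct peer peers[NPEERS] = {
            {"local-1", 90, 0}, {"local-2", 85, 0}, {"remote-1", 10, 0},
            {"remote-2", 12, 0}, {"remote-3", 9, 0}, {"local-3", 70, 0},
            {"remote-4", 11, 0}, {"remote-5", 8, 0},
        };
        int i;

        srand((unsigned)time(NULL));
        qsort(peers, NPEERS, sizeof peers[0], by_rate_desc);

        for (i = 0; i < REGULAR_SLOTS && i < NPEERS; i++)
            peers[i].unchoked = 1;                        /* best reciprocators */

        /* optimistic unchoke: one random peer from the remainder */
        if (NPEERS > REGULAR_SLOTS)
            peers[REGULAR_SLOTS + rand() % (NPEERS - REGULAR_SLOTS)].unchoked = 1;

        for (i = 0; i < NPEERS; i++)
            printf("%-9s %5.0f kB/s  %s\n", peers[i].name, peers[i].download_kbps,
                   peers[i].unchoked ? "UNCHOKED" : "choked");
        return 0;
    }

Run with the example rates above and the regular slots always land on the fast local peers, which is exactly why making on-net transfers fast is enough to steer clients without touching the protocol.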
On 21-Jan-2007, at 14:07, Stephen Sprunk wrote:
Every torrent indexing site I'm aware of has RSS feeds for newly-added torrents, categorized many different ways. Any ISP that wanted to set up such a service could do so _today_ with _existing_ tools. All that's missing is the budget and a go-ahead from the lawyers.
Yes, I know.
If anybody has tried this, I'd be interested to hear whether on-net clients actually take advantage of the local monster seed, or whether they persist in pulling data from elsewhere.
[...] Do I have hard data? No. [...]
So, has anybody actually tried this? Speculating about how clients might behave is easy, but real experience is more interesting. Joe
On Sun, 21 Jan 2007, Joe Abley wrote:
Remember though that the dynamics of the system need to assume that individual clients will be selfish, and even though it might be in the interests of the network as a whole to choose local peers, if you can get faster *throughput* (not round-trip response) from a remote peer, it's a necessary assumption that the peer will do so.
It seems like if there's an issue here it's that different parties have different self-interests, and those whose interests aren't being served aren't passing on the costs to the decision makers. The users' performance interests are served by getting the fastest downloads possible. The ISP's financial interests would be served by their flat rate customers getting their data from somewhere close by. If it becomes enough of a problem that the ISPs are motivated to deal with it, one approach would be to get the customers' financial interests better aligned with their own, with differentiated billing for local and long distance traffic.
Perth, on the West Coast of Australia, claims to be the world's most isolated "capital" city (for some definition of capital). Next closest is probably Adelaide, at 1300 miles. Jakarta and Sydney are both 2,000 miles away. Getting stuff, including data, in and out is expensive. Like Seattle, Perth has many of its ISPs in the same downtown skyscraper, and a very active exchange point in the building. It is much cheaper for ISPs to hand off local traffic to each other than to hand off long distance traffic to their far away transit providers. Like ISPs in a lot of similar places, the ISPs in Perth charge their customers different rates for cheap local bandwidth than for expensive long distance bandwidth.
When I was in Perth a couple of years ago, I asked my usual questions about what effect this billing arrangement was having on user behavior. I was told about a Perth-only file sharing network. Using the same file sharing networks as the rest of the world was expensive, as they would end up hauling lots of data over the expensive long distance links and users didn't want to pay for that. Instead, they'd put together their own, which only allowed local users and thus guaranteed that uploads and downloads would happen at cheap local rates.
Googling for more information just now, what I found were lots of stories about police raids, so I'm not sure if it's still operational. Legal problems seem to be an issue for file sharing networks regardless of geographic focus, so that's probably not relevant to this particular point.
In the US and Western Europe, there's still enough fiber between cities that high volumes of long distance traffic don't seem to be causing issues, and pricing is becoming less distance sensitive. The parts of the world with shortages of external connectivity pay to get to us, so we don't see those costs either. If that changes, I suspect we'll see it reflected in the pricing models and user self-interests will change. The software that users will be using will change accordingly, as it did in Perth.
-Steve
Gibbard: It seems like if there's an issue here it's that different parties have different self-interests, and those whose interests aren't being served
aren't passing on the costs to the decision makers. The users' performance interests are served by getting the fastest downloads possible. The ISP's financial interests would be served by their flat rate customers getting their data from somewhere close by. If it becomes enough of a problem that the ISPs are motivated to deal with it, one approach would be to get the customers' financial interests better aligned with their own, with differentiated billing for local and long distance traffic.
That could be seen as a confiscation of a major part of the value customers derive from ISPs. Perth, on the West Coast of Australia, claims to be the world's most
isolated "capitol" city (for some definition of capitol). Next closest is probably Adelaide, at 1300 miles. Jakarta and Sydney are both 2,000 miles away. Getting stuff, including data, in and out is expensive. Like Seattle, Perth has many of its ISPs in the same downtown sky scraper, and a very active exchange point in the building. It is much cheaper for ISPs to hand off local traffic to each other than to hand off long distance traffic to their far away transit providers. Like ISPs in a lot of similar places, the ISPs in Perth charge their customers different rates for cheap local bandwidth than for expensive long distance bandwidth.
When I was in Perth a couple of years ago, I asked my usual questions about what effect this billing arrangement was having on user behavior. I was told about a Perth-only file sharing network. Using the same file sharing networks as the rest of the world was expensive, as they would end up hauling lots of data over the expensive long distance links and users didn't want to pay for that. Instead, they'd put together their own, which only allowed local users and thus guaranteed that uploads and downloads would happen at cheap local rates.
Googling for more information just now, what I found were lots of stories about police raids, so I'm not sure if it's still operational.
Brendan Behan: There is no situation that cannot be made worse by the presence of a policeman.
[ Note: please do not send MIME/HTML messages to mailing lists ] Thus spake Alexander Harrowell
Good thinking. Where do I sign? Regarding your first point, it's really surprising that existing P2P applications don't include topology awareness. After all, the underlying TCP already has mechanisms to perceive the relative nearness of a network entity - counting hops or round-trip latency. Imagine a BT-like client that searches for available torrents, and records the round-trip time to each host it contacts. These it places in a lookup table and picks the fastest responders to initiate the data transfer. Those are likely to be the closest, if not in distance then topologically, and the ones with the most bandwidth.
The BT algorithm favors peers with the best performance, not peers that are close. You can rail against this all you want, but expecting users to do anything other than local optimization is a losing proposition. The key is tuning the network so that local optimization coincides with global optimization. As I said, I often get 10x the throughput with peers in Europe vs. peers in my own city. You don't like that? Well, rate-limit BT traffic at the ISP border and _don't_ rate-limit within the ISP. (s/ISP/POP/ if desired) Make the cheap bits fast and the expensive bits slow, and clients will automatically select the cheapest path.
Further, imagine that it caches the search - so when you next seek a file, it checks for it first on the hosts nearest to it in its "routing table", stepping down progressively if it's not there. It's a form of local-pref.
Experience shows that it's not necessary, though if it has a non-trivial positive effect I wouldn't be surprised if it shows up someday.
It's a nice idea to collect popularity data at the ISP level, because the decision on what to load into the local torrent servers could be automated.
Note that collecting popularity data could be done at the edges without forcing all tracker requests through a transparent proxy.
Once torrent X reaches a certain trigger level of popularity, the local server grabs it and begins serving, and the local-pref function on the clients finds it. Meanwhile, we drink coffee. However, it's a potential DOS magnet - after all, P2P is really a botnet with a badge.
I don't see how. If you detect that N customers are downloading a torrent, then having the ISP's peer download that torrent and serve it to the customers means you consume 1/N upstream bandwidth. That's an anti-DOS :)
And the point of a topology-aware P2P client is that it seeks the nearest host, so if you constrain it to the ISP local server only, you're losing part of the point of P2P for no great saving in peering/transit.
That's why I don't like the idea of transparent proxies for P2P; you can get 90% of the effect with 10% of the evilness by setting up sane rate-limits.
As long as they don't interfere with the user's right to choose someone else's content, fine.
If you're getting it from an STB, well, there may not be a way for users to add 3rd party torrents; how many users will be able to figure out how to add the torrent URLs (or know where to find said URLs) even if there is an option? Remember, we're talking about Joe Sixpack here, not techies. You would, however, be able to pick whatever STB you wanted (unless ISPs deliberately blocked competitors' services). S Stephen Sprunk "God does not play dice." --Albert Einstein CCIE #3723 "God is an inveterate gambler, and He throws the K5SSS dice at every possible opportunity." --Stephen Hawking
Sprunk:
It's a nice idea to collect popularity data at the ISP level, because the decision on what to load into the local torrent servers could be automated.
Note that collecting popularity data could be done at the edges without forcing all tracker requests through a transparent proxy.
Yes. This is my point. It's a good thing to do, but centralising it is an ungood thing to do, because...
Once torrent X reaches a certain trigger level of popularity, the
local server grabs it and begins serving, and the local-pref function on the clients finds it. Meanwhile, we drink coffee. However, it's a potential DOS magnet - after all, P2P is really a botnet with a badge.
I don't see how. If you detect that N customers are downloading a torrent, then having the ISP's peer download that torrent and serve it to the customers means you consume 1/N upstream bandwidth. That's an anti-DOS :)
All true. My point is that forcing all tracker requests through a proxy makes that machine an obvious DDOS target. It's got to have an open interface to all hosts on your network on one side, and to $world on the other, and if it goes down, then everyone on your network loses service. And you're expecting traffic distributed over a large number of IP addresses because it's a P2P application, so distinguishing normal traffic from a botnet attack will be hard.
And the point of a topology-aware P2P client is that it seeks the
nearest host, so if you constrain it to the ISP local server only, you're losing part of the point of P2P for no great saving in peering/transit.
That's why I don't like the idea of transparent proxies for P2P; you can get 90% of the effect with 10% of the evilness by setting up sane rate-limits.
OK.
As long as they don't interfere with the user's right to choose
someone else's content, fine.
If you're getting it from an STB, well, there may not be a way for users to add 3rd party torrents; how many users will be able to figure out how to add the torrent URLs (or know where to find said URLs) even if there is an option? Remember, we're talking about Joe Sixpack here, not techies.
You would, however, be able to pick whatever STB you wanted (unless ISPs deliberately blocked competitors' services).
Please. Joe has a right to know these things. How long before Joe finds out anyway?
Good thinking. Where do I sign? Regarding your first point, it's really surprising that existing P2P applications don't include topology awareness. After all, the underlying TCP already has mechanisms to perceive the relative nearness of a network entity - counting hops or round-trip latency. Imagine a BT-like client that searches for available torrents, and records the round-trip time to each host it contacts. These it places in a lookup table and picks the fastest responders to initiate the data transfer. Those are likely to be the closest, if not in distance then topologically, and the ones with the most bandwidth. Further, imagine that it caches the search - so when you next seek a file, it checks for it first on the hosts nearest to it in its "routing table", stepping down progressively if it's not there. It's a form of local-pref.
When I investigated BitTorrent clients a couple of years ago, the tracker would only send you a small subset of its peers at random, so as a client you often weren't told about the peer that was right beside you. Trackers could in theory send you peers that are close to you (e.g. send you anyone that's in the same /24, a few from the same /16, a few more from the same /8 and a handful from other places), but the tracker has no idea which areas you get good speeds to, and generally wants to be as simple as possible.
Also, in most unixes you can query the TCP stack to ask for its current estimate of the RTT on a TCP connection with:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/tcp.h>
    #include <stdio.h>

    /* Linux-specific: TCP_INFO reports tcpi_rtt in microseconds for a connected socket. */
    void print_rtt(int fd)
    {
        struct tcp_info tcpinfo;
        socklen_t len = sizeof(tcpinfo);

        if (getsockopt(fd, SOL_TCP, TCP_INFO, &tcpinfo, &len) != -1)
            printf("estimated rtt: %.04f (seconds)\n", tcpinfo.tcpi_rtt / 1000000.0);
    }

Due to rate limiting you can often find you'll get very similar performance to a reasonably large subset of your peers, so using TCP's RTT estimate as a tie breaker might provide a reasonable cost saving to the ISP (although the end user probably won't notice the difference).
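The "same /24, a few from the same /16, a few more from the same /8" idea could be sketched on the tracker side roughly as below. The bucket boundaries, the selection mix and the example addresses are assumptions for illustration only.

    /* Sketch of a tracker biasing its peer list toward addresses sharing a longer
       prefix with the requesting client (/24, then /16, then /8, then the rest).
       The buckets and the example addresses are assumptions for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    static int shared_bucket(uint32_t a, uint32_t b)
    {
        if ((a & 0xffffff00u) == (b & 0xffffff00u)) return 3;   /* same /24 */
        if ((a & 0xffff0000u) == (b & 0xffff0000u)) return 2;   /* same /16 */
        if ((a & 0xff000000u) == (b & 0xff000000u)) return 1;   /* same /8  */
        return 0;                                               /* elsewhere */
    }

    int main(void)
    {
        uint32_t client = 0xC0A80101u;          /* 192.168.1.1, made up */
        uint32_t candidates[] = {
            0xC0A80105u,                        /* 192.168.1.5  - same /24 */
            0xC0A82001u,                        /* 192.168.32.1 - same /16 */
            0xC0010101u,                        /* 192.1.1.1    - same /8  */
            0x08080808u,                        /* 8.8.8.8      - elsewhere */
        };
        int i, n = sizeof candidates / sizeof candidates[0];

        /* a real tracker would fill most of its reply from the highest non-empty
           buckets and keep a handful from elsewhere so the swarm stays connected */
        for (i = 0; i < n; i++)
            printf("peer %u.%u.%u.%u -> bucket %d\n",
                   (unsigned)(candidates[i] >> 24),
                   (unsigned)((candidates[i] >> 16) & 0xff),
                   (unsigned)((candidates[i] >> 8) & 0xff),
                   (unsigned)(candidates[i] & 0xff),
                   shared_bucket(client, candidates[i]));
        return 0;
    }

Prefix length is only a crude proxy for topology, which is exactly the poster's caveat: the tracker still has no idea which prefixes the client actually gets good speeds to.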
On Sun, Jan 21, 2007 at 12:14:56PM +0000, Alexander Harrowell wrote:
After all, the underlying TCP already has mechanisms to perceive the relative nearness of a network entity - counting hops or round-trip latency. Imagine a BT-like client that searches for available torrents, and records the round-trip time to each host it contacts. These it places in a lookup table and picks the fastest responders to initiate the data transfer.
Better yet, I was reading some introductory papers on machine learning, and there are a number of algorithms for learning. The one I think might be relevant is to use these various network parameters to predict high-speed downloads, treating them as "oracles" and adjusting their weights to reflect their judgement accuracy. They typically give performance epsilon-close to the best "expert", and can easily learn which expert is the best over time, even if that changes. -- ``Unthinking respect for authority is the greatest enemy of truth.'' -- Albert Einstein -><- <URL:http://www.subspacefield.org/~travis/>
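That "oracles with adjustable weights" idea is essentially the classic weighted-majority / multiplicative-weights scheme. A minimal sketch follows; the experts, the toy data and the penalty factor are all assumptions chosen only to show the mechanics.

    /* Minimal multiplicative-weights sketch: a few "experts" (e.g. an RTT-based,
       a hop-count-based and a history-based predictor) guess whether a peer will
       be fast; every wrong expert has its weight cut by BETA.  The experts, the
       toy data and BETA are assumptions for illustration. */
    #include <stdio.h>

    #define NEXPERTS 3
    #define NROUNDS  5
    #define BETA     0.7               /* penalty factor for a wrong prediction */

    int main(void)
    {
        double w[NEXPERTS] = { 1.0, 1.0, 1.0 };
        /* each row: the three experts' predictions (1 = "fast"), then the outcome */
        int rounds[NROUNDS][NEXPERTS + 1] = {
            {1, 0, 1, 1}, {0, 0, 1, 0}, {1, 1, 1, 1}, {0, 1, 0, 0}, {1, 0, 0, 1},
        };
        int r, e;

        for (r = 0; r < NROUNDS; r++) {
            double yes = 0.0, no = 0.0;
            int outcome = rounds[r][NEXPERTS];

            for (e = 0; e < NEXPERTS; e++) {
                if (rounds[r][e]) yes += w[e]; else no += w[e];
            }
            printf("round %d: predict %s, actual %s\n",
                   r, yes >= no ? "fast" : "slow", outcome ? "fast" : "slow");
            for (e = 0; e < NEXPERTS; e++)
                if (rounds[r][e] != outcome)
                    w[e] *= BETA;      /* punish the experts that were wrong */
        }
        for (e = 0; e < NEXPERTS; e++)
            printf("expert %d final weight %.3f\n", e, w[e]);
        return 0;
    }

After a few rounds the consistently accurate expert dominates the weighted vote, which is the "learns which expert is best over time" property the post refers to.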
On Jan 12, 2007, at 11:27 PM, Mikael Abrahamsson wrote:
On Fri, 12 Jan 2007, Gian Constantine wrote:
I am pretty sure we are not becoming a VoD world. Linear programming is much better for advertisers. I do not think content providers, nor consumers, would prefer a VoD only service. A handful of consumers would love it, but many would not.
My experience is that when you show people VoD, they like it. A lot of people won't abandon linear programming because it's easy to just watch whatever is "on", but if you give them the possibility of watching VoD (DVD sales of TV series for instance) some will definately start doing both. Same thing with HDTV, until you show it to people they couldn't care less, but when you've shown them they do start to get interested.
I have been trying to find out the advertising ARPU for the cable companies for a prime time TV show in the US, ie how much would I need to pay them to get the same content but without the advertising, and then add the cost of VoD delivery. This is purely theoretical, but it would give a rough indication on what a VoD distribution model might cost the end user if we were to add that distribution channel. Does anyone know any rough figures for advertising ARPU per hour on primetime? I'd love to hear it.
Generally, in the US, the content is sent to the cable company with Ads already inserted, although they might get their own Ad slots. You would need to talk to the source, i.e., the network. Since you would be threatening the business model of their major customers, you would need patience and a lot of financial backing.
For the US, an analysis by Kenneth Wilbur http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885465 , table 1, from this recent meeting in DC http://www.web.virginia.edu/media/agenda.html shows that the cost per thousand per ad (the CPM) averaged over 5 networks and all nights of the week, was $ 24 +- 9; these are 1/2 minute ads. The mean ad level per half-hour is 5.15 minutes, so that's 10.3 x $ 24 or $ 247 / hour / 1000. This is for the evening; rates and audiences at other times are less. So, for a 1/2 hour evening show, on average the VOD would need to cost at least $ 0.12 US to re-coup the ad revenues. Popular shows get a higher CPM, so they would cost more.
The Wilbur paper and some of the other papers at this conference present a lot of breakdown of these sorts of statistics, if you are interested.
Regards
Marshall
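The arithmetic is small enough to script. The inputs below are just the figures quoted above (a CPM of about $24 and 5.15 ad minutes per half-hour); the program prints the implied ad revenue per viewer per half-hour and per hour, which also makes the per-half-hour versus per-hour factor of two, discussed a little further down the thread, easy to see.

    /* Back-of-the-envelope: ad revenue per viewer implied by the CPM figures above. */
    #include <stdio.h>

    int main(void)
    {
        double cpm        = 24.0;      /* dollars per 1000 viewers per 30-second ad */
        double ad_minutes = 5.15;      /* average ad minutes per half-hour */
        double slots      = ad_minutes * 60.0 / 30.0;    /* 30-second slots per half-hour */
        double per_1000   = slots * cpm;                 /* $ per 1000 viewers per half-hour */

        printf("per half-hour: $%.0f per 1000 viewers = $%.3f per viewer\n",
               per_1000, per_1000 / 1000.0);
        printf("per hour:      $%.0f per 1000 viewers = $%.3f per viewer\n",
               2.0 * per_1000, 2.0 * per_1000 / 1000.0);
        return 0;
    }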
On Sat, 13 Jan 2007, Marshall Eubanks wrote:
For the US, an analysis by Kenneth Wilbur http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885465 , table 1, from this recent meeting in DC http://www.web.virginia.edu/media/agenda.html
Couldn't read the PDFs so I'll just go from your below figures:
shows that the cost per thousand per ad (the CPM) averaged over 5 networks and all nights of the week, was $ 24 +- 9; these are 1/2 minute ads. The mean ad level per half-hour is 5.15 minutes, so that's 10.3 x $ 24 or $ 247 / hour / 1000. This is for the evening; rates and audiences at other times or less. So, for a 1/2 hour evening show, on average the VOD would need to cost at least $ 0.12 US to re-coup the ad revenues. Popular shows get a higher CPM, so they would cost more. The Wilbur paper and some of the other papers at this conference present a lot of breakdown of these sorts of statistics, if you are interested.
Thanks for the figures. So basically, if we can encode a 23 minute show (30 minutes minus ads) into a gig of traffic, the network cost (precomputed HD 1080i with high VBR) would be around $0.2 (figure from my previous email, on margin), and we pay $0.2 to the content owner, and they would make the same amount of money as they do now? So basically the marginal cost of this service would be around $0.4-0.5 per show, and double that for a 45 minute episode (current 1 hour show format)?
So the question becomes whether people might be inclined to pay $1 to watch an ad-free TV show? If they're paying $1.99 to iTunes for the actual download right now, they might be willing to pay $0.99 to watch it over VoD?
As you said, of course this would take an enormous amount of time and effort to convince the content owners of this model. Wonder if ISPs would be interested at these levels; that's also a good question.
-- Mikael Abrahamsson email: swmike@swm.pp.se
Dear Mikael; On Jan 13, 2007, at 6:45 AM, Mikael Abrahamsson wrote:
On Sat, 13 Jan 2007, Marshall Eubanks wrote:
For the US, an analysis by Kenneth Wilbur http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885465 , table 1, from this recent meeting in DC http://www.web.virginia.edu/media/agenda.html
Couldn't read the PDFs so I'll just go from your below figures:
shows that the cost per thousand per ad (the CPM) averaged over 5 networks and all nights of the week, was $ 24 +- 9; these are 1/2 minute ads. The mean ad level per half-hour is 5.15 minutes, so that's 10.3 x $ 24 or $ 247 / hour / 1000. This is for the evening; rates and audiences at other times or less. So, for a 1/2 hour evening show, on average the VOD would need to cost at least $ 0.12 US to re-coup the ad revenues. Popular shows get a higher CPM, so they would cost more. The Wilbur paper and some of the other papers at this conference present a lot of breakdown of these sorts of statistics, if you are interested.
Thanks for the figures. So basically if we can encode a 23 minute show (30 minutes minus ads) into a gig of traffic the network (precomputed HD 1080i with high VBR) cost would be around $0.2 (figure from my previous email, on margin) and pay $0.2 to the content owner, they would make the same amount of money as they do now? So basically the marginal cost of this service would be around $0.4-0.5 per show, and double that for a 45 minute episode (current 1 hour show format)?
Yes - you saw I made a factor of two error in this (per hour vs per half hour), but, yes, that's the size you are talking about. A technical issue that I have to deal with is that you get a 30 minute show (actually 24 minutes of content) as 30 minutes, _with the ads slots included_. To show it without ads, you actually have to take the show into a video editor and remove the ad slots, which costs video editor time, which is expensive.
So question becomes whether people might be inclined to pay $1 to watch an adfree TV show? If they're paying $1.99 to iTunes for the actual download right now, they might be willing to pay $0.99 to watch it over VoD?
As you said, of course this would take an enormous amount of time and effort to convince the content owners of this model. Whether ISPs would be interested at these levels is also a good question.
A business model I have wondered about is, take the network feed, pay the subscriber cost, and sell it over the Internet as an encrypted channel _with ads_. Would you be willing to pay $ 5 or even $ 10 per month to watch just one channel, as shown over the air? I would, and here's why. In the USA at least, the cable companies make you pay for "bundles" to get channels you want. I have to pay for 3 bundles to get 2 channels we actually want to watch. (One of these bundles is apparently only sold if you are already getting another, which we don't actually care about.) So, it actually costs us $ 40 + / month to get the two channels we want (plus a bunch we don't.) So, it occurs to me that there is a business selling solo channels on the Internet, as is, with the ads, for on the order of $ 5 - $ 10 per subscriber per month, which should leave a substantial profit after the payments to the networks and bandwidth costs.
-- Mikael Abrahamsson email: swmike@swm.pp.se
Regards Marshall
On Sat, 13 Jan 2007, Marshall Eubanks wrote:
A technical issue that I have to deal with is that you get a 30 minute show (actually 24 minutes of content) as 30 minutes, _with the ad slots included_. To show it without ads, you actually have to take the show into a video editor and remove the ad slots, which costs video editor time, which is expensive.
Well, in this case you'd hopefully get the show directly from whoever is producing it without ads in the first place, basically the same content you might see if you buy the show on DVD.
In the USA at least, the cable companies make you pay for "bundles" to get channels you want. I have to pay for 3 bundles to get 2 channels we actually want to watch. (One of these bundles is apparently only sold if you are already getting another, which we don't actually care about.) So, it actually costs us $ 40 + / month to get the two channels we want (plus a bunch we don't.) So, it occurs to me that there is a business selling solo channels on the Internet, as is, with the ads, for on the order of $ 5 - $ 10 per subscriber per month, which should leave a substantial profit after the payments to the networks and bandwidth costs.
There is zero problem for the cable companies to immediately compete with you by offering the same thing, as soon as there is competition. Since their channel is the most established, my guess is that you would have a hard time succeeding where they already have a footprint and established customers. Where you could do well with your proposal, is where there is no cable TV available at all. -- Mikael Abrahamsson email: swmike@swm.pp.se
On Jan 13, 2007, at 7:36 AM, Mikael Abrahamsson wrote:
On Sat, 13 Jan 2007, Marshall Eubanks wrote:
A technical issue that I have to deal with is that you get a 30 minute show (actually 24 minutes of content) as 30 minutes, _with the ad slots included_. To show it without ads, you actually have to take the show into a video editor and remove the ad slots, which costs video editor time, which is expensive.
Well, in this case you'd hopefully get the show directly from whoever is producing it without ads in the first place, basically the same content you might see if you buy the show on DVD.
I do get it from the producer; that is what they produce. (And the video editor time referred to is people time, not machine time, which is trivial.)
In the USA at least, the cable companies make you pay for "bundles" to get channels you want. I have to pay for 3 bundles to get 2 channels we actually want to watch. (One of these bundles is apparently only sold if you are already getting another, which we don't actually care about.) So, it actually costs us $ 40 + / month to get the two channels we want (plus a bunch we don't.) So, it occurs to me that there is a business selling solo channels on the Internet, as is, with the ads, for on the order of $ 5 - $ 10 per subscriber per month, which should leave a substantial profit after the payments to the networks and bandwidth costs.
There is zero problem for the cable companies to immediately compete with you by offering the same thing, as soon as there is competition. Since their channel is the most established, my guess is that you would have a hard time succeeding where they already have a footprint and established customers.
Yes, and that has the potential of immediately reducing their income by a factor of 2 or more. I suspect that they would compete at first by putting pressure on the channel aggregators not to sell to such businesses. (note : this is NOT a business I am pursuing at present.) What I do conclude from this is that the oncoming wave of IPTV and Internet Television is going to be very disruptive.
Where you could do well with your proposal, is where there is no cable TV available at all.
Regards
-- Mikael Abrahamsson email: swmike@swm.pp.se
The cable companies have been chomping at the bit for unbundled channels for years, so have consumers. The content providers will never let it happen. Their claim is the popular channels support the diversity of not-so-popular channels. Apparently, production costs are high all around (not surprising) and most channels do not support themselves entirely. The MSOs have had a la carte on their Santa wish list for years and the content providers do not believe in Santa Claus. :-) They believe in Benjamin Franklin...lots and lots of Benjamin Franklin. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. On Jan 13, 2007, at 7:14 AM, Marshall Eubanks wrote:
In the USA at least, the cable companies make you pay for "bundles" to get channels you want. I have to pay for 3 bundles to get 2 channels we actually want to watch. (One of these bundles is apparently only sold if you are already getting another, which we don't actually care about.) So, it actually costs us $ 40 + / month to get the two channels we want (plus a bunch we don't.) So, it occurs to me that there is a business selling solo channels on the Internet, as is, with the ads, for on the order of $ 5 - $ 10 per subscriber per month, which should leave a substantial profit after the payments to the networks and bandwidth costs.
[ Note: Please don't send MIME/HTML messages to mailing lists ] Thus spake Gian Constantine:
The cable companies have been chomping at the bit for unbundled channels for years, so have consumers. The content providers will never let it happen. Their claim is the popular channels support the diversity of not-so-popular channels. Apparently, production costs are high all around (not surprising) and most channels do not support themselves entirely.
Regulators too. The city here tried forcing the MSOs to unbundle, and the result was that a single channel cost the same as the bundle it normally came in -- the content providers weren't willing to license them individually. The city gave in and dropped it. Just like the providers want to force people to pay for unpopular channels to subsidize the popular ones, they likewise want people to pay for unpopular programs to subsidize the popular ones. Consumers, OTOH, want to buy _programs_, not _channels_. Hollywood isn't dumb enough to fall for that, since they know 90% (okay, that's being conservative) of what they produce is crap and the only way to get people to pay for it is to jack up the price of the 10% that isn't crap and give the other 90% away. Of course, the logical solution is to quit producing crap so that such games aren't necessary, but since when has any MPAA or RIAA member decided to go that route? S Stephen Sprunk "God does not play dice." --Albert Einstein CCIE #3723 "God is an inveterate gambler, and He throws the K5SSS dice at every possible opportunity." --Stephen Hawking
On Jan 13, 2007, at 3:01 PM, Stephen Sprunk wrote:
Consumers, OTOH, want to buy _programs_, not _channels_.
This is a very important point - perceived disintermediation, perceived unbundling, ad reduction/elimination, and timeshifting are the main reasons that DVRs are so popular (and now, placeshifting with things like Slingbox and Tivo2Go, though it's very early days in that regard). So, at least on the face of it, there appears to be a high degree of congruence between the things which make DVRs attractive and things which make P2P attractive.
As to an earlier comment about video editing in order to remove ads, this is apparently the norm in the world of people who are heavy uploaders/crossloaders of video content via P2P systems. It seems there are different 'crews' who compete to produce a 'quality product' in terms of the quality of the encoding, compression, bundling/remixing, etc.; it's very reminiscent of the 'warez' scene in that regard. I believe that many of the people engaged in the above process do so because it's become a point of pride with them in the social circles they inhabit, again a la the warez community.
It's an interesting question as to whether or not the energy and 'professional pride' of this group of people could somehow be harnessed in order to provide and distribute content legally (as almost all of what people really want seems to be infringing content under the current standard model), and monetized so that they receive compensation and essentially act as the packaging and distribution arm for content providers willing to try such a model. A related question is just how important the perceived social cachet of editing/rebundling/redistributing -infringing- content is to them, and whether normalizing this behavior from a legal standpoint would increase or decrease the motivation of the 'crews' to continue providing these services in a legitimized commercial environment.
As a side note, it seems there's a growing phenomenon of 'upload cheating' taking place in the BitTorrent space, with clients such as BitTyrant and BitThief becoming more and more popular while at the same time disrupting the distribution economies of P2P networks. This has caused a great deal of consternation in the infringing-oriented P2P community of interest, with the developers/operators of various BitTorrent-type systems such as BitComet working at developing methods of detecting and blocking downloading from users who 'cheat' in this fashion; it is instructive (and more than a little ironic) to watch as various elements within the infringing-oriented P2P community attempt to outwit and police one another's behavior, especially when compared/contrasted with the same classes of ongoing conflict between the infringing-oriented P2P community, content producers, and SPs.
-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice
Technology is legislation. -- Karl Schroeder
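As a purely hypothetical illustration of the 'cheating' detection mentioned above, here is a minimal Python sketch that flags peers by upload/download ratio. This is not how BitComet or any other real client works; the thresholds and the PeerStats structure are invented for the example:

    # Hypothetical sketch of one heuristic a tracker or client might use
    # to flag free-riding peers; real clients may use entirely different
    # methods (e.g. per-connection choking decisions).
    from dataclasses import dataclass

    @dataclass
    class PeerStats:
        peer_id: str
        uploaded_bytes: int
        downloaded_bytes: int

    def suspected_freeriders(peers, min_downloaded=50 * 2**20, min_ratio=0.05):
        """Flag peers that have taken a meaningful amount of data but
        uploaded almost nothing in return."""
        flagged = []
        for p in peers:
            if p.downloaded_bytes < min_downloaded:
                continue  # too little history to judge
            ratio = p.uploaded_bytes / max(p.downloaded_bytes, 1)
            if ratio < min_ratio:
                flagged.append(p.peer_id)
        return flagged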
On Sat, 13 Jan 2007, Roland Dobbins wrote:
again a la the warez community. It's an interesting question as to whether or not the energy and 'professional pride' of this group of people could somehow be harnessed in order to provide and distribute content legally (as almost all of what people really want seems to be infringing content under the current standard model), and monetized so that they receive compensation and essentially act as the packaging and distribution arm for content providers willing to try such a model. A related question is just how
You make a lot of very valid points in your email, but I just had to respond to the above. The only reason they have for ripping, ad-removing and distributing TV series over the internet is that there is no legal way to obtain these in the quality they provide. So you're right, they provide a service people want at a price they want (remember that people spend quite a lot of money on hard drives, broadband connections etc. to give them the experience they require). If this same experience could be enjoyed via a VoD box from a service provider at a low enough price that people would want to pay for it (along the lines of the prices I mentioned earlier), I am sure that a lot of regular people would switch away from getting their content via P2P and get it directly from the source.
Why go through ripping, ad-removing, xvid-encoding, the warez scene, then P2P sites, and then have to unpack the content to watch it, perhaps on your computer, when the content creator is sitting there on a perhaps 50-100 megabit/s MPEG stream of the content from which you could directly create a high-VBR MPEG4 stream via some replication system, and then VoD it to your home via your broadband internet connection?
There is only one reason those people do what they do: the content owners want to control the distribution channel, and they're not realising they never will be able to do that. DRM has always failed; systems like Macrovision, DVD region coding, DVD encryption and now, I read, the HD DVD system are all broken, and future systems will be broken. So the key is convenience and quality at a low price, aka price/performance on the experience. Make it cheap and convenient enough that the current hassle is just not worth it. -- Mikael Abrahamsson email: swmike@swm.pp.se
On Sat, Jan 13, 2007 at 06:11:32PM -0800, Roland Dobbins wrote:
This is a very important point - perceived disintermediation, perceived unbundling, ad reduction/elimination, and timeshifting are the main reasons that DVRs are so popular
I am an unusual case, not having much time or interest in passive entertainment, but I have moved to a MythTV box for my entertainment center. I don't have cable TV and my broadcast quality is such that I don't bother with it. I can find sufficient things on the net to occupy those idle times, and can watch them on my limited schedule and terms. The BBC in particular has some interesting documentaries, and I point you to a doubly relevant video series below. Some others have mentioned that a pay system that was significantly easier to use than the infringing technologies would turn the tide in illicit copying. Those interested in the direction things are going should read up on Peter Gutmann's paper on the costs of Vista Content Protection. It is unfortunate the content owners are more interested in making illicit copying hard than in making legal purchase and use of the content easy. I don't intend to pay for systems that I don't control, don't intend to store my data in formats I don't have documentation for, and don't anticipate paying for DRM-encoded files ever, mostly because I'd have to pay for a crippled system which reminds me of buying a car with the hood welded shut in order to have the privilege of renting content. Usually in such situations the industry is willing to engage in some loss leaders; I'd take a free crippled media player, but probably in the end would resent its closed nature, its lack of flexibility or expandability, and all the things that led me to personal computers and software in the first place.
As to an earlier comment about video editing in order to remove ads, this is apparently the norm in the world of people who are heavy uploaders/crossloaders of video content via P2P systems. It seems there are different 'crews' who compete to produce a 'quality product' in terms of the quality of the encoding, compression, bundling/remixing, etc.; it's very reminiscent of the 'warez' scene in that regard.
This is an interesting free video series on the illicit movie copying "scene": http://www.welcometothescene.com/ It is somewhat unusual in that most of the videos are split screenshots, and most of the conversation is typed, and that an understanding of various technical topics is necessary to be able to follow the show at all.
It's an interesting question as to whether or not the energy and 'professional pride' of this group of people could somehow be harnessed in order to provide and distribute content legally (as almost all of what people really want seems to be infringing content under the current standard model), and monetized so that they receive compensation and essentially act as the packaging and distribution arm for content providers willing to try such a model.
IMHO I fail to see how they would be (or remain) any different from the current distribution channels. It's akin to asking if the open-source community could somehow be harnessed and paid for creating software. Yes; it's already being done, and there are qualitative differences in the results. When there is no financial interest, artisanship and craftsmanship predominate as motivators. When driven by financial interests, often those languish, and the market forces of suckification move the product inexorably from one which is the most desirable to use, to one with as many built-in annoyances and advertisements as the end-user will tolerate, all the useless features necessary to confuse the purchaser into rational ignorance, and all plausible mechanisms to lock the user in over time (or otherwise raise their switching costs). But I'm not cynical... ;-) This is way off charter, but I recently read of a study where art students were asked to create some artwork. One group was given a financial reward. The results were anonymized, and evaluators judged the results. Once unblinded, the study found that the group with the financial reward was statistically significantly judged as less creative and as producing lower-quality work.
As a side note, it seems there's a growing phenomenon of 'upload cheating' taking place in the BitTorrent space, with clients such as BitTyrant and BitThief becoming more and more popular while at the same time disrupting the distribution economies of P2P networks. This has caused a great deal of consternation in the infringing- oriented P2P community of interest, with the developers/operators of various BitTorrent-type systems such as BitComet working at developing methods of detecting and blocking downloading from users who 'cheat' in this fashion; it is instructive (and more than a little ironic) to watch as various elements within the infringing- oriented P2P community attempt to outwit and police one another's behavior, especially when compared/contrasted with the same classes of ongoing conflict between the infringing-oriented P2P community, content producers, and SPs.
It has a poetic quality that one could only surpass by using someone's botnet to DDoS them every time you catch them online. Reminds me of "The Grifters". -- ``If you can't trust a fixed fight, what can you trust?'' -- Miller's Crossing -><- <URL:http://www.subspacefield.org/~travis/>
On Jan 13, 2007, at 6:12 AM, Marshall Eubanks wrote:
On Jan 12, 2007, at 11:27 PM, Mikael Abrahamsson wrote:
On Fri, 12 Jan 2007, Gian Constantine wrote:
I am pretty sure we are not becoming a VoD world. Linear programming is much better for advertisers. I do not think content providers, nor consumers, would prefer a VoD only service. A handful of consumers would love it, but many would not.
My experience is that when you show people VoD, they like it. A lot of people won't abandon linear programming because it's easy to just watch whatever is "on", but if you give them the possibility of watching VoD (DVD sales of TV series for instance) some will definitely start doing both. Same thing with HDTV: until you show it to people they couldn't care less, but once you've shown them they do start to get interested.
I have been trying to find out the advertising ARPU for the cable companies for a prime time TV show in the US, i.e., how much I would need to pay them to get the same content but without the advertising, and then add the cost of VoD delivery. This is purely theoretical, but it would give a rough indication of what a VoD distribution model might cost the end user if we were to add that distribution channel. Does anyone know any rough figures for advertising ARPU per hour of primetime? I'd love to hear it.
Generally, in the US, the content is sent to the cable company with Ads already inserted, although they might get their own Ad slots. You would need to talk to the source, i.e., the network. Since you would be threatening the business model of their major customers, you would need patience and a lot of financial backing.
For the US, an analysis by Kenneth Wilbur http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885465 , table 1, from this recent meeting in DC http://www.web.virginia.edu/media/agenda.html
shows that the cost per thousand per ad (the CPM) averaged over 5 networks and all nights of the week, was $ 24 +- 9; these are 1/2 minute ads. The mean ad level per half-hour is 5.15 minutes, so that's 10.3 x $ 24 or $ 247 / hour / 1000. This
Sorry, that should be per half-hour (i.e., there are 10.3 half-minute ads per half-hour on average.)
is for the evening; rates and audiences at other times are less. So, for a 1/2 hour evening show, on average the VOD would need to cost at least $ 0.12 US to recoup the ad revenues. Popular shows get a higher CPM, so they would cost
So that should be $ 0.25 per half hour per person. I think that the advertising world needs a more "metric" system of measuring things and that I need some coffee.
more. The Wilbur paper and some of the other papers at this conference present a lot of breakdown of these sorts of statistics, if you are interested.
Regards Marshall
Regards
-- Mikael Abrahamsson email: swmike@swm.pp.se
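For readers following the numbers, a minimal Python sketch of the corrected per-half-hour arithmetic, using the thread's averaged figures ($24 CPM, 5.15 ad minutes per half hour) as assumptions:

    # Sketch of the ad-revenue arithmetic above, using the corrected
    # "per half-hour" figures: ~10.3 half-minute ads per half hour at an
    # average CPM of about $24 (the thread's averages, not precise rates).
    cpm = 24.0                        # dollars per thousand viewers per 30-second ad
    ads_per_half_hour = 5.15 / 0.5    # 5.15 ad minutes / 0.5-minute ads = 10.3

    revenue_per_1000_viewers = ads_per_half_hour * cpm    # ~$247 per half hour
    revenue_per_viewer = revenue_per_1000_viewers / 1000  # ~$0.25 per half hour

    print(f"~${revenue_per_1000_viewers:.0f} per 1000 viewers per half hour")
    print(f"~${revenue_per_viewer:.2f} per viewer per half hour")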
On 12 Jan 2007, at 15:26, Gian Constantine wrote:
I am pretty sure we are not becoming a VoD world. Linear programming is much better for advertisers. I do not think content providers, nor consumers, would prefer a VoD only service. A handful of consumers would love it, but many would not.
There are already cheap and efficient ways of doing VoD-like services with a PVR - I timeshift almost everything that I want to watch because it's on at inconvenient times. So shows get spooled to disk whilst they're broadcasted efficiently, and I can watch them later. Any sort of Broadcast-Video-over-IP system that employed that technology would be a winner. You don't need to 'broadcast' the show in real time either if it's going to be spooled to disk, even as it is viewed. -a
I am pretty sure we are not becoming a VoD world. Linear programming is much better for advertisers. I do not think content providers, nor consumers, would prefer a VoD only service. A handful of consumers would love it, but many would not.
There are already cheap and efficient ways of doing VoD-like services with a PVR - I timeshift almost everything that I want to watch because it's on at inconvenient times. So shows get spooled to disk whilst they're broadcasted efficiently, and I can watch them later.
Any sort of Broadcast-Video-over-IP system that employed that technology would be a winner. You don't need to 'broadcast' the show in real time either if it's going to be spooled to disk, even as it is viewed.
This system works perfectly in our linear-line distribution (channels). As a user you can choose the time you want to see the show, but not the show itself. Capacity on a PVR device is finite, and if you don't want to waste the space on whatever happens to be broadcast, you have to program the device. I have ten channels on my cable TV and sometimes I'm confused about what to record. Being in the US and paying for ~100 channels would drive me mad crawling channel schedules :-) So the technology is nice, but not "what you want is what you get". So you cannot address the long tail using this technology. Regards Michal
On 15-Jan-2007, at 08:48, Michal Krsek wrote:
This system works perfectly in our linear-line distribution (channels). As a user you can choose the time you want to see the show, but not the show itself. Capacity on a PVR device is finite, and if you don't want to waste the space on whatever happens to be broadcast, you have to program the device. I have ten channels on my cable TV and sometimes I'm confused about what to record. Being in the US and paying for ~100 channels would drive me mad crawling channel schedules :-)
So the technology is nice, but not "what you want is what you get". So you cannot address the long tail using this technology.
These are all UI details. The (Scientific Atlanta, I think) PVRs that Rogers Cable gives subscribers here in Ontario let you specify the *names* of shows that you like, rather than selecting specific channels and times; I seem to think you can also tell it to automatically ditch old recorded material when disk space becomes low. One thing that may not be obvious to people who haven't had this misfortune of consuming it at first hand is that North American TV, awash with channels as it is, contains a lot of duplicated content. The same episode of the same show might be broadcast tens of times per week; the same advertisement might be broadcast tens of times per hour. How much more programming would the existing networks support if they were able to reduce those retransmissions, relying on the ubiquity of set-top boxes with PVR functionality? Joe
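A hypothetical sketch of the two behaviours described above (record by programme name, evict the oldest recordings when disk space runs low); this is not the actual Scientific Atlanta/Rogers implementation, just an illustration in Python:

    # Hypothetical PVR scheduling logic: match wanted show names against
    # an EPG, and delete oldest recordings when free space is low.
    def shows_to_record(epg_entries, wanted_names):
        """epg_entries: list of dicts with 'name', 'channel', 'start'."""
        wanted = {name.lower() for name in wanted_names}
        return [e for e in epg_entries if e["name"].lower() in wanted]

    def evict_old_recordings(recordings, free_bytes, low_water_mark):
        """recordings: list of dicts with 'path', 'recorded_at', 'size'."""
        by_age = sorted(recordings, key=lambda r: r["recorded_at"])
        while free_bytes < low_water_mark and by_age:
            oldest = by_age.pop(0)
            free_bytes += oldest["size"]   # reclaim space from the oldest recording
            recordings.remove(oldest)
        return free_bytes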
The problem with this all (or mostly) VoD model is the entrenched culture. In countries outside of the U.S. with smaller channel lineups, an all VoD model might be easier to migrate to over time. In the U.S., where we have 200+ channel lineups, consumers have become accustomed to the massive variety and instant gratification of a linear lineup. If you leave it to the customer to choose their programs, and then wait for them to arrive and be viewed, the instant gratification aspect is lost. This is important to consumers here. While I do not think an all or mostly VoD model will work for consumers in U.S. in the near term (next 5 years), it may work in the long term (7-10 years). There are so many obstacles in the way from a business side of things, though. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. On Jan 15, 2007, at 9:31 AM, Joe Abley wrote:
On 15-Jan-2007, at 08:48, Michal Krsek wrote:
This system works perfectly in our linear-line distribution (channels). As a user you can choose the time you want to see the show, but not the show itself. Capacity on a PVR device is finite, and if you don't want to waste the space on whatever happens to be broadcast, you have to program the device. I have ten channels on my cable TV and sometimes I'm confused about what to record. Being in the US and paying for ~100 channels would drive me mad crawling channel schedules :-)
So the technology is nice, but not "what you want is what you get". So you cannot address the long tail using this technology.
These are all UI details.
The (Scientific Atlanta, I think) PVRs that Rogers Cable gives subscribers here in Ontario let you specify the *names* of shows that you like, rather than selecting specific channels and times; I seem to think you can also tell it to automatically ditch old recorded material when disk space becomes low.
One thing that may not be obvious to people who haven't had this misfortune of consuming it at first hand is that North American TV, awash with channels as it is, contains a lot of duplicated content. The same episode of the same show might be broadcast tens of times per week; the same advertisement might be broadcast tens of times per hour.
How much more programming would the existing networks support if they were able to reduce those retransmissions, relying on the ubiquity of set-top boxes with PVR functionality?
Joe
At 09:50 a.m. 15/01/2007 -0500, Gian Constantine wrote:
The problem with this all (or mostly) VoD model is the entrenched culture. In countries outside of the U.S. with smaller channel lineups, an all VoD model might be easier to migrate to over time. In the U.S., where we have 200+ channel lineups, consumers have become accustomed to the massive variety and instant gratification of a linear lineup. If you leave it to the customer to choose their programs, and then wait for them to arrive and be viewed, the instant gratification aspect is lost. This is important to consumers here.
While I do not think an all or mostly VoD model will work for consumers in U.S. in the near term (next 5 years), it may work in the long term (7-10 years). There are so many obstacles in the way from a business side of things, though.
I don't see many obstacles for content and neither do other broadcasters. The broadcast world is changing. Late last year ABC or NBC (sorry brain fade) announced the layoff of 700 news staff, saying news is no longer king. Instead they are moving to a strategy similar to that of the BBC, i.e., lots of on-demand content on the Internet. Rich
The changes in network news have little to do with consumer tendencies or entrenched content provider culture. News departments have operated at a financial loss for many many years. The big networks supported news as a service to the public, not as a moneymaker. Furthermore, the internet has really changed the way news is consumed. I really think it falls outside of the entertainment discussion. It is a very different product. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. On Jan 15, 2007, at 5:53 PM, Richard Naylor wrote:
At 09:50 a.m. 15/01/2007 -0500, Gian Constantine wrote:
The problem with this all (or mostly) VoD model is the entrenched culture. In countries outside of the U.S. with smaller channel lineups, an all VoD model might be easier to migrate to over time. In the U.S., where we have 200+ channel lineups, consumers have become accustomed to the massive variety and instant gratification of a linear lineup. If you leave it to the customer to choose their programs, and then wait for them to arrive and be viewed, the instant gratification aspect is lost. This is important to consumers here.
While I do not think an all or mostly VoD model will work for consumers in U.S. in the near term (next 5 years), it may work in the long term (7-10 years). There are so many obstacles in the way from a business side of things, though.
I don't see many obstacles for content and neither do other broadcasters. The broadcast world is changing. Late last year ABC or NBC (sorry brain fade) announced the layoff of 700 news staff, saying news is no longer king. Instead they are moving to a strategy similar to that of the BBC, i.e., lots of on-demand content on the Internet.
Rich
On Tue, Jan 16, 2007 at 11:53:25AM +1300, Richard Naylor wrote: [...]
I don't see many obstacles for content and neither do other broadcasters. The broadcast world is changing. Late last year ABC or NBC (sorry brain fade) announced the layoff of 700 news staff, saying news is no longer king.
Was it ever? Allegedly Murdoch's Sky only launched their Sky News channel so they could claim to be a reputable broadcaster.
On Jan 12, 2007, at 10:05 AM, Frank Bulk wrote:
If we're becoming a VOD world, does multicast play any practical role in video distribution?
Not to end users. I think multicast is used a fair amount for precaching; presumably that would increase in this scenario. Regards Marshall P.S. Of course, I do not agree we are moving to a pure VOD world. I agree with Michal Krsek in this regard.
Frank
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Michal Krsek Sent: Wednesday, January 10, 2007 2:28 AM To: Marshall Eubanks Cc: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
Hi Marshall,
- the largest channel has 1.8% of the audience
- 50% of the audience is in the largest 2700 channels
- the least watched channel has ~ 10 simultaneous viewers
- the multicast bandwidth usage would be 3% of the unicast.
I'm a bit skeptical about the future of channels. To make money from the long tail, you have to adapt your distribution to the user's needs. It is not only format, codec ... but also the time frame. You can organise your programs into channels, but they will not run simultaneously for all the users. I want to control my TV; I don't want my TV to jockey my life.
For the distribution, you as the content owner have to help the ISP find the right way to distribute your content. For example: having a distribution center in a Tier 1 ISP's network will make money from Tier 2 ISPs connected directly to that Tier 1. Probably, having a CDN (your own, or a paid service) will be the only way to do large-scale non-synchronous programming.
Regards Michal
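To illustrate why the multicast figure quoted above can be so small relative to unicast, here is a toy Python model with an assumed Zipf-like channel popularity (it is not the real data behind the 3% estimate): aggregate unicast bandwidth scales with the number of viewers, while multicast scales with the number of channels that have at least one viewer.

    # Toy illustration only: assumed Zipf-like popularity, not the
    # measured audience distribution quoted above.
    def bandwidth_ratio(viewers_per_channel, per_stream_mbps=2.0):
        unicast = sum(viewers_per_channel) * per_stream_mbps
        multicast = sum(1 for v in viewers_per_channel if v > 0) * per_stream_mbps
        return multicast / unicast

    channels, audience = 10_000, 1_000_000
    harmonic = sum(1.0 / k for k in range(1, channels + 1))
    viewers = [round(audience / (k * harmonic)) for k in range(1, channels + 1)]
    print(f"multicast/unicast bandwidth ~ {bandwidth_ratio(viewers):.1%}")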
Of course, this below is for inter-domain. There is no shortage of multicast walled garden deployments. Regards Marshall On Jan 12, 2007, at 7:44 PM, Marshall Eubanks wrote:
On Jan 12, 2007, at 10:05 AM, Frank Bulk wrote:
If we're becoming a VOD world, does multicast play any practical role in video distribution?
Not to end users.
I think multicast is used a fair amount for precaching; presumably that would increase in this scenario.
Regards Marshall
P.S. Of course, I do not agree we are moving to a pure VOD world. I agree with Michal Krsek in this regard.
Frank
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Michal Krsek Sent: Wednesday, January 10, 2007 2:28 AM To: Marshall Eubanks Cc: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
Hi Marshall,
- the largest channel has 1.8% of the audience
- 50% of the audience is in the largest 2700 channels
- the least watched channel has ~ 10 simultaneous viewers
- the multicast bandwidth usage would be 3% of the unicast.
I'm a bit skeptical about the future of channels. To make money from the long tail, you have to adapt your distribution to the user's needs. It is not only format, codec ... but also the time frame. You can organise your programs into channels, but they will not run simultaneously for all the users. I want to control my TV; I don't want my TV to jockey my life.
For the distribution, you as the content owner have to help the ISP find the right way to distribute your content. For example: having a distribution center in a Tier 1 ISP's network will make money from Tier 2 ISPs connected directly to that Tier 1. Probably, having a CDN (your own, or a paid service) will be the only way to do large-scale non-synchronous programming.
Regards Michal
On Tue, 9 Jan 2007, Valdis.Kletnieks@vt.edu wrote:
between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
My opinion on the downside of video multicast is that if you want it in real time your SLA figures on acceptable packet loss go down from fractions of a percent into thousandths of a percent, at least with current implementations of video. Imagine internet multicast and having customers complain about bad video quality and trying to chase down that last 1/100000 packet loss that makes people's video pixelate every 20-30 minutes, and the video stream doesn't even originate in your network? For multicast video to be easier to implement we need more robust video codecs that can handle jitter and packet loss that are currently present in networks and handled acceptably by TCP for unicast. -- Mikael Abrahamsson email: swmike@swm.pp.se
On Jan 10, 2007, at 5:42 AM, Mikael Abrahamsson wrote:
On Tue, 9 Jan 2007, Valdis.Kletnieks@vt.edu wrote:
between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
My opinion on the downside of video multicast is that if you want it in real time your SLA figures on acceptable packet loss go down from fractions of a percent into thousandths of a percent, at least with current implementations of video.
Actually, this is true with unicast as well. This can (I think) largely be handled by a fairly moderate amount of Forward Error Correction. Regards Marshall
Imagine internet multicast and having customers complain about bad video quality and trying to chase down that last 1/100000 packet loss that makes people's video pixelate every 20-30 minutes, and the video stream doesn't even originate in your network?
For multicast video to be easier to implement we need more robust video codecs that can handle jitter and packet loss that are currently present in networks and handled acceptably by TCP for unicast.
-- Mikael Abrahamsson email: swmike@swm.pp.se
Marshall Eubanks wrote:
Actually, this is true with unicast as well.
This can (I think) largely be handled by a fairly moderate amount of Forward Error Correction.
Regards Marshall
Back before "streaming" meant HTTP-like protocols over port 80, when UDP was actually used, we did some experiments with FEC and discovered that reasonable interleaving (so that two consecutive lost packets could be recovered) and 1:10 FEC resulted in a zero-loss environment in all cases we tested.
Pete
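A minimal sketch of one way such a scheme could work, assuming simple XOR parity (one parity packet per ten data packets) with interleaving; this is an illustration, not Petri's actual implementation:

    # Assumed scheme for illustration: per-group XOR parity plus
    # interleaving. Consecutive packets on the wire land in different
    # parity groups, so a burst of two consecutive losses costs each
    # group at most one packet, which the XOR parity can rebuild.
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(group):
        parity = bytes(len(group[0]))
        for pkt in group:
            parity = xor_bytes(parity, pkt)
        return parity

    def recover(group_with_one_hole, parity):
        """group_with_one_hole: list where exactly one entry is None."""
        acc = parity
        for pkt in group_with_one_hole:
            if pkt is not None:
                acc = xor_bytes(acc, pkt)
        return acc  # the missing packet

    # Interleave a stream of fixed-size packets into `depth` groups of 10.
    depth, group_size, pkt_len = 2, 10, 4
    stream = [bytes([i] * pkt_len) for i in range(depth * group_size)]
    groups = [stream[g::depth] for g in range(depth)]
    parities = [make_parity(g) for g in groups]

    # Simulate losing two consecutive packets (wire indices 6 and 7).
    lost = {6, 7}
    damaged = [[p if (g + depth * i) not in lost else None
                for i, p in enumerate(grp)] for g, grp in enumerate(groups)]
    repaired = [recover(grp, par) for grp, par in zip(damaged, parities)]
    assert repaired[6 % depth] == stream[6] and repaired[7 % depth] == stream[7]

Each group carries one parity packet per ten data packets (the 1:10 overhead mentioned above), so a single loss per group is recoverable, and the interleaving is what turns a two-packet burst into one loss per group.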
Sounds a little like low buffering and sparse I-frames, but I'm no MPEG expert. :-) Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. On Jan 10, 2007, at 5:42 AM, Mikael Abrahamsson wrote:
On Tue, 9 Jan 2007, Valdis.Kletnieks@vt.edu wrote:
between handling 30K unicast streams, and 30K multicast streams that each have only one or at most 2-3 viewers?
My opinion on the downside of video multicast is that if you want it in real time your SLA figures on acceptable packet loss go down from fractions of a percent into thousandths of a percent, at least with current implementations of video.
Imagine internet multicast and having customers complain about bad video quality and trying to chase down that last 1/100000 packet loss that makes people's video pixelate every 20-30 minutes, and the video stream doesn't even originate in your network?
For multicast video to be easier to implement we need more robust video codecs that can handle jitter and packet loss that are currently present in networks and handled acceptably by TCP for unicast.
-- Mikael Abrahamsson email: swmike@swm.pp.se
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Gian Constantine Sent: Monday, January 08, 2007 7:27 PM To: Thomas Leavitt Cc: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
My contention is simple. The content providers will not allow P2P video as a legal commercial service anytime in the near future. Furthermore, most ISPs are going to side with the content providers on this one. Therefore, discussing it at this point in time is purely academic, or more so, diversionary.
I don't think they have a choice, really. The state of the art in application-aware QoS/rate shaping is so far behind the times that by the time it caught up, the application would have changed. Bora
On Mon Jan 08, 2007 at 10:26:30PM -0500, Gian Constantine wrote:
My contention is simple. The content providers will not allow P2P video as a legal commercial service anytime in the near future.
Furthermore, most ISPs are going to side with the content providers on this one. Therefore, discussing it at this point in time is purely academic, or more so, diversionary.
In my experience, content providers want to use P2P because it "reduces" their distribution costs (in quotes, because I'm not convinced it does, in the real world). Content providers don't care whether access providers like P2P or not, just whether it works or not. On one hand, access providers are putting in place rate limiting or blocking of P2P (subject to discussions of how effective those are), but on the other hand, content providers are saying that P2P is the future... Simon
I am not sure what I was thinking. Mr Bonomi was kind enough to point out a failed calculation for me. Obviously, an HD file would only be about 3.7GB for a 90-minute file at 5500kbps. In my haste, I neglected to convert bits to bytes. My apologies. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net On Jan 8, 2007, at 9:07 PM, Gian Constantine wrote:
There may have been a disconnect on my part, or at least, a failure to disclose my position. I am looking at things from a provider standpoint, whether as an ISP or a strict video service provider.
I agree with you. From a consumer standpoint, a trickle or off-peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download, given good 720p at ~5500kbps, resulting in a 30GB file for a 90 minute title.
Service providers wishing to provide this service to their customers may see some success where they control the access medium (copper loop, coax, FTTH). Offering such a service to customers outside of this scope would prove very expensive, and likely, would never see a return on the investment without extensive peering arrangements. Even then, distribution rights would be very difficult to attain without very deep pockets and crippling revenue sharing. The studios really dislike the idea of transmission outside of a closed network. Don't forget. Even the titles you mentioned are still owned by very large companies interested in squeezing every possible dime from their assets. They would not be cheap to acquire.
Further, torrent-like distribution is a long long way away from sign off by the content providers. They see torrents as the number one tool of content piracy. This is a major reason I see the discussion of tripping upstream usage limits through content distribution as moot.
I am with you on the vision of massive content libraries at the fingertips of all, but I see many roadblocks in the way. And, almost none of them are technical in nature.
Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. Office: 404-748-6207 Cell: 404-808-4651 Internal Ext: x22007 constantinegi@corp.earthlink.net
On Jan 8, 2007, at 7:51 PM, Bora Akyol wrote:
Please see my comments inline:
-----Original Message----- From: Gian Constantine [mailto:constantinegi@corp.earthlink.net] Sent: Monday, January 08, 2007 4:27 PM To: Bora Akyol Cc: nanog@merit.edu Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
<snip>
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
To the end user, there is no cost to downloading videos when they are sleeping. I would argue that other than sports (and some news) events, there is pretty much no content that needs to be real time. What the downloading (possibly 24x7) does is to stress the ISP network to its max since the assumptions of statistical multiplexing go out the window. Think of a Tivo that downloads content off the Internet 24x7.
The user is still paying for only what they pay each month, and this is "network neutrality 2.0" all over again.
You are correct on the long tail nature of music. But music is not consumed in a similar manner as TV and movies. Television and movies involve a little more commitment and attention. Music is more for the moment and the mood. There is an immediacy with music consumption. Movies and television require a slight degree more patience from the consumer. The freshness (debatable :-) ) of new release movies and TV can often command the required patience from the consumer. Older content rarely has the same pull.
I would argue against your distinction between visual and auditory content. There is a lot of content out there that a lot of people watch and the content is 20-40+ years old. Think Brady Bunch, Bonanza, or archived games from NFL, MLB etc. What about Smurfs (for those of us with kids)?
This is only the beginning.
If I can get a 500GB box and download MP4 content, that's a lot of essentially free storage.
Coming back to NANOG content, I think video (not streamed but multi-path distributed video) is going to bring the networks down not by sheer bandwidth alone but by challenging the assumptions behind the engineering of the network. I don't think you need huge SANs per se to store the content either; since it is multi-source/multi-sink, the reliability is built in.
The SPs like Verizon & AT&T moving fiber to the home, hoping to get in on the "value add" action, are in for an awakening IMHO.
Regards
Bora ps. I apologize for the tone of my previous email. That sounded grumpier than I usually am.
Gian Constantine wrote:
I agree with you. From a consumer standpoint, a trickle or off-peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download, given good 720p at ~5500kbps, resulting in a 30GB file for a 90 minute title.
Kilobits, not bytes. So it's 3.7GB for 720p 90 minutes at 5.5Mbps. Regularly transferred over the internet. Popular content in the size category 2-4GB has tens of thousands and in some cases hundreds of thousands of downloads from a single tracker. Saying it's "out of the question" does not make it go away. But denial is usually the first phase anyway. Pete
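For reference, the corrected arithmetic in a couple of lines of Python (decimal GB, bitrates as quoted in the thread):

    # Kilobits per second, not kilobytes: divide by 8 to get bytes.
    def file_size_gb(bitrate_kbps, minutes):
        bits = bitrate_kbps * 1000 * minutes * 60
        return bits / 8 / 1e9       # bits -> bytes -> GB (decimal)

    print(f"SD ~1350 kbps, 90 min: {file_size_gb(1350, 90):.1f} GB")   # ~0.9 GB
    print(f"HD ~5500 kbps, 90 min: {file_size_gb(5500, 90):.1f} GB")   # ~3.7 GB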
Actually, I acknowledged the calculation mistake in a subsequent post. Gian Anthony Constantine Senior Network Design Engineer Earthlink, Inc. On Jan 21, 2007, at 11:11 AM, Petri Helenius wrote:
Gian Constantine wrote:
I agree with you. From a consumer standpoint, a trickle or off- peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download, given good 720p at ~5500kbps, resulting in a 30GB file for a 90 minute title.
Kilobits, not bytes. So it's 3.7GB for 720p 90 minutes at 5.5Mbps. Regularly transferred over the internet. Popular content in the size category 2-4GB has tens of thousands and in some cases hundreds of thousands of downloads from a single tracker. Saying it's "out of the question" does not make it go away. But denial is usually the first phase anyway.
Pete
On Jan 9, 2007, at 1:51 AM, Bora Akyol wrote: [...]
I would argue that other than sports (and some news) events, there is pretty much no content that needs to be real time.
I'm not sure I agree. I've noticed that almost any form of live TV, with the exception of news and sports programming, uses the benefit of real-time transmission to allow audience interaction. For instance:
- Phone-in discussion and quiz shows
- Any show with voting
- Video request shows
Not only does this type of programming require real-time distribution, but as these shows are quite often cheaper to produce than pre-recorded entertainment or documentaries, they tend to fill a large portion of the schedule. In some cases the show producers share revenue from the phone calls, too. That makes them more attractive to commissioning editors, I suspect. Leo
Not only does this type of programming require real-time distribution, but as these shows are quite often cheaper to produce than pre-recorded entertainment or documentaries, they tend to fill a large portion of the schedule.
And since there are so many of these reality shows in existence and the existing broadcast technology seems to perfectly meet the needs of the show producers, what is the point of trying to shift these shows to the Internet? If it ain't broke, don't fix it! I do believe that the amount of video content on the Internet will increase dramatically over the next few years, just as it has in the past. But I don't believe that existing video businesses, such as TV channels, are going to shift to Internet distribution other than through specialized services. The real driver behind the future increase in video on the Internet is the falling cost of video production and the widespread knowledge of how to create watchable video. Five years ago in high school, my son was taking a video production course. Where do you think YouTube gets their content? In the past, it was broadband to the home, webcams and P2P that drove the increase in video content, but the future is not just more of the same. YouTube has leveraged the increased level of video production skills in the population but only in a crude way. Let's put it this way. How much traffic on the net was a result of dead-tree newspapers converting to Internet delivery, and how much was due to the brand-new concept of blogging? --Michael Dillon
On Mon, 8 Jan 2007, Gian Constantine wrote:
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
Let's see what I can do using today's technology: According to the itunes website they have over 3.5 million songs. Let's call it 4 million. Assume a decent bit rate and make them average 10 MB each. That's 40 TB which would cost me $6k per month to store on Amazon S3. Let's assume we use Amazon EC3 to only allow torrents of the files to be downloaded and we transfer each file twice per month. Total cost around $20k per month or $250k per year. Add $10k to pay somebody to create the interface and put up a few banner ads and it'll be self supporting. That sort of setup could come out of petty cash for larger ISPs' marketing departments. Of course there are a few problems with the above business model (mostly legal) but infrastructure costs are not one of them. Plug in your own numbers for movies and tv shows but 40 TB for each will probably be enough. -- Simon J. Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT.
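As a quick check of these figures, here is the arithmetic in Python, taking the S3 list prices of the time (roughly $0.15 per GB-month of storage and $0.20 per GB transferred out) as assumptions; it lands close to the ~$20k/month and ~$250k/year quoted above:

    # Rough reconstruction of the numbers above; the per-GB prices are
    # assumptions about the 2007-era S3 price list, not current rates.
    songs = 4_000_000
    avg_mb = 10
    library_gb = songs * avg_mb / 1000            # 40,000 GB = 40 TB

    storage_per_month = library_gb * 0.15         # ~$6,000
    transfer_per_month = 2 * library_gb * 0.20    # each file sent twice: ~$16,000
    total = storage_per_month + transfer_per_month

    print(f"library: {library_gb/1000:.0f} TB, "
          f"~${total:,.0f}/month, ~${12 * total:,.0f}/year")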
We have looked at Amazon's S3 solution for storage since it is relatively cheap. But the transit costs from Amazon are quite expensive when it comes to moving media files at a large scale. At $0.20 per GB of data transferred, that would get extremely expensive. At Pando we move roughly 60 TB a day just from our super nodes. Amazon is cheap storage but expensive delivery on a large scale. Keith O'Neill Sr. Network Engineer *Pando Networks* Simon Lyall wrote:
On Mon, 8 Jan 2007, Gian Constantine wrote:
I would also argue storage and distribution costs are not asymptotically zero with scale. Well designed SANs are not cheap. Well designed distribution systems are not cheap. While price does decrease when scaled upwards, the cost of such an operation remains hefty, and increases with additions to the offered content library and a swelling of demand for this content. I believe the graph becomes neither asymptotic, nor anywhere near zero.
Lets see what I can do using today's technology:
According to the itunes website they have over 3.5 million songs. Let's call it 4 million. Assume a decent bit rate and make them average 10 MB each. That's 40 TB which would cost me $6k per month to store on Amazon S3. Let's assume we use Amazon EC3 to only allow torrents of the files to be downloaded and we transfer each file twice per month. Total cost around $20k per month or $250k per year. Add $10k to pay somebody to create the interface and put up a few banner ads and it'll be self supporting.
That sort of setup could come out of petty cash for larger ISPs' marketing departments.
Of course there are a few problems with the above business model (mostly legal) but infrastructure costs are not one of them. Plug in your own numbers for movies and tv shows but 40 TB for each will probably be enough.
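For a sense of scale on the transfer-cost point above, the stated ~60 TB/day of super-node egress priced at $0.20 per GB (illustrative arithmetic only):

    # What ~60 TB/day of egress would cost at the $0.20/GB transfer price
    # mentioned above; illustrative only, not Pando's actual costs.
    tb_per_day = 60
    cost_per_day = tb_per_day * 1000 * 0.20       # ~$12,000/day
    print(f"~${cost_per_day:,.0f}/day, ~${30 * cost_per_day:,.0f}/month")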
Gian Constantine wrote:
Well, yes. My view on this subject is U.S.-centric. In fairness to me, this is NANOG, not AFNOG or EuroNOG or SANOG.
I thought Québec and Mexico belonged to the North American network too. ...
I agree there is a market for ethnic and niche content, but it is not the broad market many companies look for. The investment becomes much more of a gamble than marketing the latest and greatest (again debatable :-) ) to the larger market of...well...everyone.
There is only a minority in North America who happens to be white, and only some of them speak English. I remember the times when I could watch Mexican TV transmitted from a studio in Florida. Today everything is encrypted on the sats. We have to use the internet when we want something special here in Germany. I guess Karin and I are not the only ones who do not even own a TV set. The internet is the richer choice. Even if it is mostly audio (video is nasty overseas), I am sure it does make an impact in North America. Listening to my VoIP phone is mostly impossible now, at least overseas. I used to be able to phone overseas, but even the landline has deteriorated because the phone companies have switched to VoIP themselves. Cheers Peter and Karin -- Peter and Karin Dambier Cesidian Root - Radice Cesidiana Rimbacher-Strasse 16 D-69509 Moerlenbach-Bonsweiher +49(6209)795-816 (Telekom) +49(6252)750-308 (VoIP: sipgate.de) mail: peter@peter-dambier.de mail: peter@echnaton.serveftp.com http://iason.site.voila.fr/ https://sourceforge.net/projects/iason/ http://www.cesidianroot.com/
I remember the times when I could watch Mexican TV transmitted from a studio in Florida.
If it comes from a studio in Florida then it is AMERICAN TV, not Mexican TV. I believe there are three national TV networks in the USA, which are headquartered in Miami and which broadcast in Spanish. --Michael Dillon
On Sun, 7 Jan 2007, Joe Abley wrote:
Setting aside the issue of what particular ISPs today have to pay, the real cost of sending data, best-effort over an existing network which has spare capacity and which is already supported and managed is surely zero.
As long as the additional traffic doesn't exceed the existing capacity. But what happens when 5% of the paying subscribers use 95% of the existing capacity, and then the other 95% of the subscribers complain about poor performance? What is the real cost to the ISP needing to upgrade the network to handle the additional traffic being generated by 5% of the subscribers when there isn't "spare" capacity?
If I acquire content while I'm sleeping, during a low dip in my ISP's usage profile, the chances are good that nobody incurs more costs that month than if I had decided not to acquire it. (For example, you might imagine an RSS feed with BitTorrent enclosures, which requires no human presence to trigger the downloads.)
The reason why many universities buy rate-shaping devices is dorm users don't restrain their application usage to only off-peak hours, which may or may not be related to sleeping hours. If peer-to-peer applications restrained their network usage during periods of peak network usage so it didn't result in complaints from other users, it would probably have a better reputation.
If I acquire content the same time as many other people, since what I'm watching is some coordinated, streaming event, then it seems far more likely that the popularity of the content will lead to network congestion, or push up a peak on an interface somewhere which will lead to a requirement for a circuit upgrade, or affect a 95%ile transit cost, or something.
Depends on when and where the replication of the content is taking place. Broadcasting is a very efficient way to distribute the same content to large numbers of people, even when some people may watch it later. You can broadcast either streaming or file downloads. You can also unicast either streaming or file downloads. Unicast tends to be less efficient for distributing the same content to large numbers of people. Then there are lots of events in the middle. Some content is only of interest to some people. Streaming vs download and broadcast vs unicast. There are lots of combinations. One way is not necessarily the best way for every situation. Sometimes store-and-forward e-mail is useful, other times instant messenger communications is useful. Things may change over time. For example, USENET has mostly stopped being widely flooded through every ISP and large institution, and is now accessed on demand by users from a few large aggregators. Distribution methods aren't mutually exclusive.
If asynchronous delivery of content is as free as I think it is, and synchronous delivery of content is as expensive as I suspect it might be, it follows that there ought to be more of the former than the latter going on.
If it turned out that there was several orders of magnitude more content being shifted around the Internet in a "download when you are able; watch later" fashion than there is content being streamed to viewers in real-time I would be thoroughly unsurprised.
If you limit yourself to the Internet, you exclude a lot of content being shifted around and consumed in the world. The World Cup or the Super Bowl are still much bigger events than Internet-only events. Broadcast television shows with even bottom ratings are still more popular than most Internet content. The Internet is good for narrowcasting, but it's still working on mass audience events. "Asynchronous receivers" are more expensive and usually more complicated than "synchronous receivers." Not everyone owns a computer or spends several hundred dollars on a DVR. If you already own a computer, you might consider it "free." But how many people want to buy a computer for each television set? In the USA, Congress debated whether it should spend $40 per digital receiver so people wouldn't lose their over-the-air broadcasting. Gadgets that interest 5% of the population versus reaching 95% of the population may have different trade-offs.
Joe Abley said: >>(For example, you might imagine an RSS feed with BitTorrent enclosures, which requires no human presence to trigger the downloads.) I think that is essentially the Democracy client I mentioned. Great thread so far, btw.
But what happens when 5% of the paying subscribers use 95% of the existing capacity, and then the other 95% of the subscribers complain about poor performance?
"Capacity" is too vague of a word here. If we assume that the P2P software can be made to recognize the ISP's architecture and prefer peers that are topologically nearby, then the issue focuses on the ISP's own internal capacity. It should not have a major impact on the ISP's upstream capacity which involves stuff that is rented from others (transit, peering). Also, because P2P traffic has its sources evenly distributed, it makes a case for cheap local BGP peering connections, again, to offload traffic from more expensive upstream transit/peering.
What is the real cost to the ISP needing to upgrade the network to handle the additional traffic being generated by 5% of the subscribers when there isn't "spare" capacity?
In the case of DSL/Cable providers, I suspect it is mostly in the Ethernet switches that tie the subscriber lines into the network.
The reason why many universities buy rate-shaping devices is that dorm users don't restrain their application usage to off-peak hours, which may or may not be related to sleeping hours. If peer-to-peer applications restrained their network usage during periods of peak network usage so that it didn't result in complaints from other users, they would probably have a better reputation.
I am suggesting that ISP folks should be cooperating with P2P software developers. Typically, the developers have a very vague understanding of how the network is structured and are essentially trying to reverse engineer network capabilities. It should not be too difficult to develop P2P clients that receive topology hints from their local ISPs. If this results in faster or more reliable/predictable downloads, then users will choose to use such a client.
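To make the "topology hints" idea concrete, a rough sketch of the client side might look like the following. The hint URL and the {prefix: cost} JSON format are assumptions of mine, purely for illustration, since no such standard service exists:

    import ipaddress
    import json
    import urllib.request

    # Hypothetical hint service published by the local ISP; lower cost
    # means the ISP considers that prefix closer or cheaper to reach.
    HINT_URL = "http://hints.example-isp.net/topology.json"

    def fetch_hints():
        # Returns a {network: cost} mapping parsed from the hint document.
        with urllib.request.urlopen(HINT_URL) as resp:
            return {ipaddress.ip_network(prefix): cost
                    for prefix, cost in json.load(resp).items()}

    def peer_cost(peer_ip, hints, default=100):
        # Unknown peers get a high default so hinted peers are preferred.
        addr = ipaddress.ip_address(peer_ip)
        matches = [cost for net, cost in hints.items() if addr in net]
        return min(matches) if matches else default

    def rank_by_hints(peers, hints):
        return sorted(peers, key=lambda ip: peer_cost(ip, hints))

If such hints made downloads measurably faster or more predictable, the incentive for users to run a hint-aware client would take care of itself.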
The Internet is good for narrowcasting, but it's still working on mass audience events.
Then, perhaps we should not even try to use the Internet for mass audience events. Is there something wrong with the current broadcast model? Did TV replace radio? Did radio replace newspapers? --Michael Dillon
On Mon, 8 Jan 2007 10:25:54 +0000 Michael.Dillon@btradianz.com wrote: <snip>
I am suggesting that ISP folks should be cooperating with P2P software developers. Typically, the developers have a very vague understanding of how the network is structured and are essentially trying to reverse engineer network capabilities. It should not be too difficult to develop P2P clients that receive topology hints from their local ISPs. If this results in faster or more reliable/predictable downloads, then users will choose to use such a client.
I'd think TCP's underlying and constant round-trip time measurement to peers could be used for that. I've recently wondered whether P2P protocols do that, but haven't found the time to check. -- "Sheep are slow and tasty, and therefore must remain constantly alert." - Bruce Schneier, "Beyond Fear"
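A from-scratch sketch of that idea in Python, probing with a TCP connect as a crude stand-in for the RTT a running TCP session already knows (the port and timeout below are arbitrary choices of mine, not anything a real client mandates):

    import socket
    import time

    def connect_rtt(host, port=6881, timeout=2.0):
        # One TCP handshake as a crude RTT estimate; None if unreachable.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    def rank_by_rtt(peers):
        # peers: iterable of (host, port); nearest (lowest-RTT) peers first.
        measured = [(connect_rtt(host, port), (host, port)) for host, port in peers]
        return [peer for rtt, peer in sorted(p for p in measured if p[0] is not None)]

Low RTT correlates with topological closeness often enough to help, but it can't tell the client whether a path crosses an expensive transit link, which is where explicit hints from the ISP would still win.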
Dear Sean; On Jan 8, 2007, at 2:34 AM, Sean Donelan wrote:
On Sun, 7 Jan 2007, Joe Abley wrote:
Setting aside the issue of what particular ISPs today have to pay, the real cost of sending data, best-effort over an existing network which has spare capacity and which is already supported and managed is surely zero.
As long as the additional traffic doesn't exceed the existing capacity.
But what happens when 5% of the paying subscribers use 95% of the existing capacity, and then the other 95% of the subscribers complain about poor performance? What is the real cost to the ISP needing to upgrade the network to handle the additional traffic being generated by 5% of the subscribers when there isn't "spare" capacity?
If I acquire content while I'm sleeping, during a low dip in my ISP's usage profile, the chances are good that nobody incurs more costs that month than if I had decided not to acquire it. (For example, you might imagine an RSS feed with BitTorrent enclosures, which requires no human presence to trigger the downloads.)
The reason why many universities buy rate-shaping devices is that dorm users don't restrain their application usage to off-peak hours, which may or may not be related to sleeping hours. If peer-to-peer applications restrained their network usage during periods of peak network usage so that it didn't result in complaints from other users, they would probably have a better reputation.
Do not count on demand being geographically localized or limited to certain times of day. The audience for streaming is world-wide (for an example, see http://www.americafree.tv/Ads/geographical.html for a few-hour slice in the early evening EST on a Sunday - note, BTW, that this is for English-language content). The roughly equal distribution to the US and the EU is entirely normal; typically the peak-to-trough bandwidth usage variation during a day is less than a factor of 2, and frequently it disappears altogether. Regards Marshall
If I acquire content the same time as many other people, since what I'm watching is some coordinated, streaming event, then it seems far more likely that the popularity of the content will lead to network congestion, or push up a peak on an interface somewhere which will lead to a requirement for a circuit upgrade, or affect a 95%ile transit cost, or something.
Depends on when and where the replication of the content is taking place.
Broadcasting is a very efficient way to distribute the same content to large numbers of people, even when some people may watch it later. You can broadcast either streaming or file downloads. You can also unicast either streaming or file downloads. Unicast tends to be less efficient for distributing the same content to large numbers of people. Then there are lots of events in the middle. Some content is only of interest to some people.
Streaming vs. download and broadcast vs. unicast: there are lots of combinations. One way is not necessarily the best way for every situation. Sometimes store-and-forward e-mail is useful, other times instant messenger communications are useful. Things may change over time. For example, USENET has mostly stopped being widely flooded through every ISP and large institution, and is now accessed on demand by users from a few large aggregators.
Distribution methods aren't mutually exclusive.
If asynchronous delivery of content is as free as I think it is, and synchronous delivery of content is as expensive as I suspect it might be, it follows that there ought to be more of the former than the latter going on.
If it turned out that there was several orders of magnitude more content being shifted around the Internet in a "download when you are able; watch later" fashion than there is content being streamed to viewers in real-time I would be thoroughly unsurprised.
If you limit yourself to the Internet, you exclude a lot of content being shifted around and consumed in the world. The World Cup or the Super Bowl are still much bigger events than Internet-only events. Broadcast television shows with even bottom ratings are still more popular than most Internet content. The Internet is good for narrowcasting, but it's still working on mass audience events.
"Asynchronous receivers" are more expensive and usually more complicated than "synchronous receivers." Not everyone owns a computer or spends a several hundred dollars for a DVR. If you already own a computer, you might consider it "free." But how many people want to buy a computer for each television set? In the USA, Congress debated whether it should spend $40 per digital receiver so people wouldn't lose their over the air broadcasting.
Gadgets that interest 5% of the population versus reaching 95% of the population may have different trade-offs.
On 8-Jan-2007, at 02:34, Sean Donelan wrote:
On Sun, 7 Jan 2007, Joe Abley wrote:
Setting aside the issue of what particular ISPs today have to pay, the real cost of sending data, best-effort over an existing network which has spare capacity and which is already supported and managed is surely zero.
As long as the additional traffic doesn't exceed the existing capacity.
Indeed. So perhaps we should expect distribution price models whose success depends on that spare (off-peak, whatever) capacity being available to be replaced by others which don't. If that's the case, and assuming the cost benefits of using slack capacity continue to be exploited, the bandwidth metrics mentioned in the original post might be those which assume a periodic utilisation profile, rather than those which just assume that spare bandwidth will be used. (It's still accounting based on peak; the difference might be that in the second model there really isn't that much of a peak any more, and the effect of that is a bonus window during which existing capacity models will sustain the flood.)
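As a reminder of why the shape of the profile matters under the usual 95th-percentile transit billing, here is a toy calculation (sample values invented): short peaks fall into the discarded 5%, but a profile flattened by off-peak bulk transfer gets billed almost in full.

    def percentile_95(samples_mbps):
        # samples_mbps: the month's 5-minute average rates in Mbit/s.
        ordered = sorted(samples_mbps)
        index = int(len(ordered) * 0.95) - 1   # discard the top 5% of samples
        return ordered[max(index, 0)]

    # Toy profiles: short daily peaks get discarded, a flattened profile does not.
    spiky = [100] * 950 + [400] * 50
    flat = [380] * 1000
    print(percentile_95(spiky), percentile_95(flat))   # 100 vs. 380

So "free" off-peak capacity stays free only as long as filling it doesn't raise the 95th-percentile sample itself.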
If you limit yourself to the Internet, you exclude a lot of content being shifted around and consumed in the world. The World Cup or the Super Bowl are still much bigger events than Internet-only events. Broadcast television shows with even bottom ratings are still more popular than most Internet content. The Internet is good for narrowcasting, but it's still working on mass audience events.
Ah, but I wasn't comparing Internet distribution with cable/satellite/UHF/whatever -- I was comparing content which is streamed with content which isn't. The cost differences between those are fairly well understood, I think. Reliable, high-quality streaming media is expensive (ask someone like Akamai for a quote), whereas asynchronous delivery of content (e.g. through BitTorrent trackers) can result in enormous distribution of data with a centralised investment in hardware and network which is demonstrably sustainable by voluntary donations.
"Asynchronous receivers" are more expensive and usually more complicated than "synchronous receivers."
Well, there's no mainstream, blessed product which does the kind of asynchronous acquisition of content on anything like the scale of digital cable terminals; however, that's not to say that one couldn't be produced for the same cost. I'd guess that most of those digital cable boxes are running Linux anyway, which makes it a software problem. If we're considering a fight between an intelligent network (one which can support good-quality, isochronous streaming video at high data rates from the producer to the consumer) and a stupid one (which concentrates on best-effort distribution of data, asynchronously, with a smarter edge), then absent external constraints regarding copyright, digital rights, etc., I presume we'd expect the stupid network model to win. Eventually.
Not everyone owns a computer or spends several hundred dollars on a DVR. If you already own a computer, you might consider it "free."
Since I was comparing two methods of distributing material over the Internet, the availability of a computer is more or less a given. I'm not aware of a noticeable population of broadband users who don't own a computer, for example (apart from those who are broadband users without noticing, e.g. through a digital cable terminal which talks IP to the network). Joe
participants (34)
- Alexander Harrowell
- Andy Davidson
- Bora Akyol
- Brandon Butterworth
- D.H. van der Woude
- Dave Israel
- Frank Bulk
- Gian Constantine
- Joe Abley
- John Kristoff
- Keith
- Leo Vegoda
- Mark Smith
- Marshall Eubanks
- Michael.Dillon@btradianz.com
- Michal Krsek
- Mikael Abrahamsson
- nealr
- Patrick W. Gilmore
- Perry Lorier
- Peter Corlett
- Peter Dambier
- Petri Helenius
- Richard Naylor
- Roland Dobbins
- Sean Donelan
- Simon Lockhart
- Simon Lyall
- Stephen Sprunk
- Steve Gibbard
- Steve Sobol
- Thomas Leavitt
- Travis H.
- Valdis.Kletnieks@vt.edu