The 100 Gbit/s problem in your network
- Well, as it turns out, we don't have that kind of a problem.
- You don't?
- No, we do not have that kind of a problem in our network. We have plenty of bandwidth available to our customers, thank-you-very-much.
- Do you have, just to make an example, about 10 000 customers in a specific area, like a city/county or part of a city/county?
- Yes, of course!
- Do these customers have at least a 10 Mbit/s connection to the Internet?
- Yes! Who do you think we are, stupid? Haha!
- Could all those 10 000 customers, just to make it theoretical, hit the 'play' button on their Internet-connected TV at the same time, to watch the latest Quad-HD movie?
- Yes. Oh, wait a minute now! This is not fair! Damn. We're toast.

-- 
//fredan
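[The arithmetic behind the punchline, as a minimal sketch; the 10 000 subscribers and 10 Mbit/s figures are the ones from the dialogue above.]

# The thread title in one calculation: if every subscriber on a
# segment presses play at once, the aggregate is the full line rate.
customers = 10_000
per_sub_mbps = 10                      # the dialogue's per-customer rate
aggregate_gbps = customers * per_sub_mbps / 1_000
print(f"{aggregate_gbps:.0f} Gbit/s")  # -> 100 Gbit/s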
"Akamai". The actual example is "to watch the Super Bowl". :-) fredrik danerklint <fredan-nanog@fredan.se> wrote:
[..]
"Multicast" Aled On 8 February 2013 13:42, Jay Ashworth <jra@baylink.com> wrote:
"Akamai".
The actual example is "to watch the Super Bowl". :-)
fredrik danerklint <fredan-nanog@fredan.se> wrote:
- Well, as it turns out, we don't have that kind of a problem.
- You don't?
- No, we do not have that kind of a problem in our network. We have plenty of bandwidth available to our customers, thank-you-every-much.
- Do you have, just to make an example, about 10 000 customers in a specific area, like an city/county or part of a city/county?
- Yes, of course!
- Does these customers have at least 10 Mbit/s connection to the Internet?
- Yes! Who do you think we are, like stupid! Haha!
- Could all those 10 000 customers, just to make it theoretical, hit the 'play'-button on their Internet-connected-TV, at the same time, to watch the latest Quad-HD movie?
- Yes. Oh wait a minute now! This is not fair! Damn. We're toast.
-- //fredan
-- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
On 2013-02-08 15:39, Adam Vitkovsky wrote:
to watch the latest Quad-HD movie

"Multicast"

I'm afraid it has to be unicast so that people can pause/resume anytime they need to go... well, you know what I mean.
Works fine too with multicast, for instance with FuzzyCast: https://marcel.wanda.ch/Fuzzycast/

The only little snag with multicast is that it is typically available only on the last leg, and it fails when transit or simply multiple domains become involved.

Greets,
 Jeroen
Works fine too with multicast, for instance with FuzzyCast: https://marcel.wanda.ch/Fuzzycast/
(I did notice that this was developed in 2001 - 2002!)

That works if you are only distributing Video on Demand content.

"32 seconds later, after the initial delay, enough data has been received such that playout can begin"

So we are back to the b..u..f..f..e..r..i..n..g.. thing, again?

If you also want, for example, the possibility to distribute software (static content as well), can you do that with FuzzyCast?

-- 
//fredan
On 2013-02-08 16:13, fredrik danerklint wrote:
[..]
(I did notice that this was developed in 2001 - 2002!)
You really think people did not have problems with the 1mbit links they had back then? And you really think that we won't have problems with Zillion-HD or whatever they will call it in another 20 years?
That works if you are only distributing Video on Demand content.
Thus the question becomes, for what would it not work?
"32 seconds after the later, after the initial delay, enough data has been received such that playout can begin"
So we are back to the b..u..f..f..e..r..i..n..g.. thing, again?
If you also want, for example, the possibility to distribute software (static content as well), can you do that with FuzzyCast?
and:

On 2013-02-08 16:17, Adam Vitkovsky wrote:
And a 30 sec delay is unacceptable. You can use 10 cheaper VOD servers closer to the eyeballs, making it 1000 customers abusing the particular portion of the local access/aggregation network.
Read the documents and other related literature on that site a little bit further: you can overcome those first couple of seconds by fetching them 'quickly' using unicast. Yes, that does not make it a full multicast solution, but that is the whole idea of multicast usage in these scenarios: less traffic on the backbone. With this setup you only take the hits for the first couple of seconds, and after that clients have it all from multicast.

And one can of course employ strategies as used currently by, for instance, UPC's Horizon TV boxes, which already 'tune in' to the channel that the user is likely going to zap to next, thus shaving off another few bits there too...

Greets,
 Jeroen
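[A minimal host-side sketch of that hybrid startup. The HEAD_URL and multicast group below are invented placeholders, not anything from FuzzyCast itself: fetch the first seconds over unicast HTTP while joining the group carrying the steady-state stream.]

import socket
import struct
import urllib.request

HEAD_URL = "http://cache.isp.example/movie/head.ts"  # hypothetical unicast head
GROUP, PORT = "239.1.2.3", 5004                      # hypothetical mcast group

def start_playback():
    # 1. Unicast prefetch of the first ~30 s hides the multicast
    #    startup delay the FuzzyCast paper describes.
    head = urllib.request.urlopen(HEAD_URL).read()

    # 2. Meanwhile, join the multicast group for the rest of the stream
    #    (this is what IGMP snooping on the access gear reacts to).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Play the unicast head while multicast packets buffer behind it.
    return head, sock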
You really think people did not have problems with the 1mbit links they had back then?
Yes, I do.
And you really think that we won't have problems with Zillion-HD or whatever they will call it in another 20 years?
I think that is exactly what I'm trying to say with the creation of this thread.
That works if you are only distributing Video on Demand content.

Thus the question becomes, for what would it not work?

If you also want, for example, the possibility to distribute software (static content as well), can you do that with FuzzyCast?
As I asked: static content, as in files (*.zip, *.tar.gz, *.iso, etc.).
Read the documents and other related literature on that site a little bit further: you can overcome those first couple of seconds by fetching those 'quickly' using unicast.
Since you are back to the unicast thing, and since you mentioned the problem with the 1 Mbit/s links, I think your question should be: how could we put cache servers right next to our DSLAMs, aggregation switches (or wherever you want to place them in your network) and have everything that's static content cached?

I do have a suggestion for how to solve this. See my message yesterday to the mailing list.

-- 
//fredan
On 2013-02-08 17:03 , fredrik danerklint wrote:
[..]
As I asked: static content, as in files (*.zip, *.tar.gz, *.iso, etc.).
There is a difference in serving FriendsS01E01.mpg, FriendsS02E03.mkv and FriendsS03.iso??? Video on Demand is pretty static, perfect for distributing with multicast. (Now, you will run out of multicast groups in IPv4/Ethernet if you have a large number of small files, though, but there are other protocols around for that.) [..]
I do have a suggestion for how to solve this. See my message yesterday to the mailing list.
Ah, I get it, you are trying to get people to acknowledge the non-existence of your tool that does what every transparent HTTP proxy has been doing for years! ;)

For that you do not need strange DNS-stealing hacks or coordination with various parties; one only has to steal port 80. For instance, see this nice FAQ from 2002: http://tldp.org/HOWTO/TransparentProxy.html

Fortunately quite a few content providers are moving to HTTPS so that can't happen anymore.

Greets,
 Jeroen
Ah, I get it, you are trying to get people to acknowledge the non-existence of your tool that does what every transparent HTTP proxy has been doing for years! ;)
Where exactly do you put those transparent http proxy servers in your network?
For that you do not need strange DNS-stealing hacks or coordination with various parties; one only has to steal port 80.
There are two things that The Last Mile Cache does _not_ do: steal the DNS, or steal port 80. (I have to give it to you that there is a DNS solution part involved in TLMC, as well as a reverse proxy server.) It's a solution which does not force either the CSP (Content Service Provider) or the ISP to participate in TLMC. It will, though, allow a customer of an ISP (which has to participate in TLMC in the first place) to have its own cache server at their home. (And yes, the CSP needs to participate as well for it to work.)
Fortunately quite a few content providers are moving to HTTPS so that can't happen anymore.
If you want your content cached at various ISPs around the world, encrypt the content, not the session.

-- 
//fredan
Works fine too with multicast, for instance with FuzzyCast:

Well yes, but you need to make some compromises in the user experience.
And a 30 sec delay is unacceptable. You can use 10 cheaper VOD servers closer to the eyeballs, making it 1000 customers abusing the particular portion of the local access/aggregation network.

adam
On (2013-02-08 14:15 +0000), Aled Morris wrote:
"Multicast"
I don't see multicast working at Internet scale. Essentially multicast means the core is flow-routing. So we'd need some way to decide who gets to send their content as multicast and who is forced to send unicast.

It could create de facto monopolies: as new entrants to the market won't have their multicast carried, they cannot compete pricing-wise with established players who are carried.

-- 
  ++ytti
On 2/8/13 9:02 AM, Saku Ytti wrote:
On (2013-02-08 14:15 +0000), Aled Morris wrote:
"Multicast" I don't see multicast working in Internet scale.
Essentially multicast means the core is flow-routing. So we'd need some way to decide who gets to send their content as multicast and who is forced to send unicast.

The market already ruled on who gets to insert MSDP state in your routers. Inter-domain multicast, to the extent that it exists, is between consenting adults.
Which is fine; it turns out we don't need it for YouTube or justin.tv to exist, and I don't need to signal into the core of the Internet to make my small group-conferencing app work.
It could create de facto monopolies: as new entrants to the market won't have their multicast carried, they cannot compete pricing-wise with established players who are carried.
I don't see a need for multicast to work at Internet scale, ever.

adam
"Multicast"
I don't see multicast working in Internet scale. Essentially multicast means core is flow-routing. So we'd need some way to decide who gets to send their content as multicast and who are forced to send unicast. It could create de-facto monopolies, as new entries to the market wont have their multicast carried, they cannot compete pricing wise with established players who are carried. -- ++ytti
I don't see why, as an ISP, I should carry multiple, identical payload packets for the same content. I'm more than happy to replicate them closer to my subscribers on behalf of the content publishers. How we do this is the question, i.e. what form the "multi"-"casting" takes.

It would be nice if we could take advantage of an inherent design of IP and the hardware it runs on, to duplicate the actual packets in-flow as near as is required to the destination.

Installing L7 content delivery boxes or caches is OK, but doesn't seem as efficient as an overall technical solution.

Aled

On 11 February 2013 11:03, Adam Vitkovsky <adam.vitkovsky@swan.sk> wrote:
I don't see a need for multicast to work at Internet scale, ever.
[..]
On (2013-02-11 12:16 +0000), Aled Morris wrote:
I don't see why, as an ISP, I should carry multiple, identical, payload packets for the same content. I'm more than happy to replicate them closer to my subscribers on behalf of the content publishers. How we do this is the question, i.e. what form the "multi"-"casting" takes.
It would be nice if we could take advantage of an inherent design of IP and the hardware it runs on, to duplicate the actual packets in-flow as near as is required to the destination.
Installing L7 content delivery boxes or caches is OK, but doesn't seem as efficient as an overall technical solution.
As an overall technical solution, Internet-scale multicast simply does not work today. If it did work, then our next hurdle would be how to get the tier 1s to play ball; they get money per bit transported, so it's not in their best interest to reduce that amount.

Now maybe, if we really did want, we could do some N:1 compression of IP traffic, where N is something like 3 to 10. Far worse than multicast, but with this method we might be able to devise a technical solution where the IP core does not learn replication state at all. We could abuse long IPv6 addresses, DADDR + SADDR + an extension header, to pack information about the destination ASNs which should receive the group; this could be handled without state in hardware in core networks, and only at the ASN edge would you need to add classic multicast state intelligence.

-- 
  ++ytti
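[A toy illustration of that stateless idea, purely a sketch: the TLV layout and ASN values are invented, standing in for the extension-header payload Saku describes. The core would forward on the blob without learning per-group state; only the edge expands it.]

import struct

# Invented wire format: 16-bit count followed by that many 32-bit ASNs.
def pack_asns(asns):
    return struct.pack(f"!H{len(asns)}I", len(asns), *asns)

def unpack_asns(blob):
    (count,) = struct.unpack_from("!H", blob)
    return list(struct.unpack_from(f"!{count}I", blob, offset=2))

blob = pack_asns([64496, 64511, 4200000001])   # private/documentation ASNs
assert unpack_asns(blob) == [64496, 64511, 4200000001]
# Only the receiving ASN edge turns this list into classic multicast
# replication state; the core stays stateless.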
On 2/11/2013 7:23 AM, Saku Ytti wrote:
[..]
Any eyeball network that wants to support multicast should peer with the content player(s) that support it. Simple!

Just another reason to make the transit-only networks even more irrelevant.
On 2/11/13 9:32 AM, ML wrote:
[..]
Any eyeball network that wants to support multicast should peer with the content player(s) that support it. Simple!

Just another reason to make the transit-only networks even more irrelevant.
The big issue is that customers don't want to watch simulcast content. The odds of having two customers in a reasonably sized multicast domain watching the same Netflix movie at exactly the same time frame in the movie are slim. Customers want to watch on time frames of their own choosing. I don't see multicast helping at all in dealing with the situation.

Mark

-- 
Mark Radabaugh
Amplex
mark@amplex.net  419.837.5015
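[Mark's "slim odds" intuition is easy to quantify. A sketch under invented assumptions: the catalogue size, viewer count, and the 10-second "same time frame" window are all made up, and viewing is modelled as uniform rather than the Zipf curve real services see.]

# Expected number of viewer pairs watching the same title at the same
# offset, modelling each viewer as a uniform (title, 10 s slot) draw.
titles, movie_s, window_s = 20_000, 7_200, 10
slots = titles * movie_s // window_s   # 14.4M distinct (title, offset) slots
viewers = 10_000
pairs = viewers * (viewers - 1) / 2    # ~50M candidate pairs
expected_overlaps = pairs / slots
print(f"{expected_overlaps:.1f}")      # ~3.5 coinciding pairs
# A handful of coincidences among 10,000 streams: multicast would
# deduplicate almost nothing for this kind of VOD viewing.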
On 11-Feb-13 12:25, Mark Radabaugh wrote:
[..]
The big issue is that customers don't want to watch simulcast content. The odds of having two customers in a reasonably sized multicast domain watching the same Netflix movie at exactly the same time frame in the movie are slim. Customers want to watch on time frames of their own choosing. I don't see multicast helping at all in dealing with the situation.
Multicast _is_ useful for filling the millions of DVRs out there with broadcast programs and for live events (e.g. sports). A smart VOD system would have my DVR download the entire program from a local cache--and then play it locally as with anything else I watch. Those caches could be populated by multicast as well, at least for popular content. The long tail would still require some level of unicast distribution, but that is _by definition_ a tiny fraction of total demand.

S

-- 
Stephen Sprunk      "God does not play dice."  --Albert Einstein
CCIE #3723          "God is an inveterate gambler, and He throws the
K5SSS                dice at every possible opportunity."  --Stephen Hawking
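[The flip side, for the broadcast case Stephen describes: when everyone wants the same prime-time show on their DVR, the replication savings are large. A sketch with invented sizes and subscriber counts.]

# DVR-fill for one prime-time show on one access segment (sizes invented).
show_gb = 4.5                           # ~2 hours at 5 Mbit/s
subs_recording = 2_000                  # DVRs grabbing the show tonight
unicast_gb = show_gb * subs_recording   # 9,000 GB hauled across the segment
multicast_gb = show_gb                  # one copy per link
print(f"unicast {unicast_gb:,.0f} GB vs multicast {multicast_gb} GB")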
Multicast _is_ useful for filling the millions of DVRs out there with broadcast programs and for live events (eg. sports). A smart VOD system would have my DVR download the entire program from a local cache--and then play it locally as with anything else I watch. Those caches could be populated by multicast as well, at least for popular content. The long tail would still require some level of unicast distribution, but that is _by definition_ a tiny fraction of total demand.
This is true, but you should probably define your network size. I don't see many/any independents doing this because of the cost of the boxes and dealing with the content providers. If you're a large MSO (say top 15) then I can see it with today's technology, but even those guys seem to be moving in other directions, to get out of the provider-controlled set-top-box model.
-- 
Scott Helms
Vice President of Technology
ZCorum
(678) 507-5000
http://twitter.com/kscotthelms
On Mon, Feb 11, 2013 at 3:01 PM, Scott Helms <khelms@zcorum.com> wrote:
If you're a large MSO (say top 15) then I can see it with today's technology, but even those guys seem to be moving in other directions, to get out of the provider-controlled set-top-box model.
really? verizon still wants to sell the hell out of their crappy box to me... despite the fact that I say: "No, I have a TiVo, it works better...."

same, btw, for comcast...

They all seem to want to push their own box on users :( to the extent that they tread the line on legality of pushing away 'cablecard' users :(
Lol, I didn't say all of them were doing that yet.

On Feb 11, 2013 3:50 PM, "Christopher Morrow" <morrowc.lists@gmail.com> wrote:
[..]
I meant to add in more info, but my mobile Gmail client betrayed me.

On Mon, Feb 11, 2013 at 3:59 PM, Scott Helms <khelms@zcorum.com> wrote:
[..]
On Mon, Feb 11, 2013 at 3:01 PM, Scott Helms <khelms@zcorum.com> wrote:
If you're a large MSO (say top 15) then I can see it with today's technology, but even those guys seem to be moving in other directions to get out of the provider controlled set top box model.
really? verizon still wants to sell the hell out of their crappy box to me... despite the fact that I say: "No, I have a tivo, it works better...."
Verizon's infrastructure is all RFoG and not IPTV so a cache wouldn't be useful.
same, btw, for comcast...
They all seem to want to push their own box on users :( to the extent that they tread the line on legality of pushing away 'cablecard' users :(
Time Warner is taking a different approach: http://www.dslreports.com/shownews/Roku-and-Time-Warner-Cable-Hook-Up-122690

Also, Comcast's new STB platform is built around IP (http://xfinity.comcast.net/x1/), and while it's still their box, they are trying to build a development ecosystem.

-- 
Scott Helms
----- Original Message -----
From: "Stephen Sprunk" <stephen@sprunk.org>
Multicast _is_ useful for filling the millions of DVRs out there with broadcast programs and for live events (eg. sports). A smart VOD system would have my DVR download the entire program from a local cache--and then play it locally as with anything else I watch. Those caches could be populated by multicast as well, at least for popular content. The long tail would still require some level of unicast distribution, but that is _by definition_ a tiny fraction of total demand.
The problem with that, Steve, is that that happens only over the cold, dead bodies of the program producers and, by extension, their transport agents; the... I think we're calling it the Comcast decision -- the one that said that centralized Big Ass DVRs didn't violate copyright law -- made them unhappy enough about where the content was.

In the final analysis, program producers are simply going to have to get over themselves, and stop thinking they can charge people multiple times for multiple formats and resolutions, frex. The *minute* a legal multicast pre-charge facility becomes available, I'm sure someone will write a module for MythTV, and that's a couple million homes, right there.

Cheers,
-- jra
On Feb 11, 2013, at 14:11 , Stephen Sprunk <stephen@sprunk.org> wrote:
[..]
Multicast _is_ useful for filling the millions of DVRs out there with broadcast programs and for live events (eg. sports). A smart VOD system would have my DVR download the entire program from a local cache--and then play it locally as with anything else I watch. Those caches could be populated by multicast as well, at least for popular content. The long tail would still require some level of unicast distribution, but that is _by definition_ a tiny fraction of total demand.
One of us has a different dictionary than everyone else.

Assume I have 10 million movies in my library, and 10 million active users. Further assume there are 10 movies being watched by 100K users each, and 9,999,990 movies which are being watched by 1 user each.

Which has more total demand, the 10 popular movies or the long tail?

This doesn't mean Netflix or Hulu or iTunes or whatever has the aforementioned demand curve. But it does mean my "definition" & yours do not match.

Either way, I challenge you to prove the long tail on one of the serious streaming services is a "tiny fraction" of total demand.

-- 
TTFN,
patrick
On Feb 11, 2013, at 18:52 , "Patrick W. Gilmore" <patrick@ianai.net> wrote:
[..]
Assume I have 10 million movies in my library, and 10 million active users. Further assume there are 10 movies being watched by 100K users each, and 9,999,990 movies which are being watched by 1 user each.
Obvious typo, supposed to be 8,999,990. Or you can say I have 11 million users. Whichever floats your boat.

Hopefully the point is still clear, even in a crowd as pedantic as this.

-- 
TTFN,
patrick
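[Running Patrick's corrected numbers shows what he means about definitions; the demand curve itself is his hypothetical.]

# Patrick's thought experiment: head (10 hits) vs. long tail.
head_streams = 10 * 100_000        # 10 movies, 100K viewers each
tail_streams = 8_999_990 * 1       # corrected figure: one viewer each
total = head_streams + tail_streams
print(f"head {head_streams / total:.0%}, tail {tail_streams / total:.0%}")
# -> head 10%, tail 90%: under this curve the long tail IS the demand.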
[..]
Either way, I challenge you to prove the long tail on one of the serious streaming services is a "tiny fraction" of total demand.
Think I have to agree with Patrick here, even if the facts were not to support him at this time.

The real question is: how will video evolve?

Multicast is ideally suited for small numbers of streams being delivered to wide numbers of viewers. The broadcast television distribution model worked well when only a large conglomerate could afford to produce video. Around thirty years ago, improvements in technology made it possible and reasonable for municipal cable TV systems to generate local programs. About fifteen(?) years ago, TiVo made waves with DVRs, which introduced a disruptive concept to the existing paradigm. Then ten years ago, video cameras and computer editing made it vaguely possible for a service like YouTube to evolve to serve low-quality video over the low-speed broadband of the day.

Now? I can shoot 1080p 30fps video on my phone, edit it on a modest computer, and post it on the Internet easily. With relatively cheap hardware. But a lot of people still watch network or cable TV.

And the thing I have to think is, there's going to be a battle between Big Content, who would prefer to produce shows watched by lots of people (which works for multicast), and smaller specialty content producers who will be enabled by the ease of inexpensive Internet distribution. This battle has been fought in the shadows until now, because there are not any totally awesome ways for video to be distributed to end users. Someone will invent something like InterneTiVo, which will do for Internet video what TiVo did for OTA/cable/satellite: make it easy to find the things you'd like to watch, and handle the details of getting them for you.

But here's the thing. There's a growing number of people who are taking the new generation of Smart TVs and/or the smart little boxes like AppleTV, and optionally combining them with the cheap and readily available storage options (or just relying on the speed of broadband) to be able to download and watch what they want, when they want.

For our household, the computation came out to: do we continue to pay DirecTV $80/month plus the $3(?) annual rate increase they had done every single year? We were happy when we started with them at $30/mo and would probably have been willing to pay that forever. But at $80, that's $960/year, and with 21 series that we watched on a semi-regular basis, which we could purchase and own for an average of less than $40 per season, that's only $840 per year, assuming that all the series put out one season per year (a false assumption these days anyway). I've talked about this to a number of people who were startled to discover that their own TV economics did not make sense. The big thing that prevents many people from doing this is just that it's so "different."

Broadband providers here in the US have been reluctant to keep up with providing contemporary, industry-leading speeds, which is the other big thing to wrestle with. As network speeds increase, the value of multicast will decrease.
So my point is this: to me, it really seems like worrying about video loads on networks (in terms of multicast vs unicast) is going to be a diminishing-returns sort of thing. Broadband networks are going to be getting faster eventually, the potential number of video sources is going to continue to grow, the diversity of what people wish to view is going to continue to grow, the number of types of video devices is going to increase, and the difficulty of getting any sort of standard implemented on a massive installed base is going to make adoption of multicast for the purpose largely irrelevant. In the long run, it's probable that nothing will change that.

There will continue to be a large amount of content that could be multicast (movies, live sports, etc) but I really expect to see CDNs take on that load instead of introducing multicast to the mix... and in the meantime I hope someone invents InterneTiVo to find all the other great content. Multicast would be great, if someone had figured out a good way to deploy it early on... but at this late point, the horse is out of the barn.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
On Feb 12, 2013, at 8:11 AM, Joe Greco wrote:
The real question is: how will video evolve?
My guess is that most of it will become synthetic, generated programmatically from local primitives via algorithmic instructions, much in the way that multiplayer 3D FPS games handle such things today. There'll be a significant round of standards wars and development of capture systems (e.g., 'cameras', postproduction software, and so forth) which will need to happen, but I think it's inevitable.

Telepresence-type systems will probably be the first to implement this sort of thing.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>

Luck is the residue of opportunity and design.  -- John Milton
On Mon, Feb 11, 2013 at 8:11 PM, Joe Greco <jgreco@ns.sol.net> wrote:
[..]
The real question is: how will video evolve?
Good question. I suspect it's going to look a lot like the evolution of audio: Pandora, Grooveshark, Spotify etc. All unicast. CDN.

Live sports: how was the Olympics coverage handled? Unicast. CDN.

Multicast is dead. Feel free to disagree. :-)

Tim:>
On 2/11/2013 11:05 PM, Tim Durack wrote:
Multicast is dead. Feel free to disagree. :-) Tim:>
Multicast is a vendor selling point, as you essentially need a coherent end-to-end solution to get it to work PROPERLY. Of course, if it does not work PROPERLY, it will still largely work, albeit inefficiently, in most cases other than routed multicast. So personally I'd love to see the multicast environment die as well :) It's so... well... decades-old stuff.

For cable / IPTV it may fly and scale, but there is a decided move to the on-demand model. And even with live broadcast, there's the growing DVR selling point of "pause and resume", which is buffering and unicast, just localized to the set-top box. It is also the opposite of "on demand", as multicast only works on a synchronized timeline. Few if any people will demand a specific item "on demand" at the same time, or even within a reasonable time window for a buffered/staged multicast ("...this channel should be available shortly..."). You could multicast to cache boxes, but that is prone to cache-hit randomization, and only useful to "pre-populate" an incident.

Multicast still works for live broadcast. And it can be convoluted to work in odd/mixed topologies (e.g., Octoshape... hideous thing). But working multicast requires tweaking (PIM, IGMP snooping, CGMP/etc vendor-specific L2 pruning) that makes it ugly. We had enough headaches just trying to route multicast computer-imaging traffic (Ghost, SCOM, etc) that I couldn't imagine trying to extend that out into userland without some serious forklift upgrades to ensure it would work at the hardware level. Locally, knock y'erself out with fingers crossed, you'll only nuke your broadcast domain, but routing it?

Jeff
On Mon, Feb 11, 2013 at 10:05 PM, Tim Durack <tdurack@gmail.com> wrote:
Multicast is dead. Feel free to disagree. :-)
Tim:>
I really wish I could agree! It would have saved me some time dealing with it.

There is the argument of alternative bit rates, compression, etc., but HD streams are assumed[1] at 15 Mbps.

At 100 Gbps, I can do at most 6826 streams of HD streaming. Multicast deployments laugh at this pathetically low number of viewers.

At an upstream aggregation point, I can easily serve ~128K subs (7 slots, 8 ports per slot, 3 ports per $ACCESS, 8K[3] users per $ACCESS, 1 slot for upstream). I now assume 2.5 STBs per sub[2]. This results in, more or less, 320,000 STBs.

To me, the math says it's not dead, and we'll need a couple of orders of magnitude (to accommodate the core) in speed improvements to get the same delivery unicast.

[1] http://www.cablelabs.com/specifications/OC-SP-CEP3.0-I04-121210.pdf lists 15 Mbps as the safe-harbor value for HD.
[2] http://www.aceee.org/files/proceedings/2012/data/papers/0193-000294.pdf has some stat (good or bad) wrt STBs/household.
[3] uBR10K (my $ACCESS comparison) specs out for max 64K CPE. One of my guys indicates to me that the actual number might be closer to 15-25K CPE on a given node. Please make adjustments as necessary.

(required note: employer is Cisco. Views are my own.)

-- 
William McCall
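[A quick reproduction of William's arithmetic, as a sketch under his stated assumptions. He uses binary gigabits (hence 6826 rather than 6666), and reading "7 slots ... 1 slot for upstream" as 6 usable downstream slots is an inference needed to reach his ~128K figure.]

# William's chassis math, re-derived. All figures are his assumptions.
hd_mbps = 15                               # safe-harbor HD stream rate
streams_per_100g = 100 * 1024 // hd_mbps   # 6826 streams per 100G link

downstream_ports = (7 - 1) * 8             # 1 of 7 slots reserved upstream
access_devices = downstream_ports // 3     # 3 ports per $ACCESS
subs = access_devices * 8_000              # 128,000 subscribers
stbs = int(subs * 2.5)                     # 320,000 set-top boxes

# If every STB pulled its own unicast HD stream instead:
links_needed = stbs * hd_mbps / (100 * 1024)
print(streams_per_100g, subs, stbs, f"{links_needed:.0f}x100G")
# -> 6826 128000 320000 47x100G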
[..]
Multicast inside a given service provider is certainly not dead; in fact, it's widely deployed for IPTV in DSL/FTTx networks. FiOS doesn't use it, since they're not doing IPTV (traditional RFoG), but U-verse does, as do most telco TV providers I've spoken with.
-- 
Scott Helms
Multicast is dead. Feel free to disagree. :-)
Tim:>
Multicast will never be dead. With ever-rising bandwidth needs, we'll always welcome a distribution method that lets us pass the same data the fewest times over the fewest number of links. We all remember the spikes in BW demands when the Austrian fellow jumped from space.

And regarding global m-cast: well, we don't need it. You can get the IPTV streams via a direct link or via a common carrier's mVPN.

adam
On 02/11/2013 03:52 PM, Patrick W. Gilmore wrote:
One of us has a different dictionary than everyone else.
I'm not sure it's different dictionaries, I think you're talking past each other.

Video on demand and broadcast are 2 totally different animals. For VOD, multicast is not a good fit, clearly. But for broadcast, it has a lot of potential. Most of the issues with people wanting to pause, rewind, etc. are already handled by modern DVRs, even with live programming.

What I haven't seen yet in this discussion (and sorry if I've missed it) is the fact that every evening every broadcast network sends out hour after hour of what are essentially "live" broadcasts, in the sense that they were not available "on demand" before they were aired "on TV" that night. In addition to live broadcasts, this nightly programming is ideal for multicast, especially since nowadays most of that programming is viewed off the DVR at another time anyway. So filling up that DVR (or even watching it live) could happen over multicast just as well as it could happen over unicast.

But more importantly, what's missing from this conversation is that the broadcast networks, the existing cable/satellite/etc. providers, and everyone else who has a multi-billion-dollar vested interest in the way that the business is structured now would fight this tooth and nail. So we can engineer all the awesome solutions we want, they are overwhelmingly unlikely to actually happen.

Doug
[..]
Video on demand and broadcast are 2 totally different animals. For VOD, multicast is not a good fit, clearly. But for broadcast, it has a lot of potential. Most of the issues with people wanting to pause, rewind, etc. are already handled by modern DVRs, even with live programming.
What I haven't seen yet in this discussion (and sorry if I've missed it) is the fact that every evening every broadcast network sends out hour after hour of what are essentially "live" broadcasts, in the sense that they were not available "on demand" before they were aired "on TV" that night. In addition to live broadcasts, this nightly programming is ideal for multicast, especially since nowadays most of that programming is viewed off the DVR at another time anyway. So filling up that DVR (or even watching it live) could happen over multicast just as well as it could happen over unicast.
Yes, but this basically assumes that we don't/won't see a change in their video distribution model, which is actually an unknown, rather than a given.
But more importantly, what's missing from this conversation is that the broadcast networks, the existing cable/satellite/etc. providers, and everyone else who has a multi-billion dollar vested interest in the way that the business is structured now would fight this tooth and nail. So we can engineer all the awesome solutions we want, they are overwhelmingly unlikely to actually happen.
If Big Content can figure out a way to extract money from the end user without involving middlemen like cable and satellite providers, then we'll see a shift. There is no guarantee of an ongoing business model just because that's the way it worked. Look at what's happened to Blockbuster Video, which has gone from being a multi-billion-dollar company to being well on its way to irrelevance. What you're actually likely to see is a fight "tooth and nail" to ignore the signs of the coming storm, but that's not going to stop the storm, is it. It will just open the door for more visionary businesses.

So, boiling everything down, the two Big Questions I see are:

1) Will throwing more bandwidth at the problem effectively allow the problem to be solved more easily than getting all the technical requirements (CPE, networks, etc.) for multicast to work? I keep having this flashback to the early days of DSL and VoIP, when I would hear statements like "VoIP over the public Internet will never work" over what are essentially that day's versions of these concerns.

2) Whether or not we'll have broadcast networks sufficiently large to make it worth the investment to solve the mcast hurdles, or if maybe their entire distribution model just undergoes a tectonic shift... which is hardly unprecedented; consider the sort of shift dialtone is experiencing w.r.t. Ma Bell's POTS plant. In the meantime, the amount of non-broadcast video available as VOD continues to grow at a staggering pace.

The answer appears to be that multicast would be great if everything were to remain as-is, but that seems a poor assumption. We're still in a period where multicast would be an awesome thing to have, but we do not have it, and people are successfully moving away from the "broadcast TV channel" model of video distribution, to a VOD model where they can watch what they want, when they want, without being limited to the 60 hours that their DVR happens to have queued up ("how quaint" in this age of the mighty cloud!)

If you graph that all out, the picture you get shows a declining RoI for multicast. Any businessman would look at that and advise against any significant investment in solving a problem that is already on a path to self-resolution. We know how to solve 1) by throwing bandwidth at it even if we cringe at the requisite numbers today; throwing bandwidth at it is a brute-force solution that makes your C and J salespeople smile. Products like the AppleTV/iPad/etc have managed to make somewhat uncomfortable marriages out of day-after-broadcast video and VOD(*). We have clever ideas about how to solve 1) with multicast, but no real-world implementations that actually work at any sort of scale, and that's such a huge impediment to implementation when compared to just scaling known bandwidth and unicast delivery technologies...

We also have seen, from the advent of cable TV, that the availability of new transmission opportunities leads to decreased viewership of the OTA broadcast networks, and a growing pool of alternative video to watch. But now here's the killer: the networks that could most benefit from multicast, the existing TV networks, what's the incentive for them to develop multicast technologies, something which I expect their distribution partners would see as a further unwelcome jab at eventually cutting out the middleman?
I think it has been a hard enough sell for them to support buying shows (iTunes, Amazon, whatever), but they're likely to get a lot of pushback from existing dist partners as to why another broadcast technology would be needed, when cable/satellite already have "solved" this problem and have a huge investment.

(*) I would note that legacy TV, with its requirement that you be available to watch it when broadcast, or have a device handy to spool it for later viewing, is an uncomfortable marriage of content and viewer. FWIW.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
On Feb 12, 2013, at 01:06 , Doug Barton <dougb@dougbarton.us> wrote:
On 02/11/2013 03:52 PM, Patrick W. Gilmore wrote:
One of us has a different dictionary than everyone else.
I'm not sure it's different dictionaries, I think you're talking past each other.
No, it's definitely different dictionaries.

I am purposely staying out of the whole multicast vs. CDN vs. set-top caching vs. $RANDOM_TECHNOLOGY thing. I was concentrating solely on one point: that the long tail "is _by definition_ a tiny fraction of total demand" (Stephen's emphasis). The long tail might be a fraction, or it might be a majority of the traffic. Depends on the use case. Important to remember this when discussing the pros & cons of each protocol / approach.

As for the rest, time will tell. But it's fun to watch the discussion, especially by people who have never attempted any of what they are espousing. :) Hey, sometimes that's where the best ideas come up - people who don't know what is impossible are not constrained!

-- 
TTFN,
patrick
Just to clarify, Patrick is right here.

Assumptions: all the movies are 120 minutes long, and each movie has an average bitrate of 50 Mbit/s.

(50 Mbit/s / 8 (bits) * 7 200 (2 hours) / 1000 (MB) = 45 GB)

That means that the storage capacity for the movies is going to be:

10 000 000 * 45 (GB) / 1000 (TB) / 1000 (PB) = 450 PB of storage.

Some of you might want to raise your hand to say that this quality of movie is too good. OK, so we make it 10 times smaller, to 5 Mbit/s on average:

450 PB / 10 = 45 PB, or 45 000 TB.

If we are using 800 GB SSD drives:

45 000 TB / 0.8 TB = 56 250 SSD drives!

(And we don't have any kind of backup of the content here. That needs more SSD drives as well. And don't forget the power consumption.)

So, over to the streaming part.

10 000 000 customers watching, each with a bandwidth of 5 Mbit/s = 50 000 000 Mbit/s / 1000 = 50 000 Gbit/s.

We only need 500 * 100 Gbit/s connections to solve this kind of demand. For each ISP around the world with 10 million customers.

Will TLMC be able to solve the 100K users watching 10 different movies? Yes.

Will TLMC be able to solve the other 10 million watching 10 million movies? No, since your network cannot handle this kind of load in the first place.
-- 
//fredan

The Last Mile Cache - http://tlmc.fredan.se
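[A quick sanity check of that arithmetic. The 800 GB drive size, 5 Mbit/s rate, and catalogue size are fredan's assumptions, just re-run here.]

# Re-running fredan's numbers for the 10-million-title catalogue.
titles = 10_000_000
gb_per_title = 5 / 8 * 7_200 / 1_000      # 4.5 GB: 5 Mbit/s for 2 hours
total_tb = titles * gb_per_title / 1_000  # 45,000 TB, i.e. 45 PB
ssds = total_tb / 0.8                     # 56,250 x 800 GB SSDs, no backups

viewers = 10_000_000
aggregate_gbps = viewers * 5 / 1_000      # 50,000 Gbit/s of concurrent play
links_100g = aggregate_gbps / 100         # 500 x 100G, per 10M-sub ISP
print(f"{total_tb:,.0f} TB, {ssds:,.0f} SSDs, {links_100g:.0f}x100G")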
You could make far more money connecting your awesome prediction software to the stock market than using it to figure out what specific content people are going to watch, so you can cache it before they decide to watch it...

And if you don't have said awesome software, then how do you propose to limit the bandwidth need for the cache so you aren't burning more bandwidth than your hit rate, which is what everyone is trying to ask you (or more accurately, explain to you)?

And if you're just being a plain local cache at the house/MDF, then what exactly makes it better than any other cache that doesn't get enough hit rate to be worth it to begin with at said house/MDF?

-Blake

On Tue, Feb 12, 2013 at 8:14 AM, fredrik danerklint <fredan-nanog@fredan.se> wrote:
[..]
And if you don't have said awesome software, then how do you propose to limit the bandwidth need for the cache so you aren't burning more bandwidth than your hit rate, which is what everyone is trying to ask you (or more accurately, explain to you)?
Without the concept of TLMC, I don't know.

I do think that I need to explain how TLMC works. (Please see the file 'tlmc-20130207-r1.tar.gz' as well.) This is going to be a long answer.

We are trying to get the URL: http://static.tlmc.csp.example/hello_world.html

First the DNS needs to get the IP address of 'static.tlmc.csp.example', so we have something to connect to. What we would like to have is the IP address of a cache server at the ISP. The CSP has a 'database' of which ISPs around the world participate in TLMC; this information is stored in a remark field in the IRR. We know where the DNS request originates, so we answer that request with a CNAME of:

'static.tlmc.csp.example' IN CNAME 'static.tlmc.csp.example.tlmc.isp.example'

(If an ISP does not participate in TLMC, the CSP would instead answer with an A/AAAA record.)

We now have to ask the DNS server at the ISP for an IP address to connect to. The ISP is in a good mood today, so we get the anycast address to connect to. (If the ISP is not in a good mood, called Offline mode in TLMC, the DNS server at the ISP will answer with a CNAME of:

'static.tlmc.csp.example.tlmc.isp.example' IN CNAME 'kaa.k.se.static.tlmc.csp.example'

This assumes that the DNS server was placed in Karlskrona, Sweden. With this, the geographic location of where a request is coming from is already built in.)

If we have an end-user/residence which has a cache server, this (the anycast address) is the address it's going to listen to. If an end-user does not have a cache server, the ISP must have one, probably as close to the edge as possible.

(Here starts the answer to your question in the beginning.) These two have one thing in common, though: they have a plug-in for Traffic Server called 'hash_remap' (which I made specifically for trying to solve the scenario you replied with. And Netflix's). What the plug-in does is change the hostname from 'static.tlmc.csp.example' to a hash-based one. For the example URL given, this will be:

'b1902023cbb5ff2597718437.tlmc.isp.example'

The first hash, 'b1902023cbb5ff25', is the combined hash of host and URL. The second hash, '97718437', is the hash of the host only.

With this, the ISP is going to have another DNS request, a hash-based one. Depending on how much information they collect from their cache servers, they know which one they should load the content from in this case. This principle is called consistent hashing and scales very well. How many layers of consistent hashing should an ISP use? Only they know the answer to that one.

-- 
//fredan

The Last Mile Cache - http://tlmc.fredan.se
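[To make that last step concrete, a minimal sketch of a consistent-hash lookup in the spirit of what fredan describes. The server names, the MD5 choice, the hash truncation lengths, and the hostname-rewrite helper are all invented for illustration; this is not the actual hash_remap code.]

import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Minimal consistent-hash ring: each cache gets many points on the
    ring, and a key maps to the first point at or after its own hash, so
    adding or removing one cache only remaps a small share of keys."""

    def __init__(self, servers, points_per_server=100):
        self._ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(points_per_server)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        idx = bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

# Hypothetical stand-in for the hash_remap rewrite: 16 hex chars of the
# host+path hash, then 8 hex chars of the host-only hash, mirroring the
# 'b1902023cbb5ff25' + '97718437' shape of fredan's example.
def tlmc_hostname(host, path, isp_domain):
    h_url = hashlib.md5((host + path).encode()).hexdigest()[:16]
    h_host = hashlib.md5(host.encode()).hexdigest()[:8]
    return f"{h_url}{h_host}.{isp_domain}"

ring = ConsistentHashRing(
    ["cache1.isp.example", "cache2.isp.example", "cache3.isp.example"])
name = tlmc_hostname("static.tlmc.csp.example", "/hello_world.html",
                     "tlmc.isp.example")
print(name, "->", ring.server_for(name))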
On 12/02/13 14:14, fredrik danerklint wrote:
Just to clarify, Patrick is right here.
Assumptions:
All the movies are 120 minutes long. Each movie has an average bitrate of 50 Mbit/s.
(50 Mbit/s / 8 bits per byte * 7 200 s (2 hours) / 1 000 MB per GB = 45 GB per movie.)
That means that the storage capacity for the movies is going to be:
10 000 000 * 45 (GB) / 1000 (TB) / 1000 (PB) = 450 PB of storage.
Some of you might want to raise your hand to say that this movie quality is too good. OK, so we make it 10 times smaller, to an average of 5 Mbit/s:
450 PB / 10 = 45 PB or 45 000 TB.
If we are using 800 GB SSD drives:
45 000 TB / 0.8 TB = 56 250 SSD drives!
(And we don't have any kind of backup of the content here. That needs more SSD drives as well. And don't forget the power consumption.)
So over to the streaming part.
10 000 000 customers watching, each with a bandwidth of 5 Mbit/s = 50 000 000 Mbit/s / 1 000 = 50 000 Gbit/s.
We only need 500 * 100 Gbit/s connections to handle this kind of demand, for each ISP around the world with 10 000 000 customers.
Will TLMC be able to solve the 100k users watching 10 different movies? Yes.
Will TLMC be able to solve the other 10 million watching 10 million movies? No, since your network cannot handle this kind of load in the first place.
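The same arithmetic as a quick script, for anyone who wants to re-run it with different assumptions:

    MOVIES = 10_000_000
    SECONDS = 2 * 60 * 60                          # a 120-minute movie
    MBPS = 5                                       # the reduced average bitrate

    gb_per_movie = MBPS / 8 * SECONDS / 1000       # 4.5 GB per movie
    storage_pb = MOVIES * gb_per_movie / 1_000_000 # 45 PB in total
    ssd_drives = MOVIES * gb_per_movie / 800       # 56,250 x 800 GB SSDs

    USERS = 10_000_000
    total_gbps = USERS * MBPS / 1000               # 50,000 Gbit/s of streaming
    links_100g = total_gbps / 100                  # 500 x 100 Gbit/s links

    print(gb_per_movie, storage_pb, ssd_drives, total_gbps, links_100g)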
Fortunately, we have some fascinating recent research on exactly this: http://www.land.ufrj.br/~classes/coppe-redes-2012/trabalho/youtube_imc07.pdf -- N.
On 11/02/2013 12:16, Aled Morris wrote:
I don't see why, as an ISP, I should carry multiple, identical, payload packets for the same content. I'm more than happy to replicate them closer to my subscribers on behalf of the content publishers. How we do this is the question, i.e. what form the "multi"-"casting" takes.
It would be nice if we could take advantage of an inherent design of IP and the hardware it runs on, to duplicate the actual packets in-flow as near as is required to the destination.
Multicast is fine when it works, which is generally only in systems where the operator has end-to-end control of the entire data path, where the number of streams isn't so large that the middleboxes have trouble handling all the state requirements, where the middleboxes all support multicast adequately without causing collateral problems, where the end point talks the same version of multicast as the source, where you don't run into weird vendor multicast bugs, and where you don't have packet loss on any intermediate systems.
When it stops working, level 3 engineering will usually be able to get it fixed, given enough time, resources and support from vendors. If you're ok with having escalation-level engineering dealing with front-line multicast support issues for customers paying a tenner a month, then I wish you well :-) Nick
On 2/8/13 5:23 AM, fredrik danerklint wrote:
- Well, as it turns out, we don't have that kind of a problem.
- You don't?
- No, we do not have that kind of a problem in our network. We have plenty of bandwidth available to our customers, thank-you-every-much.
- Do you have, just to make an example, about 10 000 customers in a specific area, like an city/county or part of a city/county?
- Yes, of course!
- Does these customers have at least 10 Mbit/s connection to the Internet?
- Yes! Who do you think we are, like stupid! Haha!
- Could all those 10 000 customers, just to make it theoretical, hit the 'play'-button on their Internet-connected-TV, at the same time, to watch the latest Quad-HD movie?
The media market has fragmented, so unless we're talking about the first week in February in the US it's not all from one source or 3 or 5. So far the most common delivery format for quad HD content online rings in at around 20Mb/s so you're not delivering that to 10Mb/s customer(s). On the other hand, two weekends ago I bought skyrim on steam and it was delivered, all 5.5GB of it in about 20 minutes. That's not instant gratification but it's acceptable.
- Yes. Oh wait a minute now! This is not fair! Damn. We're toast.
The media market has fragmented, so unless we're talking about the first week in February in the US it's not all from one source or 3 or 5.
Explain further. I did not get that.
So far the most common delivery format for quad HD content online rings in at around 20Mb/s so you're not delivering that to 10Mb/s customer(s).
Isn't 20 Mbit/s more than 10 Mbit/s? (If so, we're talking about 10 000 customers * 20 Mbit/s = 200 000 Mbit/s or 200 Gbit/s).
On the other hand, two weekends ago I bought skyrim on steam and it was delivered, all 5.5GB of it in about 20 minutes. That's not instant gratification but it's acceptable.
About 40 - 50 Mbit/s. Not bad at all. Downloading software does not have to be in real-time, like watching a movie does. -- //fredan
----- Original Message -----
From: "fredrik danerklint" <fredan-nanog@fredan.se>
The media market has fragmented, so unless we're talking about the first week in February in the US it's not all from one source or 3 or 5.
Explain further. I did not get that.
Joel is saying that the problem you posit (*everyone* wanting to watch the same exact thing at the same exact time) only applies to live TV, and these days, substantially the only thing that can pull anywhere *near* that kind of share is the Super Bowl, which happens to occur the first Sunday in February. Er, Febru-ANY. :-)
Isn't 20 Mbit/s more than 10 Mbit/s? (If so, we're talking about 10 000 customers * 20 Mbit/s = 200 000 Mbit/s or 200 Gbit/s).
Sure; he was just picking a nit about your specification of the customer loops: those people aren't watching QHD anyway, so no sense in using it as an exemplar. My understanding is there is no appreciable amount of QHD programming available to watch anyway, and certainly nothing a) in English b) that isn't sports.
On the other hand, two weekends ago I bought skyrim on steam and it was delivered, all 5.5GB of it in about 20 minutes. That's not instant gratification but it's acceptable.
About 40 - 50 Mbit/s. Not bad at all.
Downloading software does not have to be in real-time, like watching a movie does.
Real-time is not the constraint you're looking for. To deliver watchable video, the average end-to-end transport bit rate must merely be higher than the program encoding bitrate, with some extra overhead for the lack of real QoS and other traffic on the link; receiver buffers help with this. The only time real-time per se matters is if you're playing the same content on multiple screens and *synchronization* matters.
Cheers,
-- jra
--
Jay R. Ashworth                  Baylink                       jra@baylink.com
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates     http://baylink.pitas.com         2000 Land Rover DII
St Petersburg FL USA               #natog                      +1 727 647 1274
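A toy model of that buffering claim (made-up numbers, not from the thread): fill the receiver buffer at a jittery link rate that averages above the encode rate, drain it at the encode rate, and playout never stalls.

    import random

    ENCODE_MBPS = 5.0       # program encoding bitrate
    PREBUFFER_S = 5         # seconds buffered before playout starts

    random.seed(1)
    buffered = ENCODE_MBPS * PREBUFFER_S       # megabits in the buffer
    underruns = 0
    for second in range(7200):                 # a two-hour programme
        delivered = random.uniform(4.0, 8.0)   # jittery link, averages 6 > 5
        buffered += delivered - ENCODE_MBPS    # fill at link rate, drain at encode rate
        if buffered < 0:
            underruns += 1
            buffered = 0.0
    print("underruns:", underruns)             # 0: average in exceeds average out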
Perhaps the solution is to have a 400Gbit/s problem :-) http://newswire.telecomramblings.com/2013/02/france-telecom-orange-and-alcat...
My understanding is there is no appreciable amount of QHD programming available to watch anyway, and certainly nothing a) in English b) that isn't sports.
Why wouldn't you like to solve the problem before it can happen? (I'm talking about static content here, not live events.) -- //fredan
Again: Akamai. See also Limelight, etc... fredrik danerklint <fredan-nanog@fredan.se> wrote:
My understanding is there is no appreciable amount of QHD programming available to watch anyway, and certainly nothing a) in English b) that isn't sports.
Why wouldn't you like to solve the problem before it can happen?
(I'm talking about static content here, not live events.)
-- //fredan
-- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
How does Akamai or Limelight or any other CDN allow your customers, as an ISP, to cache the content at their home, in their own cache server?
Again: Akamai. See also Limelight, etc...
fredrik danerklint <fredan-nanog@fredan.se> wrote:
My understanding is there is no appreciable amount of QHD programming available to watch anyway, and certainly nothing a) in English b) that isn't sports.
Why wouldn't you like to solve the problem before it can happen?
(I'm talking about static content here, not live events.)
-- //fredan
-- //fredan http://tlmc.fredan.se
On (2013-02-11 11:58 +0100), Adam Vitkovsky wrote:
The only time real-time per se matters is if you're playing the same content on multiple screens and *synchronization* matters.
And then there's HFT, where "real-time" really does matter :)
I think most of the HFT crowd are buying into low latency more as a 'why not' than as a technical necessity. Reducing the latency of your switch from the equivalent of 200 m of fibre (quite high latency) to 20 m (rather low latency) seems to be a very big and important thing. Yet at the same time, one of the most relevant exchanges of them all is replicating multicast on MX Trio hardware, which means unpredictable, unfair, high-delay replication that will completely swamp any switch-level micro-optimization you may have gained. -- ++ytti
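For scale, the fibre figures above: light in glass covers roughly a metre every 5 nanoseconds, so the two switches differ by well under a microsecond.

    NS_PER_METRE = 5.0           # ~c/1.47 in fibre
    print(200 * NS_PER_METRE)    # 1000 ns for 200 m of fibre
    print(20 * NS_PER_METRE)     # 100 ns for 20 m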
On 2/8/13 8:23 AM, fredrik danerklint wrote:
The media market has fragmented, so unless we're talking about the first week in February in the US it's not all from one source or 3 or 5.
Explain further. I did not get that.
The Super Bowl is the first Sunday in February; it pulls a 75 share of the TV market, about the only thing that does, so it's a pretty good example of all eyeballs facing the same direction. Of course it's also available via terrestrial broadcast, satellite, cable RF, and so forth. Other than that you're talking about a couple of hundred of the most popular content items, followed by a very long tail worth of everything else. While I'm pretty sure somebody in my building watches glee, for example, or downloaded skyfall in the last week, I'm probably the only one to have streamed a canucks hockey game from 2 weeks ago last night in 1080p.
So far the most common delivery format for quad HD content online rings in at around 20Mb/s so you're not delivering that to 10Mb/s customer(s).
Isn't 20 Mbit/s more than 10 Mbit/s? (If so, we're talking about 10 000 customers * 20 Mbit/s = 200 000 Mbit/s or 200 Gbit/s).
On the other hand, two weekends ago I bought skyrim on steam and it was delivered, all 5.5GB of it in about 20 minutes. That's not instant gratification but it's acceptable.
About 40 - 50 Mbit/s. Not bad at all.
Downloading software does not have to be in real-time, like watching a movie does.
10Mb/s was your number not mine; my crystal ball is total garbage, but I don't see delivering 20Mb/s streaming services as a dramatically different problem than delivering 6-8Mb/s streaming services is today. In both cases it's actually rather convenient if it's as fast as possible. That movie I bought 5 minutes ago from apple I might be streaming to my apple-tv (which has effectively negligible storage), or I might be dumping it on my ipad; in the latter case the sooner it arrives, the sooner that process is finished and I can unplug it. With the game download, with some exceptions like DLCs, I can't start playing until it has arrived, so fulfilment is very, very important; "come back tomorrow when it's done downloading" loses you a lot of sales.
About 40 - 50 Mbit/s. Not bad at all.
Downloading software does not have to be in real-time, like watching a movie does. In both cases it's actually rather convenient if it's as fast as possible.
Yes. What I would like to have is to allow the access switch, which a customer of an ISP is connected to, to let the customer have 1 Gbit/s of bandwidth if the traffic is to or from the cache servers at their ISP. -- //fredan
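One way to picture that policy (hypothetical prefix and rates; real access gear would express this as QoS classes or policers, not Python):

    import ipaddress

    CACHE_PREFIXES = ["192.0.2.0/24"]   # hypothetical anycast range of the ISP's caches
    CONTRACT_MBPS = 100                 # the rate the customer actually pays for

    def port_rate_mbps(dst_ip):
        # Toy policy: line rate to/from on-net cache servers, sold rate otherwise.
        dst = ipaddress.ip_address(dst_ip)
        if any(dst in ipaddress.ip_network(p) for p in CACHE_PREFIXES):
            return 1000                 # burst to 1 Gbit/s for cache traffic
        return CONTRACT_MBPS

    print(port_rate_mbps("192.0.2.42"))    # 1000
    print(port_rate_mbps("198.51.100.7"))  # 100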
On 2/8/13 9:46 AM, fredrik danerklint wrote:
About 40 - 50 Mbit/s. Not bad at all.
Downloading software does not have to be in real-time, like watching a movie does. In both cases it's actually rather convenient if it's as fast as possible.
Yes. What I would like to have is to allow the access switch, which a customer of an ISP is connected to, to let the customer have 1 Gbit/s of bandwidth if the traffic is to or from the cache servers at their ISP.
You're positing a situation where a cache infrastructure at scale, built close to the user, has a sufficiently high hit rate for rather large objects to be more cost effective than increasing capacity in the middle of the network as the bandwidth/price curve declines. My early career as an http cache dude makes me a bit suspicious. I'm pretty confident that denser/cheaper/faster silicon is less expensive than deploying boxes of spinning disks closer to the customer(s) than they are today, once you add power/cooling/space/lifecycle maintenance (I'm a datacenter operator). Netflix's cache, for example, isn't that close to the edge (one box would support 2-10k simultaneous customers for that one application); it aims to get inside the ISP, however. If it wasn't, the CDNs would have pushed even closer to the edge. Of course if you can limit consumer choice then you can push your hit rate to 100%, but then you're running a VOD service in a walled garden and there are plenty of those already. That said, provide compelling numbers and I'll change my mind.
On Fri, 2013-02-08 at 10:50 -0800, joel jaeggli wrote:
On 2/8/13 9:46 AM, fredrik danerklint wrote:
About 40 - 50 Mbit/s. Not bad at all.
Downloading software does not have to be in real-time, like watching a movie does. In both cases it's actually rather convenient if it's as fast as possible.
Yes. What I would like to have is to allow the access switch, which a customer of an ISP is connected to, to let the customer have 1 Gbit/s of bandwidth if the traffic is to or from the cache servers at their ISP.
You're positing a situation where a cache infrastructure at scale, built close to the user, has a sufficiently high hit rate for rather large objects to be more cost effective than increasing capacity in the middle of the network as the bandwidth/price curve declines. My early career as an http cache dude makes me a bit suspicious. I'm pretty confident that denser/cheaper/faster silicon is less expensive than deploying boxes of spinning disks closer to the customer(s) than they are today, once you add power/cooling/space/lifecycle maintenance (I'm a datacenter operator). Netflix's cache, for example, isn't that close to the edge (one box would support 2-10k simultaneous customers for that one application); it aims to get inside the ISP, however. If it wasn't, the CDNs would have pushed even closer to the edge. Of course if you can limit consumer choice then you can push your hit rate to 100%, but then you're running a VOD service in a walled garden and there are plenty of those already.
That said, provide compelling numbers and I'll change my mind.
The "problem" with increasing capacity is that it opens up captive eyeballs to innovative services from "outside": monopoly operators will prefer to deal with CDN providers & the like and keep control. Sincerely, Laurent
On Fri, Feb 8, 2013 at 3:58 PM, Laurent GUERBY <laurent@guerby.net> wrote:
The "problem" with increasing capacity is that it opens up captive eyeballs to innovative services from "outside": monopoly operators will prefer to deal with CDN providers & the like and keep control.
there are ways to offer vod/etc without pulling that content across your 'internet' backbone; of course you'd still have to provide enough capacity at the last L3 device (probably) to get all customers fed, but... at least it's not all aggregated with cat videos from vimeo?
participants (24)
- Adam Vitkovsky
- Aled Morris
- Blake Dunlap
- Christopher Morrow
- Dobbins, Roland
- Doug Barton
- fredrik danerklint
- Jay Ashworth
- Jeff Kell
- Jeroen Massar
- Joe Greco
- joel jaeggli
- Laurent GUERBY
- Mark Radabaugh
- ML
- Neil Harris
- Nick Hilliard
- Patrick W. Gilmore
- Robert M. Enger
- Saku Ytti
- Scott Helms
- Stephen Sprunk
- Tim Durack
- William McCall