Marc Manthey wrote:
i am not a math genius and i am talking about, for example, serving
10,000 unicast streams and 10,000 multicast streams.
would the multicast streams be more efficient, or let's say, would you need more machines to serve 10,000 unicast streams?
hello all,
For 10,000 concurrent unicast streams you'd need more than just additional servers.
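Just to put rough numbers on it, here's a back-of-envelope comparison in Python. The 1 Mbit/s per-stream bitrate is my own assumption for illustration, not a figure from this thread:

```python
# Back-of-envelope egress comparison for 10,000 viewers, assuming a
# (hypothetical) 1 Mbit/s stream. Unicast sends one copy per viewer;
# multicast sends one copy total from the source.
STREAM_KBPS = 1_000        # assumed per-stream bitrate (1 Mbit/s)
VIEWERS = 10_000

unicast_egress_gbps = STREAM_KBPS * VIEWERS / 1_000_000   # one copy per viewer
multicast_egress_gbps = STREAM_KBPS * 1 / 1_000_000       # one copy, replicated in-network

print(f"unicast egress:   {unicast_egress_gbps} Gbit/s")
print(f"multicast egress: {multicast_egress_gbps} Gbit/s")
```

At those assumed numbers the unicast source has to push 10 Gbit/s, which is exactly why the server count alone isn't the interesting part of the sizing exercise.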
thanks for the participation on this topic, i was "theoretically" speaking and this was actually what i wanted to hear ;)
Your delivery needs to be sized against demand. 12 years ago, when I started playing around with streaming on a university campus, boxes like the following were science fiction: http://www.sun.com/servers/networking/streamingsystem/specs.xml#anchor4 As, for that matter, were n x 10 Gb/s Ethernet trunks. To make this scale in either dimension, audience or bandwidth, the interests of the service providers and the content creators need to be aligned. Traditionally this has been something of a challenge for multicast deployments. Not that it hasn't happened, but it's not an automatic win either.
You'd need a significantly different network infrastructure than something that only has to handle a single multicast stream. But supporting multicast isn't without its own problems either. Even the destination networks would have to consider implementing IGMP and/or MLD snooping in their layer 2 devices to obtain maximum benefit from multicast.
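For what it's worth, the IGMP membership reports that snooping switches listen for are generated by an ordinary group join on the host. A minimal receiver sketch in Python (the group address and port are made up for illustration):

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical administratively scoped (239/8) group
PORT = 5004           # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface
# (0.0.0.0 lets the kernel pick one). The join triggers an IGMP
# membership report on the wire, which snooping switches use to decide
# which ports actually need the group's traffic.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError as exc:
    # Hosts without a multicast-capable interface will refuse the join.
    print(f"join failed: {exc}")
```

Without snooping, a layer 2 switch floods the group traffic out every port, which defeats much of the point of multicast on the access network.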
i was reading some papers about multicast activity on 9/11 and it was interesting to read that it just worked even when most of the "big player" sites went offline, so this gives me another approach for emergency scenarios.
The big player news sites were not taken offline due to network capacity issues, but rather because their dynamic content delivery platforms couldn't cope with the flash crowds. Once they got rid of the dynamically generated content (per-viewer page rendering, advertising) they were back.
<http://www.nanog.org/mtg-0110/ppt/eubanks.ppt>
<http://multicast.internet2.edu/workshops/illinois/internet2-multicast-worksh...
Akamai built a Content Delivery Network (CDN) the way it did because a CDN does not have to rely on any specific ISP or any specific IP network functionality. If you go with IP multicast, or MPLS P2MP (Point-to-MultiPoint), then you are limited to using only ISPs who have implemented the right protocols and who peer using those protocols.
so this is similar to a "walled garden" and not what we really want, but i was clear that this is actually the only way to implement a "new" technology into an existing infrastructure.
A maturing internet platform may be quite successful at resisting attempts to change it. It's entirely possible, for example, that evolving the mbone would have been more successful than "going native". The mbone was in many respects a proto-p2p overlay, just as ip was an overlay on the circuit-switched pstn. That's all behind us, however, and the notion that we should drop all unicast streaming or p2p in favor of multicast transport because it's greener or lighter weight is just so much tilting at windmills, something I've done altogether too much of. Use the tool where it makes sense and can be delivered in a timely fashion.
regards and sorry for being a bit offtopic
Marc
<www.lettv.de>
Antonio Querubin whois: AQ7-ARIN
_______________________________________________ NANOG mailing list NANOG@nanog.org http://mailman.nanog.org/mailman/listinfo/nanog