[NANOG] Would IPv6 help us save energy?
Hello, I have a question: if we used multicast streaming only, for appropriate content, wouldn't this decrease overall Internet traffic? Isn't this an argument for IPv6 / "green IPv6" ;) as well? Just my 2 cents. Marc -- Les enfants teribbles - research and deployment Marc Manthey - Hildeboldplatz 1a, D-50672 Köln, Germany Tel.: 0049-221-3558032 Mobil: 0049-1577-3329231 jabber: marc@kgraff.net blog: http://www.let.de ipv6: http://www.ipsix.org
On Sat, Apr 26, 2008, Marc Manthey wrote:
Hello, I have a question:
If we used multicast streaming only, for appropriate content, wouldn't this decrease overall Internet traffic?
Isn't this an argument for IPv6 / "green IPv6" ;) as well?
Some people make more money shipping more bits. They may not have any motivation or desire to decrease traffic. Adrian
Hello, I have a question:
If we used multicast streaming only, for appropriate content, wouldn't this decrease overall Internet traffic?
Isn't this an argument for IPv6 / "green IPv6" ;) as well?
Some people make more money shipping more bits. They may not have any motivation or desire to decrease traffic.
Hello Adrian, yes, I know, but I would like to know whether there is some material around (links, case studies, papers, or statistics) to visualize it, for a presentation I am planning to do. Greetings, Marc
On Sat, 26 Apr 2008, Marc Manthey wrote:
If we used multicast streaming only, for appropriate content, wouldn't this decrease overall Internet traffic?
On one hand, the amount of content that is 'live' or 'continuous' and suitable for multicast streaming isn't a large percentage of overall Internet traffic to begin with, so moving most live content to multicast would have little effect on total traffic. However, for some live content where the audience is either very large or concentrated on particular networks, moving to multicast certainly has significant advantages in reducing traffic on the networks closest to the source or where the viewer concentration is high (particularly where viewer numbers occasionally spike significantly higher than the average). But network providers make their money in part by selling bandwidth. The folks who would need to push for multicast are the live/perishable content providers, as they're the ones who'd benefit the most. But if bandwidth is cheap, they're not really going to care.
Isn't this an argument for IPv6 / "green IPv6" ;) as well?
It's an argument for decreasing traffic and improving network efficiency and scalability to handle 'flash crowd events'. IPv6 has nothing to do with it. Antonio Querubin whois: AQ7-ARIN
On one hand, the amount of content that is 'live' or 'continuous' and suitable for multicast streaming isn't a large percentage of overall Internet traffic to begin with, so moving most live content to multicast would have little effect on total traffic.
I'm wondering how much content is used TiVo style, not in real time, but fairly soon thereafter. It might make sense to multicast feeds to local caches so when people actually want stuff, it doesn't come all the way across the net. R's, John
John Levine wrote:
I'm wondering how much content is used TiVo style, not in real time, but fairly soon thereafter. It might make sense to multicast feeds to local caches so when people actually want stuff, it doesn't come all the way across the net.
I think the good folks at Akamai may have already thought of this. :-) -- Jay Hennigan - CCIE #7880 - Network Engineering - jay@impulse.net Impulse Internet Service - http://www.impulse.net/ Your local telephone and internet company - 805 884-6323 - WB6RDV
On 26.04.2008, at 21:12, Jay Hennigan wrote:
John Levine wrote:
I'm wondering how much content is used TiVo style, not in real time, but fairly soon thereafter. It might make sense to multicast feeds to local caches so when people actually want stuff, it doesn't come all the way across the net.
I think the good folks at Akamai may have already thought of this. :-)
http://research.microsoft.com/~ratul/akamai.html http://www.akamai.com/html/about/management_dl.html Multicast? I have another theory, but I won't talk about it ;) BUT... someone mentioned Akamai has 13,000 servers; imagine they needed just 100. Would this hurt? ;) Cheers, Marc
I'm wondering how much content is used TiVo style, not in real time, but fairly soon thereafter. It might make sense to multicast feeds to local caches so when people actually want stuff, it doesn't come all the way across the net.
I think the good folks at Akamai may have already thought of this. :-)
Akamai has built a Content Delivery Network (CDN) because they do not have to rely on any specific ISP or any specific IP network functionality. If you go with IP multicast, or MPLS P2MP (point-to-multipoint), then you are limited to using only ISPs who have implemented the right protocols and who peer using those protocols.

P2P is a lot like a CDN because it does not rely on any specific ISP implementation, but as a result of being 100% free of the ISP, P2P also lacks the knowledge of the network topology that it needs to be efficient. Of course, a content provider could leverage P2P by predelivering its content to strategically located sites in the network, just like they do with a CDN.

IP multicast and P2MP have routing protocols which tell them where to send content. CDNs are either set up manually or use their own proprietary methods to figure out where to send content. P2P currently doesn't care about topology because it views the net as an amorphous cloud. NNTP, the historical firehose protocol, just floods it out to everyone who hasn't seen it yet, but in fact the consumers of an NNTP feed have been set up statically in advance, and this static setup does include knowledge of the ISP's network topology and of the ISP's economic realities.

I'd like to see a P2P protocol that sets up paths dynamically but allows for inputs as varied as those old NNTP setups. There was also a time when LANs had some form of economic reality configured in, i.e. some users were only allowed to log into the LAN during certain time periods on certain days. Is there any ISP that wouldn't want some way to signal P2P clients how to use spare bandwidth without ruining the network for other paying customers? --Michael Dillon
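A purely illustrative sketch of what such topology-aware peer selection could look like; the cost-map format, AS numbers, and client structure are assumptions for this example, not the behavior of any existing P2P client:

# Hypothetical illustration: rank candidate peers using provider-supplied
# topology hints, preferring on-net and nearby-AS peers over distant ones.
# All names, numbers, and the hint format are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Peer:
    address: str
    asn: int           # AS number the peer is reachable through
    est_rtt_ms: float  # measured or estimated round-trip time

# Provider hint: lower cost means "prefer this AS" (e.g. own network = 0,
# settlement-free peer = 1, transit = 2). Purely made-up values.
AS_COST = {64500: 0, 64501: 1}
DEFAULT_COST = 2

def peer_score(peer: Peer) -> tuple:
    # Sort primarily by provider-declared cost, then by RTT as a tiebreaker.
    return (AS_COST.get(peer.asn, DEFAULT_COST), peer.est_rtt_ms)

def select_peers(candidates: list[Peer], n: int) -> list[Peer]:
    return sorted(candidates, key=peer_score)[:n]

if __name__ == "__main__":
    candidates = [
        Peer("192.0.2.10", 64500, 12.0),    # on-net
        Peer("198.51.100.7", 64501, 25.0),  # peering partner
        Peer("203.0.113.99", 64999, 8.0),   # lowest RTT, but via transit
    ]
    for p in select_peers(candidates, 2):
        print(p.address, p.asn, p.est_rtt_ms)

A client consulting a provider-published cost map like this, rather than treating the net as an amorphous cloud, is the "varied inputs" idea in miniature.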
michael.dillon@bt.com wrote:
NNTP, the historical firehose protocol, just floods it out to everyone who hasn't seen it yet, but in fact the consumers of an NNTP feed have been set up statically in advance, and this static setup does include knowledge of the ISP's network topology and of the ISP's economic realities. I'd like to see a P2P protocol that sets up paths dynamically but allows for inputs as varied as those old NNTP setups. There was also a time when LANs had some form of economic reality configured in, i.e. some users were only allowed to log into the LAN during certain time periods on certain days. Is there any ISP that wouldn't want some way to signal P2P clients how to use spare bandwidth without ruining the network for other paying customers?
I think it's safe to assume that ISPs are steering P2P traffic for the purposes of adjusting their ratios on peering and transit links... While it lacks the intentionality of playing with the Usenet spam/warez/porn firehose, a little TE to shift traffic from one exit to another when you have lots of choices is presumably a useful knob to have. Layer violations that tell applications they should care about some peers in their overlay network more than others seem like something with a lot of potential unintended consequences.
On 26.04.2008, at 20:42, Antonio Querubin wrote:
On Sat, 26 Apr 2008, Marc Manthey wrote:
If we used multicast streaming only, for appropriate content, wouldn't this decrease overall Internet traffic?
On one hand, the amount of content that is 'live' or 'continuous' and suitable for multicast streaming isn't a large percentage of overall Internet traffic to begin with, so moving most live content to multicast would have little effect on total traffic.
Right, I am aware of that; it was meant as a hypothetical rant ;)
However, for some live content where the audience is either very large or concentrated on particular networks, moving to multicast certainly has significant advantages in reducing traffic on the networks closest to the source or where the viewer concentration is high (particularly where viewer numbers occasionally spike significantly higher than the average).
I am not a math genius. I am talking about, for example, serving 10,000 unicast streams versus 10,000 multicast streams: would the multicast streams be more efficient, or, let's say, would you need more machines to serve 10,000 unicast streams?
But network providers make their money in part by selling bandwidth. The folks who would need to push for multicast are the live/perishable content providers, as they're the ones who'd benefit the most. But if bandwidth is cheap, they're not really going to care.
Well, cheap is relative. I bet it's cheap where Google hosts its NOCs, but it's not cheap in Brazil, Argentina, or Indonesia.
Isn't this an argument for IPv6 / "green IPv6" ;) as well?
It's an argument for decreasing traffic and improving network efficiency and scalability to handle 'flash crowd events'. IPv6 has nothing to do with it.
Thanks for your opinion. Marc
Antonio Querubin whois: AQ7-ARIN
On Sat, 26 Apr 2008, Marc Manthey wrote:
I am not a math genius. I am talking about, for example, serving 10,000 unicast streams versus 10,000 multicast streams: would the multicast streams be more efficient, or would you need more machines to serve 10,000 unicast streams?
For 10,000 concurrent unicast streams you'd need not just more servers. You'd need a significantly different network infrastructure than something that would have to handle only a single multicast stream. But supporting multicast isn't without its own problems either. Even the destination networks would have to consider implementing IGMP and/or MLD snooping in their layer 2 devices to obtain maximum benefit from multicast. Antonio Querubin whois: AQ7-ARIN
I am not a math genius. I am talking about, for example, serving 10,000 unicast streams versus 10,000 multicast streams: would the multicast streams be more efficient, or would you need more machines to serve 10,000 unicast streams?
Hello all,
For 10,000 concurrent unicast streams you'd need not just more servers.
Thanks for the participation on this topic. I was speaking "theoretically", and this was actually what I wanted to hear ;)
You'd need a significantly different network infrastructure than something that would have to handle only a single multicast stream. But supporting multicast isn't without its own problems either. Even the destination networks would have to consider implementing IGMP and/or MLD snooping in their layer 2 devices to obtain maximum benefit from multicast.
I was reading some papers about multicast activity on 9/11, and it was interesting to read that it just worked even when most of the "big player" sites went offline, so this gives me another approach for emergency scenarios. <http://www.nanog.org/mtg-0110/ppt/eubanks.ppt> <http://multicast.internet2.edu/workshops/illinois/internet2-multicast-worksh...
Akamai has built a Content Delivery Network (CDN) because they do not have to rely on any specific ISP or any specific IP network functionality. If you go with IP multicast, or MPLS P2MP (point-to-multipoint), then you are limited to using only ISPs who have implemented the right protocols and who peer using those protocols.
So this is similar to a "walled garden" and not what we really want, but I was clear that this is actually the only way to implement a "new" technology on an existing infrastructure. Regards, and sorry for being a bit off-topic. Marc <www.lettv.de>
Antonio Querubin whois: AQ7-ARIN
Marc Manthey wrote:
I am not a math genius. I am talking about, for example, serving 10,000 unicast streams versus 10,000 multicast streams: would the multicast streams be more efficient, or would you need more machines to serve 10,000 unicast streams?
Hello all,
For 10,000 concurrent unicast streams you'd need not just more servers.
Thanks for the participation on this topic. I was speaking "theoretically", and this was actually what I wanted to hear ;)
Your delivery needs to be sized against demand. Twelve years ago, when I started playing around with streaming on a university campus, boxes like the following were science fiction: http://www.sun.com/servers/networking/streamingsystem/specs.xml#anchor4 As, for that matter, were n x 10 Gb/s Ethernet trunks. To make this scale in either dimension, audience or bandwidth, the interests of the service providers and the content creators need to be aligned. Traditionally this has been something of a challenge for multicast deployments. Not that it hasn't happened, but it's not an automatic win either.
You'd need a significantly different network infrastructure than something that would have to handle only a single multicast stream. But supporting multicast isn't without its own problems either. Even the destination networks would have to consider implementing IGMP and/or MLD snooping in their layer 2 devices to obtain maximum benefit from multicast.
I was reading some papers about multicast activity on 9/11, and it was interesting to read that it just worked even when most of the "big player" sites went offline, so this gives me another approach for emergency scenarios.
The big-player news sites were not taken offline due to network capacity issues, but rather because their dynamic content delivery platforms couldn't cope with the flash crowds... Once they got rid of the dynamically generated content (per-viewer page rendering, advertising) they were back.
<http://www.nanog.org/mtg-0110/ppt/eubanks.ppt>
<http://multicast.internet2.edu/workshops/illinois/internet2-multicast-worksh...
Akamai has built a Content Delivery Network (CDN) because they do not have to rely on any specific ISP or any specific IP network functionality. If you go with IP multicast, or MPLS P2MP (point-to-multipoint), then you are limited to using only ISPs who have implemented the right protocols and who peer using those protocols.
So this is similar to a "walled garden" and not what we really want, but I was clear that this is actually the only way to implement a "new" technology on an existing infrastructure.
A maturing Internet platform may be quite successful at resisting attempts to change it. It's entirely possible, for example, that evolving the MBone would have been more successful than "going native". The MBone was in many respects a proto-P2P overlay, just as IP was an overlay on the circuit-switched PSTN. That's all behind us, however, and the notion that we should drop all the unicast streaming or P2P in favor of multicast transport because it's greener or lighter weight is just so much tilting at windmills, something I've done altogether too much of. Use the tool where it makes sense and where it can be delivered in a timely fashion.
Regards, and sorry for being a bit off-topic.
Marc
<www.lettv.de>
Antonio Querubin whois: AQ7-ARIN
I became aware of something called ESPN360 last fall. I just did a Google search so I could provide a URL, but one of the top search responses was an Aug 9, 2007 posting saying "ESPN360 Dies an Unneccessary Death: A Lesson in Network Neutrality ..." I don't think it's dead, though, and maybe if you don't know about it, you can do your own Google search. I think Disney/ABC thinks they can get individual ISPs to pay them to carry sports audio/video streams. I suppose that would be yet another multicast stream method, assuming an ISP location had multiple customers viewing the same stream. Are other content providers trying to do something similar? How are operators dealing with this? What opinions are there in the operator community? Mr. Dale
On Mon, Apr 28, 2008 at 9:01 AM, Dale Carstensen <dlc@lampinc.com> wrote:
I think Disney/ABC thinks they can get individual ISPs to pay them to carry sports audio/video streams. I suppose that would be yet another multicast stream method, assuming an ISP location had multiple customers viewing the same stream.
Are other content providers trying to do something similar? How are operators dealing with this? What opinions are there in the operator community?
I'm not sure of the particulars, but Hulu (NBC/Universal and News Corp) and FanCast (Comcast) seem to have an interesting relationship. I would love to know more, but I detest reading financials. ;-) -Jim P.
Dale: ESPN360 used to be something that internet subscribers paid for themselves, but now it's something that ISPs (most interesting to those who are also video providers) can offer. If you google around you can find a pretty good Wikipedia page on ESPN360. I looked into this for our operations because we do both (internet and video). The price was reasonable, and you only pay on the number of internet subs that meet their minimum performance standards. Since 50% of our user base is at 128/128 kbps, that's a lot of subscribers we didn't need to pay for. In the end, I didn't get buy-in from the rest of the management team on adding this. I think they perceived (and probably correctly so) that too few of our users would actually *use* it. If I could get even 2% of our customer base seriously interested I think we would move on this. BTW, there's no multicast (at least from Disney/ABC directly) involved. It's just another unicast video stream like YouTube. Frank -----Original Message----- From: Dale Carstensen [mailto:dlc@lampinc.com] Sent: Monday, April 28, 2008 8:02 AM To: nanog@nanog.org Subject: Re: [NANOG] Would IPv6 help us save energy?
I looked into this for our operations because we do both (internet and video). The price was reasonable
That's interesting. Under the commercial television broadcast model of American networks such as ABC, CBS, FOX, NBC, The CW and MyNetworkTV, affiliates give up portions of their local advertising airtime in exchange for network programming.
Evening all, I found a related article about the power consumption savings of IPv6: "Up to 300 Megawatt Worth of Keepalive Messages to be Saved by IPv6?" http://www.circleid.com/posts/81072_megawatts_keepalive_ipv6/ http://www.niksula.hut.fi/~peronen/publications/haverinen_siren_eronen_vtc20... Still interested in other links and publications. Regards, Marc -- "Use your imagination not to scare yourself to death but to inspire yourself to life." Les enfants teribbles - research and deployment Marc Manthey - head of research and innovation Hildeboldplatz 1a, D-50672 Köln, Germany Tel.: 0049-221-3558032 Mobil: 0049-1577-3329231 jabber: marc@kgraff.net blog: http://www.let.de ipv6: http://www.ipsix.org xing: https://www.xing.com/profile/Marc_Manthey
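For a rough sense of how a headline number like that can be assembled, here is a sketch; every figure in it is a hypothetical placeholder, not a number taken from the article or the linked paper:

# Back-of-envelope sketch of an aggregate "keepalive power" estimate.
# All inputs are hypothetical placeholders, not figures from the CircleID
# article or the paper linked above.

def aggregate_keepalive_power_mw(devices, keepalives_per_hour, joules_per_keepalive):
    """Average aggregate power, in megawatts, spent sending NAT keepalives."""
    watts = devices * keepalives_per_hour * joules_per_keepalive / 3600.0
    return watts / 1e6

if __name__ == "__main__":
    # Assumptions: 1 billion always-on devices, one keepalive every two
    # minutes (30 per hour), roughly 3 joules per radio wake-up.
    print(round(aggregate_keepalive_power_mw(1e9, 30, 3.0), 1), "MW")  # -> 25.0 MW

Changing any of those assumed inputs moves the result by an order of magnitude either way, which is part of why the headline figure draws skepticism later in the thread.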
On Mon, May 05, 2008, Marc Manthey wrote:
Evening all,
I found a related article about the power consumption savings of IPv6:
Up to 300 Megawatt Worth of Keepalive Messages to be Saved by IPv6?
http://www.circleid.com/posts/81072_megawatts_keepalive_ipv6/
http://www.niksula.hut.fi/~peronen/publications/haverinen_siren_eronen_vtc20...
I'd seriously be looking at making current -software- run more efficiently before counting ipv6-related power savings. Adrian
On 5 May 2008, at 0:57, Adrian Chadd wrote:
I'd seriously be looking at making current -software- run more efficiently before counting ipv6-related power savings.
Good luck with that. Obviously there is a lot to be gained at that end, but that doesn't mean we should ignore power use in the network.

One thing that could help here is to increase the average packet size. Whenever I've looked, this has always hovered around 500 bytes for internet traffic. If we can get jumbo frames widely deployed, it should be doable to double that. Since most work in routers and switches is per-packet rather than per-bit, this has the potential to save a good amount of power. Now obviously this only works in practice if routers and switches actually use less power when there are fewer packets, which is not a given.

It helps even more if the maximum throughput isn't based on 64-byte packets. Why do people demand that, anyway? The only thing I can think of is DoS attacks. But that can be solved by only allowing end-users to send an average packet size of 500 (or 250, or whatever) bytes. So if you have a 10 Mbps connection you don't get to send 14000 64-byte packets per second, but a maximum of 2500 packets per second. So with 64-byte packets you only get to use 1.25 Mbps.

I'm guessing having a 4x10Gbps line card that "only" does 14 Mpps total rather than 14 Mpps per port would be a good deal cheaper. Obviously if you're a service provider with a customer that sends 10 Gbps worth of VoIP you can only use one of those 4 ports, but somehow I'm thinking few people use 10 Gbps worth of VoIP...

Iljitsch

PS. Am I the only one who is annoyed by the reduction in usable subject space by the superfluous [NANOG]?
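The arithmetic behind that proposal fits in a few lines. The sketch below is not part of the original post; it simply restates the figures from the message (a 10 Mbps access rate, an assumed 500-byte average, and 64-byte minimum-size packets):

# Per-customer pps cap derived from the access rate and an assumed average
# packet size, and the throughput a sender of minimum-size packets would get.

def pps_cap(rate_bps, avg_packet_bytes):
    """Packets per second allowed if the link is sold at an average packet size."""
    return rate_bps / (avg_packet_bytes * 8)

def effective_rate_bps(cap_pps, packet_bytes):
    """Throughput achievable at that cap with a given packet size."""
    return cap_pps * packet_bytes * 8

if __name__ == "__main__":
    cap = pps_cap(10e6, 500)            # 10 Mbps sold at a 500-byte average
    print(f"cap: {cap:.0f} pps")        # 2500 pps
    small = effective_rate_bps(cap, 64)
    print(f"64-byte throughput: {small / 1e6:.2f} Mbps")  # 1.28 Mbps (~1.25 in the message)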
-----Original Message----- From: Iljitsch van Beijnum [mailto:iljitsch@muada.com] The only thing I can think of is DoS attacks. But that can be solved by only allowing end-users to send an average packet size of 500 (or 250, or whatever) bytes. So if you have a 10 Mbps connection you don't get to send 14000 64-byte packets per second, but a maximum of 2500 packets per second. So with 64-byte packets you only get to use 1.25 Mbps.
You have just cut out the VoIP industry, TCP setup, IM, and most types of real-time services on the Internet.
PS. Am I the only one who is annoyed by the reduction in usable subject space by the superfluous [NANOG]?
Yes you are the only one. ;)
On 5 May 2008, at 21:56, Mike Fedyk wrote:
So if you have a 10 Mbps connection you don't get to send 14000 64-byte packets per second, but a maximum of 2500 packets per second. So with 64-byte packets you only get to use 1.25 Mbps.
You have just cut out the VoIP industry, TCP setup, IM or most types of real-time services on the Internet.
Of course not. Like I said, as an average end-user with 10 Mbps you get to send a maximum of 2500 packets per second. That's plenty to do VoIP, set up TCP sessions or do IM. You just don't get to send the full 10 Mbps at this size.
On 6/05/2008, at 8:02 AM, Iljitsch van Beijnum wrote:
Of course not. Like I said, as an average end-user with 10 Mbps you get to send a maximum of 2500 packets per second. That's plenty to do VoIP, set up TCP sessions or do IM. You just don't get to send the full 10 Mbps at this size.
Hmm, I see value in that. But good luck trying to convince customers to take a pps limitation in addition to a Mbps limitation, whether they would ever exceed that pps or not. You /might/ convince them to take a pps limitation only, but if they want to do 30 Mbit/s (i.e. 2500 pps @ 1500 bytes) then your product needs to support that. Maybe you just start calling "10Mbps" "10Mbps, assuming a 500-byte average packet size." Anyway, nice idea in theory, putting more real-world limitations into sold product limitations, but I don't see it working out with marketing people, etc. unless someone has been doing it for years already. It'd be good if the world were all engineers though, huh? -- Nathan Ward
On Tue, May 06, 2008, Nathan Ward wrote:
Maybe you just start calling "10Mbps" "10Mbps, assuming a 500-byte average packet size."
Anyway, nice idea in theory, putting more real-world limitations into sold product limitations, but I don't see it working out with marketing people, etc. unless someone has been doing it for years already. It'd be good if the world were all engineers though, huh?
NPE-XXX, anyone? Adrian
Notwithstanding the fact that keepalives are a huge issue for tiny battery-powered devices, there's a false economy in assuming those packets wouldn't have to be sent with IPv6... Marc Manthey wrote:
Evening all,
I found a related article about the power consumption savings of IPv6:
Up to 300 Megawatt Worth of Keepalive Messages to be Saved by IPv6?
http://www.circleid.com/posts/81072_megawatts_keepalive_ipv6/
http://www.niksula.hut.fi/~peronen/publications/haverinen_siren_eronen_vtc20...
Still interested in other links and publications.
regards
Marc
-- "Use your imagination not to scare yourself to death but to inspire yourself to life."
Les enfants teribbles - research and deployment Marc Manthey - head of research and innovation Hildeboldplatz 1a D - 50672 Köln - Germany Tel.:0049-221-3558032 Mobil:0049-1577-3329231 jabber :marc@kgraff.net blog : http://www.let.de ipv6 http://www.ipsix.org xing : https://www.xing.com/profile/Marc_Manthey
I found a related article about the power consumption savings of IPv6.
no, you found an article about bad nat design in a market lacking the ability to standardize on a clean one. if you look, you can also find statements by the same folk explaining how ipv6 will help prevent car accidents involving falling rocks. yes, i am serious. note that i work very hard on ipv6 deployment. i just don't encourage or support marketing insanity. randy
Isn't this an argument for IPv6 / "green IPv6" ;) as well?
Besides the multicast argument: IPv6, and the transition to it with dual stacks, etc., will AFAIK require more horsepower and memory to handle routing info/updates, so I don't think it will reduce energy consumption; au contraire. One place where major improvements can be made is increasing the efficiency of switched power supplies on servers and other gear installed in large datacenters. My .02
On 29.04.2008, at 18:31, Jorge Amodio wrote:
besides the multicast argument,
Hi Jorge, all. OK, I was talking about a "campus" installation: imagine you want to broadcast a live event, so 10,000 unicast streams versus 10,000 multicast streams, for example. From what Toni replied, you need less horsepower with the multicast streams,
For 10,000 concurrent unicast streams you'd need not just more servers.
but I would like to know how this could be calculated. My .02 ;) Marc - Les enfants teribbles - research and deployment Marc Manthey - Hildeboldplatz 1a, D-50672 Köln, Germany Tel.: 0049-221-3558032 Mobil: 0049-1577-3329231 jabber: marc@kgraff.net blog: http://www.let.de ipv6: http://www.ipsix.org Klarmachen zum Ändern! http://www.piratenpartei-koeln.de
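One way to put rough numbers on it, as a sketch only: the stream bitrate, audience size, and per-server capacity below are assumptions chosen to show the shape of the comparison, not real figures.

# Back-of-envelope comparison of source-side load for one live event sent to
# N viewers via unicast vs. native multicast. All parameters are
# illustrative assumptions, not measurements.

def unicast_source_load(viewers, stream_mbps, streams_per_server):
    """Aggregate egress (Mbps) and server count if every viewer gets its own copy."""
    egress = viewers * stream_mbps
    servers = -(-viewers // streams_per_server)  # ceiling division
    return egress, servers

def multicast_source_load(stream_mbps):
    """With multicast the source sends one copy; the network replicates it."""
    return stream_mbps, 1

if __name__ == "__main__":
    viewers, rate = 10_000, 1.0  # 10,000 viewers, 1 Mbps per stream
    egress, servers = unicast_source_load(viewers, rate, streams_per_server=2_000)
    print(f"unicast:   {egress:,.0f} Mbps egress, {servers} servers")
    mc_egress, mc_servers = multicast_source_load(rate)
    print(f"multicast: {mc_egress:,.0f} Mbps egress, {mc_servers} server")

The gap at the source grows linearly with the audience, which is the whole argument; what the sketch does not capture is the extra protocol support (PIM in the core, IGMP/MLD snooping at the edge) the network needs to carry the multicast tree, as Antonio noted.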
participants (16)
- Adrian Chadd
- Antonio Querubin
- Dale Carstensen
- Frank Bulk - iNAME
- Iljitsch van Beijnum
- Jay Hennigan
- Jim Popovitch
- Joel Jaeggli
- John Levine
- Jorge Amodio
- Marc Manthey
- michael.dillon@bt.com
- Mike Fedyk
- Nathan Ward
- Randy Bush
- Williams, Marc