"IP networks will feel traffic pain in 2009" (C|Net & Cisco)
"Cisco VNI projections indicate that IP traffic will increase at a combined annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling every two years. This will result in an annual bandwidth demand on the world's IP networks of approximately 522 exabytes2, or more than half a zettabyte." http://news.cnet.com/8301-13846_3-10145480-62.html
On Jan 20, 2009, at 11:52 AM, Randy Bush <randy@psg.com> wrote:
On 09.01.21 04:48, Paul Vixie wrote:
"Cisco VNI projections indicate that IP traffic will increase at a combined annual growth rate (CAGR) of 46 percent from 2007 to 2012
i.e. about the same as it has been. deep shock.
randy
With no bump when v4 address runout happens. We'll see. Matthew Kaufman (sent from my iPhone)
On Tue, 20 Jan 2009, Paul Vixie wrote:
"Cisco VNI projections indicate that IP traffic will increase at a combined annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling every two years. This will result in an annual bandwidth demand on the world's IP networks of approximately 522 exabytes2, or more than half a zettabyte."
Two thoughts:

Why do some people think that bytes/month is a relevant measure of traffic? Peak bits/second is what your network needs to handle for it to perform well.

For me a CAGR of 46% is a slowdown. I'm used to 75-120% growth per year in traffic, so 46% is a relief. As markets mature (we're seeing a decline in # of DSL lines in the country; the increase is in LAN and mobile), fewer new people are going online (the ones who want Internet access already have it), and the increase per year in traffic by existing users is slower than the increase seen during the rush of new users coming online. This will of course vary by where you are in the world... -- Mikael Abrahamsson email: swmike@swm.pp.se
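To put Mikael's first point in numbers, here is the headline annual-volume figure converted into a sustained rate; the peak-to-average ratio is purely illustrative, since real ratios vary widely by network and time of day:

# From annual volume to sustained rate: why bytes/year understates
# what a network actually has to be provisioned for.
SECONDS_PER_YEAR = 365 * 24 * 3600
annual_bytes = 522e18                 # 522 EB/year, per the Cisco quote (decimal)

avg_bps = annual_bytes * 8 / SECONDS_PER_YEAR
print(f"worldwide average rate: ~{avg_bps / 1e12:.0f} Tbps")   # ~132 Tbps

# Networks are built for peak, not average. With an assumed,
# illustrative 2:1 peak-to-average ratio:
print(f"implied peak: ~{avg_bps * 2 / 1e12:.0f} Tbps")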
On Jan 20, 2009, at 2:58 PM, Mikael Abrahamsson wrote:
On Tue, 20 Jan 2009, Paul Vixie wrote:
"Cisco VNI projections indicate that IP traffic will increase at a combined annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling every two years. This will result in an annual bandwidth demand on the world's IP networks of approximately 522 exabytes2, or more than half a zettabyte."
Two thoughts:
Why do some people think that bytes/month is a relevant measure of traffic? Peak bits/second is what your network needs to handle for it to perform well.
For me a CAGR of 46% is a slowdown. I'm used to 75-120% growth per year in traffic, so 46% is a relief. As markets mature (we're seeing a decline in # of DSL lines in the country; the increase is in LAN and mobile), fewer new people are going online (the ones who want Internet access already have it), and the increase per year in traffic by existing users is slower than the increase seen during the rush of new users coming online.
It is a slowdown, but the underlying situation is not the same.

100 Mbps came out before most were doing 100 Mbps on a typical LAN in aggregate.
1000 Mbps came out before most were doing 1000 Mbps on a typical WAN in aggregate.
10000 Mbps came out before most were aggregating 10x[GigE|OC12] on their largest individual WAN links.
100000 Mbps should come out shortly after most are aggregating 32x10GE on a typical WAN link.

See a pattern forming here? -- TTFN, patrick
"Cisco VNI projections indicate that IP traffic will increase at a combined annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling every two years. This will result in an annual bandwidth demand on the world's IP networks of approximately 522 exabytes2, or more than half a zettabyte."
duh... from a much earlier thread...
that lesson is, the installed base is meaningless, and how we did it before is meaningless, all that matters is getting growth right.
Mike O'Dell... Mo's Law. 1994
I believe the quote is "What installed base?"
/vijay
to play devil's advocate, how much impact does caching have on the total traffic flow anyway? --bill
On Jan 20, 2009, at 6:31 PM, bmanning@vacation.karoshi.com wrote:
"Cisco VNI projections indicate that IP traffic will increase at a combined annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling every two years. This will result in an annual bandwidth demand on the world's IP networks of approximately 522 exabytes2, or more than half a zettabyte."
duh...
from a much earlier thread...
that lesson is, the installed base is meaningless, and how we did it before is meaningless, all that matters is getting growth right.
Mike O'Dell... Mo's Law. 1994
I believe the quote is "What installed base?"
/vijay
to play devil's advocate, how much impact does caching have on the total traffic flow anyway?
Less and less would be my estimate. How much video is cached? How much P2P is cached? Regards Marshall
--bill
On Jan 20, 2009, at 6:37 PM, Marshall Eubanks wrote:
to play devil's advocate, how much impact does caching have on the total traffic flow anyway?
Less and less would be my estimate. How much video is cached? How much P2P is cached?
Define "cached". For instance, most of the video today (which apparently had 12 zeros in the bits per second number) was "cached", if you ask the CDNs serving it. Sounds to me like that is significant, no matter how big your network is. -- TTFN, patrick
Hum... what's the wholesale cost of a 10G connection, per byte? And what would a zettabyte connection cost at today's rates? Methinks Pres Obama's USD 825B package is way too small - or the cost per GByte is going to drop a lot... if the traffic loads keep up. --bill
On Tue, Jan 20, 2009, Patrick W. Gilmore wrote:
Define "cached".
For instance, most of the video today (which apparently had 12 zeros in the bits per second number) was "cached", if you ask the CDNs serving it.
Sounds to me like that is significant, no matter how big your network is.
If, for example, Google's current generation of YouTube content serving wasn't 100% uncachable by design, Squid caches would probably be saving a stupid amount of bandwidth for those of you who are using it.

People rolling Squid + 'magic adrian rules to rewrite YouTube URLs so they don't suck' report upwards of 80% byte hit rates on -just- the YouTube content, because people view the same bloody popular videos over and over again. That's 80% of a couple hundred megabits for a couple of groups in Brazil, and that translates to mega dollars for them.

There's no reason to doubt this would also be the case in Europe and North America for forward caches put in exactly the right spot to see exactly the right number of people.

I tried talking to Google about this. Those I spoke to went from enthusiastic one month to "sorry, been told this won't happen!" the next month. Which is sad, really; the people who keep coming to me and asking about caching all those things you funny CDNs are pushing out are those who are on things like satellite links, or in Eastern Europe / South America, where the -infrastructure- is still lacking. They're the ones blocking Facebook, YouTube, etc., because of the amount of bandwidth used by just those sites. :)

Adrian

(And I know about the various generations of Google content boxes out there and have heard stories from people who have trialled and are trialling them. That's great if you're a service provider, and sucks if you're not well connected to a service provider. Like, say, schools in Australia trying to run a class with 30-40 odd computers hitting Google Maps at once. tsk.)
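For the curious: the "magic adrian rules" refer to Squid 2.7's store-URL rewriting, where an external helper collapses the many mirror URLs for a single object onto one canonical cache key, so repeat views become cache hits. A minimal sketch of such a helper, assuming the one-line-in/one-line-out helper protocol; the URL pattern and internal hostname are illustrative only, not the real 2009 YouTube format (which changed frequently):

#!/usr/bin/env python
# Hypothetical storeurl_rewrite_program helper for Squid 2.7.
import re
import sys

VIDEO_RE = re.compile(r'^http://[^/]+/(?:videoplayback|get_video)\?.*\bid=([\w-]+)')

for line in sys.stdin:
    parts = line.split()               # Squid sends: URL [client info ...]
    m = VIDEO_RE.match(parts[0]) if parts else None
    if m:
        # Same video id, whichever mirror served it -> same cache object.
        sys.stdout.write('http://videos.youtube.SQUIDINTERNAL/id=%s\n' % m.group(1))
    else:
        sys.stdout.write('\n')         # blank line: store under the original URL
    sys.stdout.flush()                 # helper replies must not be buffered

This would be wired into squid.conf with storeurl_rewrite_program pointing at the helper, plus a storeurl_access ACL restricting rewriting to the video hostnames.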
On Jan 20, 2009, at 7:40 PM, Adrian Chadd wrote:
On Tue, Jan 20, 2009, Patrick W. Gilmore wrote:
Define "cached".
For instance, most of the video today (which apparently had 12 zeros in the bits per second number) was "cached", if you ask the CDNs serving it.
Sounds to me like that is significant, no matter how big your network is.
If, for example, Google's current generation of YouTube content serving wasn't 100% uncachable by design, Squid caches would probably be saving a stupid amount of bandwidth for those of you who are using it.
People rolling Squid + 'magic adrian rules to rewrite YouTube URLs so they don't suck' report upwards of 80% byte hit rates on -just- the YouTube content, because people view the same bloody popular videos over and over again. That's 80% of a couple hundred megabits for a couple of groups in Brazil, and that translates to mega dollars for them.
There's no reason to doubt this would also be the case in Europe and North America for forward caches put in exactly the right spot to see exactly the right number of people.
I tried talking to Google about this. Those I spoke to went from enthusiastic one month to "sorry, been told this won't happen!" the next month. Which is sad, really; the people who keep coming to me and asking about caching all those things you funny CDNs are pushing out are those who are on things like satellite links, or in Eastern Europe / South America, where the -infrastructure- is still lacking. They're the ones blocking Facebook, YouTube, etc., because of the amount of bandwidth used by just those sites. :)
I do not work for GOOG or YouTube, and I do not know why they do what they do. However, it is trivial to think up perfectly valid reasons for Google to intentionally break caches on YouTube content (e.g. paid advertising per download).

It doesn't matter if you have small links or no infrastructure or whatever. Google has every right, moral & legal, to serve content as they please. They are providing the content for free, but they want to do it on their own terms. Seems perfectly reasonable to me. Do you disagree? Sure the situation sux, but life is not fair.

As for CDNs, most do not do anything to the content they serve. A content provider makes the content and hands it to the CDN, which serves the content. The CDN does not own, create, or modify the content. (There might be edge cases, but we are talking generalities here.) If you see "funny" stuff, talk to the content owner, not the CDN.
(And I know about the various generations of Google content boxes out there and have heard stories from people who have trialled and are trialling them. That's great if you're a service provider, and sucks if you're not well connected to a service provider. Like, say, schools in Australia trying to run a class with 30-40 odd computers hitting Google Maps at once. tsk.)
Google is not the only company which will put caches into any provider - or school (which is really just a special case provider) - with enough traffic. A school with 30 machines probably would not qualify. This is not being mean, this is just being rational. No way those 30 machines save the company enough money to pay for the caches. Again, sux, but that's life. I'd love to hear your solution - besides writing "magic" into squid to intentionally break or alter (some would use much harsher language) content you do not own. Content others are providing for free. -- TTFN, patrick
On Wed, Jan 21, 2009, Patrick W. Gilmore wrote:
Google is not the only company which will put caches into any provider - or school (which is really just a special case provider) - with enough traffic. A school with 30 machines probably would not qualify. This is not being mean, this is just being rational. No way those 30 machines save the company enough money to pay for the caches.
Again, sux, but that's life. I'd love to hear your solution - besides writing "magic" into squid to intentionally break or alter (some would use much harsher language) content you do not own. Content others are providing for free.
Finding ways to force object revalidation by an intermediary cache (so the end origin server knows something has been fetched), and thus allowing the cache to serve the content on behalf of the content originator, under their full control, but without the bits being served from the origin.

I'm happy to work with content providers if they'd like to point out which bits of HTTP design and implementation fail them (e.g., issues surrounding variant object caching and invalidation/revalidation) and get them fixed in a public manner in Squid, so it -can- be deployed by people to save on bandwidth in places where it still matters. Adrian
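A minimal sketch of the origin-side half of this idea, using Python's standard http.server as a stand-in origin; everything here (port, ETag, body) is illustrative, and a real deployment would express the same Cache-Control policy in the origin or CDN server configuration:

# Mark responses cacheable but force revalidation on every hit, so the
# origin sees (and can log/count) each fetch while the intermediary
# cache serves the actual bytes.
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b'...video bytes...'
ETAG = '"v1-abc123"'

class Origin(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get('If-None-Match') == ETAG:
            # Revalidation from the cache: tiny 304, no body; the cache
            # serves its stored copy. One log line per end-user fetch.
            self.send_response(304)
            self.send_header('ETag', ETAG)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header('ETag', ETAG)
        # max-age=0 + must-revalidate: caches may store the object but
        # must ask the origin before each reuse.
        self.send_header('Cache-Control', 'public, max-age=0, must-revalidate')
        self.send_header('Content-Length', str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

if __name__ == '__main__':
    HTTPServer(('', 8080), Origin).serve_forever()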
On Jan 21, 2009, at 11:07 AM, Adrian Chadd wrote:
On Wed, Jan 21, 2009, Patrick W. Gilmore wrote:
Google is not the only company which will put caches into any provider - or school (which is really just a special case provider) - with enough traffic. A school with 30 machines probably would not qualify. This is not being mean, this is just being rational. No way those 30 machines save the company enough money to pay for the caches.
Again, sux, but that's life. I'd love to hear your solution - besides writing "magic" into squid to intentionally break or alter (some would use much harsher language) content you do not own. Content others are providing for free.
Finding ways to force object revalidation by an intermediary cache (so the end origin server knows something has been fetched), and thus allowing the cache to serve the content on behalf of the content originator, under their full control, but without the bits being served from the origin.
Excellent idea. It is a shame content owners do not see the utility in your idea. To bring this back to an operational topic, just because a content owner does not want to work with someone on this, does the lack of external bandwidth / infrastructure / whatever make it "OK" to install a proxy which will intentionally re-write the content? -- TTFN, patrick
On 21/01/2009 21:30, Patrick W. Gilmore wrote:
On Jan 21, 2009, at 11:07 AM, Adrian Chadd wrote:
Finding ways to force object revalidation by an intermediary cache (so the end origin server knows something has been fetched), and thus allowing the cache to serve the content on behalf of the content originator, under their full control, but without the bits being served from the origin.
Excellent idea. It is a shame content owners do not see the utility in your idea.
This doesn't provide feedback to the content distributors on partial downloads, etc. - which is useful information to content providers if you're into data mining end-user browsing habits. In the specific case of YouTube, of course I don't know that they do this, but I'd be surprised if they didn't. Nick
On Wed, Jan 21, 2009, Nick Hilliard wrote:
This doesn't provide feedback to the content distributors on partial downloads, etc. - which is useful information to content providers if you're into data mining end-user browsing habits. In the specific case of YouTube, of course I don't know that they do this, but I'd be surprised if they didn't.
If they'd like that included as a side-channel for certain response types, then they could ask. It's not like caches don't store per-connection information like that already... :) Adrian
Excellent idea. It is a shame content owners do not see the utility in your idea.
To bring this back to an operational topic, just because a content owner does not want to work with someone on this, does the lack of external bandwidth / infrastructure / whatever make it "OK" to install a proxy which will intentionally re-write the content?
This really boils down to "who is more important? The content or the content's eyeballs?" (Or the people having to deliver said content to said eyeballs, who aren't being paid by the content deliverer on their behalf.) Adrian
On Jan 21, 2009, at 4:38 PM, Adrian Chadd wrote:
Excellent idea. It is a shame content owners do not see the utility in your idea.
To bring this back to an operational topic, just because a content owner does not want to work with someone on this, does the lack of external bandwidth / infrastructure / whatever make it "OK" to install a proxy which will intentionally re-write the content?
This really boils down to "who is more important? The content or the content's eyeballs?"
(Or the people having to deliver said content to said eyeballs, who aren't being paid by the content deliverer on their behalf.)
No, it does not. If I own something, it doesn't matter how "important" the people who want to buy it are. But I guess that's not operational. -- TTFN, patrick
Surely the whole point of this is that the end users (the eyeballs) get the best experience they can, as they're the ultimate consumer. So working with everyone in the chain between the content owner and the eyeballs is important.

If you're a content owner, then you want the experience to be good so that either you sell more ads or your "brand" (whatever that may mean) is well thought of. It's why content owners use CDNs - to ensure that content is delivered close to the end user. Surely that means, logically to me anyway, that working with caching providers local to the eyeballs is the next logical step. Certainly for the heavy bits that don't change (e.g. the video streams Adrian mentioned).

It's a mutual-benefit thing - if your content sucks for a school (for example) to use, then they're not going to use it. That's not good for you as a content owner. I realise that CDNs probably aren't that keen on people caching as it reduces their revenue, but being rational about helping the whole chain deliver probably means more traffic overall. MMC

On 22/01/2009, at 8:13 AM, Patrick W. Gilmore wrote:
(Or the people having to deliver said content to said eyeballs, who aren't being paid by the content deliverer on their behalf.)
No, it does not.
If I own something, it doesn't matter how "important" the people who want to buy it are.
But I guess that's not operational.
-- TTFN, patrick
On Thu, Jan 22, 2009, Matthew Moyle-Croft wrote:
I realise that CDNs probably aren't that keen on people caching as it reduces their revenue, but a level of being rational about helping the whole chain deliver means probably more traffic overall.
I mean, I could extend an olive branch to all the CDNs out there and ask exactly what kind of extensions they'd like to see in intermediary caches so they can get the statistics they require, whilst allowing the edge to serve content on their behalf, but I doubt I'd get any bites. Oh well. Back to hacking on software so it can shuffle more bits. Adrian
Patrick W. Gilmore wrote:
I do not work for GOOG or YouTube, and I do not know why they do what they do. However, it is trivial to think up perfectly valid reasons for Google to intentionally break caches on YouTube content (e.g. paid advertising per download).
It doesn't matter if you have small links or no infrastructure or whatever. Google has every right, moral & legal, to serve content as they please. They are providing the content for free, but they want to do it on their own terms. Seems perfectly reasonable to me. Do you disagree?
This brings me back to the peering problem - if Network A's customer sends Network B's server a small packet, and Network B's server sends back a video, why should Network A be forced to pay the lion's share of the bandwidth costs to deliver Network B's video (and ads) to the viewer?

Networks which send large amounts of content should do their best to reduce the bandwidth load on end-user networks whenever and wherever possible: by hot-potato routing, by allowing the content to be cached, etc. They can't do otherwise and also claim they "do no harm".

Adrian, what did your contacts at Google say when you asked them how this policy was consistent with their Do No Harm motto? If you didn't ask, I suggest you go ask! jc
On 2009-01-20, at 18:37, Marshall Eubanks wrote:
Less and less would be my estimate. How much video is cached? How much P2P is cached?
If you asked Akamai, Limelight and friends, they might tell you that 100% of important video is cached. And viewed from some angles, every peer who receives a block of data and offers to serve it to others is caching that block of data for the benefit of other peers. Joe
On Tue, Jan 20, 2009 at 06:49:14PM -0500, Joe Abley wrote:
On 2009-01-20, at 18:37, Marshall Eubanks wrote:
Less and less would be my estimate. How much video is cached? How much P2P is cached?
If you asked Akamai, Limelight and friends, they might tell you that 100% of important video is cached. And viewed from some angles, every peer who receives a block of data and offers to serve it to others is caching that block of data for the benefit of other peers.
Joe
aha... so taking a peek at my nearby BT tracker & client, it seems that there is about 12% "duplicate" traffic. wild extrapolation -- poor caching design/flaky networks add a 10-15% extra traffic load, "just to make sure". i'd guess that 10% of a femto (or is it the other way) byte of traffic relates to real money. --bill
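bill's extrapolation scaled against the projection at the top of the thread; the overhead range is his guess, not a measurement:

# 10-15% redundant traffic against the projected 522 EB/year:
for overhead in (0.10, 0.15):
    print(f"{overhead:.0%} of 522 EB/year = ~{522 * overhead:.0f} EB/year of duplicate bits")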
participants (13)
- Adrian Chadd
- bmanning@vacation.karoshi.com
- JC Dill
- Joe Abley
- Marshall Eubanks
- Matthew Kaufman
- Matthew Moyle-Croft
- Mikael Abrahamsson
- Nathan Malynn
- Nick Hilliard
- Patrick W. Gilmore
- Paul Vixie
- Randy Bush