Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
On 1/20/07, Mark Boolootian <booloo@ucsc.edu> wrote:
Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.

"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."

--
Rodrick R. Brown
On Jan 20, 2007, at 10:37 AM, Rodrick Brown wrote:
On 1/20/07, Mark Boolootian <booloo@ucsc.edu> wrote:
Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
I'm not sure why you find that disturbing. I can think of two reasons, and they depend almost entirely on your perspective:

If you are disturbed because you know that these users are early adopters and that eventually a much wider audience will adopt this technology, driving a need for much more bandwidth than is available today, then the solution is obvious. As in the past, bandwidth will have to increase to meet increased demand.

If you are disturbed by the inequity of it, then little can be done. There will always be classes of consumers who use more than other classes of consumers of any resource.

Frankly, looking from my corner of the internet, I don't think that statistic is entirely accurate. From my perspective, spam uses more bandwidth than BitTorrent. OTOH, another thing to consider is that if all those video downloads being handled by BitTorrent were migrated to HTTP connections instead, the required amount of bandwidth would be substantially higher.

Owen
Rodrick Brown wrote:
On 1/20/07, Mark Boolootian <booloo@ucsc.edu> wrote:
Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
Moreover, those of you who were at NANOG in June will remember some of the numbers Colin gave about YouTube using >20 Gbps outbound. That number was still early in the exponential growth phase the site is (*still*) having. The 20 Gbps number would likely seem laughable now.

-david
The Internet: the world's only industry that complains that people want its product.

On 1/20/07, David Ulevitch <davidu@everydns.net> wrote:
Rodrick Brown wrote:
On 1/20/07, Mark Boolootian <booloo@ucsc.edu> wrote:
Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
Moreover, those of you who were at NANOG in June will remember some of the numbers Colin gave about YouTube using >20 Gbps outbound.
That number was still early in the exponential growth phase the site is (*still*) having. The 20 Gbps number would likely seem laughable now.
-david
Alexander Harrowell wrote:
The Internet: the world's only industry that complains that people want its product.
The quote sounds good, but nobody in this thread is complaining. There have always been top-talkers on networks and there always will be. The current top-talkers are the joe and jane users of tomorrow. That is what is important.

BitTorrent-like technology might start showing up in your media center, your access point, etc. The Venice Project (Joost) and a number of other new startups are also built around this model of distribution.

Maybe a more symmetric load on the network (at least on the edge) will improve economic models, or maybe we'll see "eyeball" networks start to peer with each other as they start sourcing more and more of the bits. Maybe that's already happening.

-david
On 1/20/07, David Ulevitch <davidu@everydns.net> wrote:
Rodrick Brown wrote:
On 1/20/07, Mark Boolootian <booloo@ucsc.edu> wrote:
Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
Moreover, those of you who were at NANOG in June will remember some of the numbers Colin gave about YouTube using >20 Gbps outbound.
That number was still early in the exponential growth phase the site is (*still*) having. The 20 Gbps number would likely seem laughable now.
-david
On Jan 20, 2007, at 1:00 PM, David Ulevitch wrote:
maybe we'll see "eyeball" networks start to peer with each other as they start sourcing more and more of the bits. Maybe that's already happening.
At some point, I think MANET/mesh/roofnets/Zigbee/etc. are going to start fulfilling this role, at least in part. Which should give NSPs something to think about in terms of how they can embrace this model and make money with it. Getting your customers to build and maintain your infrastructure for you is a pretty powerful incentive, IMHO.

http://en.fon.com/ (not MANET/mesh, but may be going there, at some point)
http://www.speakeasy.net/netshare/terms/ (an NSP who is embracing a sharing model)
http://www.netequality.org/ (nonprofit mesh)
http://www.cuwin.net/about (mesh community)
http://www.wi-fiplanet.com/columns/article.php/3634931 (roofnet SP/facilitator)
http://www.meraki.net/
http://www.microsoft.com/technet/network/p2p/pnrp.mspx (built into Vista, enabled by default, I think)

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice

Technology is legislation. -- Karl Schroeder
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
the heavy hitters are long known. get over it.

i won't bother to cite cho et al. and similar actual measurement studies, as doing so seems not to cause people to read them, only to say they already did or say how unlike japan north america is. the phenomenon is part protocol and part social.

the question to me is whether isps and end user borders (universities, large enterprises, ...) will learn to embrace this as opposed to fighting it; i.e. find a business model that embraces delivering what the customer wants as opposed to whinging and warring against it.

if we do, then the authors of the p2p protocols will feel safe in improving their customers' experience by taking advantage of localization and proximity, as opposed to focusing on subverting perceived fierce opposition by isps and end user border fascists. and then, guess what; the traffic will distribute more reasonably and not all sum up on the longer glass.

randy
On Sat, 20 Jan 2007, Randy Bush wrote:
the heavy hitters are long known. get over it.
i won't bother to cite cho et al. and similar actual measurement studies, as doing so seems not to cause people to read them, only to say they already did or say how unlike japan north america is. the phenomenon is part protocol and part social.
the question to me is whether isps and end user borders (universities, large enterprises, ...) will learn to embrace this as opposed to fighting it; i.e. find a business model that embraces delivering what the customer wants as opposed to whinging and warring against it.
if we do, then the authors of the p2p protocols will feel safe in improving their customers' experience by taking advantage of localization and proximity, as opposed to focusing on subverting perceived fierce opposition by isps and end user border fascists. and then, guess what; the traffic will distribute more reasonably and not all sum up on the longer glass.
randy
It has been a long time since I bowed before Mr. Bush's wisdom, but indeed, I bow now in a very humble fashion.

Thing is though, it is equivalent to one or all of the following:
-. EFF-like thinking (moral high-ground or impractical at times, yet correct and to live by).
-. (very) Forward thinking (yet not possible for people to get behind - by people I mean those who do this daily), likely to encounter much resistance until it becomes mainstream a few years down the road.
-. Not connected with what can currently happen to effect change, but rather how things really are, which people cannot yet accept.

As Randy is obviously not much affected when people disagree with him, nor should he be, I am sure he will preach this until it becomes real.

With that in mind, if many of us believe this is a philosophical as well as a technological truth -- what can be done today to effect this change? Some examples may be:
-. Working with network gear vendors to create better equipment built to handle this and lighten the load.
-. Working on establishing new standards and topologies to enable both vendors and providers to adopt them.
-. Presenting case studies after putting our money where our mouth is, and showing how we made it work in a live network.

Staying in the philosophical realm is more than respectable, but waiting for FUSSP-like wide adoption or for sheep to fly is not going to change the world, much.

For now, the P2P folks, who are not in most cases eveel "Internet Pirates", are mostly allied, whether in name or in practice, with illegal activities. The technology isn't illegal and can be quite good for all of us to save quite a bit of bandwidth rather than waste it (quite a bit of redundancy there!).

So, instead of fighting it and seeing it left in the hands of the "pirates" and the privacy folks trying to bypass the Firewall of [insert evil regime here], why not utilize it? How can service providers make use of all this redundancy among their top talkers and remove the privacy advocates and warez freaks from the picture, leaving that front with less technology and legitimacy while helping themselves?

This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.

Gadi.
On Sat, 20 Jan 2007 17:55:49 -0600 (CST), Gadi Evron wrote:
On Sat, 20 Jan 2007, Randy Bush wrote:
the question to me is whether isps and end user borders (universities, large enterprises, ...) will learn to embrace this as opposed to fighting it; i.e. find a business model that embraces delivering what the customer wants as opposed to whinging and warring against it.
Interesting.. I was about to say.. I am involved in London, in building an ISP that encourages users of p2p with respect from major and independent record labels. It makes sense that the film industry will move (and is moving?) towards some kind of acceptance as well.
Thing is though, it is equivalent to one or all of the following:
-. EFF-like thinking (moral high-ground or impractical at times, yet correct and to live by).
-. (very) Forward thinking (yet not possible for people to get behind - by people I mean those who do this daily), likely to encounter much resistance until it becomes mainstream a few years down the road.
-. Not connected with what can currently happen to effect change, but rather how things really are, which people cannot yet accept.
well, a little dash of all thinking makes for a healthy environment doesn't it?
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by. C. -- hail eris http://rubberduck.com/
On Sun, 21 Jan 2007, Charlie Allom wrote:
I am involved in London, in building an ISP that encourages users of
Cool!
p2p with respect from major and independent record labels. It makes sense that the film industry will move (and is moving?) towards some kind of acceptance as well.
Erm.. as in to help them sue users? :)
well, a little dash of all thinking makes for a healthy environment doesn't it?
Not on NANOG. :o)
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by.
Can you please elaborate on this point?
C.
Gadi.
On Sat, 20 Jan 2007 18:32:28 -0600 (CST), Gadi Evron wrote:
On Sun, 21 Jan 2007, Charlie Allom wrote:
p2p with respect from major and independent record labels. It makes sense that the film industry will move (and is moving?) towards some kind of acceptance as well.
Erm.. as in to help them sue users? :)
as in - a DMZ for the RIAA vs. the User
well, a little dash of all thinking makes for a healthy environment doesn't it?
Not on NANOG. :o)
hahah.
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by.
Can you please elaborate on this point?
Well, all I mean is that backbone networks and the technology that is used are still a bit of a mystery to me, having never fiddled with (and broken) it myself. One thing I was amazed by this week was looking into the ADSL that we are providing, and seeing how low-level it is, and, well - how tailored it is for a market that could only be perceived 15 years ago. I welcome our new masters, GOOG. ;)

C.
--
hail eris http://rubberduck.com/
On Sun, Jan 21, 2007, Charlie Allom wrote:
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by.
It's not that hard a problem to get on top of. Caching, unfortunately, continues to be viewed as anathema by ISP network operators in the US. Strangely enough, the caching technologies aren't a problem with the content -delivery- people.

I've had a few ISPs out here in Australia indicate interest in a cache that could do the normal stuff (http, rtsp, wma) and some of the p2p stuff (bittorrent especially) with a smattering of QoS/shaping/control - but not cost upwards of USD$100,000 a box. Lots of interest, no commitment.

It doesn't help (at least in Australia) where the wholesale model of ADSL isn't content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams and then receive each session via L2TP. Fine from an aggregation point of view, but missing the true usefulness of content replication and caching - right at the point where your customers connect in.

(Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount of interest from CDN/content origination players but none from ISPs. I'd love to know why ISPs don't view caching as a viable option in today's world and what we could do to make it easier for y'all.)

Adrian
On Sun, 21 Jan 2007 08:33:26 +0800 Adrian Chadd <adrian@creative.net.au> wrote:
On Sun, Jan 21, 2007, Charlie Allom wrote:
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by.
It's not that hard a problem to get on top of. Caching, unfortunately, continues to be viewed as anathema by ISP network operators in the US. Strangely enough, the caching technologies aren't a problem with the content -delivery- people.
I've had a few ISPs out here in Australia indicate interest in a cache that could do the normal stuff (http, rtsp, wma) and some of the p2p stuff (bittorrent especially) with a smattering of QoS/shaping/control - but not cost upwards of USD$100,000 a box. Lots of interest, no commitment.
I think it is probably because building caching infrastructure that is high performance and has enough high availability to make a difference is either non-trivial or non-cheap. If it comes down to introducing something new (new software/hardware, new concepts, new complexity, new support skills, another thing that can break, etc.) versus just growing something you already have, already manage and have had since day one as an ISP - additional routers and/or higher capacity links - then growing the network wins when the $ amount is the same, because it is simpler and easier.
It doesn't help (at least in Australia) where the wholesale model of ADSL isn't content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams and then receive each session via L2TP. Fine from an aggregation point of view, but missing the true usefulness of content replication and caching - right at the point where your customers connect in.
I think the fact that even "pure" networking people (i.e. those that just focus on shifting IP packets around) are accepting of that situation, when they also believe in keeping traffic local, indicates that it is probably more of an economic than a technical reason why that is still happening. Inter-ISP peering at the exchange (C.O.) would be the ideal; however, it seems that there isn't enough inter-customer (per-ISP or between ISP) bandwidth consumption at each exchange to justify the additional financial and complexity costs to do it. Inter-customer traffic forwarding is usually happening at the next level up in the hierarchy - at the regional / city level, which is probably at this time the most economic level to do it.
(Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount of interest from CDN/content origination players but none from ISPs. I'd love to know why ISPs don't view caching as a viable option in today's world and what we could do to make it easier for y'all.)
Maybe that really means your customers (i.e. people who most benefit from your software) are really the content distributors, not ISPs anymore.

While the distinction might seem somewhat minor, I think ISPs generally tend to have more of a viewpoint of "where is this traffic wanting or probably going to go, and how do we build infrastructure to get it there", and less of a "what is this traffic" view. In other words, ISPs tend to be more focused on trying to optimise for all types of traffic rather than one or a select few particular types, because what the customer does with the bandwidth they purchase is up to the customer themselves. If you spend time optimising for one type of traffic you're either neglecting or negatively impacting another type. Spending time on general optimisations that benefit all types of traffic is usually the better way to spend time.

I think one of the reasons for ISP interest in the "p2p problem" could be because it is reducing the normal benefit-to-cost ratio of general traffic optimisation. Restoring the regular benefit-to-cost ratio of general traffic optimisation is probably the fundamental goal of solving the "p2p problem".

My suggestion to you as a squid developer would be to focus on caching, or more generally, localising of P2P traffic. It doesn't seem that the P2P application developers are doing it, maybe because they don't care because it doesn't directly impact them, or maybe because they don't know how to. If squid could provide a traffic localising solution which is just another traffic sink or source (e.g. a server) to an ISP, rather than something that requires enabling knobs on the network infrastructure for special handling or requires special traffic engineering for it to work, I'd think you'd get quite a bit of interest.

Just my 2c.

Regards,
Mark.

--
"Sheep are slow and tasty, and therefore must remain constantly alert."
- Bruce Schneier, "Beyond Fear"
On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:
It doesn't seem that the P2P application developers are doing it, maybe because they don't care because it doesn't directly impact them, or maybe because they don't know how to. If squid could provide a traffic localising solution which is just another traffic sink or source (e.g. a server) to an ISP, rather than something that requires enabling knobs on the network infrastructure for special handling or requires special traffic engineering for it to work, I'd think you'd get quite a bit of interest.
I think there's interest from the consumer level, already:

http://torrentfreak.com/review-the-wireless-BitTorrent-router/

It's early days, but if this becomes the norm, then the end-users themselves will end up doing the caching.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice

Technology is legislation. -- Karl Schroeder
On Sat, 20 Jan 2007 18:51:08 -0800 Roland Dobbins <rdobbins@cisco.com> wrote:
On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:
It doesn't seem that the P2P application developers are doing it, maybe because they don't care because it doesn't directly impact them, or maybe because they don't know how to. If squid could provide a traffic localising solution which is just another traffic sink or source (e.g. a server) to an ISP, rather than something that requires enabling knobs on the network infrastructure for special handling or requires special traffic engineering for it to work, I'd think you'd get quite a bit of interest.
I think there's interest from the consumer level, already:
http://torrentfreak.com/review-the-wireless-BitTorrent-router/
It's early days, but if this becomes the norm, then the end-users themselves will end up doing the caching.
Maybe I haven't understood what that exactly does, however it seems to me that's really just a bit-torrent client/server in the ADSL router. Certainly having a bittorrent server in the ADSL router is unique, but not really what I was getting at.

What I'm imagining (and I'm making some assumptions about how bittorrent works) would be a bittorrent "super" peer that:

* announces itself as a very generous provider of bittorrent fragments.
* selects which peers to offer its generosity to, by measuring its network proximity to those peers. I think bittorrent uses TCP, and it would seem to me that TCP's own round trip and throughput measuring would be a pretty good source for measuring network locality.
* This super peer could also have its generosity announcements restricted to certain IP address ranges etc.

Actually, thinking about it a bit more, for this device to work well it would need to somehow be inline with the bittorrent seed URLs, so maybe it wouldn't be feasible to have a server in the ISP's network do it. Still, if BT peer software was modified to take into account the TCP measurements when selecting peers, I think it would probably go a long way towards mitigating some of the traffic problems that P2P seems to be causing.

Regards,
Mark.

--
"Sheep are slow and tasty, and therefore must remain constantly alert."
- Bruce Schneier, "Beyond Fear"
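[A minimal sketch, in Python, of the proximity-biased peer selection described above. The peer list, the measured round-trip times, and the bias constants are hypothetical illustration only, not part of any real BitTorrent client or tracker:]

# Rough sketch: prefer "network-close" peers using measured round-trip time,
# standing in for the TCP-based proximity measurement suggested above.
import random

def select_peers(candidates, rtt_ms, max_peers=30, local_bias=0.8):
    # candidates: list of peer IDs; rtt_ms: dict of peer ID -> measured RTT (ms).
    # Returns up to max_peers peers, mostly the lowest-RTT (nearby) ones, plus a
    # few random distant peers so rare pieces can still propagate between regions.
    ranked = sorted(candidates, key=lambda p: rtt_ms.get(p, float("inf")))
    n_local = int(max_peers * local_bias)
    chosen = ranked[:n_local]
    remote = ranked[n_local:]
    chosen += random.sample(remote, min(len(remote), max_peers - len(chosen)))
    return chosen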
On Jan 20, 2007, at 7:38 PM, Mark Smith wrote:
Maybe I haven't understood what that exactly does, however it seems to me that's really just a bit-torrent client/server in the ADSL router. Certainly having a bittorrent server in the ADSL router is unique, but not really what I was getting at.
I understand it's not what you meant; my point is that if the SPs don't figure out how to do this, the customers will, by whatever means they have at their disposal, with always-on devices which do the distribution and seeding and caching automagically, and with a revenue model attached. I foresee consumer-level devices like this little Asus router which not only act as torrent clients/servers, but which also are woven together into caches with something like PNRP as the location service (and perhaps an innovative content producer/distributor acting as a billing overlay provider a la FON in order to monetize same, leaving the SP with nothing).

The advantage of providing caching services is that they both help preserve scarce resources and result in a more pleasing user experience. As already pointed out, CAPEX/OPEX along with insertion into the network are the current barriers, along with potential legal liabilities; cooperation between content providers and SPs could help alleviate some of these problems and make it a more attractive model, and help fund this kind of infrastructure in order to make more efficient use of bandwidth at various points in the topology.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice

Technology is legislation. -- Karl Schroeder
On Sat, 20 Jan 2007 19:47:04 -0800 Roland Dobbins <rdobbins@cisco.com> wrote: <snip>
The advantage of providing caching services is that they both help preserve scarce resources and result in a more pleasing user experience. As already pointed out, CAPEX/OPEX along with insertion into the network are the current barriers, along with potential legal liabilities; cooperation between content providers and SPs could help alleviate some of these problems and make it a more attractive model, and help fund this kind of infrastructure in order to make more efficient use of bandwidth at various points in the topology.
I think you're more or less describing what Akamai already do - they're just not doing it for authorised P2P protocol distributed content (yet?).

Regards,
Mark.

--
"Sheep are slow and tasty, and therefore must remain constantly alert."
- Bruce Schneier, "Beyond Fear"
On Jan 20, 2007, at 8:10 PM, Mark Smith wrote:
I think you're more or less describing what Akamai already do - they're just not doing it for authorised P2P protocol distributed content (yet?).
Yes, and P2P might make sense for them to explore - but a) it doesn't help SPs smooth out bandwidth 'hotspots' in and around their access networks due to P2P activity, b) doesn't bring the content out to the very edges of the access network, where the users are, and c) isn't something which can be woven together out of more or less off-the-shelf technology with the users themselves supplying the infrastructure and paying for (and being compensated for, a la FON or SpeakEasy's WiFi sharing program) the access bandwidth.

It seems to me that a FON-/Speakeasy-type bandwidth-charge compensation model for end-user P2P caching and distribution might be an interesting approach for SPs to consider, as it would reduce the CAPEX and OPEX for caching services and encourage the users themselves to subsidize the bandwidth costs to one degree or another.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice

Technology is legislation. -- Karl Schroeder
holy kook bait. it's amazing, after all these years and companies, how many people and companies still don't "get it". /rf
On Sun, Jan 21, 2007, Mark Smith wrote:
What I'm imagining (and I'm making some assumptions about how bittorrent works) would be a bittorrent "super" peer that:
Azureus already has functional 'proxy discovery' stuff. It's quite naive but it does the job. The only implementation I know about is the JoltId PeerCache, but it's quite expensive.

The initial implementation should use this for client communication. Then try to work with the P2P crowd to ratify some kind of P2P proxy discovery and communication protocol (and have more luck than WPAD :)

Adrian
Thus spake "Adrian Chadd" <adrian@creative.net.au>
On Sun, Jan 21, 2007, Charlie Allom wrote:
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by.
It's not that hard a problem to get on top of. Caching, unfortunately, continues to be viewed as anathema by ISP network operators in the US. Strangely enough, the caching technologies aren't a problem with the content -delivery- people.
US ISPs get paid on bits sent, so they're going to be _against_ caching because caching reduces revenue. Content providers, OTOH, pay the ISPs for bits sent, so they're going to be _for_ caching because it increases profits. The resulting stalemate isn't hard to predict.
I've had a few ISPs out here in Australia indicate interest in a cache that could do the normal stuff (http, rtsp, wma) and some of the p2p stuff (bittorrent especially) with a smattering of QoS/shaping/control - but not cost upwards of USD$100,000 a box. Lots of interest, no commitment.
Basically, they're looking for a box that delivers what P2P networks inherently do by default. If the rate-limiting is sane, then only a copy (or two) will need to come in over the slow overseas pipes, and it'll be replicated and reassembled locally over fast pipes. What, exactly, is a middlebox supposed to add to this picture?
It doesn't help (at least in Australia) where the wholesale model of ADSL isn't content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams and then receive each session via L2TP. Fine from an aggregation point of view, but missing the true usefulness of content replication and caching - right at the point where your customers connect in.
So what you have is a Layer 8 problem due to not letting the network topology match the physical topology. No magical box is going to save you from hairpinning traffic between a thousand different L2TP pipes. The best you can hope for is that the rate limits for those L2TP pipes will be orders of magnitude larger than the rate limit for them to talk upstream -- and you don't need any new tools to do that, just intelligent use of what you already have.
(Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount of interest from CDN/content origination players but none from ISPs. I'd love to know why ISPs don't view caching as a viable option in today's world and what we could do to make it easier for y'all.)
As someone who voluntarily used a proxy and gave up, and has worked in an IT dept that did the same thing, it's pretty easy to explain: there are too many sites that aren't cache-friendly. It's easy for content folks to put up their own caches (or Akamaize) because they can design their sites to account for it, but an ISP runs too much risk of breaking users' experiences when they apply caching indiscriminately to the entire Web.

Non-idempotent GET requests are the single biggest breakage I ran into, and the proliferation of dynamically-generated "Web 2.0" pages (or faulty Expires values) is the biggest factor that wastes bandwidth by preventing caching.

S

Stephen Sprunk        "God does not play dice."  --Albert Einstein
CCIE #3723            "God is an inveterate gambler, and He throws the
K5SSS                 dice at every possible opportunity."  --Stephen Hawking
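[For readers who haven't run a shared cache, a rough Python illustration of the header checks involved; the logic is deliberately simplified and the example header values are hypothetical:]

# Simplified view of why indiscriminate caching breaks: a shared cache can only
# safely reuse a response for GETs that the origin hasn't marked private or
# uncacheable and whose Expires value is still in the future. (Stephen's point
# stands: a GET with side effects looks identical on the wire, and the cache
# cannot detect it from headers alone.)
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def is_cacheable(method, headers):
    if method != "GET":
        return False
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "private" in cc:
        return False
    expires = headers.get("Expires")
    if expires:
        try:
            if parsedate_to_datetime(expires) <= datetime.now(timezone.utc):
                return False   # past or faulty Expires defeats caching entirely
        except (TypeError, ValueError):
            return False
    return True

print(is_cacheable("GET", {"Expires": "Thu, 01 Dec 2094 16:00:00 GMT"}))  # True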
It's not that hard a problem to get on top of. Caching, unfortunately, continues to be viewed as anathema by ISP network operators in the US. Strangely enough, the caching technologies aren't a problem with the content -delivery- people.
if we embrace p2p, today's heavy hitting bad users are tomorrow's wonderful local cachers.

randy
Hi Adrian,
I've had a few ISPs out here in Australia indicate interest in a cache that could do the normal stuff (http, rtsp, wma) and some of the p2p stuff (bittorrent especially) with a smattering of QoS/shaping/control - but not cost upwards of USD$100,000 a box. Lots of interest, no commitment.
Here in central Europe we had a caching-friendly environment from 1997 till 2001 due to transit line pricing. A few years ago, prices for upstream connectivity fell, and since then there has been no interest in caching. I've discussed this with several nationwide ISPs in .cz and found these reasons:

a) caching systems are not easy to implement and maintain (another system for configuration)
b) possible conflict with content owners
c) they want to sell as much bandwidth as possible
d) they want to have their network fully transparent

I don't want to judge these answers, just FYI.
It doesn't help (at least in Australia) where the wholesale model of ADSL isn't content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams and then receive each session via L2TP. Fine from an aggregation point of view, but missing the true usefulness of content replication and caching - right at the point where your customers connect in.
Same here.
(Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount of interest from CDN/content origination players but none from ISPs. I'd love to know why ISPs don't view caching as a viable option in today's world and what we could do to make it easier for y'all.)
Please see points (a)-(d). I think there can also be a point (e): some telcos want to play the triple-play game (Internet, telephony and IPTV). They want to move their users back from the Internet to a relatively safe revenue area (television channel distribution via IPTV).

Regards
Michal Krsek
On Mon, 22 Jan 2007, Michal Krsek wrote:
For broad-band ISPs, whose main goal is not to sell or re-sell transit though...
a) caching systems are not easy to implement and maintain (another system for configuration)
b) possible conflict with content owners
c) they want to sell as much bandwidth as possible
d) they want to have their network fully transparent
Only (a) and (b) apply. (d) I am not sure I understand.
On Mon, 22 Jan 2007 04:15:44 -0600 (CST) Gadi Evron <ge@linuxbox.org> wrote:
On Mon, 22 Jan 2007, Michal Krsek wrote:
For broad-band ISPs, whose main goal is not to sell or re-sell transit though...
a) caching systems are not easy to implement and maintain (another system for configuration)
b) possible conflict with content owners
c) they want to sell as much bandwidth as possible
d) they want to have their network fully transparent
Only (a) and (b) apply. (d) I am not sure I understand.
I think (d) means that all the network testing tools show a perfect path, which should isolate the fault to the remote web server itself, yet the website doesn't work because the translucent proxy has a fault.

--
"Sheep are slow and tasty, and therefore must remain constantly alert."
- Bruce Schneier, "Beyond Fear"
On Sun, Jan 21, 2007 at 12:10:11AM +0000, Charlie Allom wrote:
This is a pure example of a problem from the operational front which can be floated to research and the industry, with smarter solutions than port blocking and QoS.
This is what I am interested/scared by.
I don't recall where I read it, perhaps Cringely, but I came across a page the other day that said studying and improving QoS-like technology was ultimately less beneficial than studying and improving bandwidth overall. He used the "ambulance in traffic" analogy and pointed out that QoS gets the ambulance there faster, except in traffic so congested that nobody can get out of the way. By contrast, wider roads mean everybody gets there faster.

--
``Unthinking respect for authority is the greatest enemy of truth.'' -- Albert Einstein
-><- <URL:http://www.subspacefield.org/~travis/>
On Sat, Jan 20, 2007 at 05:55:49PM -0600, Gadi Evron wrote:
Some examples may be: -. Working on establishing new standards and topologies to enable both vendors and providers to adopt them.
Keep this point in mind while reading my below comment.
For now, the P2P folks, who are not in most cases eveel "Internet Pirates", are mostly allied, whether in name or in practice, with illegal activities. The technology isn't illegal and can be quite good for all of us to save quite a bit of bandwidth rather than waste it (quite a bit of redundancy there!).
There's a paper put together by the authors of a download-only "free riding" BitTorrent client, called BitThief. The paper is worth reading:

http://dcg.ethz.ch/publications/hotnets06.pdf
http://dcg.ethz.ch/projects/bitthief/ (client is here)

The part that saddens me the most about this project isn't the complete disregard for the "give back what you take" moral (though that part does sadden me personally), but what this is going to do to the protocol and the clients. Chances are that other torrent client authors are going to see the project as "major defiance" and start implementing things like filtering what client can connect to who based on the client name/ID string (ex. uTorrent, Azureus, MainLine), which as we all know, is going to last maybe 3 weeks.

This in turn will solicit the BitThief authors implementing a feature that allows the client to either spoof its client name or use randomly-generated ones. Rinse, lather, repeat, until everyone is fighting rather than cooperating.

Will the BT protocol be reformed to address this? 50/50 chance.
So, instead of fighting it and seeing it left in the hands of the "pirates" and the privacy folks trying to bypass the Firewall of [insert evil regime here], why not utilize it?
I think Adrian Chadd's mail addresses this indirectly: it's not being utilised because of the bandwidth requirements. ISPs probably don't have an interest in BT caching because of 1) cost of ownership, 2) legal concerns (if an ISP cached a publicly distributed copy of some pirated software, who's then responsible?), and most of all, 3) it's easier to buy a content-sniffing device that rate-limits, or just start hard-limiting users who use "too much bandwidth" (a phrase ISPs use as justification for shutting off customers' connections, but never provide numbers of just what's "too much").

The result of these items has already been shown: BT encryption. I personally know of 3 individuals who have set their client to use encryption only (disabling non-encrypted connection support). For security? Nope -- solely because their ISP uses a rate limiting device.

Bram Cohen's official statement is that using encryption to get around this "is silly" because "not many ISPs are implementing such devices" (maybe not *right now*, Bram, but in the next year or two, they likely will):

http://bramcohen.livejournal.com/29886.html

ISPs will go with implementing the above device *before* implementing something like a BT caching box. Adrian probably knows this too, and chances are it's probably because of the 3 above items I listed.

So my question is this: how exactly do we (as administrators of systems or networks) get companies, managers, and even other administrators, to think differently about solving this?

--
| Jeremy Chadwick                                 jdc at parodius.com |
| Parodius Networking                        http://www.parodius.com/ |
| UNIX Systems Administrator                   Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB  |
On Sat, 20 Jan 2007, Jeremy Chadwick wrote: <snip>
ISPs probably don't have an interest in BT caching because of 1) cost of ownership, 2) legal concerns (if an ISP cached a publicly distributed copy of some pirated software, who's then responsible?),
They cache the web, which has the same chance of being illegal content. <snip>
The result of these items has already been shown: BT encryption. I personally know of 3 individuals who have set their client to use encryption only (disabling non-encrypted connection support). For security? Nope -- solely because their ISP uses a rate limiting device.
Yep. Users will find a way to maintain functionality.
Bram Cohen's official statement is that using encryption to get around this "is silly" because "not many ISPs are implementing such devices" (maybe not *right now*, Bram, but in the next year or two, they likely will):
I don't know of many user ISPs which don't implement them, are you kidding? :) <snip>
So my question is this: how exactly do we (as administrators of systems or networks) get companies, managers, and even other administrators, to think differently about solving this?
Gadi Evron wrote:
On Sat, 20 Jan 2007, Jeremy Chadwick wrote:
<snip>
ISPs probably don't have an interest in BT caching because of 1) cost of ownership, 2) legal concerns (if an ISP cached a publicly distributed copy of some pirated software, who's then responsible?),
They cache the web, which has the same chance of being illegal content. [..]
They do have NNTP "Caches" though, with several Terabytes of storage space and obvious newsgroups like alt.binaries.dvd-r and similar names.

The reason why they don't run "BT Caches" is that the protocol is not made for it. NNTP is made for distribution (albeit not really for 8bit files ;), and the "Cache" (more a wrongly implemented auto-replicating FTP server) is local to the ISP and serves their local users. As such that is only gain. Instead of having their clients use their transits, the data only gets pulled over once and all their clients get it.

For BT though, you either have to do tricks at L7 involving sniffing the lines and thus breaking end-to-end; or you end up setting up a huge BT client which automatically mirrors all the torrents on the planet and hope that only your local users use it, which most likely is not the case as most BT clients don't do network-close downloading. As such NNTP is profit, BT is not. Also, NNTP access is a service which you can sell. There exist a large number of NNTP-only services and even ISPs that have as a major selling point: access to their newsserver.

Fun detail about NNTP: most companies publish how much traffic they do and even in which alt.binaries.* group the most crap is flowing. Still it seems totally legal to have those several Terabytes of data and make them available, even with the obvious names that the messages carry. The most named reason: It is a "Cache" and "we don't put the data on it, it is automatic"... yup, alt.binaries.dvd.movies whatever is really not so obvious ;)

Of course, replace BT with most kinds of P2P network in the above. There are some P2P nets that try to induce some network topology though, so that you will be downloading from that person next door instead of that guy on a 56k in Timbuktu while you are sitting on a 1Gbit NREN connect ;)

But anyway, what I am wondering is why ISP folks are thinking so badly of this. Do you guys want:
a) customers that do not use your network
b) customers that do use the network
Probably it is a), because of the cash. But that is strange: why sell people an 'unlimited' account when you don't want them to use it in the first place? Also, if your network is not made to handle customers of type b), then upgrade your network. Clearly your customers love using it, thus more customers will follow if you keep it up and running. No better advertisement than the neighbor saying that it is great ;)

Greets,
Jeroen
Thus spake "Jeremy Chadwick" <nanog@jdc.parodius.com>
Chances are that other torrent client authors are going to see [BitThief] as "major defiance" and start implementing things like filtering what client can connect to who based on the client name/ID string (ex. uTorrent, Azureus, MainLine), which as we all know, is going to last maybe 3 weeks.
BitComet has virtually dropped off the face of the 'net since the authors decided to not honor the "private" flag. Even public trackers _that do not serve private torrents_ frequently block it out of community solidarity. Note that the blocking hasn't been incorporated into clients, because it's largely unnecessary.
This in turn will solicit the BitThief authors implementing a feature that allows the client to either spoof its client name or use randomly-generated ones. Rinse, lather, repeat, until everyone is fighting rather than cooperating.
Will the BT protocol be reformed to address this? 50/50 chance.
There are lots of smart folks working on improving the tit-for-tat mechanism, and I bet the algorithm (but _not_ the protocol) implemented in popular clients will be tuned to adjust for freeloaders over time. However, the vast majority of people are going to use clients that implement things as intended because (a) it's simpler, and (b) it performs better. Freeloading does work, but it takes several times as long to download files even with the existing, easily-exploited mechanisms. Note that all it takes to turn any standard client into a BitThief is tuning a few of the easily-accessible parameters (e.g. max connections, connection rate, and upload rate). As many folks have found out with various P2P clients over the years, doing so really hurts you in practice, but you can freeload anything you want if you have patience. This is not particularly novel research; it just quantifies common knowledge.
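[A rough sketch of the tit-for-tat choking idea being discussed, in Python; the rate bookkeeping and the slot counts are simplified assumptions and do not reproduce any particular client's exact algorithm:]

# Roughly how tit-for-tat works in a BT-style client: upload slots ("unchokes")
# go to the peers that have recently uploaded the most to us, plus one rotating
# "optimistic" unchoke so newcomers (and freeloaders, briefly) get a chance.
import random

def choose_unchoked(peers, recv_rate, regular_slots=3):
    # peers: list of peer IDs; recv_rate: dict of peer ID -> bytes/s received from that peer.
    by_contribution = sorted(peers, key=lambda p: recv_rate.get(p, 0), reverse=True)
    unchoked = by_contribution[:regular_slots]          # reward reciprocating peers
    others = [p for p in peers if p not in unchoked]
    if others:
        unchoked.append(random.choice(others))          # optimistic unchoke, rotated periodically
    return unchoked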
The result of these items has already been shown: BT encryption. I personally know of 3 individuals who have set their client to use encryption only (disabling non-encrypted connection support). For security? Nope -- solely because their ISP uses a rate limiting device.
Bram Cohen's official statement is that using encryption to get around this "is silly" because "not many ISPs are implementing such devices" (maybe not *right now*, Bram, but in the next year or two, they likely will):
Bram is delusional; few ISPs these days _don't_ implement rate-limiting for BT traffic. And, in response, nearly every client implements encryption to get around it. The root problem is ISPs aren't trying to solve the problem the right way -- they're seeing BT taking up huge amounts of BW and are trying to stop that, instead of trying to divert that traffic so that it costs them less to deliver.

(My ISP doesn't limit BT, but I've talked with their tech support folks and the response was that if I use "excessive" bandwidth they'll rate-limit my entire port regardless of protocol. They gave me a ballpark of what "excessive" means to them, I set my client below that level, and I've never had a problem. This works better for me since all my non-BT traffic isn't competing for limited port bandwidth, and it works better for them since my BT traffic is unencrypted and easy to de-prioritize -- but they don't limit it per se, just mark it to be dropped first during congestion, which is fair. Everyone wins.)

S

Stephen Sprunk        "God does not play dice."  --Albert Einstein
CCIE #3723            "God is an inveterate gambler, and He throws the
K5SSS                 dice at every possible opportunity."  --Stephen Hawking
On Jan 20, 2007, at 11:55 AM, Randy Bush wrote:
the question to me is whether isps and end user borders (universities, large enterprises, ...) will learn to embrace this as opposed to fighting it; i.e. find a business model that embraces delivering what the customer wants as opposed to whinging and warring against it.
I believe that it will end up becoming the norm, as it's a form of cost-shifting from content providers to NSPs and end-users - but for it to really take off, the tension between content-providers and their customers (i.e., crippling DRM) needs to be resolved.

There have been some experiments in U.S. universities over the last couple of years in which private music-sharing services have been run by the universities themselves, and the students pay a fee for access to said music. I haven't seen any studies which provide a clue as to whether or not these experiments have been successful (for some value of 'successful'); my suspicion is that crippling DRM combined with a lack of variety may have been 'features' of these systems, which is not a good test.

OTOH, emusic.com seem to be going great guns with non-DRMed .mp3s and a subscription model; perhaps (an official) P2P distribution might be a logical next step for a service of this type. I think it would be a very interesting experiment.

-----------------------------------------------------------------------
Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice

Technology is legislation. -- Karl Schroeder
On Sat, 20 Jan 2007, Roland Dobbins wrote:
On Jan 20, 2007, at 11:55 AM, Randy Bush wrote:
the question to me is whether isps and end user borders (universities, large enterprises, ...) will learn to embrace this as opposed to fighting it; i.e. find a business model that embraces delivering what the customer wants as opposed to whinging and warring against it.
I believe that it will end up becoming the norm, as it's a form of cost-shifting from content providers to NSPs and end-users - but for it to really take off, the tension between content-providers and their customers (i.e., crippling DRM) needs to be resolved.
There have been some experiments in U.S. universities over the last couple of years in which private music-sharing services have been run by the universities themselves, and the students pay a fee for access to said music. I haven't seen any studies which provide a clue as to whether or not these experiments have been successful (for some value of 'successful'); my suspicion is that crippling DRM combined with a lack of variety may have been 'features' of these systems, which is not a good test.
OTOH, emusic.com seem to be going great guns with non-DRMed .mp3s and a subscription model; perhaps (an official) P2P distribution might be a logical next step for a service of this type. I think it would be a very interesting experiment.
Won't really happen as long as they stick to a business model which is over a hundred years old. I would strongly suggest people with interest in this area watch Lawrence Lessig's lecture from CCC:

http://dewy.fem.tu-ilmenau.de/CCC/23C3/video/23C3-1760-en-on_free.m4v

But I would like to stay on-track and discuss how we can help ISPs change from their end, considering both operational and business needs. Do you believe making such a case study public will help? Do you believe it is the ISP itself which should become the content provider rather than a bandwidth service?

Gadi.
* Rodrick Brown:
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
s/BitTorrent/porn/, and we've been there all along.

I think the real issue here is that Google's video traffic does *not* clog the network, but would be distributed through private networks (sometimes Google's own, or through another company's CDN) and injected into the Internet very close to the consumer. No one is able to charge for that traffic because if they did, Google would simply inject it someplace else. At best, one of your peerings would go out of balance, or at worst, *you* would have to pay for Google's traffic.
Hello; On Jan 20, 2007, at 1:37 PM, Rodrick Brown wrote:
On 1/20/07, Mark Boolootian <booloo@ucsc.edu> wrote:
Cringely has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
The following comment has to be one of the most important comments in the entire article and it's a bit disturbing.
"Right now somewhat more than half of all Internet bandwidth is being used for BitTorrent traffic, which is mainly video. Yet if you surveyed your neighbors you'd find that few of them are BitTorrent users. Less than 5 percent of all Internet users are presently consuming more than 50 percent of all bandwidth."
Those sorts of percentages are common in Pareto distributions (AKA Zipf's law AKA "the 80-20 rule"). With the Zipf exponent typical of web usage and video watching, I would predict something closer to 10% of the users consuming 50% of the usage, but this estimate is not that unrealistic.

I would predict that these sorts of distributions will continue as long as humans are the primary consumers of bandwidth.

Regards
Marshall
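[A quick back-of-the-envelope check of Marshall's figure, in Python, assuming per-user traffic follows a Zipf law usage(rank) ~ rank^-s; the user count and the exponent values are assumptions for illustration, not numbers from the article:]

# Share of total traffic generated by the top fraction of users when per-user
# volume follows a Zipf law. s = 1.0 gives roughly 80% for the top 10%, while
# an exponent nearer 0.7 reproduces the "10% of users -> 50% of usage" estimate.
def top_share(n_users=100_000, s=1.0, top_fraction=0.10):
    usage = [rank ** -s for rank in range(1, n_users + 1)]
    return sum(usage[: int(n_users * top_fraction)]) / sum(usage)

for s in (1.0, 0.7):
    print(f"s={s}: top 10% of users carry ~{top_share(s=s):.0%} of traffic")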
Marshall wrote: Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA "the 80-20 rule"). With the Zipf's exponent typical of web usage and video watching, I would predict something closer to 10% of the users consuming 50% of the usage, but this estimate is not that unrealistic.
I would predict that these sorts of distributions will continue as long as humans are the primary consumers of bandwidth.
Regards Marshall
That's until the spambots inherit the world, right?
On Sat, 20 Jan 2007, Alexander Harrowell wrote:
Marshall wrote: Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA "the 80-20 rule"). With the Zipf's exponent typical of web usage and video watching, I would predict something closer to 10% of the users consuming 50% of the usage, but this estimate is not that unrealistic.
I would predict that these sorts of distributions will continue as long as humans are the primary consumers of bandwidth.
Regards Marshall
That's until the spambots inherit the world, right?
That is if you see a distinction, metaphorical or physical, between spambots and real users.
On Sat, 20 Jan 2007 17:38:06 -0600 (CST) Gadi Evron <ge@linuxbox.org> wrote:
On Sat, 20 Jan 2007, Alexander Harrowell wrote:
Marshall wrote: Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA "the 80-20 rule"). With the Zipf's exponent typical of web usage and video watching, I would predict something closer to 10% of the users consuming 50% of the usage, but this estimate is not that unrealistic.
I would predict that these sorts of distributions will continue as long as humans are the primary consumers of bandwidth.
Regards Marshall
That's until the spambots inherit the world, right?
That is if you see a distinction, metaphorical or physical, between spambots and real users.
"On the Internet, Nobody Knows You're a Dog" (Peter Steiner, The New Yorker) Woof woof, Mark. -- "Sheep are slow and tasty, and therefore must remain constantly alert." - Bruce Schneier, "Beyond Fear"
On Jan 20, 2007, at 4:36 PM, Alexander Harrowell wrote:
Marshall wrote: Those sorts of percentages are common in Pareto distributions (AKA Zipf's law AKA "the 80-20 rule"). With the Zipf's exponent typical of web usage and video watching, I would predict something closer to 10% of the users consuming 50% of the usage, but this estimate is not that unrealistic.
I would predict that these sorts of distributions will continue as long as humans are the primary consumers of bandwidth.
Regards Marshall
That's until the spambots inherit the world, right?
I tend to take the long view.
On Sat, 20 Jan 2007, Marshall Eubanks wrote:
On Jan 20, 2007, at 4:36 PM, Alexander Harrowell wrote:
Marshall wrote: Those sorts of percentages are common in Pareto distributions (AKA Zipf's law AKA "the 80-20 rule"). With the Zipf's exponent typical of web usage and video watching, I would predict something closer to 10% of the users consuming 50% of the usage, but this estimate is not that unrealistic.
I would predict that these sorts of distributions will continue as long as humans are the primary consumers of bandwidth.
Regards Marshall
That's until the spambots inherit the world, right?
I tend to take the long view.
sensor nets anyone?

research: http://research.cens.ucla.edu/portal/page?_pageid=59,43783&_dad=portal&_schema=PORTAL
business: http://www.campbellsci.com/bridge-monitoring
investment: http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339

global alerts? disaster management? physical world traffic engineering?
Lucy Lynch wrote:
sensor nets anyone?
On that subject, the current IP protocols are quite bad at delivering asynchronous notifications to large audiences. Is anyone aware of developments or research toward making this work better? (overlays, multicast, etc.) Pete
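For what it's worth, the mechanism Pete mentions is straightforward at the socket level; the hard part is getting multicast forwarded between domains. A minimal sketch (Python; the group address and port are arbitrary placeholders, and it assumes the network actually routes multicast between sender and listeners):

    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5007   # assumed administratively scoped group/port

    def announce(payload: bytes) -> None:
        """One datagram from the sender reaches every subscribed listener."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
        s.sendto(payload, (GROUP, PORT))

    def listen() -> None:
        """Join the group and print whatever notifications arrive."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, src = s.recvfrom(1500)
            print(src, data)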
research http://research.cens.ucla.edu/portal/page?_pageid=59,43783&_dad=portal&_schema=PORTAL
business http://www.campbellsci.com/bridge-monitoring
investment http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339
global alerts? disaster management? physical world traffic engineering?
On Sun, Jan 21, 2007 at 06:41:19AM -0800, Lucy Lynch wrote:
sensor nets anyone?
The bridge-monitoring stuff sounds a lot like SCADA. //drift IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US. Presumably this would be via some slow signalling protocol over the power lines themselves (slow so that you don't trash the entire spectrum by signalling in the range where power lines are good antennas - i.e. 30MHz or so). The response was "yeah, well, maybe with IPv6". -- ``Unthinking respect for authority is the greatest enemy of truth.'' -- Albert Einstein -><- <URL:http://www.subspacefield.org/~travis/>
On Mon, 22 Jan 2007, Travis H. wrote:
On Sun, Jan 21, 2007 at 06:41:19AM -0800, Lucy Lynch wrote:
sensor nets anyone?
The bridge-monitoring stuff sounds a lot like SCADA.
//drift
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US. Presumably this would be via some slow signalling protocol over the power lines themselves (slow so that you don't trash the entire spectrum by signalling in the range where power lines are good antennas - i.e. 30MHz or so).
The response was "yeah, well, maybe with IPv6".
I've heard that's pretty close to how IPv6 ends up being used as far as current public production installations go (not counting those done for research, etc). For example, apparently some railroad in Europe set up IPv6 for use in its rail sensors. Then we also recently heard of a large ISP using IPv6 to create a management subnet for all their network equipment, etc. -- William Leibzon Elan Networks william@elan.net
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US....
The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters. I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos. Jim Shankland
* nanog@shankland.org (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US....
The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
It's not nonsense. Those elements need to be unique. RFC1918 isn't unique enough (think what happens during a corporate merger). -- Niels.
One interesting point - they plan to use Broadband over Power Line (BPL) technology to do this. Meter monitoring is the killer app for BPL, which can then also be used for home broadband. Meter reading is one of the top costs and trickiest problems for utilities. - Dan On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:
* nanog@shankland.org (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US.... The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
It's not nonsense. Those elements need to be unique. RFC1918 isn't unique enough (think what happens during a corporate merger).
-- Niels.
On 1/22/07, Daniel Golding <dgolding@t1r.com> wrote:
One interesting point - they plan to use Broadband over Power Line (BPL) technology to do this. Meter monitoring is the killer app for BPL, which can then also be used for home broadband. Meter reading is one of the top costs and trickiest problems for utilities.
- Dan
On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:
Why don't utilities strike deals with cellular providers to push data back to HQ over the cellular network at low utilization times (how many people use GPRS in the dead of night?). -brandon
Why don't utilities strike deals with cellular providers to push data back to
HQ over the cellular network at low utilization times (how many people use GPRS in the dead of night?).
-brandon
Enron did this with SkyTel paging in California. Or rather they wanted to do it, couldn't hack it, so used the bulk-bought pager airtime as a perk.
On Tue, 23 Jan 2007 10:18:09 CST, Brandon Galbraith said:
Why don't utilities strike deals with cellular providers to push data back to HQ over the cellular network at low utilization times (how many people use GPRS in the dead of night?).
Especially in rural areas (where physically reading meters sucks the most due to long inter-house distances), you have no guarantee of good cellular coverage. The electric company *can* however assume they have copper connectivity to the meter by definition....
Especially in rural areas (where physically reading meters sucks the most due to long inter-house distances), you have no guarantee of good cellular coverage.
The electric company *can* however assume they have copper connectivity to the meter by definition.... Doesn't have to be copper- it could be aluminum :)
-Don
Virginia Power replaced our meter over the summer with a new one that has wireless on it. The meter reader just drives a truck past the houses and grabs the data without him/her ever leaving the truck. I have no idea what protocol they're using, or if it's even remotely secure. Jamie Bowden -- "It was half way to Rivendell when the drugs began to take hold" Hunter S Tolkien "Fear and Loathing in Barad Dur" Iain Bowen <alaric@alaric.org.uk>
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Valdis.Kletnieks@vt.edu Sent: Tuesday, January 23, 2007 11:44 AM To: Brandon Galbraith Cc: Daniel Golding; Niels Bakker; nanog@merit.edu Subject: Re: Google wants to be your Internet
On Tue, 23 Jan 2007 10:18:09 CST, Brandon Galbraith said:
Why don't utilities strike deals with cellular providers to push data back to HQ over the cellular network at low utilization times (how many people use GPRS in the dead of night?).
Especially in rural areas (where physically reading meters sucks the most due to long inter-house distances), you have no guarantee of good cellular coverage.
The electric company *can* however assume they have copper connectivity to the meter by definition....
On (2007-01-23 12:25 -0500), Jamie Bowden wrote:
Virginia Power replaced our meter over the summer with a new one that has wireless on it. The meter reader just drives a truck past the houses and grabs the data without him/her ever leaving the truck. I have no idea what protocol they're using, or if it's even remotely secure.
We have it here too; a few times there have been articles in the newspaper about car alarms going off along the street when the meters are read :). -- ++ytti
Our REA has been reading the meter via the copper running to our house for several years now. Took them less than 2 years to realize a savings. (And since it's a co-op, that means the price goes down :) ) Valdis.Kletnieks@vt.edu wrote:
On Tue, 23 Jan 2007 10:18:09 CST, Brandon Galbraith said:
Why don't utilities strike deals with cellular providers to push data back to HQ over the cellular network at low utilization times (how many people use GPRS in the dead of night?).
Especially in rural areas (where physically reading meters sucks the most due to long inter-house distances), you have no guarantee of good cellular coverage.
The electric company *can* however assume they have copper connectivity to the meter by definition....
Dan, there's one very big assumption in your statement: that the cost of BPL for metering is economical or workable in the regulatory model. Forget value-added services for a moment; the cost often cannot be burdened on the rate payer (regulatory constraint). So, funding this sort of effort is non-trivial. Best regards, Christian -- Sent from my BlackBerry.
-----Original Message----- From: Daniel Golding <dgolding@t1r.com> Date: Mon, 22 Jan 2007 18:52:45 To: Niels Bakker <niels=nanog@bakker.net> Cc: nanog@merit.edu Subject: Re: Google wants to be your Internet
One interesting point - they plan to use Broadband over Power Line (BPL) technology to do this. Meter monitoring is the killer app for BPL, which can then also be used for home broadband. Meter reading is one of the top costs and trickiest problems for utilities. - Dan
On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:
* nanog@shankland.org (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US.... The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
It's not nonsense. Those elements need to be unique. RFC1918 isn't unique enough (think what happens during a corporate merger). -- Niels.
On Mon, 22 Jan 2007, Daniel Golding wrote:
One interesting point - they plan to use Broadband over Power Line (BPL) technology to do this. Meter monitoring is the killer app for BPL, which can then also be used for home broadband, Meter reading is one of the top costs and trickiest problems for utilities.
Why is IP required, and even if you used IP for transport why must the meter identification be based on an IP address? If meters only report information, they don't need a unique transport address and could put the meter identifier in the application data. Even if the intent is to include additional controls, e.g. cycle air conditioners during peak periods, you still don't need to use IP or unique IP transport addresses. Just because you have the hammer called IP, doesn't mean you must use it on everything.
Why is IP required, and even if you used IP for transport why must the meter identification be based on an IP address? If meters only report information, they don't need a unique transport address and could put the meter identifier in the application data.
Even if the intent is to include additional controls, e.g. cycle air conditioners during peak periods, you still don't need to use IP or unique IP transport addresses.
Just because you have the hammer called IP, doesn't mean you must use it on everything.
Exactly. A meter should be able to connect over an available transport method, and be identifiable via a serial number, not an IP. It may need to grab a DHCP address of some sort (or whatever the moniker is for the transport available), but in the end its unique serial number should be used to identify itself.
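A minimal sketch of that separation, assuming nothing more than UDP and a made-up collector address: the serial number rides in the application payload, so whatever address DHCP (or any other transport) hands the meter never matters for identification.

    import json
    import socket
    import time

    COLLECTOR = ("198.51.100.10", 5140)   # assumed collector; documentation address
    METER_SERIAL = "METER-00012345"       # identity lives in the application data

    def send_reading(kwh: float) -> None:
        """Report a reading; the source IP is just transport, not identity."""
        msg = json.dumps({"serial": METER_SERIAL, "kwh": kwh, "ts": int(time.time())}).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(msg, COLLECTOR)

    if __name__ == "__main__":
        send_reading(12345.6)

If the utility splits or merges, only the collector address changes; the meters keep reporting the same serial numbers.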
On 23 Jan 2007, at 16:48, Sean Donelan wrote:
Why is IP required,
Because using something that works so well means less wheel reinvention.
and even if you used IP for transport why must the meter identification be based on an IP address?
Identification via IP address (exclusively) is bad. I'd argue that if you are looking to check the meter for consumption data and for problems, a store-and-forward message system which didn't depend on always-on connectivity would preserve enough address space to make it viable as well. -a
Hello; On Jan 22, 2007, at 6:52 PM, Daniel Golding wrote:
One interesting point - they plan to use Broadband over Power Line (BPL) technology to do this. Meter monitoring is the killer app for BPL, which can then also be used for home broadband, Meter reading is one of the top costs and trickiest problems for utilities.
If they control the network, why is doing this with IPv6 out of the question ? It seems like a good fit to me. Regards Marshall
- Dan
On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:
* nanog@shankland.org (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US.... The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
It's not nonsense. Those elements need to be unique. RFC1918 isn't unique enough (think what happens during a corporate merger).
-- Niels.
On Jan 22, 2007, at 12:15 PM, Jim Shankland wrote:
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
Jim Shankland
I also, because I have an important financial proposal to discuss with your electrical meter! Meter L456372-232, attached to the residence of Hafisat Bamaiya, wife of former Nigerian Defense Minister General Musa Bamaiya
Jim Shankland wrote:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US....
The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
Ah, cool, an advocate of NAT. Or didn't you want to say that "one can just make their own IPv4 address space and use that"? Remember that the machines checking the billing most likely have a global address and RFC1918 ain't nice. Barring getting address space, IPv4 and IPv6 will both do fine for it.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
1) You are on vacation and want to check if you actually turned on that mini-nuke plant in your garden, so that you will retain some cash on your credit card so that you can still come home. 2) You are still on vacation and want to check if your kids are not overusing electrical power instead of being 'green' for the environment. 3) You are already at the North Pole, Lagos was boring after all, and you want to check about that email you received from the electrical company, to see where the power usage was actually so high. You notice that the power plug in the garden is being used a lot, look at the webcam there and notice that your neighbor is using your power. Oh, only one case eh? :) But I guess it is nonsense. Greets, Jeroen
On Jan 22, 2007, at 9:38 AM, Jeroen Massar wrote:
But I guess it is nonsense.
This is what ssh tunnels and/or VPN are for, IMHO. It's perfectly legitimate to construct private networks (DCN/OOB nets, anyone? How about that IV flow-control monitor which determines how much antibiotics you're getting per hour after your open-heart surgery?) for purposes which aren't suited to direct connectivity to/from anyone on the global Internet. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
Roland Dobbins wrote:
This is what ssh tunnels and/or VPN are for, IMHO. It's perfectly legitimate to construct private networks (DCN/OOB nets, anyone? How about that IV flow-control monitor which determines how much antibiotics you're getting per hour after your open-heart surgery?) for purposes which aren't suited to direct connectivity to/from anyone on the global Internet.
-----------------------------------------------------------------------
Can this thread now be merged with the Cacti thread and made into "Using Cacti for Monitoring your Heart and IV's While Using Your Google Toolbar"? -- ==================================================== J. Oquendo http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x1383A743 sil . infiltrated @ net http://www.infiltrated.net The happiness of society is the end of government. John Adams
Roland Dobbins wrote:
On Jan 22, 2007, at 9:38 AM, Jeroen Massar wrote:
But I guess it is nonsense.
This is what ssh tunnels and/or VPN are for, IMHO
[..] Of course, for protecting them you should use that and firewalls and other security measures that one deems necessary. But which address space do you put in the network behind the VPN? RFC1918!? Oh, already using that on the DSL link to where you are VPN'ing in from..... oopsy ;) That is the case for globally unique addresses and the reason why banks that use RFC1918 don't like it when they need to merge etc etc etc... Fortunately, for IPv6 we have ULAs (fc00::/7), which solve that problem. /me donates coffee around. Greets, Jeroen
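For reference, an RFC 4193 ULA prefix is fd00::/8 plus a 40-bit pseudo-random Global ID, and that randomness is what makes two independently numbered sites unlikely to collide if they ever interconnect. A quick sketch (Python; RFC 4193 actually recommends deriving the Global ID from a timestamp and an EUI-64 via SHA-1, so plain urandom here is a simplification):

    import os

    def make_ula_prefix() -> str:
        """Generate a locally assigned IPv6 ULA /48 per RFC 4193 (L bit set)."""
        global_id = os.urandom(5)              # 40-bit pseudo-random Global ID
        prefix = bytes([0xfd]) + global_id     # fd00::/8 + Global ID = 48 bits
        groups = [prefix[i:i + 2].hex() for i in range(0, 6, 2)]
        return ":".join(groups) + "::/48"

    print(make_ula_prefix())                   # e.g. fd3b:91c2:7a05::/48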
On Jan 22, 2007, at 10:49 AM, Jeroen Massar wrote:
But which address space do you put in the network behind the VPN?
RFC1918!? Oh, already using that on the DSL link to where you are VPN'ing in from..... oopsy ;)
Actually, NBD, because you can handle that with a VPN client which does a virtual adaptor-type of deal and overlapping address space doesn't matter, because once you're in the tunnel, you're not sending/receiving outside of the tunnel. Port-forwarding and NAT (ugly, but people do it) can apply, too.
That is the case for globally unique addresses and the reason why banks that use RFC1918 don't like it when they need to merge etc etc etc...
Sure, and then you get into double-NATting and who redistributes what routes into whose IGP and all that kind of jazz (it's a big problem on extranet-type connections, too). To be clear, all I was saying is that the subsidiary point that there are things which don't belong on the global Internet is a valid one, and entirely separate from any discussions of universal uniqueness in terms of address-space, as there are (ugly, non-scalable, brittle, but available) ways to work around such problems, in many cases. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
[ 2-in-1, before I hit the 'too many flames posted' threshold ;) ] Roland Dobbins wrote:
On Jan 22, 2007, at 10:49 AM, Jeroen Massar wrote:
But which address space do you put in the network behind the VPN?
RFC1918!? Oh, already using that on the DSL link to where you are VPN'ing in from..... oopsy ;)
Actually, NBD, because you can handle that with a VPN client which does a virtual adaptor-type of deal and overlapping address space doesn't matter, because once you're in the tunnel, you're not sending/receiving outside of the tunnel. Port-forwarding and NAT (ugly, but people do it) can apply, too.
How do you handle 192.168.1.1 talking to 192.168.1.1, oh I do mean a different one. Or do you double-reverse-ultra-NAT the packets!? :) One doesn't want to solve problems that way. That is only seen as creating problems. Good for a consultant's wallet, but not good for the companies using it, and not good for the programmer who has to work around it in all his applications.
That is the case for globally unique addresses and the reason why banks that use RFC1918 don't like it when they need to merge etc etc etc...
Sure, and then you get into double-NATting and who redistributes what routes into who's IGP and all that kind of jazz (it's a big problem on extranet-type connections, too). To be clear, all I was saying is that the subsidiary point that there are things which don't belong on the global Internet is a valid one
One can perfectly well request address space from any of the RIRs and never announce or connect it to the Internet. One can even give that as a reason ("I require globally unique address space") and you will receive it from the RIR in question. One doesn't need to use globally unique address space on the "Internet"; it is perfectly valid to use it in a disconnected manner. Simple example which nicely works: 9.0.0.0/8. That network is definitely used, but not to be found on the Internet. Also, how many military and bank networks are announced on the Internet? If they are announced, they most likely are nicely firewalled away or actually disconnected by all means possible from the Internet and just used as a nice virus trap, as those silly viruses do scan them :)
and entirely separate from any discussions of universal uniqueness in terms of address-space, as there are (ugly, non-scalable, brittle, but available) ways to work around such problems, in many cases.
You actually mean that you love to create all kinds of weird solutions to solve a problem that could easily have been avoided in the first place!? I don't think I would like to have your job doing those dirty things. With IPv6, and ULAs especially, those mistakes fortunately won't happen that quickly any more. Saves you, me, and a load of other people a lot of headaches. Maybe you won't be able to consult for them any more and make quite some money off them; well, that is too bad. And now for some asbestos action, short summary: a) use global addresses for everything, b) use proper ACLs, c) toys exist that some people clearly don't know about yet ;) No further technical content below, except for a reply to a flame. (But don't miss out on the pdf mentioned for the toys ;) Jim Shankland wrote:
In response to my saying:
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
"Jay R. Ashworth" <jra@baylink.com> responds, concisely:
It doesn't, and it shouldn't. That does *not* mean it should not have a globally unique ( != globally routable) IP address.
and Jeroen Massar <jeroen@unfix.org> presents several hypothetical scenarios.
Are you trying to say that I make things up? Neat, let's counter that: http://www.sixxs.net/presentations/SwiNOG11-DeployingIPv6.pdf (yes, I know, a large slideset; unfortunately alexandria.paf.se, where the pix came from, is not available anymore and I can't find another source) Slides 50-57 show some nice toys which you can get in the Asian region already. This is thus far from "hypothetical". Note the IPv6 address on that hydro controller's LCD; it can be used to water your plants. Yes, indeed, when that show was happening, it was globally addressable, just like the camera and all the other toys there. And yes, I gave the plant water using telnet :) That you don't have it, that you didn't see it yet, doesn't mean it does not exist.
Note that the original goal was for electrical companies to monitor electrical meters. Jeroen brings up backyard mini-nuke plants, seeing how much the power plug in the garden is being used, etc. These may all be desirable goals, but they represent considerable mission creep from the originally stated goal.
What is your point with writing this section? Trying to explain that it does not conform to your exact wishes? Or do you just want to type my name a couple of times to practice it? I know it is as difficult to pronounce as to type it ;) Dunno what I should read in it, it doesn't have any technical content or arguments for any of your points.
None of Jeroen's applications requires end-to-end, packet-level access to the individual devices in Jeroen's future (I assume) home.
Using my name twice in a sentence, I must be important to target. Actually those applications DO require end-to-end, just like anything else. How would you address them otherwise? If they are not addressable, how do you communicate with them?
You can certainly argue that packet-level connectivity is better, easier to engineer, scales better, etc., etc.; but it is not *required*.
Thus you do actually agree with it, but just want a strange workaround. I fully understand that selling middle boxes for all kinds of things is a lot of fun and can earn people lots of cash, but some people just want to stick with one protocol at a time, please. Just an example, to keep it a bit technical and at least a bit on subject: using SNMP to monitor the power meters at all your customers; you can thus use Cacti or any other standard tool you are already using for this. Another nice example in this area is IPFIX, which is actually MADE for doing that. Oh, note that I had an IPFIX meter showing the number of cans and other things dispensed from a vending machine, so yes, it already exists; it is not hypothetical. Or did you want to create a middlebox for that? How are you going to address those middle boxes from your computer?
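To make the SNMP example concrete: if each meter is reachable at its own address, the poller is nothing special. A hedged sketch, assuming the net-snmp command-line tools are installed, a made-up ULA-addressed meter, and sysUpTime standing in for whatever vendor OID a real meter would expose:

    import subprocess

    METER = "udp6:[fd3b:91c2:7a05::17]"   # hypothetical meter address
    OID = "1.3.6.1.2.1.1.3.0"             # sysUpTime.0, placeholder for a meter OID

    result = subprocess.run(
        ["snmpget", "-v2c", "-c", "public", METER, OID],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())

From there it feeds the same Cacti/RRD pipeline already used for routers and switches.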
In fact, there are sound engineering arguments against packet-level access: since we've dragged in the backyard nuke plant, consider what happens when everybody has a backyard mini-nuke, with control software written by Linksys, and it turns out that sending it a certain kind of malformed packet can cause it to melt down ....
Simple hint: firewall. Next to that, as NANOG is a U.S. thing: sue them. Also, if a malformed packet can cause that device to melt down, then I would not be surprised if the other way of accessing that device (the one you propose and have to come up with out of thin air) would also contain a similar bug once implemented. At least the advantage of IP is that it has already been tested by a large number of implementations and people around the world, so those kinds of bugs are much less likely to occur in the first place. Has your new addressing scheme been tested that well? As it is addressing, is it 32 or 128 bits? 64 bits, you say, conforming to EUI-64 specs?
No matter. Reasonable people can disagree on the question of whether every networkable device benefits from being globally, uniquely addressable.
Indeed, because unreasonable people only think of themselves and don't see the broader scope of things, or that tiny projects suddenly become large. But you will disagree with that, because you are reasonable. Now, if you had a proper technical argument against it, I would become less unreasonable, as then you would have something to reason with against my proper technical arguments.
The burden on the proponents is higher than that: there are *costs* associated with such an architecture, and the proponents of globally unique addressing need to show not only that it has benefits, but that the benefits exceed the costs.
I agree with this completely, especially when you have to design, implement, and test a completely new addressing mechanism for addressing all those devices, build middle boxes to let them actually talk to the users/tools/devices that want to communicate with them, and a lot more; that will cost a lot of money. Or did I misread your sentence there? Sorry :) It will make companies happy of course, but will it make users happy? Note that you can get sensors that speak IP for about 1 EUR each, if it isn't less than that already.
Coming full circle, the original assertion was that IPv6 was required in order for electric companies to use IP to monitor US electric meters. That assertion is false, and no amount of hand-waving about backyard nuke plants will make it true.
As you are clearly targeting this email only at me and not at others: I never said that an electrical company would require IPv6. They can use IPv4 perfectly fine too. The problem with IPv4, though, is that there are only 2^32 addresses, and that is not enough for most companies that are in this business. As such, using IPv6, which has a vastly larger address space, would simply solve that problem and still allow them to use the common IP tools they have already invested in.
The history of IPv6 has been that it keeps receding into the future as people's use of IPv4 adapts enough to make the current benefit of switching to IPv6 smaller than the cost to do so.
You mean that your usage of the Internet has been limited more and more to a sandbox from which you are not able to communicate unless you use strange hacks? Sorry, but that really is your problem if you desire that. Quite some other people that use the Internet actually do want to communicate with other people and devices on the Internet without having to install all kinds of hacks to get over and out of their sandboxes. Doing it without the hacks makes that possible. IPv6 makes that possible. To make it clear: The main benefit of IPv6 is a large amount of addressable endpoints.
Perhaps after a decade or so, we're nearing the end of that road. Or perhaps, as F. Scott Fitzgerald once wrote about IPv6, it is: [..]
"Francis Scott Key Fitzgerald (September 24, 1896 – December 21, 1940)" Sorry, but fat chance that he wrote anything about IPv6 let alone IPv4. He did write a couple of great books though, and one can't avoid liking the music he made. Greets, Jeroen PS: Some people actually have a desire to look out for the next 100 years and what will be possible, they actually dream about cool toys, and freedom, especially freedom on the Internet and on the rest of the planet, restricting addressing is not freedom. PPS: try to find out which IPv6 address can be used to water the plants in my home :) [small hint: it is registered in DNS]
On Jan 23, 2007, at 11:51 AM, Jeroen Massar wrote:
a) use global addresses for everything,
Everything which needs to be accessed globally, sure. But I don't see this as a hard and fast requirement, it's up to the user based upon his projected use.
b) use proper ACLs,
Of course.
c) toys exist that some people clearly don't know about yet ;)
Indeed. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
In response to my saying:
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
"Jay R. Ashworth" <jra@baylink.com> responds, concisely:
It doesn't, and it shouldn't. That does *not* mean it should not have a globally unique ( != globally routable) IP address.
and Jeroen Massar <jeroen@unfix.org> presents several hypothetical scenarios.
Note that the original goal was for electrical companies to monitor electrical meters. Jeroen brings up backyard mini-nuke plants, seeing how much the power plug in the garden is being used, etc. These may all be desirable goals, but they represent considerable mission creep from the originally stated goal.
None of Jeroen's applications requires end-to-end, packet-level access to the individual devices in Jeroen's future (I assume) home. You can certainly argue that packet-level connectivity is better, easier to engineer, scales better, etc., etc.; but it is not *required*. In fact, there are sound engineering arguments against packet-level access: since we've dragged in the backyard nuke plant, consider what happens when everybody has a backyard mini-nuke, with control software written by Linksys, and it turns out that sending it a certain kind of malformed packet can cause it to melt down ....
No matter. Reasonable people can disagree on the question of whether every networkable device benefits from being globally, uniquely addressable. The burden on the proponents is higher than that: there are *costs* associated with such an architecture, and the proponents of globally unique addressing need to show not only that it has benefits, but that the benefits exceed the costs.
Coming full circle, the original assertion was that IPv6 was required in order for electric companies to use IP to monitor US electric meters. That assertion is false, and no amount of hand-waving about backyard nuke plants will make it true. The history of IPv6 has been that it keeps receding into the future as people's use of IPv4 adapts enough to make the current benefit of switching to IPv6 smaller than the cost to do so. Perhaps after a decade or so, we're nearing the end of that road. Or perhaps, as F. Scott Fitzgerald once wrote about IPv6, it is: the orgiastic future that year by year recedes before us. It eluded us then, but that's no matter - tomorrow we will run faster, stretch out our arms further.... And one fine morning -
We'll see.
Jim Shankland
On Jan 22, 2007, at 12:15 PM, Jim Shankland wrote:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US....
The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
Perhaps your electrical company has more than 16.7 million electrical meters it needs to address.
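For scale (assuming the 16.7 million refers to the size of a single IPv4 /8, e.g. the whole 10.0.0.0/8 private block): 2^24 = 16,777,216 addresses, so a utility with more meters than that cannot even number them out of the largest RFC1918 range without carving things up.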
On Mon, 22 Jan 2007, Jim Shankland wrote:
"Travis H." <travis+ml-nanog@subspacefield.org> writes:
IIRC, someone representing the electrical companies approached someone representing network providers, possibly the IETF, to ask about the feasibility of using IP to monitor the electrical meters throughout the US....
The response was "yeah, well, maybe with IPv6".
Which is nonsense. More gently, it's only true if you not only want to use IP to monitor electrical meters, but want to use the (global) Internet to monitor electrical meters.
I'd love to hear the business case for why my home electrical meter needs to be directly IP-addressable from an Internet cafe in Lagos.
globally unique addresses
I have an electric company; it's got 2500 partners, all with the same 'internal ip addressing plan' (192.168.1.0/24). We need to communicate - is NAT on both sides really efficient?
On Tue, 23 Jan 2007, Chris L. Morrow wrote:
globally unique addresses
I have an electric company; it's got 2500 partners, all with the same 'internal ip addressing plan' (192.168.1.0/24). We need to communicate - is NAT on both sides really efficient?
What do you do when the electric companies split up again - renumber the meters into different network blocks? Satellite set-top boxes don't need to be assigned unique phone numbers to report pay-per-view events back to Dish/DirecTV. They just wake up every few weeks, use the transport identifier already available on the customer's phone line and send the data, with an embedded identifier independent from the network transport. If the satellite STB ever knew its telephone number, it is probably out-of-date after a few area code changes. The same thing happens with burglar alarm reporting, and lots of other things. I think network engineers are too quick to use network identifiers for applications. Electric meters, set-top boxes, alarm systems, ice-boxes, and whatever else you want to connect to the network don't need to have the same permanent identifier for the application and the transport. Most of the time they don't need a permanent transport identifier.
On Tue, Jan 23, 2007 at 02:59:21PM -0500, Sean Donelan wrote:
I think network engineers are too quick to use network identifiers for applications.
Analogous to using names or SSNs or anything else as a primary key in a database. The database people already figured out that if you don't assign an identifier used solely for identification purposes, then you can't capture the idea of something changing names (or IPs, or whatever). I think wifi is making this clear; I may connect from various wifi networks, but I'm still me. To deal with roaming mobile devices, we'll have to figure out something to allow us to maintain connectivity while changing IPs, right? Same with DHCP in some cases. -- Kill dash nine, and its no more CPU time, kill dash nine, and that process is mine. -><- <URL:http://www.subspacefield.org/~travis/> For a good time on my UBE blacklist, email john@subspacefield.org.
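Travis's database analogy, sketched with SQLite: give each device a surrogate key that never changes, keep the serial number as an attribute, and treat the current IP as disposable transport detail. All names and values below are made up for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE device (
        device_id  INTEGER PRIMARY KEY,   -- surrogate key: identity never changes
        serial     TEXT NOT NULL UNIQUE,  -- natural attribute, may be corrected
        current_ip TEXT                   -- transport detail, free to change
    );
    """)
    conn.execute("INSERT INTO device (serial, current_ip) VALUES (?, ?)",
                 ("METER-00012345", "192.0.2.17"))
    # The device moves to a different network; its identity is untouched.
    conn.execute("UPDATE device SET current_ip = ? WHERE serial = ?",
                 ("198.51.100.42", "METER-00012345"))
    print(conn.execute("SELECT device_id, serial, current_ip FROM device").fetchall())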
On Tue, Jan 23, 2007, Chris L. Morrow wrote:
I have an electic company, it's got 2500 partners, all with the same 'internal ip addressing plan' (192.168.1.0/24) we need to communicate, is NAT on both sides really efficient?
I've seen plenty of company setups that double/triple-NAT due to administrative/security boundaries. The majority of them seem to be government organisations too. :) Adrian
On Jan 23, 2007, at 3:38 PM, Adrian Chadd wrote:
The majority of them seem to be government organisations too. :)
We also see this with extranet/supply-chain-type connectivity between large companies who have overlapping address space, and I'm afraid it's only going to become more common as more of these types of relationships are established. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
On Jan 20, 2007, at 1:02 PM, Marshall Eubanks wrote:
as long as humans are the primary consumers of bandwidth.
This is an interesting phrase. Did you mean it T-I-C, or are you speculating that M2M (machine-to-machine) communications will at some point rival/overtake bandwidth consumption which is interactively triggered by human actions? Right now TiVo will record television programs it thinks you might like; what effect will this type of technology have on IPTV, more mature P2P systems, etc.? It would be very interesting to try and determine how much automated bandwidth consumption is taking place now and try to extrapolate some trends; a good topic for a PhD dissertation, IMHO. ;> ----------------------------------------------------------------------- Roland Dobbins <rdobbins@cisco.com> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
On Sat, 2007-01-20 at 10:12 -0800, Mark Boolootian wrote:
Cringley has a theory and it involves Google, video, and oversubscribed backbones:
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html
Aren't there some Telco laws wrt cross-state, but still interlata, calls not being able to be charged as interstate? Perhaps Google wants to avoid any future federal/state regulations by providing in-state (i.e. "local") access. Additionally, it makes it easier to do state and local govt business when the data is in the same state (it's not out-sourcing if it's just next door...). And then there is the "lobbying" issue: what better way to lobby multiple states than to do significant business therein? Or perhaps I'm just daydreaming too much today.... ;-) -Jim P.
participants (39)
- Adrian Chadd
- Alexander Harrowell
- Andy Davidson
- Bob Martin
- Brandon Galbraith
- Charlie Allom
- Chris L. Morrow
- Christian Kuhtz
- Daniel Golding
- David Ulevitch
- Donald Stahl
- Florian Weimer
- Gadi Evron
- J. Oquendo
- Jamie Bowden
- Jeremy Chadwick
- Jeroen Massar
- Jim Popovitch
- Jim Shankland
- Lucy Lynch
- Mark Boolootian
- Mark Smith
- Marshall Eubanks
- Michal Krsek
- Nicholas Suan
- Niels Bakker
- Owen DeLong
- Petri Helenius
- Randy Bush
- rich@nullroute.net
- Rodrick Brown
- Roland Dobbins
- Saku Ytti
- Sean Donelan
- Stephen Sprunk
- Travis H.
- tvest@eyeconomics.com
- Valdis.Kletnieks@vt.edu
- william(at)elan.net