I miss the endless debates. Is *everyone* Christmas shopping?

Here's a thought to ponder....

With the thousands of datacenters that exist with IPv4 cores, what will it take to get them to move all of their infrastructure and customers to IPv6? Can it even be done or will they just run IPv6 to the core and proxy the rest?

-Jim P.
Jim Popovitch wrote:
I miss the endless debates. Is *everyone* Christmas shopping?
Here's a thought to ponder....
With the thousands of datacenters that exist with IPv4 cores, what will it take to get them to move all of their infrastructure and customers to IPv6? Can it even be done or will they just run IPv6 to the core and proxy the rest?
-Jim P.
Looking at my own "datacenter":

Unifix Linux 2.0.0 - No, it will never move.

Eisfair, kernel 2.2.x - my router and my DNS, FTP and remote shell - No, they will probably never move.

SuSE Linux 8.3 (kernel 2.4.x) - my workstation. Used to have its IPv6 enabled. It gave me problems with connectivity. I don't have IPv6 to the outside, so I had to disable the stack. It runs a lot smoother now. It took me a week to get the IPv6 stack running in the first place.

I tried ISODE 8.0 recently. It still works on all my computers. I could even connect to a friend who also tried ISODE 8.0. It works through IPv4. What happened to ISO? I guess that is what will finally happen to IPv6.

I used to have a local IPv6 network running, but with site-local addressing being deprecated the configuration became invalid. Not having valid IPv6 addresses any longer, I did not get a headache when I took my IPv6 stack down. My log looks cleaner. No more complaints from my DNS server.

Now I am looking forward to what will come after IPv6. :)

Merry Christmess

Peter and Karin

--
Peter and Karin Dambier
The Public-Root Consortium
Graeffstrasse 14
D-64646 Heppenheim
+49(6252)671-788 (Telekom)
+49(179)108-3978 (O2 Genion)
+49(6252)750-308 (VoIP: sipgate.de)
mail: peter@peter-dambier.de
mail: peter@echnaton.serveftp.com
http://iason.site.voila.fr/
https://sourceforge.net/projects/iason/
Peter Dambier <peter@peter-dambier.de> writes:
Used to have its IPv6 enabled. It gave me problems with connectivity. I don't have IPv6 to the outside, so I had to disable the stack. It runs a lot smoother now. It took me a week to get the IPv6 stack running in the first place.
You've had quite the run of bad luck. My IPv6 stuff was working perfectly and with almost no effort, until I lost an Ethernet card in a VXR and snagged one from the IPv6 box as a "spare", heh. Gotta get around to fixing that, but in the meantime, no IPv6 on the colo LAN is not exactly an operational deal-killer.
I tried ISODE 8.0 recently. It still works on all my computers.
Gee, thanks. I'd managed to not think about ISODE for several years. :-P
I could even connect to a friend who also tried ISODE 8.0. It works through IPv4. What happened to ISO?
I guess that is what will finally happen to IPv6.
I think the likelihood that people will eventually stop caring about IPv6 the way they stopped caring about OSI is fairly slim, even if the only real problem that v6 addresses is address exhaustion. IPv6 had more traction five years ago than OSI ever had.

---Rob
Jim Popovitch wrote:
With the thousands of datacenters that exist with IPv4 cores, what will it take to get them to move all of their infrastructure and customers to IPv6?
A L2-switched (or hubbed? :) 'datacenter' doesn't need to do much hardware-wise: install IPv6 stacks on the hosts, configure, done. Most switches will happily run IPv4 and IPv6 on the same cabling; it's just a different ethertype. One thing to watch out for is switches and NICs with broken multicast; if that is the case, IPv6 Neighbour Discovery (ND) will most likely not work. As a workaround one can put interfaces in promisc/allmulti mode, but that is sort of dirty; better to replace the hardware instead.

Most current OSes support this, and usually the upgrade is gratuit(tm), assuming that one isn't running Win95 as a 'server' and does do regular upgrades of their software, of course. This part mostly consists of normal server upgrades/updates and should not be too costly except for man-hours and, of course, some testing. Enabling the stack doesn't really enable IPv6 in the applications, though; that is a second part one has to look at, and it will be the most time-consuming (a reachability sketch follows below).

The L3 part is most likely the expensive one, though. Vendor J noticed they could get a load of cash from government organizations and suddenly introduced a nice expensive additional license for IPv6. The good part is that most vendors seem to support it (they claim, at least), and the software is starting to become sort of stable. Vendor J and C implementations have been very well field-tested over the years. Of course, as with many new functions, if you stick to the standard feature set it will work; if you need something special, expect nice creepy bugs, so keep your support contract up-to-date and keep that backup ready. Another worthy point is, of course, the upgrading of management utilities.

As for getting transit connectivity: shop around. There are a couple of good transit providers who will be more than happy to get another customer, proving to their management that it was a good idea to invest in a full native IPv6 network early, instead of waiting for their customers to leave for another party who already did it a long time ago.
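[That "stack enabled" versus "application works" gap is easy to probe from the outside. A minimal Python sketch, not from the original thread; www.example.com and port 80 are placeholders:]

    import socket

    def reachable(host, port, family):
        """True if a TCP connect succeeds over the given address family."""
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror:
            return False  # no A (or AAAA) record for this family
        for af, stype, proto, _canon, addr in infos:
            try:
                with socket.socket(af, stype, proto) as s:
                    s.settimeout(5)
                    s.connect(addr)
                    return True
            except OSError:
                continue  # try the next address for this family, if any
        return False

    for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
        print(label, "ok" if reachable("www.example.com", 80, family) else "unreachable")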
Can it even be done or will they just run IPv6 to the core and proxy the rest?
The general idea of "IPv6 transition" is that one will run dual-stack for the foreseeable future, until IPv4 is not used/required anymore, which might take a very long time. For folks who want a native IPv6 core there is a variety of possibilities for doing this. Of course no single one covers all cases, thus one has to shop around for the best match.

Greets,
 Jeroen

(And I most likely forgot to mention a lot of other problems and issues)
On Wed, 21 Dec 2005 13:13:40 +0100, "Jeroen Massar" <jeroen@unfix.org> said:
Jim Popovitch wrote:
With the thousands of datacenters that exist with IPv4 cores, what will it take to get them to move all of their infrastructure and customers to IPv6?
A L2-switched (or hubbed? :) 'datacenter' doesn't need to do much hardware-wise: install IPv6 stacks on the hosts, configure, done.
[snip]

More fundamental changes are required to spark wide-spread deployment of IPv6:

* a v6-only "killer application",
* or a real shortage of v4 addresses,
* or some other real advantage of v6 over v4, financial and/or technical.

However, most important is that those involved with v6 need to understand that it's impossible to foresee and solve all future problems; until they do, IPv6 won't be mature enough to go anywhere. Any successor to IPv4 has to provide a similar ability to evolve over time, and operational policies have to evolve with it. IPv6 as it stands has a lot of room for improvement, but that doesn't make it useless. An important aspect of engineering is to take what you've got and make the most of it. Where would we be today if early IPv4 implementations had been halted because there was no DNS, no IGP and no EGP? There was even an internet before we got CIDR ;-)

//per

--
Per Heldal
heldal@eml.cc
On Dec 21, 2005, at 2:09 AM, Jim Popovitch wrote:
With the thousands of datacenters that exist with IPv4 cores, what will it take to get them to move all of their infrastructure and customers to IPv6? Can it even be done or will they just run IPv6 to the core and proxy the rest?
-Jim P.
We've just gone through a pretty decent-sized attempt to convert our infrastructure and applications to IPv4/IPv6 dual stack, and I was asked by someone else to write up the successes and problems I had doing this. I'm nowhere near done writing everything up, or even finished with the migration attempt, but here are some bits of my notes describing our real-world experience to throw in. I'll add that I'm small potatoes compared to you guys out there; I'm sure those with much larger networks will face some additional challenges.

Our biggest client had a need for IPv6 for a couple of reasons.

The first was an application (which surprisingly was IPv6 ready) that required a unique IP address "per instance"; they needed to be able to handle tens of thousands of "instances" to work around some brokenness in the application, and RFC1918 wouldn't cut it. Using made-up IPv6 addresses with no IPv6 connectivity worked, but we wanted to do it "right".

The second was because of IRC and some Japanese students. This client has a pretty thriving chat community going, based around IRC. One niche of customers and users for this site that suddenly exploded in popularity was some East Asian (mostly Japanese) students, using these services from their dorm rooms or computer labs. The workstations themselves ran IPv4, but the university's backbone was IPv6 only. The side effect of this was that all non-HTTP IPv4 connections were going through some kind of proxy server that had a time limit on how long you could keep a session open. They were getting booted off constantly, and the IT staff at the university was asking them to find an IPv6 IRC server they could use instead. (It's possible I'm misrepresenting the situation here; I didn't have direct contact with the people involved, so the technical details might be wrong.)

The third was for VPN. A new product is going to use potentially hundreds of VPNs; again, RFC1918 addresses won't work for this application, and we're trying to be a good neighbor and not blow several /24s worth of space on something that didn't really need it. GRE tunnels worked fine for this, and allowed us to do IPv6 inside the tunnel over IPv4 transit and preserve our tiny IPv4 allocations.

But yeah, those are three really weak needs for requiring IPv6, and I'm guessing it's going to be niches like this that start the need for IPv6 at all. So, we decided we'd try to make our network IPv6 enabled, and see how hard it would be to run everything dual stack that would support it. The first wakeup call that this wasn't going to be easy was that only one of our transit providers supported IPv6 in any way at all. After we got IPv6 turned up, the problems we discovered, roughly in order:

1) IPv6 on the internet overall seems a bit unreliable at the moment. Entire /32's disappear and reappear, gone for days at a time. The most common path over IPv6 from the US to Europe is US->JP->US->EU. I realize this may be specific to our connection itself, but browsing looking glasses seems to back up that it's not just us.

2) Lots of providers who are running IPv6 aren't taking it as seriously as IPv4: manual prefix filters, NOC staff that doesn't even know they're running IPv6, etc.

3) Some key pieces of internet infrastructure are IPv6-oblivious. ARIN's route registry doesn't support "route6" objects, for example.

4) Even though we went through our applications and software before starting this to check for IPv6 readiness, there's a huge difference between "supports IPv6" and "actually works with IPv6".
5) Our DNS software (djbdns) supports IPv6, kind of. With patches you can enter AAAA records, but only by entering 32-digit hexadecimal numbers with no colons or abbreviations. We were never able to get it to respond to queries over IPv6, so all of our DNS is still IPv4.

6) Some software supports IPv6 and IPv4 at the same time by using IPv4 addresses on IPv6 sockets (i.e. they bind to tcp6 on port 80, and an incoming IPv4 connection appears as ::ffff:192.168.0.1). Other applications want to bind to two different sockets. Others want you to run two copies of themselves, one for IPv6 and one for IPv4. However, on a BSD system, the setting net.inet6.ip6.v6only is a system-wide configuration option. If you turn it off (allow IPv4 connections to come in on IPv6 sockets) for one application running on the server that requires it off, and you're running a different service on the same server that wants to run IPv4 and IPv6 separately, you have to make sure the IPv6 daemon doesn't start before the IPv4 one, or it will bind to both protocols and the IPv4 daemon won't be able to bind to the port. (A per-socket sketch follows this list.)

7) We found that even when applications support IPv6, they default to disabling it during compiling 95% of the time. We had to go back and recompile our HTTP servers, PHP, all sorts of libraries, etc. Since IPv6 defaults to off in a lot of packages, we had issues just getting the software to BUILD with IPv6 turned on unless we were using the exact same libraries as were current when it was last released.

8) Once we got everything on the network and server side ready for and usable on IPv6, we discovered that a lot of our client's applications just had no idea what to do with IPv6 connections. Many PHP applications broke because they expected $_SERVER['REMOTE_ADDR'] to fit within 15 characters at most. Databases had to have their columns widened (if they were storing the address as text), or functionality had to be rewritten if they were storing IPs as 32-bit integers. Web server log analyzers claimed that the log was "corrupted" if it had an IPv6 address in it. Lots and lots of application logic just wasn't IPv6-aware at all, and either had serious cosmetic problems with displaying IPv6 addresses, or simply didn't work when an IPv6 address was encountered. (See the storage sketch at the end of this message.)

9) Once we started publishing AAAA records for a few sites, we started getting complaints from some users that they couldn't reach the sites. Some investigating showed that they had inadvertently enabled IPv6 on their desktop without having any IPv6 connectivity. In some cases it was traced to users reading about 6to4 and wanting to play with it, but not correctly installing it, then not UNinstalling it. Others had turned on IPv6 on OSes that make it easy to do so (OS X for example) without realizing what it was. Some browsers/OSes seem better than others at figuring out that they don't have IPv6 working and falling back to IPv4.

10) Smaller than normal MTUs seem much more common on IPv6, and it is exposing PMTUD breakage on a lot of people's networks.

11) Almost without fail, the path an IPv6 user takes to reach us (and vice-versa) is less optimal than the IPv4 route. Users are being penalized for turning on IPv6, since they have no way to fall back to IPv4 on a site-by-site basis when using a web browser.
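[Item 6's system-wide knob can be sidestepped on stacks that honor the option per socket. A minimal Python sketch, assuming per-socket IPV6_V6ONLY support; the port is a placeholder, and this is not how the daemons in the story actually did it:]

    import socket

    # Two explicit listeners, one per family. Pinning IPV6_V6ONLY on the v6
    # socket means startup order no longer matters: the v6 socket can never
    # grab the v4 side of the port out from under the v4 daemon.
    v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    v6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    v6.bind(("::", 8080))
    v6.listen(5)

    v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    v4.bind(("0.0.0.0", 8080))  # free to bind: the v6 socket is v6-only
    v4.listen(5)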
In the end, we've backed out almost all of our changes to make sites IPv6-visible for now. It broke things for far more IPv4 users than it helped IPv6 users. We've left the IRC services running on a dual-stack system, since we were able to partition IPv6 off well enough not to hurt things. Everything else is back to IPv4 only.

I'm also personally a bit concerned with how IPv6 allocation and routing works with respect to small-to-medium-sized networks like ours. I know this is still a hot topic and several proposals are being passed around to resolve some of these issues, but it seems like I *lose* functionality with IPv6 that I have with IPv4, mostly due to the "don't deaggregate your allocation" mantra, and how far the bar was raised to get PI space. We do a lot of things in IPv4 land with regard to routing and addressing that I don't believe we can do in IPv6, which worries me more. Shim6 and other proposals are creative, but don't replace a lot of the functionality I'd be losing. This is another story though, and it is getting really off topic.

Getting your network running IPv6 doesn't seem to be the challenge anymore. None of our L2 devices cared at all. Our L3 devices took some configuration, but moved pretty easily. It's the server and application software that needs a lot more work. I don't think we're even close to the point where an end-user can go to their provider and say "IPv6 me!" and get it working without more hassle than it's worth to them.

-- Kevin
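[A footnote on item 8 above: the width assumptions that broke are easy to demonstrate with Python's ipaddress module. A minimal sketch; the addresses are documentation examples:]

    import ipaddress

    v4 = ipaddress.ip_address("198.51.100.255")
    v6 = ipaddress.ip_address("2001:db8:85a3:8d3:1319:8a2e:370:7348")

    print(len(str(v4)))      # 15 -- the longest dotted quad, the old column width
    print(len(v6.exploded))  # 39 -- uncompressed IPv6 text; 45 with IPv4-mapped forms

    # Integer storage has the same problem: 32 bits no longer cuts it.
    print(int(v4) < 2**32)   # True  -- fits the traditional 32-bit integer column
    print(int(v6) < 2**32)   # False -- needs 128 bits (or a 16-byte binary column)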
On Wed, Dec 21, 2005 at 07:50:14AM -0600, Kevin Day wrote: [ .. snip .. ]
1) IPv6 on the internet overall seems a bit unreliable at the moment. Entire /32's disappear and reappear, gone for days at a time. The most common path over IPv6 from the US to Europe is US->JP->US->EU. I realize this may be specific to our connection itself, but browsing looking glasses seems to back up that it's not just us.
That really depends on who you are using for an upstream. There are already *several* sane v6 networks in the US who have proper routing and high-quality US-Europe interconnections, including commercial networks that one can purchase transit from. Networks that still implement legacy 6bone routing practice will find themselves taking scenic routes (i.e. US->JP->US->EU).

[ .. snip .. ]
10) Smaller than normal MTUs seem much more common on IPv6, and it is exposing PMTUD breakage on a lot of people's networks.
This is almost always due to people misconfiguring their tunnels. Popular vendors' routers tend to automatically derive the tunnel payload MTU from the physical circuit MTU instead of a 'commonly accepted' size. We had problems where a tunneled peer had a SONET interface on their side, basing the tunnel MTU on 4470, while our side was manually set to 1480 for the ipip tunnel. This causes problems with PMTUD. As outlined in C&W's v6 presentation[1], tunnel MTUs should be explicitly configured wherever possible.

[1]: http://www.nanog.org/mtg-0505/steinegger.html

[ .. snip .. ]
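[The arithmetic behind that mismatch is worth making explicit. A minimal sketch using the MTU values from the anecdote; the helper name is ours, not any vendor's CLI:]

    IPV4_HEADER = 20  # encapsulation overhead of an IPv6-in-IPv4 (protocol 41) tunnel

    def derived_tunnel_mtu(circuit_mtu: int) -> int:
        """Tunnel payload MTU a router derives from its physical circuit MTU."""
        return circuit_mtu - IPV4_HEADER

    print(derived_tunnel_mtu(1500))  # 1480 -- the manually configured side
    print(derived_tunnel_mtu(4470))  # 4450 -- the SONET side deriving automatically

    # With 4450 on one end and 1480 on the other, large packets get dropped,
    # and if "packet too big" ICMP is filtered anywhere along the path, PMTUD
    # never repairs it -- hence the advice to configure tunnel MTUs explicitly.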
I'm also personally a bit concerned with how IPv6 allocation and routing works with respect to small-to-medium-sized networks like ours. I know this is still a hot topic and several proposals are being passed around to resolve some of these issues, but it seems like I *lose* functionality with IPv6 that I have with IPv4, mostly due to the "don't deaggregate your allocation" mantra, and how far the bar was raised to get PI space. We do a lot of things in IPv4 land with regard to routing and addressing that I don't believe we can do in IPv6, which worries me more. Shim6 and other proposals are creative, but don't replace a lot of the functionality I'd be losing. This is another story though, and it is getting really off topic.
I agree.. but this opens up the whole "Great Multihoming Debate" ;>

James
8) Once we got everything on the network and server side ready for and usable on IPv6, we discovered that a lot of our client's applications just had no idea what to do with IPv6 connections. Many PHP applications broke because they expected $_SERVER['REMOTE_ADDR'] to fit within 15 characters at most. Databases had to have their columns widened (if they were storing the address as text), or functionality had to be rewritten if they were storing IPs as 32-bit integers. Web server log analyzers claimed that the log was "corrupted" if it had an IPv6 address in it. Lots and lots of application logic just wasn't IPv6-aware at all, and either had serious cosmetic problems with displaying IPv6 addresses, or simply didn't work when an IPv6 address was encountered.
Just imagine what it will be like if the idea of sticking a decimal point into 32-bit AS numbers ends up getting deployed.

--Michael Dillon
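[For a feel of the breakage Michael alludes to, a minimal sketch of the dotted rendering being proposed for 4-byte AS numbers; the example ASNs are arbitrary:]

    def asdot(asn: int) -> str:
        """Render a 4-byte AS number in the proposed dotted notation."""
        high, low = divmod(asn, 65536)
        return f"{high}.{low}" if high else str(low)

    print(asdot(64496))  # '64496' -- 16-bit ASNs keep their familiar shape
    print(asdot(65546))  # '1.10'  -- anything above 65535 grows a decimal point

    # Every parser, regex, or database column that assumes an ASN is a plain
    # integer chokes on '1.10' -- the same class of bug as item 8's REMOTE_ADDR.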
Kevin Day wrote:
9) Once we started publishing AAAA records for a few sites, we started getting complaints from some users that they couldn't reach the sites.
It is possible that a broken 6to4 relay somewhere was causing problems. Running your own local 6to4 relay (RFC 3068) will improve performance and reduce the chances of going through a broken one.

FWIW, I have been running some AAAA records for almost a year on some revenue generating sites without any reachability complaints or drop in traffic. I do run a local 6to4 relay though.
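[For readers who haven't touched 6to4: a site's /48 is simply its public IPv4 address embedded under 2002::/16, and RFC 3068 adds the well-known relay anycast address 192.88.99.1. A minimal Python sketch; 192.0.2.1 is a documentation address:]

    import ipaddress

    def six_to_four_prefix(public_v4: str) -> ipaddress.IPv6Network:
        """Derive the 6to4 /48 that belongs to a public IPv4 address."""
        v4 = int(ipaddress.IPv4Address(public_v4))
        # 2002 in the top 16 bits, the IPv4 address in the next 32 bits.
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

    print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48

    # Return traffic reaches the site via whatever relay is advertising
    # 192.88.99.1; running that relay locally is what keeps you off a
    # distant or broken third-party one.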
I know this is still a hot topic and several proposals are being passed around to resolve some of these issues, but it seems like I *lose* functionality with IPv6 that I have with IPv4, mostly due to the "don't deaggregate your allocation" mantra, and how far the bar was raised to get PI space.
It sounds like you are an existing ISP in the ARIN region. If so, then you qualify for a /32 allocation. Let us know if you have any problems getting one. For non-ISPs, the policy is still being worked out. See the ARIN ppml list for discussion.

As for deaggregation, it is not recommended, because some networks filter /48's and some don't, which results in sub-optimal paths; but it can be done, depending on what your peers will accept.

- Kevin
On Wed, Dec 21, 2005 at 11:13:31AM -0500, Kevin Loch wrote:
Kevin Day wrote:
9) Once we started publishing AAAA records for a few sites, we started getting complaints from some users that they couldn't reach the sites.
It is possible that a broken 6to4 relay somewhere was causing problems. Running your own local 6to4 relay (RFC 3068) will improve performance and reduce the chances of going through a broken one.
we have been running w/ AAAA records for production systems for the past six years w/o complaint and no 6to4 relay.

--bill
Have you used CNAMEs to AAAA records? If the customers trying to access the sites used XP with SP1, there was a bug which made it impossible to connect via IPv4 if IPv6 connectivity was broken. That was sorted out by Microsoft with SP2. Otherwise, ask for a traceroute6 so the trouble can be followed up.

Regards,
Jordi
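[The XP SP1 bug Jordi mentions is, in effect, a missing fallback loop. A minimal sketch of what a well-behaved client does; host and port are placeholders:]

    import socket

    def connect_with_fallback(host, port, timeout=3.0):
        """Try each address getaddrinfo returns (typically AAAA before A)."""
        last_err = OSError(f"no addresses for {host}")
        for af, stype, proto, _canon, addr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            s = socket.socket(af, stype, proto)
            s.settimeout(timeout)
            try:
                s.connect(addr)
                return s  # first family that actually answers wins
            except OSError as err:
                last_err = err
                s.close()  # broken IPv6? fall through to the IPv4 addresses
        raise last_err

    # sock = connect_with_fallback("www.example.com", 80)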
On Dec 21, 2005, at 11:13 AM, Kevin Loch wrote:
Kevin Day wrote:
9) Once we started publishing AAAA records for a few sites, we started getting complaints from some users that they couldn't reach the sites.
It is possible that a broken 6to4 relay somewhere was causing problems. Running your own local 6to4 relay (RFC 3068) will improve performance and reduce the chances of going through a broken one.
FWIW, I have been running some AAAA records for almost a year on some revenue generating sites without any reachability complaints or drop in traffic. I do run a local 6to4 relay though.
I know this is still a hot topic and several proposals are being passed around to resolve some of these issues, but it seems like I *lose* functionality with IPv6 that I have with IPv4, mostly due to the "don't deaggregate your allocation" mantra, and how far the bar was raised to get PI space.
It sounds like you are an existing ISP in the ARIN region. If so, then you qualify for a /32 allocation. Let us know if you have any problems getting one. For non-ISPs, the policy is still being worked out. See the ARIN ppml list for discussion.
Specifically, 2005-1: http://www.arin.net/policy/proposals/2005_1.html

Comments by all are welcomed.

Regards
Marshall Eubanks
As for deaggregation, it is not recommended because some filter (/48's) and some don't which results in sub-optimal paths, but it can be done depending on what your peers will accept.
- Kevin
On Dec 21, 2005, at 10:13 AM, Kevin Loch wrote:
Kevin Day wrote:
9) Once we started publishing AAAA records for a few sites, we started getting complaints from some users that they couldn't reach the sites.
It is possible that a broken 6to4 relay somewhere was causing problems. Running your own local 6to4 relay (RFC 3068) will improve performance and reduce the chances of going through a broken one.
FWIW, I have been running some AAAA records for almost a year on some revenue generating sites without any reachability complaints or drop in traffic. I do run a local 6to4 relay though.
I will admit, the number of complaints was very small. But I don't know how many users it was affecting who didn't contact us, so we decided it wasn't worth the risk for no perceived gain at this point.
I know this is still a hot topic and several proposals are being passed around to resolve some of these issues, but it seems like I *lose* functionality with IPv6 that I have with IPv4, mostly due to the "don't deaggregate your allocation" mantra, and how far the bar was raised to get PI space.
It sounds like you are an existing ISP in the ARIN region. If so, then you qualify for a /32 allocation. Let us know if you have any problems getting one. For non-ISPs, the policy is still being worked out. See the ARIN ppml list for discussion. As for deaggregation, it is not recommended, because some networks filter /48's and some don't, which results in sub-optimal paths; but it can be done, depending on what your peers will accept.
Correct, we're in the ARIN region. We did qualify for a /32, but just barely, and only because of (hopeful) future growth and some new products we're launching.

We do managed hosting for a small number of specialized clients, and act sort of as a content delivery network. Not long ago we were a little smaller, and qualified for a /20 of our own in IPv4. A /20 is 4,096 addresses; we were able to pretty easily qualify for that between a few racks of servers, some IP-based vhosting (necessary for a few applications), etc. In the IPv6 world, nothing we could do under that business model would qualify us for a direct allocation. Back then we wouldn't have qualified for a /32 because we wouldn't have met the requirements. We wouldn't have met the proposed 2005-1 requirements for a /44 (we don't come close to 100,000 devices), and we lose functionality if we're required to advertise it through a single aggregated address.

Just for the sake of argument, though, let's say we didn't get a /32 and had to either get provider-assigned space or a /44 through the 2005-1 proposal.

1) We've got separate POPs in different cities, with no dedicated connectivity between them. They act as entirely independent networks; we don't even have the same transit providers in each city. Some content is replicated to each POP, so we can dump content directly on peers where they want it, or for redundancy. Some content is only available at one POP (dynamic sites, etc), so traffic destined for that POP can only go to that POP.

IPv4: We split our /20 into a /23 for each city, and announce only that.

IPv6: We have to announce the entire /44, and aren't allowed to deaggregate it. Where do we announce it? If I use transit provider Y only in city #1, and I announce our /44 to them even though I only really want a /48's worth coming there, I'm going to end up receiving traffic for city #2 at city #1. I can't even bounce it back off my router onto transit again, because I might see provider Y as the best path for it and it'll loop. Don't say we should ask for a /44 in each city; we don't need that much space in each location, and we'd be allocating space only to avoid deaggregating it, which is even more wasteful. :)

2) We use anycast to select the closest server on our network to a viewer, anycasted DNS for quicker lookups, and a few other anycast services.

IPv4: We have a /24 dedicated to anycast, and announce that block in each city.

IPv6: ??? I honestly have no idea how to do this with IPv6 and still announce only a single block, without effectively anycasting everything we do.

3) We're far past the size where we can renumber every time we change transit providers.

IPv4: We were able to qualify for PI allocations as small as a /22 because we were multihomed, or a /20 if we weren't. This is within the range of feasibility for any small/medium hosting company to reach in a very short time.

IPv6: Even forgetting multihoming issues, we're beyond the size where we can do a renumbering without it becoming a serious ordeal. If we're forced to take provider-assigned space, we're locked into that transit provider unless we want to leave them so badly that we're willing to go through the hassle. I can't even easily transition from one provider to another by having the new provider announce my PA space from my old provider temporarily, since my provider might be forced to announce everything as a single aggregate as well.
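[To put numbers on these scenarios, a minimal sketch with Python's ipaddress module; 10.20.0.0/20 and 2001:db8::/44 are stand-in prefixes, not the real allocations:]

    import ipaddress

    # Scenario 1, IPv4 today: a /20 carved into per-city /23s.
    v4 = ipaddress.ip_network("10.20.0.0/20")
    print(len(list(v4.subnets(new_prefix=23))))  # 8 possible /23s, one announced per city

    # The same business under 2005-1: one /44, i.e. 16 end-site /48s,
    # which may only be announced as the single aggregate.
    v6 = ipaddress.ip_network("2001:db8::/44")
    print(len(list(v6.subnets(new_prefix=48))))  # 16

    # And the /32 alternative: 2**16 /48s, so using only 6 branch-office
    # /48s leaves the 65530 "wasted" blocks mentioned later in the thread.
    print(2 ** (48 - 32))                        # 65536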
The allocation and advertisement policies work great for small end-sites (they get many subnets, and more addresses than they'll ever need), and great for large providers (they have no problem justifying the need for a /32, and are able to break things up if needed). If you fit somewhere between those two though, I don't think the policies really address our needs. Small to medium sized hosting providers, content networks and other non-bandwidth-supplying companies generally aren't providing transit to others, so they can't get a /32. They ARE used to being able to deaggregate when necessary though, anycast as needed, multihome easily, and not go through renumbering hell every time you change providers. In the case of hosting companies, renumbering is especially painful when you have to coordinate it with thousands of customers who are going to see it as a hassle with no benefit.

I completely understand the need to reduce table growth, slow ASN allocations and not waste IPv6 space. But (and I honestly don't mean this as an attack or flamebait, just a different perspective) it feels like a lot of the big providers are saying "We can't keep up with table growth, we need policies to stop this." Totally understandable. Policies got written to slow the table growth and ASN allocation, but for the most part they're painless for the big providers (easy allocations, and no loss of functionality compared to IPv4), and all the concessions are being made by those on the small-to-medium side, with very little benefit shown to them.

I don't mean to sound so resistant to change, but IPv6 is really a step backwards for people in our position if we're giving up so much just to get a bigger block of addresses.

-- Kevin

<waits for flame war to erupt> :)
Kevin Day wrote:
We wouldn't have met the proposed 2005-1 requirements for a /44 (we don't come close to 100,000 devices), and we lose functionality if we're required to advertise it through a single aggregated address.
The high requirements of the "current" 2005-1 were so thoroughly rejected at the last ARIN meeting that it will probably return closer to the original 2005-1 at the next meeting. Something like "if you have an IPv4 assignment/allocation from ARIN you can get an end-site assignment". There was also a suggestion to eliminate the "single aggregate announcement" requirement from both the end-site and ISP sections.

- Kevin
Kevin Day wrote: [..] I agree with your point that currently your IPv4-solution can't be applied to IPv6 but..(see the helpful and nice thingy part at the end ;)
1) We've got separate POPs in different cities, with no dedicated connectivity between them. They act as entirely independent networks. We don't even have the same transit providers in each city. Some content is replicated to each POP, so we can dump content directly on peers where they want it, or for redundancy. Some content is only available on one POP (dynamic sites, etc), so traffic destined for that POP can only go to that POP.
IPv4: We split our /20 into a /23 for each city, and announce only that.
This means you are taking up more routing slots. -> note the routing. The 'problem' here is that we call these allocations "IPv6 Address Space", not "xxx Routing Space". RIR's provide Address Space and do not guarantee routability in any way. Hence the subject. This is actually a more fundamental problem in the current IP ideology than anything else.
IPv6: We have to announce the entire /44, and aren't allowed to deaggregate it.
Well, that is actually not true: you can also, in IPv6, simply announce /48's from a /32 or /44 or whatever, the same way you do that in IPv4. The issue with announcing, say, a /48 is that networks which filter will filter it out and will only reach you over the aggregate. Of course that is their choice, just like yours is to try to announce the /48's in IPv6, or the /23's in IPv4. An IPv4 /28 doesn't reach far either.

The problem here, though, is that your /48 will propagate through ISP's with less strict filters and they will act as transits for your /48. My experience is that the ones not filtering also tend to have a not-so-good setup, so your connectivity tends to go all around the world.

<SNIP>
2) We use anycast to select the closest server on our network to a viewer, anycasted DNS for quicker lookups, and a few other anycast services.
IPv4: We have a /24 dedicated for anycast, and announce that block in each city.
IPv6: ??? I honestly have no idea how to do this with IPv6, and still announce only a single block, without effectively anycasting everything we do.
The same as IPv4: announce a /48, and have a fallback /32 for folks who do filter it out.
3) We're far past the size where we can renumber every time we change transit providers.
Indeed, as you have those _Addresses_ assigned to devices. The issue you have is routing though. [..]
The allocation and advertisement policies work great for small end-sites (they get many subnets, and more addresses than they'll ever need), and great for large providers (they have no problem justifying the need for a /32, and are able to break things up if needed). If you fit somewhere between those two though, I don't think the policies really address our needs.
Here you say it yourself: "Advertisement policies". This is routing again, which has not much to do with Address Space.
Small to medium sized hosting providers, content networks and other non-bandwidth-supplying companies generally aren't providing transit to others, so they can't get a /32.
You don't have customers? If you have, say, 201 customers who each have a 1U box in a rack you provide connectivity to, then give each one of them a /48; they are end-sites, and maybe that end-site sets up VPNs. As for the 201: that is the number of /48 assignments you might plan to have at some point (the policy asks for a plan for 200). Tada, current policy fulfilled. NEXT! :)
They ARE used to being able to deaggregate when necessary though, anycast as needed, multihome [..]
One can deaggregate in IPv6 also, just don't expect it to be accepted. Just like a /24 won't be accepted by most folks. Also note that policy _suggests_ that one announces it as a single chunk.
I completely understand the need to reduce table growth, slow ASN allocations and not waste IPv6 space.
Nobody is complaining (yet) about ASN consumption. Also, 4-byte ASNs will solve that issue quite easily, except for the pain of hardware/software upgrades, though these are expected to have a much smaller impact than going to IPv6.
-- Kevin
<waits for flame war to erupt> :)
</me donates his asbestos suit and tin foil hat> :)

The "helpful and nice thingy" part: apparently the problem faced here can be put into two parts.

*** Getting IPv6 address space from the RIR's in:
 - /44 chunks for PoPs/sites
 - /48 for anycast

This Address Space can then be used for giving Addresses to hosts. This should not be too difficult to do, except for getting the policy spelled out correctly and getting it passed through: politics. One thing that should be included here and spelled out in big letters: it is Address Space and not a /32, so expect it to be filtered. Also, such blocks should come from one large block, e.g. a /20 per RIR, which makes it easy to exempt them in filters a la the IX blocks etc. Of course it might well be that for the foreseeable future ISP's will be lenient in accepting prefixes up to a /48. This would be sort of similar to: http://www.arin.net/reference/micro_allocations.html

*** Multihoming / Route announcement trick that doesn't fill up slots

This is the very hard part (political and technical). But it might be years before we hit the actual limits of BGP. We still have to be nice to our kids though, as they will hit it ;)

Currently I am aware of the following possible 'solutions':
 - SHIM6
 - SCTP
 - HIP
 - NOID
 - WIMP

See: draft-huston-multi6-proposals and draft-savola-multi6-nowwhat.

Most if not all of these have problems for some uses; e.g. Traffic Engineering is supposedly not possible any more the way it is currently being done in BGP. Mindset changes will be required. This is the area that needs attention. The only way to get that going is currently to either 'just do it' or get on the train with the SHIM6 folks and help them out to make it the way you might want to see it.

Greets,
 Jeroen

(already caught a nasty cold so come on with those flaming arguments to heat me up ;)
On Wed, Dec 21, 2005 at 08:34:06PM +0100, Jeroen Massar wrote:
The issue with announcing, say, a /48 is that networks which filter will filter it out and will only reach you over the aggregate. Of course that is their choice, just like yours is to try to announce the /48's in IPv6, or the /23's in IPv4. An IPv4 /28 doesn't reach far either.

The problem here, though, is that your /48 will propagate through ISP's with less strict filters and they will act as transits for your /48. My experience is that the ones not filtering also tend to have a not-so-good setup, so your connectivity tends to go all around the world.
This description of the problem ain't totally correct. The problem is that the more specific route propagates thru fewer paths, thus the average AS_PATH (and forwarding path) length is usually (much) higher for the more specific route than for the aggregate. Routers decide on longest match (CIDR), so packets to the more specific will travel the long way instead of the short way.

It's NOT a matter of "the ones not filtering also have a not-so-good setup". Actually, many/most of the "big good IPv6 players" nowadays DO allow /48s, as they recognize the need for "end site multihoming". And this also contributes to the fact that /48 multihoming is nowadays far more feasible (as long as your upstreams are "sane and well connected") than, let's say, a year ago.

Caveat: this is an EU/US-centric view. I'm not sure about the development in the ASPAC region on this matter, as I didn't follow that closely (partly because it's difficult to follow anything there, as it's network-topologically "quite distant", with few hosts and accessible routers to test from).
The same as IPv4 announce a /48. Have a fallback /32 for folks who do filter it out.
As outlined above, that will artificially impair connectivity performance.
Here you say it yourself: "Advertisement policies". This is routing again, which has not much to do with Address Space.
It does, as long as we don't have a total decoupling between locators and identifiers, alongside with the required tools for traffic engineering with the locators.
One can deaggregate in IPv6 also, just don't expect it to be accepted. Just like a /24 won't be accepted by most folks.
Uhm, sorry, but that's wrong. /24s are widely(!) accepted and only very seldom not accepted. There are many (MANY!) folks running on /24 PI (== no larger covering aggregate) without problems.
-- Kevin
<waits for flame war to erupt> :)
</me donates his asbestos suit and tin foil hat> :)
</me shows off his designer collection of asbestos>
This is the very hard part (political and technical). But it might be years before we hit the actual limits of BGP. We still have to be nice to our kids though, as they will hit it ;)
Really? Where are the limits of BGP? Can you show me any numbers? You'd be the first. I'm not aware of any protocol inherent scaling brickwalls like with other protocols where certain timing constraints place limits (or thinking of L1 systems, you remember CSMA/CD?). That doesn't mean that there are none - I'm not a scientist or mathematician - I'd be happy to have any numbers to back the FUD about the end of the world being near. :-)
Most if not all of these have problems for some uses, eg Traffic Engineering is supposedly not possible anymore the way it is currently being done in BGP. Mindset changes will be required.
For all, or only for those who are not deemed worthy of entering the market on the same terms and conditions as the existing players? ;)

Best regards,
Daniel

--
CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
On Wed, Dec 21, 2005 at 11:36:00PM +0100, Daniel Roesen wrote:
Really? Where are the limits of BGP? Can you show me any numbers? You'd be the first. I'm not aware of any protocol inherent scaling brickwalls like with other protocols where certain timing constraints place limits (or thinking of L1 systems, you remember CSMA/CD?).
Last time I checked, Ethernet is still CSMA/CD.
On Wed, Dec 21, 2005 at 04:43:58PM -0600, sysadmin@citynetwireless.net wrote:
Really? Where are the limits of BGP? Can you show me any numbers? You'd be the first. I'm not aware of any protocol inherent scaling brickwalls like with other protocols where certain timing constraints place limits (or thinking of L1 systems, you remember CSMA/CD?).
Last time I checked, Ethernet is still CSMA/CD.
Correct. And there you have minimum frame spacing requirements (IFG) and (e.g. with 10Base2 networks) minimum distances between stations attached to the bus to allow CSMA/CD to work correctly. I'm not aware of any BGP timing stuff that makes it inherently explode at a certain number of routes.

So what we ACTUALLY talk about is the fear that control planes (and associated forwarding-plane adjacency lookup systems) won't be able to keep up with the granularity growth of routing information. Which is a different thing than "we have a BGP scaling problem".

Best regards,
Daniel

--
CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
Thus spake <sysadmin@citynetwireless.net>
On Wed, Dec 21, 2005 at 11:36:00PM +0100, Daniel Roesen wrote:
Really? Where are the limits of BGP? Can you show me any numbers? You'd be the first. I'm not aware of any protocol inherent scaling brickwalls like with other protocols where certain timing constraints place limits (or thinking of L1 systems, you remember CSMA/CD?).
Last time I checked, Ethernet is still CSMA/CD.
Only if you're running half-duplex, which is generally an error condition in modern networks. (Unless we're talking WiFi, but I think that mechanism's actually called CSMA/CA.)

S

Stephen Sprunk    "God does not play dice."  --Albert Einstein
CCIE #3723        "God is an inveterate gambler, and He throws the
K5SSS              dice at every possible opportunity." --Stephen Hawking
On Dec 21, 2005, at 1:34 PM, Jeroen Massar wrote:
Kevin Day wrote: [..]
I agree with your point that currently your IPv4-solution can't be applied to IPv6 but..(see the helpful and nice thingy part at the end ;)
Thanks. I also just want to add that I'm not expecting to be able to do every single thing with IPv6 that we could with IPv4; it's a different protocol on more levels than just addressing. However, the original question was about what experiences people had with IPv6, and I think it's an appropriate point. Every deviation between how we have to do business in IPv6 and how we're doing it now in IPv4 adds dramatically to the pain of transitioning. The disincentives for a small-to-mid-sized network moving to IPv6 are pretty big right now, which means very few of us are going to do it willingly. (Remember, the little guys are who are going to be buying IPv6 transit from the big guys, so it's in everyone's best interest to ease adoption.)
IPv4: We split our /20 into a /23 for each city, and announce only that.
This means you are taking up more routing slots. -> note the routing.
The 'problem' here is that we call these allocations "IPv6 Address Space", not "xxx Routing Space". RIR's provide Address Space and do not guarantee routability in any way. Hence the subject.
This is actually a more fundamental problem in the current IP ideology than anything else.
Yes, but the routing for these networks is different. If AS numbers were infinite I'd run each one with a different ASN. The paths, policies and providers to each network are different, so I believe they deserve different routing.

In IPv4 land, the line between IP allocation and IP routing was pretty clear. The RIR didn't care how you announced it; you just had to ensure that the rest of the world heard it. A pretty fair, nearly-world-wide compromise was that you could break your network up into as many pieces as you wanted, just don't expect anything smaller than a /24 to be heard. I think this is a pretty fair balance. With our /20, we could break our network up into 16 pieces if we really had to, but we were content with 4 pieces.

In IPv6 land, the RIRs are dictating routing policy as well as allocation policy. With the current /44 proposal (acknowledging that Kevin Loch says things might be changing), which would be enough for all but the largest enterprise customers, I still wouldn't be allowed to have different routing between subnets at all. Following the policy, I'm not allowed to deaggregate it at all. Even if I did, there are going to be providers who will filter everything in the space /44's are announced from on the /44 boundary, because "that's what ARIN says you're supposed to do". (Fair enough.)
IPv6: We have to announce the entire /44, and aren't allowed to deaggregate it.
Well, that is actually not true, you can also, in IPv6, simply announce /48's from a /32 or /44 or whatever. The same way you do that in IPv4.
No, the proposed policy says that if you get a /44 you must "advertise that connectivity through it's single aggregated address assignment." Get a /48 from your provider? Your provider can only give /48s to organizations "through its single aggregated address allocation". Yes, if you qualify for a /32 you can break it up into /48's, but most small-to-medium-sized hosting companies and other networks can't qualify for a /32.
The issue with announcing, say, a /48 is that networks which filter will filter it out and will only reach you over the aggregate. Of course that is their choice, just like yours is to try to announce the /48's in IPv6, or the /23's in IPv4. An IPv4 /28 doesn't reach far either.
True, but the smallest atomic block in IPv4 you can announce is a /24, and allocations are given in multiples of /24. In IPv6, allocations or assignments are at /48 and /44, and you must advertise the entire thing, and ONLY the entire thing.
The problem here, though, is that your /48 will propagate through ISP's with less strict filters and they will act as transits for your /48. My experience is that the ones not filtering also tend to have a not-so-good setup, so your connectivity tends to go all around the world.
Ok, let me explain a scenario that would directly apply to us, which will probably happen to anyone with multiple networks scattered around the globe. Let's also pretend I qualified for a /44, and received it.

I have two corporate offices, one in New York and one in Los Angeles, each requiring a /48.

In New York, I buy transit from ISP A and ISP B. In Los Angeles, I buy transit from ISP C and ISP D. ISP A and B don't sell service in LA.

I can announce New York and LA's /48 to each provider there, but there are going to be networks out there who filter out /48's, so I need to advertise the /44 somewhere. Where do I advertise it?

If I advertise the /44 in both and a single /48 only in NY, C and D are going to see our NY advertisement through whoever they're buying transit from. If their providers filter out my /48's, C and D won't see my /48 at all, so they'll send all my traffic for my NY office to LA, which I won't be able to route anywhere.

If there isn't a way for end-sites to receive multiple allocations, or split their allocation up and expect 99.99% of the world to listen to their deaggregations, they can't have multiple networks. If that's the case, I think it'll encourage people to lie and try to justify multiple /44's, even though they really only need one.

A company has six disconnected branch offices within the US. Which situation is optimal?

1) They somehow acquire 6 /44's, and announce 6 /44's.

2) They lie and get a /32, and announce only the 6 /48's that they really wanted, leaving the 65530 other /48's wasted.

3) They get a single /44 like they probably only really need, and deaggregate it to get 6 /48's.

All three add 6 entries to your table. #1 uses /44's where a /48 would do. #2 wastes a good chunk of space that will never be used. #3 wastes the least space, and doesn't require "creativity" in getting your allocation, but is against the current policy.
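[The failure mode in this scenario is plain longest-prefix match. A minimal sketch of what a remote router decides; documentation prefixes, not the real offices:]

    import ipaddress

    def best_route(dst, table):
        """Longest-prefix match: the most specific covering route wins."""
        dst = ipaddress.ip_address(dst)
        return max((n for n in table if dst in n), key=lambda n: n.prefixlen)

    v44 = ipaddress.ip_network("2001:db8::/44")     # advertised in NY *and* LA
    ny48 = ipaddress.ip_network("2001:db8:1::/48")  # advertised in NY only

    dst = "2001:db8:1::10"  # a host in the NY office
    print(best_route(dst, [v44, ny48]))  # 2001:db8:1::/48 -> lands in NY, good
    print(best_route(dst, [v44]))        # 2001:db8::/44   -> may land in LA,
                                         # where nothing routes onward to NY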
The allocation and advertisement policies work great for small end- sites (they get many subnets, and more addresses than they'll ever need), and great for large providers (they have no problem justifying the need for a /32, and are able to break things up if needed). If you fit somewhere between those two though, I don't think the policies really address our needs.
Here you say it your self: "Advertisement policies" this is routing again, which has not much to do with Address Space.
Well, I believe it does if ARIN is going to mandate deaggregation policy with their allocation policy. If they say that /44's aren't deaggregatable, providers will insist you don't deaggregate your /44. Big providers get the advantage that they can chop up their /32 however they want.
Small to medium sized hosting providers, content networks and other non-bandwidth-supplying companies generally aren't providing transit to others, so they can't get a /32.
You don't have customers?
If you have say 201 customers, who each have a 1U box in a rack you provide connectivity to, then give each one of them a /48, they are endsites and maybe that endsite sets up VPN's.
As for the 201, that is what you might want to have once. Tada current policy fulfilled. NEXT! :)
If the average hosting company did that, I think we'd run out of IPv6 before we run out of IPv4. In current IPv4 practice, it's common for hosting companies to dump racks and racks of servers into a /22 or /24-ish sized block. I could see assigning a /48 to "colo customers" and giving each one a /64 if they needed it, but a /48 each is overkill for a 1U box. Also take into account fully managed hosting providers, where many customers share one box, or where the servers and management tasks are owned by the hosting company, not the customer. I can't in good conscience claim that I need a /48 for each CUSTOMER on a shared hosting platform, which means I can't really qualify for a /32 if that's all I do.

I understand what you're saying: we could claim that we're giving each customer a /48 just to get the assignment, but I don't really want to game the system. I'd rather have applications like this recognized by policy, instead of cloak & dagger stuff when it comes to documenting your network. :)
One can deaggregate in IPv6 also, just don't expect it to be accepted. Just like a /24 won't be accepted by most folks. Also note that policy _suggests_ that one announces it as a single chunk.
We've announced single /24's without a covering larger prefix, and have had no problems with connectivity.
*** Multihoming / Route announcement trick that doesn't fill up slots
This is the very hard part (political and technical). But it might be years before we hit the actual limits of BGP. We still have to be nice to our kids though, as they will hit it ;)
Currently I am aware of the following possible 'solutions':
 - SHIM6
 - SCTP
 - HIP
 - NOID
 - WIMP

See: draft-huston-multi6-proposals and draft-savola-multi6-nowwhat.
Most if not all of these have problems for some uses; e.g. Traffic Engineering is supposedly not possible any more the way it is currently being done in BGP. Mindset changes will be required.
I don't necessarily mind mindset changes. My concern is that people are actively doing things in IPv4 that aren't possible now in IPv6, with no workaround for a lot of these issues. However, I'm much more concerned that "big" providers (anyone who can qualify for a /32) need to make nearly zero changes to their way of doing things, while Mom&Pop's regional ISP or Chuck's Web Hosting and Bait Shop are going to be losing out big when it comes to IPv6. Which is preferable: giving /32's to people who don't need anywhere near that much space so that they can traffic-engineer things the way they need to, or being more flexible when it comes to deaggregation when strictly necessary? Shim6 and others are interesting, and solve multihoming issues for some, but they don't address traffic engineering or the need to do more with smaller allocations.
Kevin Day wrote:
On Dec 21, 2005, at 1:34 PM, Jeroen Massar wrote:
Kevin Day wrote:
[..]
The disincentives for a small-mid sized network to moving to IPv6 are pretty big right now, which means very few of us are going to do it willingly. (Remember, the little guys are who are going to be buying IPv6 transit from the big guys, so it's in everyone's best interest to ease adoption)
Well, that part is only of interest to the transit companies ;) The part that is important is that usually the 'smaller' parties bring content, and content (web, gaming services, and a lot more) is something that IPv6 still needs quite a bit of.

[..]
Well, that is actually not true, you can also, in IPv6, simply announce /48's from a /32 or /44 or whatever. The same way you do that in IPv4.
No, the proposed policy says that if you get a /44 you must "advertise that connectivity through it's single aggregated address assignment." Get a /48 from your provider? Your provider can only give /48s to organizations "through its single aggregated address allocation".
Please read the answer from Gert Doering to a question I posted: http://www.ripe.net/ripe/maillists/archives/ipv6-wg/2005/msg00488.html This is also why I changed the subject to "Addressing versus Routing". RIR's provide *Address Space*; how the routing happens is up to you.
The issue with announcing, say, a /48 is that networks which filter will filter it out and will only reach you over the aggregate. Of course that is their choice, just like yours is to try to announce the /48's in IPv6, or the /23's for IPv4. An IPv4 /28 doesn't reach far either.
True, but the smallest atomic block in IPv4 you can announce is a /24, and allocations are given in multiples of /24.
At least RIPE also gives out /26's on request, e.g. if you need globally unique address space but don't want to route it anyway.
In IPv6, allocations or assignments are at /48 and /44, and you must advertise the entire thing, and ONLY the entire thing.
Why? There really isn't a single RIR that is going to complain to you. It is also *impossible* for them to stop it. See also above. The only parties that can stop it are your peers, as they can filter it. This is the same as trying to announce an IPv4 /28; if you can persuade $world to accept it, you are done. But most folks filter on /24.
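To make the filtering point concrete, here is a minimal sketch in Python (purely illustrative; in practice this lives in router prefix-list policy, not a script) of the length-based filters described above: accept nothing longer than a /24 in IPv4 and nothing longer than a /48 in IPv6. The limits are the conventions named in this thread, not any RIR rule.

    import ipaddress

    # Conventional maximum prefix lengths from the discussion above.
    MAX_LEN = {4: 24, 6: 48}

    def accepted(prefix):
        net = ipaddress.ip_network(prefix)
        return net.prefixlen <= MAX_LEN[net.version]

    for p in ("192.0.2.0/24", "192.0.2.0/28",
              "2001:db8::/32", "2001:db8:1234::/48",
              "2001:db8:1234:5678::/64"):
        print(p, "accepted" if accepted(p) else "filtered")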
The problem here is though that your /48 will propagate through ISP's with less strict filters and they will act as transits for your /48. My experience is that the ones not filtering also tend to have a not-so-good setup, so your connectivity tends to go all around the world.
Ok, let me explain a scenario that would directly apply to us, and which will probably happen to anyone with multiple networks scattered around the globe. Let's also pretend I qualified for a /44, and received it.
I have two corporate offices, one in New York and one in Los Angeles, each requiring a /48.
(Which is what the current policy calls end-sites btw ;)
In New York, I buy transit from ISP A and ISP B.
In Los Angeles, I buy transit from ISP C and ISP D. ISP A and B don't sell service in LA.
I can announce New York and LA's /48 to each provider there, but there are going to be networks out there who filter out /48's, so I need to advertise the /44 somewhere. Where do I advertise it?
If I advertise the /44 in both and a single /48 only in NY, C and D are going to see our NY advertisement through whoever they're buying transit from. If their providers filter out my /48's, C and D won't see my /48 at all, so they'll send all my traffic for my NY office to LA, which I won't be able to route anywhere.
Correct. One of the few solutions is to set up connectivity (a VPN or such) of your own between them. This is a routing issue. You can also try to persuade $world to accept it. To demonstrate how far the filtering is being done, check: http://www.sixxs.net/tools/grh/lg/?show=endsite&find=::/0 This shows /48's, out of a larger /32 (or more), that are being announced by a different ASN than the ASN that announces the DFP. (/48's being announced by the same ASN should IMHO get aggregated.) As far as I can see this matches your LA/NY setup described above. The above shows 70 prefixes at the moment, btw. I expect there to be a couple more, but those are then usually using the same ASN to announce the more specific, which is not 'true BGP multihomed' imho.
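For concreteness, a small sketch of carving the hypothetical /44 into the per-site /48's discussed above, using Python's ipaddress module and a made-up documentation prefix:

    import ipaddress

    # The hypothetical /44 from the scenario, drawn from documentation space.
    block = ipaddress.ip_network("2001:db8:fff0::/44")
    sites = list(block.subnets(new_prefix=48))
    print(len(sites), "/48's available")      # 16

    ny, la = sites[0], sites[1]               # one /48 per end-site
    print("New York:   ", ny)                 # 2001:db8:fff0::/48
    print("Los Angeles:", la)                 # 2001:db8:fff1::/48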
If there isn't a way for end-sites to receive multiple allocations, or split their allocation up and expect 99.99% of the world to listen to their deaggregations, they can't have multiple networks. If that's the case, I think it'll encourage people to lie and try to justify multiple /44's, even though they really only need one.
With current filtering policies both a /48 and a /44 will be accepted by the most clueful sites.
A company has six disconnected branch offices within the US. Which situation is optimal:
1) They somehow acquire 6 /44's, and announce 6 /44's.
2) They lie and get a /32, and announce only the 6 /48's that they really wanted, leaving the 65530 other /48's wasted.
6 offices, 150 employees in total, add expected growth for the foreseeable future, say +50 employees, they all need a VPN => 206.
3) They get a single /44 like they probably only really need, and deaggregate it to get 6 /44's.
6 /48's you mean ;)
All three add 6 entries to your table.
Which is where the 'big problem' lies for some people: for the foreseeable future (say the coming 25 years) this will go fine, but after that, will it still go fine? [..]
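The arithmetic behind the three options, as a quick sketch (the /32 and /44 sizes come straight from the discussion; nothing else is assumed):

    # Option 2: a /32 holds 2**(48-32) = 65536 /48's; announcing only 6
    # leaves 65530 unused, the figure quoted above.
    per_32 = 2 ** (48 - 32)
    print(per_32, "total /48's in a /32;", per_32 - 6, "left unused")

    # Option 3: a /44 holds 2**(48-44) = 16 /48's, enough for 6 offices
    # with room to grow.
    print(2 ** (48 - 44), "total /48's in a /44")

    # Either way, announcing 6 prefixes puts 6 entries in the table.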
Small to medium sized hosting providers, content networks and other non-bandwidth-supplying companies generally aren't providing transit to others, so they can't get a /32.
You don't have customers?
If you have, say, 201 customers who each have a 1U box in a rack you provide connectivity to, then give each one of them a /48; they are endsites, and maybe that endsite sets up VPN's.
As for the 201, that is what you might want to have at some point. Tada, current policy fulfilled. NEXT! :)
If the average hosting company did that, I think we'd run out of IPv6 before we run out of IPv4.
Please take a look at http://www.sixxs.net/tools/grh/dfp/all/ and try to spot some places which have "very large networks".
In current IPv4 practice, it's common for hosting companies to dump racks and racks of servers into a /22 or /24-ish sized block. I could see assigning a /48 to "colo customers" and giving each one a /64 if they needed it, but a /48 each is overkill for a 1U box.
Why is that overkill? The person getting the /48 is an endsite, and he can make a VPN tunnel setup for all his employees from that box instead of doing it from the 8mbit office they have.
Also take into account fully managed hosting providers where many customers are sharing one box, or the servers and management tasks are owned by the hosting company not the customer. I can't in good conscience claim that I need a /48 for each CUSTOMER on a shared hosting platform, which means I can't really qualify for a /32 if that's all I do.
That is true indeed. [..]
One can deaggregate in IPv6 also, just don't expect it to be accepted. Just like a /24 won't be accepted by most folks. Also note that policy _suggests_ that one announces it as a single chunk.
We've announced single /24's without a covering larger prefix, and have had no problems with connectivity.
Daniel Roesen wrote:
Uhm, sorry, but that's wrong. /24s are widely(!) accepted and only very seldom not accepted. There are many (MANY!) folks running on /24 PI (== no larger covering aggregate) without problems.
Oops I meant to type "smaller than /24" (thus /25's, /26's etc) :) The point I intended to make: ISP's make the policies on what they accept, not the RIR's. To put it in reverse, ask Daniel (in an off-list message) how much work he had to do to get a certain large prefix to actually be accepted at most sites in the DFP. There are also sites that filtered on larger than /32. This is similar to Bogon Prefix filtering where ISP's don't update the filters.
Kevin Day wrote:
[..]
Shim6 and others are interesting, and solve multihoming issues for some, but they don't address traffic engineering or the need to do more with smaller allocations.
If you see these issues then participate in the SHIM6 working group and explain these issues and suggest how you can solve them. Just like the RIR's, IETF has an open process and anybody can participate. But do bring valid justifiable arguments.
Daniel Roesen wrote:
This is the very hard part. (political and technical) But it might be years before we will hit the actual limits of BGP. We still have to be nice to our kids though as they will hit it ;)
Really? Where are the limits of BGP? Can you show me any numbers? You'd be the first. I'm not aware of any protocol inherent scaling brickwalls like with other protocols where certain timing constraints place limits (or thinking of L1 systems, you remember CSMA/CD?). That doesn't mean that there are none - I'm not a scientist or mathematician - I'd be happy to have any numbers to back the FUD about the end of world being near. :-)
Which part of "it might be years before we will hit" did you ignored? :) But as we had a nice example above (LA/NY office), lets that that. Lets say we have 1.000.000 companies, they all have say 10 offices, thus they all want to announce separate /48's, that brings us to 10.000.000 /48's. Do you have equipment to support that? Current IPv4 BGP contains about 170k prefixes. And remember that it might just be that say 500k routes suddenly all change because they are actually going over the same transit who has a failure somewhere. Indeed that will not happen tomorrow and not even in the coming 25 years but will most likely happen later on. (Funnily I see people complain about the current policy and limits they "have been laid upon" but they never seem to come forward with an actual proposal which satisfies their needs, complaining doesn't help, small hint) Greets, Jeroen
Ok, I promise this is my last reply to the list about this... This has gone too far into theoretical rather than operational content, and probably belongs on an IPv6 policy list, so I'll hush. :) I'll follow up with anyone privately who wants to continue the discussion though.
On Dec 22, 2005, at 4:56 AM, Jeroen Massar wrote:
Kevin Day wrote:
No, the proposed policy says that if you get a /44 you must "advertise that connectivity through it's single aggregated address assignment." Get a /48 from your provider? Your provider can only give /48s to organizations "through its single aggregated address allocation".
Please read the answer from Gert Doering to a question I posted: http://www.ripe.net/ripe/maillists/archives/ipv6-wg/2005/msg00488.html
This is also why I changed the subject to "Addressing versus Routing". RIR's provide *Address Space*; how the routing happens is up to you.
In IPv6, allocations or assignments are at /48 and /44, and you must advertise the entire thing, and ONLY the entire thing.
Why? There really isn't a single RIR that is going to complain to you. It is also *impossible* for them to stop it. See also above.
The only parties that can stop it are your peers, as they can filter it. This is the same as trying to announce an IPv4 /28; if you can persuade $world to accept it, you are done. But most folks filter on /24.
While I really really WISH this is how it worked, my (and others I've asked) interpretation of advertising a "single aggregated address assignment" is that ARIN does intend it to be their policy that you don't deaggregate at all. I'll email ARIN to see if I can get an official clarification.

However, my opinion is that if they say "If you want IP addresses from us, you agree to do (whatever)", you'd better do what they ask. I know they can't technically prevent us from doing something in BGP, but there's a clause in the ARIN agreement that you have to sign before getting any space from them that says: "If ARIN determines that the numbering resources or any other Services are not being used in compliance with this Agreement, the Policies, or for purposes for which they are intended, ARIN may: (i) revoke the numbering resources, (ii) cease providing the Services to Applicant, or (iii) terminate this Agreement." So, in my book, if the RIR says I gotta do X, no matter how unfair X is, I'll do it rather than risk having them revoke my assignment. We don't want to end up in a situation where everyone is breaking the policy.

This is completely different than IPv4 though. Nothing in the IPv4 policies says anything about what announcements you can make. The "/24 is the smallest block that you can rely on the world listening to" policy isn't mandated by assignment policy, and it matches the smallest block that you could get (at least at one point; I know you can get smaller now, but it's well known that nobody will hear it). IPv6 is new in that it makes what you do in BGP part of the assignment and allocation policy, and that's what concerns me.

If a new /16 were allocated specifically for /44 sized end-user blocks, and the policies stated that /44's can't be deaggregated, you know there are a good number of providers who are going to filter everything smaller than a /44 inside that /16, "because the policy says you can't do that anyway". In IPv4, filtering on a /24 works because everything is assigned in multiples of /24's, and nearly everyone who would need to deaggregate qualifies for multiple /24 sized pieces. In IPv6, unless you're getting a /32, you're being allocated or assigned a block that can't be split up.
In New York, I buy transit from ISP A and ISP B.
In Los Angeles, I buy transit from ISP C and ISP D. ISP A and B don't sell service in LA.
I can announce New York and LA's /48 to each provider there, but there are going to be networks out there who filter out /48's, so I need to advertise the /44 somewhere. Where do I advertise it?
If I advertise the /44 in both and a single /48 only in NY, C and D are going to see our NY advertisement through whoever they're buying transit from. If their providers filter out my /48's, C and D won't see my /48 at all, so they'll send all my traffic for my NY office to LA, which I won't be able to route anywhere.
Correct. One of the few solutions is to set up connectivity (a VPN or such) of your own between them. This is a routing issue. You can also try to persuade $world to accept it.
Yes, but that dramatically increases my expenses (I'm having to haul traffic around myself, paying for transit on it twice) and causes really sub-optimal routing. If someone else is a customer of ISP A in Los Angeles and trying to reach my LA POP, ISP A is going to take their traffic all the way to New York, only for me to VPN it back to LA.
In current IPv4 practice, it's common for hosting companies to dump racks and racks of servers into a /22 or /24-ish sized block. I could see assigning a /48 to "colo customers" and giving each one a /64 if they needed it, but a /48 each is overkill for a 1U box.
Why is that overkill? The person getting the /48 is an endsite, and he can make a VPN tunnel setup for all his employees from that box instead of doing it from the 8mbit office they have.
I realize IPv6 is really big. Vastly hugely mindbogglingly big, and all that. It's so big that we don't need to worry about wasting space, which is really great in theory. But if we start assigning /48's to every 1U box out there, regardless of need, we're going to burn through /32's faster than anyone is predicting. I'm not saying a 1U box can't have a /48, I'm saying it probably doesn't need to have one automatically assigned in a hosting company environment. Assigning every 1U box you have a /48 is a great way to meet the /32 requirements, but I kinda question the necessity. Personally, if I were doing colo or dedicated IPv6 hosting, unless the customer specified a need otherwise, groups of servers would share a /64. Customers could request a /64 of their own, or if they could demonstrate a need, get a /48.
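The size difference being argued about, sketched with Python's ipaddress module (documentation prefixes only; sizes, not policy):

    import ipaddress

    # A /48 per customer gives each one 2**(64-48) = 65536 possible /64
    # subnets; a shared /64 is a single subnet.
    print(2 ** (64 - 48), "/64's inside one /48")

    shared = ipaddress.ip_network("2001:db8:1:1::/64")
    print(shared.num_addresses, "addresses in one shared /64")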
(Funnily I see people complain about the current policy and limits they "have been laid upon", but they never seem to come forward with an actual proposal which satisfies their needs; complaining doesn't help, small hint)
In my defense, I wasn't even aware of the deaggregation terms in the IPv6 policy until after we started looking at IPv6 ourselves in a serious way. I agree that just complaining doesn't help, but I'm not sure I'm the right guy to do anything about it at this stage. :)
On Thu, Dec 22, 2005 at 11:56:07AM +0100, Jeroen Massar wrote:
Correct. One of the few solutions is to set up connectivity (a VPN or such) of your own between them.
Oh excellent idea. Pushing traffic twice through the upstream pipes? I'm sure the upstream ISPs will be very happy about such a model.
This is a routing issue.
No, it's an economic issue.
All three add 6 entries to your table.
Which is where the 'big problem' lies for some people: for the foreseeable future (say the coming 25 years) this will go fine, but after that, will it still go fine?
I will only care about problems which MIGHT come up in 25 years if there is a compelling rationale that we won't be able to cope with it THEN. :-)
Daniel Roesen wrote:
Uhm, sorry, but that's wrong. /24s are widely(!) accepted and only very seldom not accepted. There are many (MANY!) folks running on /24 PI (== no larger covering aggregate) without problems.
Oops I meant to type "smaller than /24" (thus /25's, /26's etc) :)
OK, that's correct then.
The point I intended to make: ISP's make the policies on what they accept, not the RIR's.
Indeed. This is why I'm interested to see how the whole "/48 except proven technically never needed in future" policy idea will work out in reality. Consider me pessimistic. Beancounters will prevent that prolly.
There are also sites that filtered on larger than /32.
Yep, folks who didn't understand CIDR and thus don't understand that less-specifics aren't a threat at all. Very annoying. Even more annoying to see popular filter recommendations STILL promoting such nonsense.
This is similar to Bogon Prefix filtering where ISP's don't update the filters.
It's exactly that. Except one case where I got an answer along the lines of "we think /21 is too big an allocation and have to make a policy decision whether we'll accept such a large prefix". No kidding. Made my day (year). Took some days or weeks (can't remember) until the filters finally got opened.
Shim6 and others are interesting, and solve multihoming issues for some, but they don't address traffic engineering or the need to do more with smaller allocations.
If you see these issues then participate in the SHIM6 working group and explain these issues and suggest how you can solve them. Just like the RIR's, IETF has an open process and anybody can participate. But do bring valid justifiable arguments.
No. SHIM6 is already a narrowed-down solution space. You'd be advocating resurrecting the multi6 WG, which was closed after its failure to agree on any way forward that would cover ALL requirements people brought forward. Whining on the SHIM6 mailing list that SHIM6 is not the solution folks are looking for is certainly not the right place. It would have been multi6. SHIM6 is already the dead-end street, not the junction where you discuss whether you take the road to the left or the right. :-)
Daniel Roesen wrote:
This is the very hard part. (political and technical) But it might be years before we will hit the actual limits of BGP. We still have to be nice to our kids though as they will hit it ;)
Really? Where are the limits of BGP? Can you show me any numbers? You'd be the first. I'm not aware of any protocol inherent scaling brickwalls like with other protocols where certain timing constraints place limits (or thinking of L1 systems, you remember CSMA/CD?). That doesn't mean that there are none - I'm not a scientist or mathematician - I'd be happy to have any numbers to back the FUD about the end of world being near. :-)
Which part of "it might be years before we will hit" did you ignored? :)
None. I just ask what this "it" is. Define it. :-)
But as we had a nice example above (LA/NY office), let's take that. Let's say we have 1.000.000 companies, they all have say 10 offices, thus they all want to announce separate /48's; that brings us to 10.000.000 /48's. Do you have equipment to support that?
No. Do we have this kind of DFZ yet? No. So what's the relevance of that question?
Current IPv4 BGP contains about 170k prefixes. And remember that it might just be that say 500k routes suddenly all change because they are actually going over the same transit who has a failure somewhere.
You are comparing oranges (today's IPv4 DFZ) and apples (a possible future IPv6 DFZ). If we align IPv6 PI policy to current IPv4 policy AND assume that everybody who runs an active ASN in the DFZ suddenly starts announcing an IPv6 PI block in the DFZ, we'd be at 190k instead of 170k routes. I don't see any problem with that. If you do, you'd have to fight IPv4 PI policy as well.
(Funnily I see people complain about the current policy and limits they "have been laid upon", but they never seem to come forward with an actual proposal which satisfies their needs; complaining doesn't help, small hint)
Choose your battles wisely. Or to put it differently: don't begin a war you cannot win. You should first have a good idea of the amount of support that would get. Currently I do not see enough momentum to succeed in the various RIR policy development fora (at least ARIN and RIPE, not too familiar with the other ones). And that will prolly not change until the policy voting process changes from "let's see who can cry loudest and invest most time and money into flooding mailing lists with postings and make appearances at conferences" to "let all relevant people with interest in the RIR region simply vote". That of course means not letting only the LIRs vote about such things, as the LIRs are not the ones who would like to see such a policy succeed. :-) Best regards, Daniel -- CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
However, I'm much more concerned that "big" providers (anyone who can qualify for a /32) need to make nearly zero changes to their way of doing things, but Mom&Pop's regional ISP or Chuck's Web Hosting and Bait Shop are going to be losing out big when it comes to IPv6. Which is preferable: giving /32's to people who don't need anywhere near that much space so that they can traffic engineer things the way they need to, or being more flexible about deaggregation when strictly necessary?
Shim6 and others are interesting, and solve multihoming issues for some, but they don't address traffic engineering or the need to do more with smaller allocations.
This is why I have suggested that we need to open up additional IPv6 addresses for geo-topological addressing. This means that instead of getting one big /32, you would be able to apply for a separate allocation for each city, in whatever size is appropriate there. Then you can multihome inside that city, and your announcements won't clutter up the global routing table because they will be replaced by a single big city aggregate that covers all the small and medium sized companies in the city. The providers that offer such multihoming inside any given city will need to interconnect inside that city, either at an IX or privately. The RIRs have not made any decisions yet about offering geotop addresses, but 7/8 of the IPv6 address space has been reserved for different types of allocation schemes. I believe that geotop addresses, or something similar, will eventually be offered by ARIN and the other RIRs. In order to get to that point, small and mid-sized companies need to make their voices heard. --Michael Dillon
On Wed, 21 Dec 2005, Kevin Day wrote:
9) Once we started publishing AAAA records for a few sites, we started getting complaints from some users that they couldn't reach the sites. Some investigating showed that they had inadvertently enabled IPv6 on their desktop without having any IPv6 connectivity.
I would hazard an educated guess that the majority of these users had actually enabled 6to4 via some OS-provided convenience, which *would* work if it weren't for (a) IPv4 NAT already widely used in "home router" appliances, resulting in bad 2002:0a00::/24 or 2002:c0a8::/32 addresses, and (b) many IPv6-capable providers not providing a 2002:: route, or at least not providing a *working* one, to the 6to4 islands. Fixing (b) would much alleviate the following when the 6to4 address in question would otherwise be reachable:
11) Almost without fail, the path an IPv6 user takes to reach us (and vice-versa) is less optimal than the IPv4 route.
(If a user is implementing 6to4, it usually means that the v4 route *is* better, so 6to4 becomes a routing policy suggestion as well.) -- -- Todd Vierling <tv@duh.org> <tv@pobox.com> <todd@vierling.name>
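The 6to4 failure mode described above is easy to reproduce on paper: the host derives a 2002:VVWW:XXYY::/48 prefix from its own IPv4 address, so a NATted RFC1918 address yields an unroutable prefix. A small sketch (Python, illustrative only; 192.0.2.1 is documentation space standing in for a public address):

    import ipaddress

    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def sixto4_prefix(v4):
        # 6to4 embeds the 32-bit IPv4 address directly after 2002::/16.
        a = ipaddress.IPv4Address(v4)
        return ipaddress.IPv6Network(((0x2002 << 112) | (int(a) << 80), 48))

    for ip in ("10.0.0.1", "192.168.1.1", "192.0.2.1"):
        bad = any(ipaddress.IPv4Address(ip) in n for n in RFC1918)
        print(ip, "->", sixto4_prefix(ip),
              "(unroutable: NATted RFC1918)" if bad else "")

The first two examples land inside exactly the bad 2002:0a00::/24 and 2002:c0a8::/32 ranges mentioned above.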
On Wed, Dec 21, 2005 at 07:50:14AM -0600, Kevin Day wrote:
1) IPv6 on the internet overall seems a bit unreliable at the moment. Entire /32's disappear and reappear, gone for days at a time.
That's certainly true for people not doing it "in production". But that ain't a problem as they aren't doing it... in production. :-)
The most common path over IPv6 from the US to Europe is US->JP->US->EU.
Sorry, but that's not true anymore on grand scale. That might still be valid for some exceptionally bad IPv6 providers who still "do it 6bone style". Fortunately, those don't play any too significant role anymore in global IPv6 routing (which was hard work to achieve).
I realize this may be specific to our connection itself, but browsing looking glasses seems to back up that it's not just us.
That'd surprise me. Could you give examples?
2) Lots of providers who are running IPv6 aren't taking it as seriously as IPv4. Manual prefix filters, NOC staff that doesn't even know they're running IPv6, etc.
ACK, but "manual prefix filters" is often rooted in "there are no good tools to do the job". I've seen folks trying to beat RtConfig for months into doing sensible things. :-)
3) Some key pieces of internet infrastructure are IPv6 oblivious. ARIN's route registry doesn't support the "route6" objects, for example.
Don't get me started about IRRs. :-(
5) Our DNS software (djbdns) supports IPv6, kind of. With patches you can enter AAAA records, but only by entering 32-digit hexadecimal numbers with no colons or abbreviations. We were never able to get it to respond to queries over IPv6, so all of our DNS is still IPv4.
Then stop using incomplete and cumbersome software from authors with strong religious beliefs and a disconnection from any technological advances of the last $many years. :-) "Use the right tools for the job".
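For what it's worth, the 32-digit form described above is just the fully expanded address with the colons stripped; a quick sketch with Python's ipaddress (the exact syntax the djbdns patches expect may differ):

    import ipaddress

    addr = ipaddress.IPv6Address("2001:db8::42")
    print(addr.exploded)                   # 2001:0db8:0000:0000:0000:0000:0000:0042
    print(addr.exploded.replace(":", ""))  # the 32 hex digits, no colons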
10) Smaller than normal MTUs seem much more common on IPv6, and it is exposing PMTUD breakage on a lot of people's networks.
It is, but we have tracked down most of them... at least the ones we noticed. I haven't experienced PMTUD problems in a long time... the last one was prolly over half a year ago. And I use IPv6 on all my servers, desktops and laptop. :-)
11) Almost without fail, the path an IPv6 user takes to reach us (and vice-versa) is less optimal than the IPv4 route. Users are being penalized for turning on IPv6, since they have no way to fall back to IPv4 on a site-by-site basis when using a web browser.
That is indeed a problem. How big the penalty is depends heavily on your choice of upstream provider(s). The isle of sanity gets bigger and bigger, and networks with bad IPv6 connectivity become rarer (relatively). Thank you for sharing your experience! BTW, what timeframe are we talking about? Things have changed massively over the last 12-18 months. Best regards, Daniel -- CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
On Dec 21, 2005, at 4:18 PM, Daniel Roesen wrote:
1) IPv6 on the internet overall seems a bit unreliable at the moment. Entire /32's disappear and reappear, gone for days at a time.
That's certainly true for people not doing it "in production". But that ain't a problem as they aren't doing it... in production. :-)
We had a case where a somewhat decent sized provider that was actually using IPv6 accidentally stopped announcing their space without realizing it. After a couple of days of waiting for them to fix it, I emailed their NOC and got the impression that I was the first to notice they had killed IPv6.
The most common path over IPv6 from the US to Europe is US->JP->US->EU.
Sorry, but that's not true anymore on grand scale. That might still be valid for some exceptionally bad IPv6 providers who still "do it 6bone style". Fortunately, those don't play any too significant role anymore in global IPv6 routing (which was hard work to achieve).
I admit, my experiences are with only a tiny number of users, so I may have just had bad luck. But, I had trouble finding any of our IPv6 guinea pigs that didn't take a perceptibly slower route to us over 6 than they do for 4. (50-100ms)
I realize this may be specific to our connection itself, but browsing looking glasses seems to back up that it's not just us.
That'd surprise me. Could you give examples?
Right now, I can't remember, this was a couple of months ago now... But next time I encounter one, I'll drop you an email.
5) Our DNS software (djbdns) supports IPv6, kind of. With patches you can enter AAAA records, but only by entering 32-digit hexadecimal numbers with no colons or abbreviations. We were never able to get it to respond to queries over IPv6, so all of our DNS is still IPv4.
Then stop using incomplete and cumbersome software from authors with strong religious beliefs and a disconnection from any technological advances of the last $many years. :-)
"Use the right tools for the job".
I don't doubt that there are better tools for IPv6 DNS, but we were already using djbdns for a couple of reasons and I didn't want to undergo a switch to something else JUST to add AAAA records when what we had was working well enough for us. I wasn't trying to document how to do IPv6 right, just what problems we hit when we tried to switch to IPv6 on a network built with no thought given to IPv6 beforehand.
10) Smaller than normal MTUs seem much more common on IPv6, and it is exposing PMTUD breakage on a lot of people's networks.
It is, but we have tracked down most of them... at least the ones we noticed. I haven't experienced PMTUD problems in a long time... the last one was prolly over half a year ago. And I use IPv6 on all my servers, desktops and laptop. :-)
Our test network was running through a GRE tunnel inside an IPIP tunnel, so our MTU was abnormally tiny. I'm guessing that hit some people with PMTUD problems that didn't normally see them because they had a short MTU to start with.
11) Almost without fail, the path an IPv6 user takes to reach us (and vice-versa) is less optimal than the IPv4 route. Users are being penalized for turning on IPv6, since they have no way to fall back to IPv4 on a site-by-site basis when using a web browser.
That is indeed a problem. How big the penalty is depends heavily on your choice of upstream provider(s). The isle of sanity gets bigger and bigger, and networks with bad IPv6 connectivity become rarer (relatively).
Out of all of our transit providers, only one could sell us IPv6 transit (not faulting those who don't yet). Out of 100+ peering connections, only 2 wanted to do IPv6 peering. So, I don't have many different angles to view things from. That said though, the provider we are using for IPv6 seems to be doing it right; it just doesn't feel like IPv6 has the same "mesh" yet, where who is connected to who doesn't really matter that much.
Thank you for sharing your experience!
BTW, what timeframe are we talking about? Things have changed massively over the last 12-18 months.
We threw in the towel (pulled AAAA records) about 6 weeks ago, and started IPv6 experimentation about 16 weeks ago. I'll soon be writing up a paper going into a lot more detail about what went right, what went wrong, and why the decision was made to revert back to IPv4, if anyone is interested. -- kevin
On Wed, Dec 21, 2005 at 07:59:15PM -0600, Kevin Day wrote:
I admit, my experiences are with only a tiny number of users, so I may have just had bad luck. But, I had trouble finding any of our IPv6 guinea pigs that didn't take a perceptibly slower route to us over 6 than they do for 4. (50-100ms)
Well, yes, most v6 paths are still worse than the v4 paths. But there are also counterexamples where the v6 paths are actually better than the v4 paths. Happens when v6 is operated by techs and not under the belly of peering politicians (yet). :-)
Our test network was running through a GRE tunnel inside an IPIP tunnel, so our MTU was abnormally tiny. I'm guessing that hit some people with PMTUD problems that didn't normally see them because they had a short MTU to start with.
Yep. MTU <1480 raises the chance to see PMTUD problems. E.g. using GRE (1476) instead of IPIP (1480). Been there... :-)
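The arithmetic behind those numbers, and behind the nested GRE-inside-IPIP case above, as a quick sketch (standard header sizes: 20 bytes for an IPv4 outer header, 4 bytes for GRE):

    ETHERNET_MTU = 1500
    IPIP_OVERHEAD = 20        # one extra IPv4 header
    GRE_OVERHEAD = 20 + 4     # delivery IPv4 header + GRE header

    print(ETHERNET_MTU - IPIP_OVERHEAD)                 # 1480, IPIP
    print(ETHERNET_MTU - GRE_OVERHEAD)                  # 1476, GRE
    print(ETHERNET_MTU - IPIP_OVERHEAD - GRE_OVERHEAD)  # 1456, GRE inside IPIP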
Out of all of our transit providers, only one could sell us IPv6 transit (not faulting those who don't yet). Out of 100+ peering connections, only 2 wanted to do IPv6 peering. So, I don't have many different angles to view things from.
Fair enough. But then it might make sense to make that more prominent. Many folks will casually read your statement and think that it's a generic one, instead of the view of a singlehomed customer with two peerings. :-) On grand scale, US=>AP=>US=>EU is certainly _not_ the standard anymore.
I'll soon be writing up a paper going into a lot more detail about what went right, what went wrong, and why the decision was made to revert back to IPv4, if anyone is interested.
Certainly! Please post this to the ipv6-ops[1] list too. Best regards, Daniel [1] http://lists.cluenet.de/mailman/listinfo/ipv6-ops -- CLUE-RIPE -- Jabber: dr@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
Subject: Awful quiet? Date: Wed, Dec 21, 2005 at 12:09:23AM -0800 Quoting Jim Popovitch (jimpop@yahoo.com):
I miss the endless debates. Is *everyone* Christmas shopping?
Here's a thought to ponder....
With the thousands of datacenters that exist with IPv4 cores, what will it take to get them to move all of their infrastructure and customers to IPv6? Can it even be done or will they just run IPv6 to the core and proxy the rest?
A datacenter typically has L2 VLANs everywhere, connected to one or more routing devices for v4 connectivity, so all one needs to do is get one or two routers with all VLANs trunked in on ethernets and give each L2 broadcast domain a /64. Run RA to taste. I've done this on what amounts to a "very large data center", a LAN gamer party with over 5000 participants. Of course, not many were using v6 (We really need an FPS game with v6 transport to drive this forward!), but it was available to everyone, and it did not harm v4 traffic. -- Måns Nilsson Systems Specialist +46 70 681 7204 KTHNOC MN1334-RIPE INSIDE, I have the same personality disorder as LUCY RICARDO!!
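A sketch of the per-VLAN numbering described above, assuming one /48 for the site and the VLAN ID reused as the 16-bit subnet ID (a common convention, assumed here, not a requirement; documentation prefix):

    import ipaddress

    site = ipaddress.ip_network("2001:db8:100::/48")
    subnets = list(site.subnets(new_prefix=64))   # 65536 possible /64's
    for vlan in (10, 20, 300):
        print("VLAN", vlan, "->", subnets[vlan])  # e.g. VLAN 10 -> 2001:db8:100:a::/64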
participants (17)
- bmanning@vacation.karoshi.com
- Daniel Roesen
- James
- Jeroen Massar
- Jim Popovitch
- JORDI PALET MARTINEZ
- Kevin Day
- Kevin Loch
- Mans Nilsson
- Marshall Eubanks
- Michael.Dillon@btradianz.com
- Per Heldal
- Peter Dambier
- Robert E.Seastrom
- Stephen Sprunk
- sysadmin@citynetwireless.net
- Todd Vierling