On 10/8/15, 6:45 PM, "NANOG on behalf of Mike" <nanog-bounces@nanog.org on behalf of mike-nanog@tiedyenetworks.com> wrote:
On 10/08/2015 02:41 PM, Mark Andrews wrote:
Plus one to that. We are such a provider, and IPv6 is on my list of things to implement, but the barriers are still plenty high. Firstly, I do have an IPv6 assignment and BGP (v4) and an ASN, but until I can get IPv6 transit,
There are lots of transit providers that provide IPv6. It really is time to name and shame transit providers that don't provide IPv6.
NO, THERE IS NOT. We operate in rural and underserved areas and WE DO NOT HAVE realistic choices. Can you see me from your ivory tower?
I looked up tiedyenetworks.com, and I think he's 100 miles from Sacramento. I hope some sales person from a transit provider is giving him a call right now, but it's entirely possible that there are no big providers in his neighborhood. Sorry, Mike, wish I could help you there. However, you can still mock your upstreams here. Then we can offer them help to support IPv6.
there is not much point in my putting a lot of effort into enabling IPv6 for my subscribers. Yes, I have an HE tunnel and yes it's working, but it's not the same as running native v6 with my own address space. Second, on the group of servers that have v6 through the HE tunnel, I still run into problems all the time where some operations over v6 simply fail inexplicably, requiring me to turn off v6 on that host so whatever it is I'm doing can proceed over v4. Stuff like OS updates, for example.

Then complain to the OS vendor. It is most probably someone breaking PMTU discovery by filtering PTB. Going native will hide these problems until the MTU between the DC and the rest of the net increases. You could also just lower the advertised MTU internally to match the tunnel MTU, which would let you simulate better what a native experience would be.

Not my job. v4 works, v6 does not, end of story.
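Side note on the MTU angle: if you want to see quickly whether PTB filtering or the tunnel MTU is what's biting those servers, probing the path with don't-fragment pings at the relevant sizes usually tells the story (an HE tunnel is typically 1480 bytes against 1500 for native Ethernet). Here is a rough sketch of that probe; it assumes a Linux box with iputils ping, and the target hostname is just a placeholder, so adjust to taste.

#!/usr/bin/env python3
# Rough IPv6 path-MTU probe. Assumes Linux iputils ping; the target host is a placeholder.
import subprocess

HOST = "example.net"      # placeholder; point this at a dual-stacked host you care about
IPV6_OVERHEAD = 40 + 8    # IPv6 header + ICMPv6 echo header

def fits(mtu: int) -> bool:
    """Send three don't-fragment pings sized to fill `mtu`; True if any reply comes back."""
    payload = mtu - IPV6_OVERHEAD
    cmd = ["ping", "-6", "-M", "do", "-c", "3", "-W", "2", "-s", str(payload), HOST]
    return subprocess.run(cmd, stdout=subprocess.DEVNULL).returncode == 0

for mtu in (1280, 1480, 1500):
    print(f"{mtu}-byte packets to {HOST}: {'pass' if fits(mtu) else 'FAIL'}")

If the smaller sizes get replies but the 1500-byte probes simply vanish, with no packet-too-big error coming back, that is the classic filtered-PTB signature being described above.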
It sounds like you do have some concern about the transition, and you know there's a bug, at least with OS downloads. Please do report those issues you know about. Usually, Happy Eyeballs masks problems in dual stack, whether that's good or bad. If we can get your upstream(s) to support IPv6, then maybe we can leverage them to help troubleshoot MTU problems, so you don't have to spend a lot of time on them. Or maybe they go away when you're no longer tunnelling.
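For the curious, the masking effect is easy to see in a bare-bones sketch of the Happy Eyeballs idea (RFC 6555): give IPv6 a short head start, race IPv4, and take whichever connects first. This is illustrative only, nowhere near a full implementation, and the hostname, port, and timings are arbitrary placeholders.

#!/usr/bin/env python3
# Minimal Happy Eyeballs-style race (illustrative sketch, not a full RFC 6555/8305 implementation).
import socket
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def connect(family: int, host: str, port: int, timeout: float = 5.0) -> str:
    """Try one TCP connection over the given address family; return a label on success."""
    info = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0]
    with socket.socket(info[0], info[1], info[2]) as s:
        s.settimeout(timeout)
        s.connect(info[4])
    return "IPv6" if family == socket.AF_INET6 else "IPv4"

def happy_eyeballs(host: str, port: int = 443, head_start: float = 0.3) -> str:
    """Start the IPv6 attempt first; if it has not already won after `head_start`, race IPv4 too."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        attempts = [pool.submit(connect, socket.AF_INET6, host, port)]
        time.sleep(head_start)
        if not (attempts[0].done() and attempts[0].exception() is None):
            attempts.append(pool.submit(connect, socket.AF_INET, host, port))
        for attempt in as_completed(attempts):
            if attempt.exception() is None:
                return attempt.result()   # first family to connect wins
    raise OSError(f"both address families failed for {host}")

if __name__ == "__main__":
    print("connected over", happy_eyeballs("example.com"))   # placeholder hostname

Users only ever see the winner, so a broken v6 path tends to show up only in logs, or when something like an OS updater insists on sticking with one family.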
I can't remember the last time I saw a site stall due to reaching it over IPv6; it is that long ago.
It happens every day for me, which only amplifies my perception that v6 IS NOT READY FOR PRIME TIME.
Would you at least keep a list of places you have these problems, even if you never follow up on it? Again, I'm wondering if tunnelling is the problem, and once you have native dual-stack, you could refer to the list and see if problems just dry up.
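If it helps, keeping that list can be as simple as a cron job built on something like the sketch below; the hostnames and port are placeholders, and a bare TCP connect per address family is all it tests, so treat it as a starting point rather than a diagnostic tool.

#!/usr/bin/env python3
# Record which hosts connect over IPv4 but not IPv6 (placeholder hostnames; TCP connect test only).
import socket
from datetime import datetime

HOSTS = ["example.com", "example.net"]   # placeholders; replace with the sites that give you trouble
PORT = 443

def reachable(host: str, family: int) -> bool:
    """True if a TCP connection over the given address family succeeds within five seconds."""
    try:
        info = socket.getaddrinfo(host, PORT, family, socket.SOCK_STREAM)[0]
        with socket.socket(info[0], info[1], info[2]) as s:
            s.settimeout(5.0)
            s.connect(info[4])
        return True
    except OSError:
        return False

stamp = datetime.now().isoformat(timespec="seconds")
for host in HOSTS:
    if reachable(host, socket.AF_INET) and not reachable(host, socket.AF_INET6):
        print(f"{stamp} {host}: OK over v4, failing over v6")

Run it on a schedule, keep the output, and compare before and after you get native transit.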
Damn maddening. Can't imagine the screaming I'll hear if a home user ever ran into similar, so I am quite gun-shy about the prospect. Secondly, the dodgy nature of the CPE connected to our network and the terminally buggy firmware they all run is sure to be a never-ending source of stupidity.

CPE devices are buggy for IPv4 as well. Bugs in CPE devices are only found and fixed if the code paths are exercised.
Not my job. v4 works, v6 does not. I am a provider not a developer.
I would guess it is your job in IPv4. I would also guess, based on gateways I've seen, that 10% of CPE has critical IPv4 bugs, and 25% of CPE has critical IPv6 bugs. I agree with you that the difference is too high, and maybe waiting a year helps get those ratios aligned. CPE vendors: step it up!
That said, IPv6 worked fine for me with the shipped image (an old version of OpenWRT) using 6to4, before I reflashed it to a modern version of OpenWRT because I wanted to use the HE tunnel rather than 6to4. I know that is only one CPE device.

And will you be providing all of my end users with replacement CPE that meets all of the other requirements too? No? Because no such devices exist yet? OHHH yeah, that's right, I'm a provider not a developer, so again, not a solution for my business.
Ugh, 6to4 is a bad idea anyway. Deprecated, even. There are loads of gateways that support native dual-stack and several transition mechanisms (DS-Lite, 6rd, and MAP pretty soon), but native dual-stack is the way to go if you possibly can. If you provide gateways as part of your service, you should make sure you're providing devices that at least *claim* IPv6 support now (and the IPv6 CPE Ready logo is meaningful here), so that as old equipment ages out, you're not stuck replacing newer boxes. Figure out how long until you think you really need all of your customers to have IPv6. Subtract your CPE replacement time. Start replacing CPE then. For example, if users need IPv6 in 2018 and you replace all CPE on a 5-year schedule, you should begin providing IPv6-capable CPE in 2013.
Thirdly, some parts of my network are wireless, and multicast is a huge, huge problem on wireless (the 802.11 varieties, anyway). The forwarding rates for multicast are sickeningly low for many brands of gear - yes, it's at the bottom of the barrel no matter how good or hot your signal is - and I honestly expect v6 to experience enough disruption over wireless as to render it unusable for exactly this reason alone.

You expect but haven't tested.
Based on observation and experience, I think v6 will wipe out the 802.11 portion of my network, and no, I'm not going to 'test' it; recovery would be near impossible, and in any event I don't experiment with paying customers. I won't move until the underlying issues are resolved, and that means fixing multicast in wireless, which won't be done by me, again because, you guessed it, I am a provider and not a developer.
This might help:
https://tools.ietf.org/html/draft-ietf-v6ops-reducing-ra-energy-consumption-02
Cisco has done some presentations on their use of IPv6 over WiFi at Cisco Live and other venues. For instance:
http://www.rmv6tf.org/wp-content/uploads/2013/04/5-2013-04-17-RMv6TF-kreddy-02.pdf
You might be able to mitigate with configuration.
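To put a rough number on why multicast hurts on 802.11 and why thinning RAs out helps: multicast frames generally go out at a low basic rate with no link-layer acknowledgement, so each one costs far more airtime than the same frame sent as unicast data. The figures below are back-of-the-envelope assumptions (frame size, rates, aggregate RA rate), not measurements from any particular gear.

#!/usr/bin/env python3
# Back-of-the-envelope airtime cost of multicast RAs on 802.11 (every number here is an assumption).
RA_BYTES = 150          # assumed on-air size of one Router Advertisement frame
BASIC_RATE_MBPS = 1     # multicast is often sent at the lowest basic rate
UNICAST_RATE_MBPS = 54  # a typical unicast data rate on the same link
RAS_PER_SECOND = 10     # assumed aggregate multicast RA rate on one channel

def airtime_us(size_bytes: int, rate_mbps: float) -> float:
    """Transmission time in microseconds, ignoring preamble and contention overhead."""
    return size_bytes * 8 / rate_mbps

mcast = airtime_us(RA_BYTES, BASIC_RATE_MBPS)
ucast = airtime_us(RA_BYTES, UNICAST_RATE_MBPS)
print(f"one RA at {BASIC_RATE_MBPS} Mb/s: ~{mcast:.0f} us of airtime")
print(f"same frame at {UNICAST_RATE_MBPS} Mb/s: ~{ucast:.0f} us ({mcast / ucast:.0f}x cheaper)")
print(f"{RAS_PER_SECOND} multicast RAs/s: ~{RAS_PER_SECOND * mcast / 1e6:.1%} of the channel")

As I read them, the draft and the Cisco material are largely about cutting down how often those multicast frames have to go out at all, which is why they are worth a look before writing off v6 over 802.11.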
The wired portion of my subscriber network is only slightly better; I'm pretty sure it can deal with v6 in the middle, but the question is still whether specific CPE models can, and which set of bugs I'll hit on my access concentrators passing our v6 over PPPoE. I just read about a Cisco bug where enabling rp-filtering on v6 causes a router reload, which I would hit immediately since rp-filtering is a standard subscriber profile option here (trying to be a good netizen). How many other network-destroying bugs await? The longer I wait on v6, the less work I will have to do dealing with bugs. So, as the original poster said, we'll do v6 when it's easy, when we have time, and when the economics make sense.

And is there a fix available yet? All code has bugs in it. They exist in both the IPv4 code paths and the IPv6 code paths. There are lots of places that are going IPv6-only internally and only having IPv4 at the fringe. You can't do that if routers are flaky when pushing IPv6 packets. This is basically just fear overriding rational decisions.
I am a provider and not a developer, and I am likely only going to use what I know works and what is within my sphere of control and influence. The flaky, crappy state of v6 today means I am not putting it out anywhere a customer would have any exposure to it. I don't play games with my customers that way.
It makes sense to me that you would want to wait another year. An ISP of your size doesn't have the support staff to troubleshoot new problems. I do hope you'll keep an eye on deployments, and at least be thinking in terms of deploying next year (e.g., by buying gear that at least claims IPv6 support, thinking about what monitoring you need to keep an eye on it as you deploy), so that when you do start, you have an easier time of it.

Lee