And last, but not least, here are the notes from the morning part of the NANOG meeting.

I strongly, STRONGLY suggest people read Aaron's IPv6 deployment in a nutshell slides; while I differ from him on some of the thoughts around address allocation schemes for very large networks, for small to midsized networks it's a very, very good cookbook to follow for getting IPv6 rolled out:
http://nanog.org/meetings/nanog47/presentations/Wednesday/Hughes_Kosters_fun...

Thanks to everyone for participating, both locally and remotely! Subsequent ARIN notes will be posted to the ppml list.

Matt

2009.10.21 NANOG47 Wednesday morning notes

Don't forget to fill out your survey!!
http://tinyurl.com/nanog47

Dave Meyer welcomes everyone back from their hangovers at 0904 hours Eastern Time. John Curran isn't here, so he misses his 13 minutes of fame, and we go straight over to Mark Kosters.

Mark Kosters, IPv6: Emerging stories of success.
A stellar panel of people to talk about transitioning to v6.
IPv4 is running out; in 2 years or so, we'll no longer have a flow of addresses from IANA to the RIRs. But why isn't more traffic moving onto IPv6, given the imminent runout? It's still less than 1% of the overall traffic.
What do you need to do to make the move to v6, from the enterprise and ISP viewpoint?

Panelists:
John B, Comcast
Matt R, ARIN
Owen DeLong, HE.net
Aaron Hughes, 6connect

John B from Comcast is up first.
Native, dual-stack core and access networks started as a means to leverage device management; then moved to subscriber access service.
Back office, where applicable, is also dual stack.
Cable modems (DOCSIS) are single stack, v4 or v6.
eMTAs remain v4.
eSTBs are targeted to support v4 or v6 only.
Native dual-stack subscriber services.
Leverage well-known transition technologies to enable enterprise desktop IPv6 connectivity.
Some of the back office pieces, like DHCP, are still evolving.
This is a team organizational effort, so it takes many pieces working together.
Core concept -- the initial key piece was device management. Core network, access network, and back office systems all have to work together, or the program fails. So they iteratively extend those three elements to offer services over IPv6.
Native is preferred whenever possible over tunnels and other techniques, but sometimes it's just not possible. There's still much learning to happen in this area to figure out how best to make the deployment happen.

Lessons learned:
IPv6 must become business as usual for staff from every area of the business; lack of attention here can be problematic for v6 deployment. Deferring or avoiding IPv6 will be problematic.
It's really, REALLY important to do large-scale testing of interoperability, especially when you have millions of devices. Test the key interconnect points where devices interact, especially with high levels of diversity in your gear.
Also leverage technologies that newer releases, like DOCSIS 3.0, provide. Find opportunities like that in your own environment.

Challenges:
Need to manage the deployment of v6 relative to other business needs -- channel bonding vs v6, which gets business priority, for example.
Security on v6 is still a challenge; vendors often say "but you're the only one who has asked for that."
Back office and tool upgrades to support IPv6 are non-trivial; the best approach is to divide these efforts into smaller activities. It's a very substantial chunk of work; don't underestimate the challenges of this!

IPv6 data services for subscribers: the preferred approach is to offer native dual-stack v6 service to customers; v4 continues unchanged, it just adds v6.
Either a directly connected device that supports v6, or a home gateway device that supports v6; all the support systems must support both models for the rollout to work.
Most people in the room use a gateway device at home. Most home gateway devices don't support v6 yet, so pushing the retail-type devices to support v6 natively, off the shelf, is a challenge.
Challenges associated with routing for delegated v6 prefixes should be uniformly addressed.
Support for v6 in many products is still considered 'new' and isn't as mature as v4.
Testing and interoperability are critical for successful deployment; bugs and issues will arise, and scale makes a difference!
Deploying IPv6 must not impact existing services. (This is pretty much true for everyone -- can't break existing customers!)

Content and services:
Availability of content and services over IPv6 to date appears to be lacking; simply having v6 connectivity isn't sufficient.
John_Brzowoski@cable.comcast.com

Matt Ryanczak, network ops manager at ARIN
History of IPv6 @ ARIN.
They're a small, 50-person multihomed customer network. Their network has been running IPv6 since 2003, starting with a beta Sprint circuit. It was a T1 line that appeared native, but was tunneled inside Sprint. The v6 internet wasn't well connected.
2004: Worldcom circuit, similar issue.
They then started connecting to exchange points, got transit there, and are now starting to be able to serve large volumes of traffic.
2003: T1 line from Sprint -- very ad hoc and beta, used a Linux router and an OpenBSD firewall. Completely segregated network. Not dual stack; there were too many security issues at the time, and they were a bit afraid of it at the time. Path MTU discovery issues, packets just dying; server MTU issues, upstream issues -- a great learning process. The Sprint circuit is finally being decommissioned; Sprint support was always really good.
2004: Worldcom circuit, part of Vint Cerf's test v6 network; a real router this time, but an OpenBSD firewall was still used. T1 into a 2800 router. Duplicated the services that were on the Sprint link; provided a second path to verify issues and see if a problem could be duplicated or not. Similar issues: PMTU discovery issues due to tunnels, problems reaching chunks of Europe (a problem for serving DNS, for example) -- a good learning exercise.
2006: joined Equi6IX -- beta at the time, completely free, 100Mb ethernet, transit via OCCAID. Things started to look like a production network; still had a firewall and the same services, but the service level got a lot better. Still a segregated network, but many routing issues went away, PMTU issues started to disappear, and they started to dual stack.
2008: NTT/TiNet IPv6; built two networks, one west coast, one east coast, to host all public services out there, separating the provisioning side from the public side. 1000Mb links to NTT/TiNet using ASR 1000 routers. Foundry load balancers; IPv6 support was beta. Foundry has been very responsive and has been issuing patches for them. Now it's a full dual-stack network end to end, and Foundry is still working with them to figure out how to best support the traffic. Whois is out there, DNS is out there, and they're figuring out how to expand the services.

Traffic in 2009:
Whois: about 0.12% IPv6
DNS: about 0.55%
WWW: about 8%
Most of that is internal ARIN traffic, since they're dual stacked internally. :D

Lessons learned:
Tunnels are not desirable. An he.net tunnel at home is fine for home use, but for production services the Path MTU discovery problems are just a pain.
Not all transit is equal! Routing is not as reliable as v4; people are still learning, and the backbones aren't as good.
Dual stack isn't so bad; there are no security issues they're aware of, and the stacks have gotten a lot better.
Proxies are good; they use 6to4 proxies for the current rwhois servers and the older routing registry (moving to v6 someday).
People fear 4-byte ASNs. They have people who can't peer with them due to their 4-byte ASN. More people need to get 4-byte ASN code.
Native support is better.
DHCPv6 is not well supported. This really needs fixing.
Reverse DNS is a pain. No wildcards. You can't use the same tricks as v4, and it's very error-prone.
Windows XP is broken but usable; it can't do v6 DNS, but is mostly usable.
Bugging vendors does actually work. It helps if they recognize your name (being ARIN doesn't hurt!)

Today and the future:
Standardizing on dual stack, IPv6 enabled by default, including push scripts, back office, etc.
v6 support is a requirement for vendors; all RFPs list IPv6 as a requirement.
Be prepared to do a lot of work tweaking your back office scripts!

Q: Patrick, Akamai -- do you do a Google-style whitelist, or do you just break people who don't have v6 connectivity to you and ask for AAAA records?
A: It probably happens, but they don't get too many complaints. It does happen sometimes, but often they can work with people to get them connected.
Kevin Oberman, ESnet: recently he had that issue -- the AAAA couldn't get there -- but he didn't open a ticket, he just went back to the IPv4 address.
A: Yeah, there have been some issues like that; they don't make much money from the website, so it's not as critical for them as for some others, but they do work with people to try to fix those cases.

Shift focus -- what does it take to move enterprises onto v6?

Owen DeLong: what does it take to port systems from v4 to v6?
Porting to dual stack -- not that hard.
Why is it important? We've all seen the exhaustion point graph.
Code examples are at http://owend.corp.he.net/ipv6/
Change variable names when changing types, to make it easier to spot old variables.
Compile, repair, recompile, test, debug, retest.
AF_INET to AF_INET6
sockaddr_in to sockaddr_in6
sockaddr_storage (generic storage type)
Check address scoping (link-local vs global, and interface scope for link-locals).
Some gotchas not in the sample code: IP addresses in log files, IP addresses stored in databases, parsing routines for external data.

Perl porting example (refer to the source code examples; the v4_* files are the v4-only code):
Add the Socket6 module as well as the Socket module to the code.
Replace get*byname calls; change protocol and address families in socket and bind calls; get*byname becomes getaddrinfo.
If you pass in6addr_any to getaddrinfo, it returns localhost -- not what you were looking for!
Example of the actual old way: getservbyname becomes getaddrinfo; socket and bind calls use AF_INET6 -- not too bad.
Perl client migrations use similar tactics: inet_ntoa to inet_ntop, and getaddrinfo now simplifies DNS on the client side.
You can't recycle the socket across calls anymore for reads; right now you have to explicitly create it each time, since you don't know which address family the previous call was. (A minimal sketch of this client pattern appears at the end of the notes on this talk.)
There is a handy function replacement slide on the website, along with a structure replacement slide.
owend at he dot net

Q: Bill Fenner: some kernels have changed to only bind to v6 sockets when available; has he found that to be the case?
A: If your kernel does that, it's unfortunate; what Owen has found is that on his boxes, it binds to both.
Q: Yes, some kernels behaved that way, which is very unfortunate; there might be knobs that can change the default behaviour.
A: Recent Linux stacks seem to behave just fine with dual-stack socket calls; get with him if you have examples of bad kernels so he can post warnings.
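(Not Owen's actual code -- his examples live at http://owend.corp.he.net/ipv6/ -- but here is a minimal sketch of the client pattern described above, assuming the older Socket6 module and its flat-list getaddrinfo()/getnameinfo(); the host name and service below are just placeholders.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Socket;        # SOCK_STREAM, AF_UNSPEC, etc.
    use Socket6;       # getaddrinfo, getnameinfo, NI_* flags on older perls

    my ($host, $service) = ('www.example.com', 'http');   # placeholders

    # Old v4-only pattern, for contrast:
    #   socket(SOCK, PF_INET, SOCK_STREAM, getprotobyname('tcp'));
    #   connect(SOCK, sockaddr_in(getservbyname($service, 'tcp'), inet_aton($host)));

    # New pattern: getaddrinfo hands back v6 and/or v4 results; try each in turn.
    my @res = getaddrinfo($host, $service, AF_UNSPEC, SOCK_STREAM);
    die "getaddrinfo failed: $res[0]\n" if @res < 5;

    my $sock;
    while (@res >= 5) {
        my ($family, $socktype, $proto, $saddr) = splice(@res, 0, 5);

        # Create a fresh socket per attempt -- the next result may be a
        # different address family, so the previous socket can't be reused.
        socket($sock, $family, $socktype, $proto) or next;
        last if connect($sock, $saddr);
        close($sock);
        undef $sock;
    }
    die "could not connect to $host\n" unless defined $sock;

    # getnameinfo replaces inet_ntoa and works for either family.
    my ($addr, $port) = getnameinfo(getpeername($sock), NI_NUMERICHOST | NI_NUMERICSERV);
    print "connected to $addr port $port\n";
    close($sock);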
Aaron Hughes, deploying dual stack on a network.
Successful implementation requires good supporting policies! We need to participate in decision-making at the company level, to determine when and how to deploy v6. Timing will be different for different companies.
Dispel the myths: obtaining v6 addresses is hard, transit providers don't support it, there's no BGP multihoming.
Obtaining IPv6 address space is not really hard.
"My provider doesn't support v6!" -- right now, talk to others; you can get free transit from he.net, wvfiber, and probably others out there you can talk to. This is really valuable to people right now!

Start with IX locations -- get IPv6 addresses from your exchange point providers.
Make a list of all relevant peering information.
Update PeeringDB -- let the rest of the world know you have v6 addresses!
Follow your own company change processes for the deployments!
Configure IPv6: locate the existing v4 peering interfaces, enable v6 (Cisco), configure the v6 address, and ping some peers (look their IPs up in PeeringDB). Within a minute, you can pass some ICMPv6 packets.
Cisco: ipv6 unicast-routing. Juniper: enabled by default. v4 and v6 are configured almost identically. At this point, your IX interfaces are dual stacked. :)

Next up, the backbone. Keeping track of peering interfaces in PeeringDB is great; for your backbone, you really want a database to track them. Spreadsheets don't scale terribly well. ^_^; At least use a reverse DNS zone file.
Come up with a good numbering plan for IPv6!! If you take the first /48 for infrastructure, take the first /64 for loopbacks.
You can take the opportunity to change your architecture for v6 if you want, but it's easier to keep it the same as v4 so you don't have to keep track of two topologies.
Architecture choices -- a simple one: loopbacks and connected infrastructure only in the IGP, the rest of the stuff in BGP.
Configure your backbone. Numbering plan? IPv4 4th octet /32 -> IPv6 ::X/128.
Enable OSPFv3 if you're running OSPF: ipv6 router ospf 12456; enable v6 on the interface; configure OSPF for the interface. Do the same thing on the next router, then verify the link is up, reachable, and routed. (A rough config sketch of these steps appears after this part of the notes.)
Managing assignments with a DNS zone: you can just increment /48s in your zone file; don't forget that after 9 comes a!
It shouldn't take very long to do this, even for a midsized network. It's tedious, but not hard.

To reach the outside world, you need some BGP. Configure a new v6 peer group; you can mirror your v4 peer group, but with route-maps and lists that match v6 elements. Naming them -V6 makes it easier to spot them later.
iBGP will be loopback to loopback, next-hop-self, like with v4. You can build a common config to be pushed out. iBGP will handle the connected interfaces (except loopbacks).
Route-maps use slightly different syntax for v6 matching: match ipv6 address matchall
Don't panic when you do address-family ipv6 -- it will reformat your config, and your next RANCID run will look scary, but it really didn't break your whole router.
You can build a common config chunk for all the iBGP configs and push it out. This still doesn't let you reach the outside world.
Next up, configure your external peers. New peer group for v6 peers; you'll need new sanity lists, and there aren't as many well-defined bogon filters, but at least set filters on sizes:
  seq 5 permit ::/0 ge 16 le 48
Create a list of your ASN's IPv6 prefix(es) to allow out. Create route maps; use the same communities and localprefs to match what you use in v4.
Next, send email to peering@he.net to get BGP up; send the peering info file you collected early on.
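(A rough IOS-classic sketch tying together the configuration steps above -- global IPv6 routing, dual-stacking interfaces into OSPFv3, an iBGP session under the ipv6 address family, and the prefix-length sanity filter. The interface names, 2001:db8 addresses, AS number 64496, and the V6-SANE / EBGP-IN-V6 names are invented for illustration and are not from the slides; only the OSPF process number 12456 and the ::/0 ge 16 le 48 filter come from the talk.)

    ! 1. Turn on IPv6 routing globally (Juniper needs no equivalent step).
    ipv6 unicast-routing
    ipv6 cef
    !
    ! 2. Dual-stack a loopback and a backbone interface and put both into OSPFv3.
    interface Loopback0
     ipv6 address 2001:DB8::1/128
     ipv6 ospf 12456 area 0
    !
    interface GigabitEthernet0/1
     ipv6 address 2001:DB8:0:1::1/64
     ipv6 ospf 12456 area 0
    !
    ipv6 router ospf 12456
     router-id 192.0.2.1
    !
    ! 3. iBGP over v6, loopback to loopback with next-hop-self, activated under
    !    the ipv6 address family (this is the step that reformats your config).
    !    In practice you would mirror your v4 peer group as a "-V6" peer group
    !    and push a common chunk to every router.
    router bgp 64496
     neighbor 2001:DB8::2 remote-as 64496
     neighbor 2001:DB8::2 update-source Loopback0
     address-family ipv6
      neighbor 2001:DB8::2 activate
      neighbor 2001:DB8::2 next-hop-self
      network 2001:DB8::/32
     exit-address-family
    !
    ! 4. Sanity filter on prefix lengths for external peers, per the talk.
    ipv6 prefix-list V6-SANE seq 5 permit ::/0 ge 16 le 48
    !
    route-map EBGP-IN-V6 permit 10
     match ipv6 address prefix-list V6-SANE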
Next up, turn up a peer with he.net; you'll see the neighbor come up, and you can reach the world now. :D
show bgp ipv6 unicast summary
show bgp ipv6 unicast neighbor X::Y adv
Make sure you're sending and receiving the expected routes; do some traceroutes and pings and make sure it looks good. Then go ahead and continue turning up more peers.

Attaching a host to the v6 network: use a *nonproduction* host to test with first! Find a lab box, look at its v4 routing and config, and allocate a /64 from your DNS zone file (figure out your regional aggregation at some point).
Configure the interface facing the host and, depending on the OS version, it may autoconfigure itself. No more ARP; you can try to ping, and you can look at the neighbor table to see if your host is there.
Check your iBGP and see if the subnet is in your table now (the first connected non-IGP subnet). Look at http://ripe.net/ to see whether you get there via v6 or not.
Note about SLAAC -- the moment you configure the interface on your router, *every* host on the subnet can get a v6 address! Make SURE you have your security concerns squared away before you do this!

Time to add name service. Adding reverse DNS... is ugly. Look at the slide.
Forward: ns0 IN AAAA blech
Reload the nameserver. (A small zone-file sketch appears at the end of these notes.)
Note that your machine is now on the global v6 internet with every port open; in fact, every host on that subnet is now on the global v6 internet. You MUST make sure your security policy is ready to handle IPv6 security similar to IPv4!

Peering: just about everyone out there will peer via v6 at the moment; it's the right time to dive in and make it happen.
Start working with a good beta customer to develop customer route maps and customer neighbor configs (most of which will be mirrors of your v4 configs and route maps, but with different address families and different filters).
Most networks are allowing multihoming of /48s at this point, so you can let your downstream customer know it's OK for them to announce the /48 to their other upstream as well.
Step 1 is pretty easy; the network side isn't that scary. Step 2 -- getting hosts and content up and running with security policy in place, operations staff comfortable on IPv6, etc. -- is the harder part.
So, on the network side, getting IPv6 up and running isn't hard; it's very, very similar to v4.

Leo Bicknell: thanks for a great presentation, good summary. A few small items. On the BGP change in IOS: it does reformat things, but there is a command, "bgp upgrade-cli", that will change your config to the new format ahead of time and let you check the delta ahead of time. The presentation is heavy on IOS-classic configs; IOS-XR and JunOS allow common policies for both, with different lines and different terms for v4 and v6, which makes the configs even simpler. Lastly, with IPv6 reverse DNS, people forget that $ORIGIN exists, so you can make the zone files considerably easier to read.
Humans seem to work better when the v6 host address and the v4 address map to each other statically, rather than using SLAAC and having hosts change when NICs change.
A: Very true; that's more of step 2, but this is very, very good information to know.
Arjin, AMS-IX: since autoconfig is on by default, you might want to turn it off on exchange point interfaces.
Cathy says this looks like the beginnings of a WONDERFUL best current practices document; let's turn it into one!

Next up is Betty with some results for us from the elections. 196 people voted. Steve Feldman, Sylvie LaPierre, and Duane Wessels are the new SC members.
Austin, Texas, NANOG 48, see you Feb 21-24, 2010.
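(Tying together the reverse-DNS step in Aaron's talk and Leo Bicknell's $ORIGIN tip from the Q&A, here is a small BIND-style zone sketch; example.net, 192.0.2.53, and 2001:db8:0:1::53 are documentation placeholders, not anything from the slides.)

    ; Forward zone fragment for example.net: add the AAAA next to the existing A.
    ns0                               IN  A     192.0.2.53
    ns0                               IN  AAAA  2001:db8:0:1::53

    ; Reverse (ip6.arpa) fragment: without $ORIGIN each PTR owner name is 32
    ; nibbles long; setting $ORIGIN to the /64 leaves only the 16 host nibbles
    ; to maintain per record.
    $ORIGIN 1.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
    3.5.0.0.0.0.0.0.0.0.0.0.0.0.0.0   IN  PTR   ns0.example.net.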
Thanks to ARIN, Arbor, and Merit for this meeting!
There are new SC members; we're at the first point since the restructuring where people have hit term limits. Josh, Joel, Ren, and Todd have been serving since the revolution and are aging out -- a big round of applause for them as well.
AND FILL OUT YOUR SURVEY!!!
http://tinyurl.com/nanog47
John Curran notes there is a break, and ARIN will start at 11am. :)
BREAK TIME.