On Thursday, February 06, 2014 04:56:15 PM Mikael Abrahamsson wrote:
Yes, this is for hundreds of thousands of customers. Why do you need customer management? You document where a certain fiber goes (which port), and then that port goes to a certain customer. That is the only customer management you need.
So you provision your L3 switch with a protocol-based 0x86dd vlan per port, put a static /64 L3 subinterface into this vlan, and then you have a built-in DHCPv6(-PD) server in the same switch that hands out a static /56 on this vlan, plus hands out the DNS resolver etc. No dynamics, just static. You provision ACLs to only allow the /56, /64 and LL in on the L3 interface. You set the ND cache max size to 20-50 entries per L3 subinterface to protect the 1024-2048 entries or whatever the L3 switch can handle. For IPv4 you need to do all the L2/L3-inspection magic in a common vlan.
This is now a standalone unit and you don't need any central system to stay up and running in order to move IPv6 packets, and you support both a directly attached computer and a residential gateway that wants PD.
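As a concrete illustration of the static per-port plan described above, here is a minimal offline sketch of how the per-port vlan, link /64, delegated /56 and ingress ACL entries could be generated and then pushed to each switch. The /40 PD aggregate per switch, the separate /48 for link networks, the vlan numbering and the 48-port count are assumptions made up for the example, not from the original post.

# Minimal sketch (illustrative, not the poster's actual tooling) of the
# static per-port plan: one protocol-based (0x86dd) vlan per port, one
# link /64, one delegated /56, and an ACL that only admits those prefixes
# plus link-local.  All prefixes and counts below are made-up assumptions.
import ipaddress

PD_AGGREGATE   = ipaddress.ip_network("2001:db8:100::/40")  # hypothetical per-switch PD block
LINK_AGGREGATE = ipaddress.ip_network("2001:db8:200::/48")  # hypothetical block for link /64s
PORT_COUNT     = 48                                         # hypothetical port count

pd_blocks   = PD_AGGREGATE.subnets(new_prefix=56)    # one static /56 per customer port
link_blocks = LINK_AGGREGATE.subnets(new_prefix=64)  # one static /64 per L3 subinterface

for port, (pd, link) in enumerate(zip(pd_blocks, link_blocks), start=1):
    if port > PORT_COUNT:
        break
    vlan = 1000 + port  # hypothetical per-port vlan numbering
    print(f"port {port}: vlan {vlan}  link {link}  delegated {pd}  "
          f"acl-permit [{link}, {pd}, fe80::/10]")

Since every assignment is static, a plan like this can be generated once per switch and never touched again, which is what lets each unit keep forwarding without any central system being reachable.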
I won't lie, it is impressive that you strung the network this way. I can certainly see how it would work, although I'm not sure how well it would scale if you're diddling about with all sorts of kinky services beyond just access to the Internet (certainly a concern for me, anyway). At a previous job, we looked at various topologies and deployment techniques for supporting large-scale FTTH installations, and one of the key issues is the impact on the control plane of locally-run DHCP servers on the routers, particularly when the same router is providing business services as well. Some vendors have seen the light and are finally running x86-based multi-core 64-bit CPUs for control plane processing, and while this may help, CPU horsepower is finite (although it would, most certainly, scale better than a Layer 3 switch doing the same thing). When you start to add services like multicast for IPTV, and depending on whether you run these switches in a ring or not, and whether you're running Rosen MVPN vs. NG-MVPN, you can quickly start to hit platform limits or feature constraints.
I did this type of DSL deployment in the early 2000s with an L3 switch and an Ethernet DSLAM as a media converter. This was obviously IPv4 only, but it worked very well. At the same time, the guys with central DHCP systems had a lot of country-wide outages when the DHCP system went belly-up.
I don't believe in centralized BNGs, mostly because traffic forwarding is not optimal, and it puts too much trust in one device. I prefer distributed BNGs (much like the topology you describe, only with fewer devices than your deployment, given how far you can scale a single Layer 3 switch acting as a service termination device). Along with distributed BNGs, you can also distribute DHCP servers, and multiple DHCP servers can maintain lease state amongst each other to allow for failover in case the primary DHCP server breaks. This is a known design tactic, and it helps avoid the issues of centralized architectures.
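To make the lease-state sharing idea concrete, here is a toy sketch of two peered DHCP servers replicating bindings to each other so the surviving peer can keep answering renewals if its partner dies. This is not any vendor's actual failover protocol; the class and field names are invented for illustration.

# Toy model of lease-state replication between two distributed DHCP servers.
# Illustration of the failover idea only, not a real DHCP implementation.
import time
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Lease:
    mac: str
    address: str
    expires: float

@dataclass
class DhcpPeer:
    name: str
    leases: Dict[str, Lease] = field(default_factory=dict)
    partner: Optional["DhcpPeer"] = None
    alive: bool = True

    def grant(self, mac: str, address: str, ttl: int = 3600) -> Lease:
        """Hand out a lease locally and replicate the binding to the partner."""
        lease = Lease(mac, address, time.time() + ttl)
        self.leases[mac] = lease
        if self.partner and self.partner.alive:
            self.partner.leases[mac] = lease  # binding update to the peer
        return lease

    def lookup(self, mac: str) -> Optional[Lease]:
        """Renewal path: either peer can answer from the replicated state."""
        return self.leases.get(mac)

# Primary grants a lease, then fails; the secondary still knows the binding.
primary, secondary = DhcpPeer("dhcp-a"), DhcpPeer("dhcp-b")
primary.partner, secondary.partner = secondary, primary
primary.grant("00:11:22:33:44:55", "192.0.2.10")
primary.alive = False
print(secondary.lookup("00:11:22:33:44:55"))

In a real deployment the replication would be whatever lease-sync or failover mechanism your DHCP server supports, but the effect is the same: losing one server does not take the lease state with it.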
I would never want to have MPLS that far out into the network.
This is a different topic, but what we did was deploy MPLS all the way into the Metro-E access using Cisco's ME3600X, because there had simply been too much pain doing legacy Layer 2-based Metro-E solutions (stringing VLANs together between end points, keeping hands away from VTP, etc.). This was back in 2010; before then, there weren't any decent devices cheap enough with sufficient features to make this possible. I'd certainly recommend this architecture for Metro-E deployments focused on business-grade services. I don't expect most to follow it, given there is a large Layer 2-based Metro-E installed base, but I think it will grow with time.

Mark.