:: Sean M. Doran writes ::
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
This from the same guy who routinely bashes ATM in Internet infrastructure backbones (a view I agree with). Didn't you just recently post that you thought you were getting old because you liked simple things and didn't think there was much merit to doing routing at Layer 2 *and* Layer 3? And now you want connection state in many boxes along the path of a TCP session? (Oh, sure, if layer 5 and higher protocols didn't suck, connection state wouldn't be needed. But you still need lists of addresses.)

Here are the problems I see with NAT:

(1) Broken applications. Layer 3 addresses shouldn't appear in higher-layer datastreams. But they do. NAT boxes handle this for common applications (example: FTP), but it means that applications can no longer be designed under the assumption that as long as both endpoints speak the application layer protocol, the underlying TCP or UDP will be transparent to them. You've got to make sure all the boxes along the path support your *application*! This is, admittedly, a somewhat poor argument, because if the only thing standing between conceptualization and implementation of a really good idea is some broken applications, then the right thing to do is fix the applications. But it is still a real-world issue.

(2) Current NAT requires that computers be classified as servers or clients, with no NAT between a server and the "untranslated backbone". If NAT boxen were taught to translate DNS addresses, this could be handled. But in the brave new world of NAT everywhere, a packet might pass through many NATs between two endpoints. I don't really want to troubleshoot a network where not only the addresses, but also the contents, of my DNS frames are being changed many times between me and the server. Not to mention the problem of distributed DNS. Right now, a WWW server might be at one place, and the DNS providing resolution for the name of that server might be at a completely different place.
So the NAT boxes need to somehow talk to each other, so that when I issue a query to DNS X about server Y, the response I get tells me the Layer 3 address I need to use to get to server Y. And then there are security issues, of course -- with all these NAT boxes altering the contents of DNS RRs, the current DNS Security proposals fall flat on their face. (This, too, is somewhat of a broken-application issue. If I want to be sure I'm talking to server Y, the means for me to know that should be something other than accepting a DNS RR that says Y's Layer 3 address is a.b.c.d. But that too takes time to fix, and you've still got to deal with the denial of service issue. Maybe you don't want to convince me that you are Y -- maybe you only want me to not be able to find the real Y.)

(3) State in the network. Even if we solve 100% of the problems associated with Layer 3 addresses appearing in places other than the Layer 3 header, you've still got all these NAT boxen maintaining lists of translated addresses. Way too complex for my tastes. This state also leads to:

(4) A forced hierarchical design (some people consider this a benefit). You've got to divide the Internet up into "NAT zones" with guaranteed symmetric routing between the zones. This effectively requires a hierarchy, or some new complex protocol for NATs to talk to each other with.

But NAT only really solves one fundamental issue: BGP4 doesn't scale. The routing table size per se isn't a problem. There's no reason why backbone routers can't effectively forward based on a forwarding table of a few hundred thousand (or more) entries. BGP4 just wouldn't be an effective way to manage the NLRI that leads to a table that big. And there's no imminent shortage of IP addresses themselves (meaning that address space limitations aren't forcing us into NAT).
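The per-flow state in point (3) can be sketched as a toy port-overloading translator (all names and addresses here are hypothetical, picked for illustration). The lookup that `inbound()` performs is exactly the state that has to live in the box -- and that a reply taking an asymmetric path around the box would never reach, which is what forces the "NAT zones" of point (4):

```python
import itertools

class ToyNat:
    """Toy NAPT: one dict entry of state per active connection."""

    def __init__(self, outside_ip):
        self.outside_ip = outside_ip
        self._ports = itertools.count(40000)  # next free outside port
        self._out = {}  # (inside_ip, inside_port) -> outside_port
        self._in = {}   # outside_port -> (inside_ip, inside_port)

    def outbound(self, inside_ip, inside_port):
        """Translate an outgoing packet's source address, creating state."""
        key = (inside_ip, inside_port)
        if key not in self._out:
            port = next(self._ports)
            self._out[key] = port
            self._in[port] = key
        return self.outside_ip, self._out[key]

    def inbound(self, outside_port):
        """Translate a reply's destination -- impossible without the state."""
        # A KeyError here models a reply that took an asymmetric path
        # through some *other* NAT, which holds none of this state.
        return self._in[outside_port]

nat = ToyNat("192.0.2.1")
print(nat.outbound("10.0.0.5", 12345))   # ('192.0.2.1', 40000)
print(nat.inbound(40000))                # ('10.0.0.5', 12345)
```

Multiply that dictionary by every concurrent flow through every NAT on the path and you have the "gobs and gobs more state" being objected to.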
So, we can develop a routing protocol that scales; or we can deal with a bunch of broken applications, completely redesign DNS and DNS security, put gobs and gobs more state in the network, restrict network design to a hierarchical model that only some engineers feel is the best model, and then implement NAT. I think the best, and quickest (quickest, but not necessarily quick), solution is to fix BGP.
Most importantly, just as CIDR requires that protocol implementations respect that IP addresses may change over time, NAT as THE new fundamental scaling technology requires that protocol implementations respect that IP addresses may change over space as well.
I don't think requiring support for addresses to change over *space* provides anywhere near the benefit/cost numbers that CIDR did.
Deployed base is a strong engineering consideration in an industry which is experiencing enormous growth. NAT has the advantage that reasonably designed existing technologies ought not even notice that NAT is happening.
Any new application that is sending layer 3 addresses above layer 3 is certainly not reasonably designed. I'll even concede that FTP is unreasonable enough in what it does with the PORT command that it should be fixed. But getting Layer 3 information out of the higher layers of DNS packets is extremely non-trivial.
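For the record, the PORT surgery looks roughly like this (a sketch; `rewrite_port` and the addresses are hypothetical). The first four decimal bytes of the PORT argument are the client's Layer 3 address carried in the application datastream -- exactly what a NAT has to find and patch:

```python
import re

def rewrite_port(payload, inside_ip, outside_ip):
    """Patch the Layer 3 address embedded in an FTP PORT command.

    PORT encodes address and port as "h1,h2,h3,h4,p1,p2"; h1-h4 are
    the client's inside IP, which is meaningless once the packet has
    crossed the NAT, so the box must rewrite it in the payload.
    """
    inside = inside_ip.replace(".", ",")
    outside = outside_ip.replace(".", ",")
    return re.sub(r"PORT %s," % re.escape(inside),
                  "PORT %s," % outside, payload)

print(rewrite_port("PORT 10,0,0,5,19,137\r\n", "10.0.0.5", "192.0.2.1"))
# PORT 192,0,2,1,19,137
```

Note that when the rewritten address has a different decimal length, the TCP payload length changes, so a real NAT must also fix up sequence and acknowledgment numbers for the rest of the connection -- more per-application, per-connection complexity in the middle of the network.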
OTOH, with the right methods of allocating IPv6 space, no one should ever have to come back to get more space. Eventually that should mean fewer routes as IPv4 closes down. Route filtering should be encouraged on IPv4 space and prohibited on IPv6 space at that point, IMHO.
Could you please elaborate? I am completely lost by your logic here.
It will lead to fewer routes per entity. Currently, any ISP/NSP large enough to be getting /19s (or shorter) and advertising its own routes into the backbone probably has, and is advertising, several prefixes. Under IPv6, the vast majority of ISPs should be able to have a single prefix that encompasses all their existing users plus any anticipated growth over several years. So each ISP should be able to place just one, or at most a very small number of, advertisements in the backbone routing tables. This leads to fewer routes per provider. This, of course, will probably be offset by the existence of more providers.

     - Brett

------------------------------------------------------------------------------
... Coming soon to a                    | Brett Frankenberger
.sig near you ... a Humorous Quote ... | brettf@netcom.com
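The route-count arithmetic behind that closing point can be sketched with Python's `ipaddress` module (the prefixes are made up for illustration): IPv4 allocations acquired piecemeal only aggregate when they happen to be adjacent, while a single large up-front IPv6 allocation stays one advertisement no matter how the ISP grows.

```python
import ipaddress

# An ISP that grew in pieces: non-adjacent IPv4 blocks cannot collapse,
# so each remains a separate advertisement in the backbone tables.
piecemeal = [ipaddress.ip_network(p) for p in
             ("198.18.0.0/19", "198.51.0.0/19",
              "198.51.64.0/19", "203.0.113.0/24")]
print(len(list(ipaddress.collapse_addresses(piecemeal))))  # 4 routes

# Adjacent blocks do aggregate -- but only if the allocations got lucky.
lucky = [ipaddress.ip_network("198.51.0.0/19"),
         ipaddress.ip_network("198.51.32.0/19")]
print(list(ipaddress.collapse_addresses(lucky)))  # [IPv4Network('198.51.0.0/18')]

# One up-front IPv6 allocation covering all anticipated growth: one route.
print(ipaddress.ip_network("2001:db8::/32"))  # 2**96 addresses behind one prefix
```

Whether the total table shrinks then depends, as conceded above, on how much the per-provider savings are offset by there simply being more providers.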