I apologize that the quality of this message will be somewhat limited by pressures of time and by having to use a really weird Microsoft keyboard that leaves me prone to speeling mistakes, but I couldn't resist some of the things being talked about in this thread.

Phil Howard <phil@charon.milepost.com> writes:
The transition to IPv6 is clearly going to have some difficulties. We are waiting on:
1. Network equipment with translation: routers have to have translation, and routing software must include translation.
Bingo!

The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.

Translation of addresses, whether it is between IPv4 and IPv4 or involves protocol translation as well (as is the case in IPv4->IPv6->IPv4, or IPv6->IPv4->IPv6), is simply the most practical and effective way of overcoming the two principal scaling problems of the Internet, namely the narrowness of the IPv4 address and the fact that deployed routing protocols simply suck.

Observe that so long as a translatable subset of transport layer options is used, there is absolutely no difference between NAT between IPv4 addresses and protocol translation between IPv4 and IPv6. Moreover, in the latter case, you are not restricted to IPv6s4 (who comes up with these acronyms anyway? Does anyone mind if I call IPv4 IP? If you do, well, tough.)

The technical goal is that end to end services will work, period, in all cases. This is possible provided that the higher order protocols do not make invalid assumptions about the transport layer. Most importantly, just as CIDR requires that protocol implementations respect that IP addresses may change over time, NAT as THE new fundamental scaling technology requires that protocol implementations respect that IP addresses may change over space as well. That is, so long as protocols do not assume that an IP address is the same from the point of view of all locations throughout the concatenated Internet, they will do just fine with either NAT or protocol translation or both.

Returning to the observation that NAT and protocol translation are semantically equivalent from an end-to-end perspective, we now need to consider whether simple address translation or protocol translation is a better idea.
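Sean's equivalence claim rests on a box in the middle rewriting addresses per flow, invisibly to well-behaved endpoints. As a rough illustration only (the class name, port range, and addresses below are invented, not anything from the thread), a minimal per-flow translation table might look like:

```python
# Sketch of per-flow NAT: outbound packets get the box's outside address
# and a fresh port; inbound packets are mapped back to the inside host.
# All addresses and the port range are illustrative assumptions.

class Nat:
    def __init__(self, outside_addr):
        self.outside = outside_addr
        self.out_map = {}     # (inside_addr, inside_port) -> outside_port
        self.in_map = {}      # outside_port -> (inside_addr, inside_port)
        self.next_port = 10000

    def outbound(self, src, sport, dst, dport):
        """Rewrite an inside->outside packet's source address/port."""
        key = (src, sport)
        if key not in self.out_map:
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return (self.outside, self.out_map[key], dst, dport)

    def inbound(self, src, sport, dst, dport):
        """Rewrite a returning packet's destination back to the inside host."""
        inside_addr, inside_port = self.in_map[dport]
        return (src, sport, inside_addr, inside_port)
```

Nothing here depends on the inside and outside address families being the same width, which is the sense in which address translation and protocol translation look alike from the ends.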
But just having [network routing software] translating IPv4 <-> IPv6s4 is not enough. We will need to manage the new IPv6 network.
Deployed base is a strong engineering consideration in an industry which is experiencing enormous growth. NAT has the advantage that reasonably designed existing technologies ought not even notice that NAT is happening. Protocol translation, on the other hand, requires, as you say, new management techniques, which will generally involve lots of learning time on the part of lots of engineers and operators, wherever the new protocol is deployed. There may be a reason to deploy a new protocol that makes this worthwhile; however, you should also note that so long as a translation between transport layers is straightforward, there is no reason why the new protocol needs to be IPv6. In fact, I welcome IPv6's fan base working on protocol translation, because there are also some more interesting experimental protocols which could be deployed in precisely the same fashion that do not suffer some of the brokennesses of IPv6. Most notably:
Routing issues will become different in IPv6.
This is simply untrue. Routing issues are EXACTLY the same in IP and IPv6; the only difference is the width of the addresses, which worsens the poor scaling properties of IP with current routing protocols. The only attractive (and this is very very very speculative) aspect of the IPv6 address scheme is that it may be wide enough to experiment with something like using a modified IS-IS that works on multiple hierarchical areas encoded as fields in the IPv6 address, with the thought of using that to supplant current interdomain routing protocols. I'm sure this thought will go over well with the IPv6 crowd...
If IPv6 allocations will have varying sizes like CIDR, then we might continue to have issues of size based route filtering.
Please understand that the size-based filtering is done to limit the number of prefixes carried, and that this is completely independent of IP vs IPv6. If the number of prefixes must be kept to some maximum by filtering at, say, the 19 bit mark, the same maximum will be maintained even if the address space is much wider, and the straightforward way of doing that is to retain filters at the 19 bit mark.
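The filtering argument above is mechanical, and a hedged sketch may make it concrete (the announcements and the /19 cutoff below are made-up examples): the filter looks only at prefix length, so it works the same way no matter how wide the underlying address is.

```python
# Sketch of "filter at the 19 bit mark": announcements more specific than
# the cutoff are dropped, independent of the address family's width.

def accept(prefix_len, cutoff=19):
    """Accept an announcement only if it is no more specific than /cutoff."""
    return prefix_len <= cutoff

# Illustrative announcements: (prefix, length)
announcements = [("192.0.2.0", 24), ("198.51.0.0", 16), ("203.0.0.0", 19)]
kept = [(p, l) for (p, l) in announcements if accept(l)]
```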
OTOH, with the right methods of allocating IPv6 space, no one should ever have to come back to get more space. Eventually that should mean fewer routes as IPv4/IPv6s4 closes down. Route filtering should be encouraged on IPv4 space and prohibited on IPv6 space, at that point, IMHO.
Could you please elaborate? I am completely lost by your logic here. Sean.
if global name to 'address' resolution is desired, then the directory mechanism protocols, currently dns, need to be translated at address and/or name domain boundaries. some nats currently do this. are there other protocols/data which *must* be translated at boundaries? should kink such as cuseeme be left to die? randy
if global name to 'address' resolution is desired, then the directory mechanism protocols, currently dns, need to be translated at address and/or name domain boundaries. some nats currently do this.
You opened it, so I can't resist. What happens to Secure DNS in this case? The whole idea of "traceable to the root" signing goes out the window.
are there other protocols/data which *must* be translated at boundaries? should kink such as cuseeme be left to die?
(Rotting in steaming piles on the shoulder of the information superhighway?) I think these protocols will need to be fixed or only run in limited areas of the Internet.
randy
jerry
:: Sean M. Doran writes ::
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
This from the same guy who routinely bashes ATM in the Internet Infrastructure Backbones (a view I agree with). Didn't you just recently post that you thought you were getting old because you liked simple things and didn't think there was much merit to doing routing at Layer 2 *and* Layer 3? And now you want connection state in many boxes along the path of a TCP session? (Oh, sure, if layer 5 and higher protocols didn't suck, connection state wouldn't be needed. But you still need lists of addresses.)

Here are the problems I see with NAT:

(1) Broken applications. Layer 3 addresses shouldn't appear in higher-layer datastreams. But they do. NAT boxes handle this for common applications (example: FTP), but it means that applications can no longer be designed under the assumption that as long as both end points speak the application layer protocol, the underlying TCP or UDP will be transparent to them. You've got to make sure all the boxes along the path support your *application*! This is, admittedly, a somewhat poor argument, because if the only thing standing between conceptualization and implementation of a really good idea is some broken applications, then the right thing to do is fix the applications. But it is still a real-world issue.

(2) Current NAT requires that computers be classified as servers or clients, with there being no NAT between a server and the "untranslated backbone". If NAT boxen were taught to translate DNS addresses, this could be handled. But in the brave new world of NAT everywhere, a packet might pass through many NATs between two endpoints. I don't really want to troubleshoot a network where not only the addresses, but also the contents, of my DNS frames are being changed many times between me and the server. Not to mention the problem of distributed DNS. Right now, a WWW server might be at one place, and the DNS providing resolution for the name of that server might be at a completely different place.
So the NAT boxes need to somehow talk to each other so that when I issue a query to DNS X about server Y, the response I get tells me the layer 3 address I need to use to get to server Y. And then there's security issues, of course -- with all these NAT boxes altering the contents of DNS RRs, the current DNS Security proposals fall flat on their face. (This, too, is somewhat of a broken application issue. If I want to be sure I'm talking to server Y, the means for me to know that should be something other than accepting a DNS RR that says Y's Layer 3 address is a.b.c.d. But that too takes time to fix, and you've still got to deal with the denial of service issue. Maybe you don't want to convince me that you are Y -- maybe you only want me to not be able to find the real Y.)

(3) State in the network. Even if we solve 100% of the problems associated with Layer 3 addresses appearing in places other than the Layer 3 header, you've still got all these NAT boxen maintaining lists of translated addresses. Way too complex for my tastes. This state also leads to:

(4) A forced hierarchical design (some people consider this a benefit). You've got to divide the internet up into "NAT zones" with guaranteed symmetric routing between the zones. This effectively requires a hierarchy, or some new complex protocol for NATs to talk to each other with.

But NAT only really solves one fundamental issue: BGP4 doesn't scale. The routing table size per se isn't a problem. There's no reason why backbone routers can't effectively forward based on a forwarding table of a few hundred thousand (or more) entries. BGP4 just wouldn't be an effective way to manage the NLRI that leads to a table that big. And there's no imminent shortage of IP addresses themselves (meaning that address space limitations aren't forcing us into NAT).
So, we can develop a routing protocol that scales; or we can deal with a bunch of broken applications, completely redesign DNS and DNS security, put gobs and gobs more state in the network, restrict network design to a hierarchical model that only some engineers feel is the best model, and then implement NAT. I think the best and quickest (quickest, but not necessarily quick) solution is to fix BGP.
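Brett's problem (1) is most concrete in FTP, whose PORT command carries a layer 3 address inside the application payload. A rough sketch of the rewrite a NAT box must perform (the regex and all addresses are invented for illustration; real FTP handling is messier than this):

```python
import re

# Sketch of application-layer rewriting: the FTP PORT command encodes an
# address and port as "PORT h1,h2,h3,h4,p1,p2", so a NAT box must parse
# the payload and substitute its translated address and port.

def rewrite_port_cmd(line, new_ip, new_port):
    """Rewrite a PORT command to use the NAT's outside address/port."""
    m = re.match(r"PORT \d+,\d+,\d+,\d+,\d+,\d+$", line)
    if not m:
        return line  # not a PORT command; pass through untouched
    h = new_ip.replace(".", ",")
    return "PORT %s,%d,%d" % (h, new_port // 256, new_port % 256)

rewritten = rewrite_port_cmd("PORT 10,0,0,5,4,210", "192.0.2.1", 10000)
```

This is exactly why "make sure all the boxes along the path support your *application*" becomes a deployment constraint: every new protocol that embeds addresses needs its own module in every NAT.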
Most importantly, just as CIDR requires that protocol implementations respect that IP addresses may change over time, NAT as THE new fundamental scaling technology requires that protocol implementations respect that IP addresses may change over space as well.
I don't think requiring support for addresses to change over *space* provides anywhere near the benefit/cost numbers that CIDR did.
Deployed base is a strong engineering consideration in an industry which is experiencing enormous growth. NAT has the advantage that reasonably designed existing technologies ought not even notice that NAT is happening.
Any new application that is sending layer 3 addresses above layer 3 is certainly not reasonably designed. I'll even concede that FTP is unreasonable enough in what it does with the PORT command that it should be fixed. But getting Layer 3 information out of the higher layers of a DNS packet is extremely non-trivial.
OTOH, with the right methods of allocating IPv6 space, no one should ever have to come back to get more space. Eventually that should mean fewer routes as IPv4/IPv6s4 closes down. Route filtering should be encouraged on IPv4 space and prohibited on IPv6 space, at that point, IMHO.
Could you please elaborate? I am completely lost by your logic here.
It will lead to fewer routes per entity. Currently, any ISP/NSP large enough to be getting /19s (or shorter) and advertising their own routes into the backbone probably has, and is advertising, several prefixes. Under IPv6, the vast majority of ISPs should be able to have a single prefix that encompasses all their existing users plus any anticipated growth over several years. So each ISP should be able to place just one, or at most a very small number of, advertisements in the backbone routing tables. This leads to fewer routes per provider. This, of course, will probably be offset by the existence of more providers.

-- Brett Frankenberger <brettf@netcom.com> (... Coming soon to a .sig near you ... a Humorous Quote ...)
Brett Frankenberger wrote:
:: Sean M. Doran writes ::
This from the same guy who routinely bashes ATM in the Internet Infrastructure Backbones (a view I agree with). Didn't you just recently post that you thought you were getting old because you liked simple things and didn't think there was much merit to doing routing at Layer 2 *and* Layer 3? And now you want connection state in many boxes along the path of a TCP session?
Are you sure you're not confusing Sean with me? We both worked for Sprint :) --vadim
Sean M. Doran writes...
Phil Howard <phil@charon.milepost.com> writes:
The transition to IPv6 is clearly going to have some difficulties. We are waiting on:
1. Network equipment with translation: routers have to have translation, and routing software must include translation.
Bingo!
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
Translation of addresses, whether it is between IPv4 and IPv4 or involves protocol translations as well (as is the case in IPv4->IPv6->IPv4, or IPv6->IPv4->IPv6), is simply the most practical and effective way of overcoming the two principal scaling problems of the Internet, namely the narrowness of the IPv4 address and the fact that deployed routing protocols simply suck.
Observe that so long as a translatable subset of transport layer options are used, there is absolutely no difference between NAT between IPv4 addresses and protocol translation between IPv4 and IPv6. Moreover, in the latter case, you are not restricted to IPv6s4 (who comes up with these acronyms anyway? Does anyone mind if I call IPv4 IP? If you do, well, tough.)
I came up with "IPv6s4" when I wrote that, lacking a concise reference to the subset of IPv6 space that is supposed to be the equivalent of the puny IPv4 space. I notice in your response that your meaning of "translate" and my meaning of "translate" are entirely different things. What I mean by translate is something that will not change the address, but only the packet format. Translating addresses does solve some problems, but it is not the ultimate solution for anything. It's just a quick fix. It's like chopping off "19" to save a couple of bytes back when storage space was at a premium and managers were sure they would be able to come back and fix everything at no cost in the year 2000.
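Phil's sense of "translate" (change the packet format, keep the address) can be sketched as a stateless, reversible embedding of the 32-bit IPv4 address into a wider space. The specific prefix value below is an invented stand-in for his "IPv6s4" region, chosen only to make the round trip visible:

```python
# Sketch of stateless format translation: the IPv4 address is carried
# unchanged inside a fixed, well-known 128-bit prefix, so the mapping
# needs no per-flow state and is trivially reversible.  The prefix here
# is an assumption for illustration, not a real allocation.

V6S4_PREFIX = 0xFFFF << 32  # low 96 bits reserved for embedding

def v4_to_v6s4(v4):
    """Embed a dotted-quad IPv4 address in the illustrative v6s4 space."""
    a, b, c, d = (int(x) for x in v4.split("."))
    return V6S4_PREFIX | (a << 24) | (b << 16) | (c << 8) | d

def v6s4_to_v4(v6):
    """Recover the original IPv4 address from the low 32 bits."""
    n = v6 & 0xFFFFFFFF
    return "%d.%d.%d.%d" % (n >> 24 & 255, n >> 16 & 255, n >> 8 & 255, n & 255)
```

Because the address itself never changes, this kind of translation avoids the consistency problems Phil raises later about paths switching between NATs.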
The technical goal is that end to end services will work, period, in all cases. This is possible provided that the higher order protocols do not make invalid assumptions about the transport layer. Most importantly, just as CIDR
What kinds of assumptions do you think might be made that would cause NAT to fail?
requires that protocol implementations respect that IP addresses may change over time, NAT as THE new fundamental scaling technology requires that protocol implementations respect that IP addresses may change over space as well.
There are some assumptions that are valid to make, but that fail with NAT. For example, the assumption that 2 or more packets or connections using the same IP address come from the very same interface. While that is usually the case, NAT does not make it so, and there are now boxes on the market that specifically offer to make multiple machines appear as one. That has advantages in certain cases, but it shouldn't be applied in others.
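The failing assumption can be shown in miniature: with port-overloaded NAT, two distinct inside machines present the same outside address, distinguished only by port. All names and numbers below are illustrative:

```python
import itertools

# Sketch of port-overloaded NAT ("masquerading"): every inside host is
# shown to the outside world as one shared address, so "same IP address"
# no longer implies "same interface".

_ports = itertools.count(20000)
table = {}  # outside port -> (inside_host, inside_port)

def masquerade(inside_host, inside_port):
    """Map an inside flow to the shared outside address and a fresh port."""
    port = next(_ports)
    table[port] = (inside_host, inside_port)
    return ("192.0.2.1", port)

a = masquerade("10.0.0.5", 1234)
b = masquerade("10.0.0.6", 1234)
```

From the far end, connections `a` and `b` look like the same machine; only the NAT's table knows otherwise.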
That is, so long as protocols do not assume that an IP address is the same from the point of view of all locations throughout the concatenated Internet, they will do just fine with either NAT or protocol translation or both.
You would have to be sure that there is no translation taking place other than at the end points. Paths do change in time, and if those changes switch NATs, then either some new thing has to pass along those changes to the new NAT to preserve consistency, or else the IP does change.
Returning to the observation that NAT and protocol translation are semantically equivalent from an end-to-end perspective, we now need to consider whether simple address translation or protocol translation is a better idea.
But just having [network routing software] translating IPv4 <-> IPv6s4 is not enough. We will need to manage the new IPv6 network.
Deployed base is a strong engineering consderation in an industry which is experiencing enormous growth. NAT has the advantage that reasonably designed existing technologies ought not even notice that NAT is happening.
You've switched your terms and are now using "reasonably designed" to mean "assuming NAT might be in effect". Many protocols break with NAT. NAT constrains the flexibility in protocol design that TCP/IP was supposed to give us. With NAT, it really isn't the principle of TCP/IP anymore.
Protocol translation, on the other hand, requires, as you say, new management techniques, which will generally involve lots of learning time on the part of lots of engineers and operators, wherever the new protocol is deployed.
I'm not sure what protocol translation you are referring to. I referred to IPv4 <-> IPv6s4 (meaning to a specific space) in my original post. The translation itself is trivial and doesn't require massive memory state. The tools needed are only some of the tools needed for IPv6.
The fact is, that there may be a reason to deploy a new protocol that makes this worthwhile, however, you should also note that so long as a translation between transport layers is straightforward, there is no reason why the new protocol needs to be IPv6.
What should it be? IPv7?
In fact, I welcome IPv6's fan base working on protocol translation because there are also some more interesting experimental protocols which could be deployed in precisely the same fashion that do not suffer some of the brokenesses of IPv6. Most notably:
Please be clear about what specific kind of protocol translation you are referring to.
Routing issues will become different in IPv6.
This is simply untrue. Routing issues are EXACTLY the same in IP and IPv6; the only difference is the width of the addresses, which worsens the poor scaling properties of IP with current routing protocols.
It changes the impact of address space vs. route space. If, by snapping my fingers, everything were running IPv6 with every AS having exactly one IPv6 block, the route tables would shrink in number. If the blocks are assigned in only fixed sizes, then routers would not even need to store the whole IP address for routing.
If IPv6 allocations will have varying sizes like CIDR, then we might continue to have issues of size based route filtering.
Please understand that the size-based filtering is done to limit the number of prefixes carried, and that this is completely independent of IP vs IPv6. If the number of prefixes must be kept to some maximum by filtering at, say, the 19 bit mark, the same maximum will be maintained even if the address space is much wider, and the straightforward way of doing that is to retain filters at the 19 bit mark.
I doubt a /19 gets assigned to anyone. I favor fixed-size IPv6 allocations. It can work because there is enough space to give the largest entity all they could ever use, and enough of those spaces to give one to each entity. Then no one need be filtered. Right now filtering favors the big guys, who are also the major culprits in polluting the route space with too many prefixes. The little guys can't get (and often simply don't need) large enough spaces to be routed.

Route table sizes need to be controlled, to be sure. But with the way space is handed out in IPv4 (I call it "dribble mode") the route tables have grown far more than they need to be. IPv6, along with careful handling of it, can eliminate excess route prefixes. There won't be any need for more than 1 prefix per AS. The AS probably should become part of the IPv6 space, but I'll hold off on making that suggestion seriously until I see what kinds of new routing might come along.
OTOH, with the right methods of allocating IPv6 space, no one should ever have to come back to get more space. Eventually that should mean fewer routes as IPv4/IPv6s4 closes down. Route filtering should be encouraged on IPv4 space and prohibited on IPv6 space, at that point, IMHO.
Could you please elaborate? I am completely lost by your logic here.
In IPv4, one AS can have many prefixes, and there are lots of them out there right now. That's because as space fills up, administrators go back for more. They are not given all they think they might ever need, because there isn't enough of that to go around. In IPv6 outside of IPv6s4, every AS can, and should, be given one and only one prefix. When do you think someone will come back for more space if they originally got an IPv6 /64? That's 18446744073709551616 addresses. We might well be giving out even larger blocks. And everyone can get the same size block, so the discrimination by size can be eliminated (along with the excess prefixes that caused it in the first place).

-- Phil Howard | phil at milepost dot com
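The arithmetic behind Phil's /64 figure is easy to check: a /64 leaves 64 host bits, and the 128-bit IPv6 space holds 2^64 such blocks.

```python
# Verifying the numbers quoted above: addresses inside one /64, and the
# count of /64 blocks in the whole 128-bit space.

addresses_per_64 = 2 ** (128 - 64)   # host addresses within a single /64
number_of_64s = 2 ** 64              # distinct /64 blocks in IPv6
```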
Sean Doran wrote:
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
You are of course correct, but you also say...
The technical goal is that end to end services will work, period, in all cases.
<DELIBERATELY PROVOCATIVE>

... indeed. But this can be accomplished at an even higher layer than NAT uses. E.g., it's entirely possible to implement a web browsing service without an IPv4 globally routable address space, and without NAT, just by using caching proxy technology (*). An entire ISP serving millions of users could live on a single class C. Not so long ago, we saw one IP address per web site. HTTP extensions now make one address per server possible. Running a provider-side proxy you could theoretically have 1 IP address per farm. An application layer solution is thus also doable. (*) = scalability of this vs NAT is another argument, of course.

Many applications can be fixed up the same way. Mail? Who needs to talk to anything but a local SMTP/POP server? We had a lot of talk at NANOG about how, in general, allowing users to talk to arbitrary SMTP servers was a bad thing. Fine. Dual-home your SMTP server and run your users on private address space. They can't spam any more.

In a world where the internet industry is becoming more and more like the telecoms industry, the necessity of users to have protocol level access to the network is diminishing, and the dangers of doing so are becoming greater. Which telcos will blithely hand out SS7 interconnects to users? Without (routable) IP access, there would be no SYN floods of distant networks, no source spoofing, less hacking, easier traceability, and the BGP table need only be OTO 1 entry per non-leaf node on a provider interconnection graph.

Of course there would be applications that would suffer. No telnet, for instance, except through a telnet gateway at each end (and, urm, that's probably not a bad thing). Risk of snooping by ISPs on private data (well, they can do that anyway, and if you really care, send it encrypted). No IPv4 intranet applications between customers of different providers (hang on, didn't IPv6 require tunnels anyway?).
No broken protocols which encapsulate network addresses within the payload (oh well, rewrite the protocols).

Sean seems to predict the death of end to end network layer addressing. How about the death of the end to end internet? Instead run with a core of IPv4 numbered routers and application layer gateways. Run everything else in private address space. 10.0.0.0/8 has plenty of room.

</DELIBERATELY PROVOCATIVE>

-- Alex Bligh, GX Networks (formerly Xara Networks)
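The "HTTP extensions" Alex alludes to are HTTP/1.1's Host header, which lets one address front many sites by dispatching above layer 3. A toy sketch of the idea (the site table and content strings are made up):

```python
# Sketch of name-based virtual hosting: requests arriving at one shared
# IP address are routed to the right site by the Host header, not by the
# destination address.  The table below is an invented example.

sites = {
    "www.example.com": "content for example.com",
    "www.example.net": "content for example.net",
}

def dispatch(host_header):
    """Select a site by Host header rather than by destination IP."""
    return sites.get(host_header, "404: unknown virtual host")
```

This is the mechanism that collapsed "one IP address per web site" into "one address per server (or farm)".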
Alex Bligh writes...
possible. Running a provider-side proxy you could theoretically have 1 IP address per farm. An application layer solution is thus also doable.
But you still need to find an excuse to waste 8000 more addresses so you can appear to justify getting a /19 address space to get around route filters so your multi-homing gives a return on investment.
In a world where the internet industry is becoming more and more like the telecoms industry, the necessity of users to have protocol level access to the network is diminishing, and the dangers of doing so are becoming greater. Which telcos will blithely hand out SS7 interconnects to users? Without (routable) IP access, there would be no SYN floods of distant networks, no source spoofing, less hacking, easier traceability, and the BGP table need only be OTO 1 entry per non-leaf node on a provider interconnection graph.
That's why everyone is abandoning traditional ISPs and going with proxy providers like AOL. I'm not sure if you are limiting this suggestion to just dialup accounts, or widening it to include dedicated accounts. The justifications and impact vary depending on the type of account.
Of course there would be applications that would suffer. No telnet for instance, except through a telnet gateway at each end (and, urm, that's probably not a bad thing). Risk of snooping by ISPs on private data (well they can do that anyway, and if you really care, send it encrypted). No IPv4 intranet applications between customers of different providers (hang on, didn't IPv6 require tunnels anyway?). No broken protocols which encapsulate network addresses within the payload (oh well - rewrite the protocols).
How will you be sure that every provider has a telnet gateway? I suspect that many will just leave it out. And they will leave out many other protocols/applications as well. IPv4 can be translated to IPv6s4 (my term for IPv6 in an address space that corresponds to IPv4 addresses). Of course, if we do this, it means we have to be able to continue to route all this address space even after IPv6 is fully deployed (and I'd not want to be doing that by then).
Sean seems to predict the death of end to end network layer addressing. How about the death of the end to end internet? Instead run with a core of IPv4 numbered routers and application layer gateways. Run everything else in private address space. 10.0.0.0/8 has plenty of room.
You've just written a new application based on UDP. How will it get through these application layer gateways? Will you have to write the gateway module, too, for every one of many dozens of gateway platforms? The end-to-end notion is what makes the network so powerful. Without that you end up being limited to those few applications that someone decided there is a business justification for in the gateways. Before the Internet got started in the research and academic world, there simply would never have been a business case to build it, based on the way business does its analysis. Yet, we know what the end result turned out to be.

-- Phil Howard | phil at milepost dot com
Phil Howard wrote:
possible. Running a provider-side proxy you could theoretically have 1 IP address per farm. An application layer solution is thus also doable.
But you still need to find an excuse to waste 8000 more addresses so you can appear to justify getting a /19 address space to get around route filters so your multi-homing gives a return on investment.
a) Those who put in length-dependent route filters have already shown willingness to change them in view of allocation policies (viz. /19 vs /18 filtering in 195/8 at Sprint).

b) This may be irrelevant anyway. If you are a smaller provider, you could cascade your application gateways onto those of your upstream, making the thing a hierarchy. There are already squid hierarchies which prove this isn't as lunatic as it might seem.
That's why everyone is abandoning traditional ISPs and going with proxy providers like AOL.
I'm not sure whether that comment was intended to be sarcastic or not. If it was, look at how many providers are currently looking at forced-caching/forced-proxying technology.
I'm not sure if you are limiting this suggestion to just dialup accounts, or widening it to include dedicated accounts. The justifications and impact vary depending on the type of account.
I propose (for the sake of being provocative) doing it for every sort of account. Including downstream ISPs.
How will you be sure that every provider has a telnet gateway? I suspect that many will just leave it out. And they will leave out many other protocols/applications, as well.
You aren't. How under the current environment can you ensure that your route advert gets heard by a distant provider? You can't. Both are solved in general by market forces.
IPv4 can be translated to IPv6s4 (my term for IPv6 in an address space that corresponds to IPv4 addresses). Of course if we do this it means we have to be able to continue to route all this address space even after IPv6 is fully deployed (I'd not want to by then).
What problem is IPv6 solving? Is lack of IP address space a real problem? Or is the real problem that communicating hosts need to share a common address space which is represented at the network layer (let alone at the network layer in an end to end manner)?
Sean seems to predict the death of end to end network layer addressing. How about the death of the end to end internet? Instead run with a core of IPv4 numbered routers and application layer gateways. Run everything else in private address space. 10.0.0.0/8 has plenty of room.
You've just written a new application based on UDP. How will it get through these application layer gateways? Will you have to write the gateway module, too, for every one of many dozens of gateway platforms?
Possibly. One can imagine a generalized proxy which is not much different from tunnelling. Do we actually want an arbitrary new UDP application to have host-host connectivity automatically if the relevant hosts are internet connected? (Remember the "deliberately provocative" bit.) Let's say that application was "realtime audio broadcast". In my putative world, the answer would be "no". I'd rather have it carried multicast and translated to multiple unicast UDP streams near the leaf nodes. I allege most internet users do not want arbitrary protocol packets turning up on arbitrary hosts (viz. SMB/NetBIOS holes).
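Alex's multicast-to-unicast idea at the leaf amounts to a simple fan-out: one datagram arrives at the gateway carried multicast, and it is replicated as one unicast copy per subscriber. A sketch with invented subscriber addresses:

```python
# Sketch of leaf-node fan-out: a single inbound multicast datagram is
# replicated into per-subscriber unicast copies.  Addresses and payload
# are illustrative only.

def fan_out(datagram, subscribers):
    """Return one (destination, payload) pair per subscriber."""
    return [(addr, datagram) for addr in subscribers]

copies = fan_out(b"audio frame", ["10.0.0.5", "10.0.0.6", "10.0.0.7"])
```

The backbone carries one stream; only the last hop pays the per-subscriber replication cost.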
The end-to-end notion is what makes the network so powerful.
and so dangerous.
Without that you end up being limited to those few applications that someone decided there is a business justification for in the gateways.
Not necessarily (viz. tunnelling), but there is some truth in what you say. But then the same is true currently at the protocol layer. Whatever happened to the IPX internet? (Not much.)
Before the Internet got started in the research and academic world, there simply would never have been a business case to build it, based on the way business does its analysis. Yet, we know what the end result turned out to be.
Compare and contrast with the telephone network, where originally most if not all signalling functions were available to end users in the first exchanges. This gradually got hidden (2600 etc.) and is currently largely inaccessible. If you want to build a new phone signalling protocol, you have to be a PTT or an exchange manufacturer, and get broad industry approval. This does not prevent new phone applications from appearing.

-- Alex Bligh, GX Networks (formerly Xara Networks)
[ soapbox ]
What problem is IPv6 solving?
IPv6 solved the problem it was intended to solve. In '93-'94, there was an increasingly sensationalist panic in the half-informed press that the internet was going to run out of address space in the next fifteen minutes. This was despite the ALE WG (soon cidrd) analyses to the contrary, but it made 'good' press. This presented a serious problem to the internet community, both in market perception and in the amount of time and energy (of otherwise useful folk) it was taking to combat the bogus press. The IPv6 decision immediately solved that problem. Victory over the address space problem was declared. The industry press went on to be cluelessly sensationalistic about other things. And IPv6 did not even have to be deployed! Many people consider this to be a good thing, as IPv6 has not really been demonstrated to improve anything. OTOH, the much maligned CIDR effort received horrid press but seems to have been amazingly successful in actual deployment and use. Judge by the results; look at the curve (Tony Bates's weekly cidrd report). randy
Randy, Randy Bush wrote:
What problem is IPv6 solving?
IPv6 solved the problem it was intended to solve. ... And IPv6 did not even have to be deployed! Many people consider this to be a good thing, as IPv6 has not really been demonstrated to improve anything.
I must remember to publish my EGP stargate 99 proposal which by use of parallel quantum-mechanical computation evaluates optimum routing path based on all known network performance and policy criteria. By publishing this, MAE-East being broken will instantly no longer be a problem for us ISPs. :-) Seriously: IPv6 - spot on. -- Alex Bligh GX Networks (formerly Xara Networks)
On Sun, Nov 02, 1997 at 11:59:17AM -0600, Phil Howard wrote:
In a world where the internet industry is becoming more and more like the telecoms industry, the necessity for users to have protocol-level access to the network is diminishing, and the dangers of granting it are becoming greater. Which telcos will blithely hand out SS7 interconnects to users? Without (routable) IP access, there would be no SYN floods of distant networks, no source spoofing, less hacking, easier traceability, and the BGP table need only be on the order of one entry per non-leaf node on a provider interconnection graph.
That's why everyone is abandoning traditional ISPs and going with proxy providers like AOL.
Um, "Huh, Phil?" Following the Boardwatch ISP directory, for just one source, seems to indicate otherwise. This is a fairly sweeping observation... on what sources do you base it? Replies redirected to nodlist@nodewarrior.net. Cheers, -- jra -- Jay R. Ashworth jra@baylink.com Member of the Technical Staff Unsolicited Commercial Emailers Sued The Suncoast Freenet "Pedantry. It's not just a job, it's an Tampa Bay, Florida adventure." -- someone on AFU +1 813 790 7592
Jay R. Ashworth writes...
On Sun, Nov 02, 1997 at 11:59:17AM -0600, Phil Howard wrote:
That's why everyone is abandoning traditional ISPs and going with proxy providers like AOL.
Um, "Huh, Phil?"
Following the Boardwatch ISP directory, for just one source, seems to indicate otherwise.
This is a fairly sweeping observation... on what sources do you base it?
Oops. Forgot the smiley. My point being that large numbers of people really do want from the Internet what end-to-end addressing can give them, even if they are clueless about how things work to get what they want. Test-market a dialup service at a reduced rate that gives people a private-space address behind a proxy server. See how many people sign up. IMHO, it won't be all that many. -- Phil Howard | phil at milepost dot com
[ CC'd direct, because I don't know if you subscribed or not. ] On Sun, Nov 02, 1997 at 04:33:47PM -0600, Phil Howard wrote:
Jay R. Ashworth writes...
On Sun, Nov 02, 1997 at 11:59:17AM -0600, Phil Howard wrote:
That's why everyone is abandoning traditional ISPs and going with proxy providers like AOL.
Um, "Huh, Phil?"
Following the Boardwatch ISP directory, for just one source, seems to indicate otherwise.
This is a fairly sweeping observation... on what sources do you base it?
Oops. Forgot the smiley.
Oh. Yup. Got it.
My point being that large numbers of people really do want from the Internet what end-to-end addressing can give them, even if they are clueless about how things work to get what they want.
Yup. Everyone thinks they can tinker with fundamental aspects of the architecture of the net without breaking the ineffable qualities that made it get so popular in the first place. Unfortunately, no one can be exactly certain which combination of things it _is_ that's done this...
Test market a dialup service at a reduced rate that gives people a private space address behind a proxy server. See how many people sign up. IMHO, it won't be all that many.
I concur. This will be my last reply on the topic CC'd to NANOG; interested parties, trundle on over to the NODlist. Cheers, -- jra
Sean M. Doran wrote:
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
Well, there are two equivalent approaches to the scaling problem: NAT or dynamic address allocation. I'm not convinced that NAT is better in the long term, though I won't argue that it is the most practical near-term solution. The advantage of dynamic addressing is that it preserves the clean separation between the application and transport layers, which is more kosher architecturally, and allows a lot of things to be made much simpler. The disadvantage is that it requires serious rework of many things, while NAT can be just a "box" in the middle.
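The "box in the middle" idea can be sketched in a few lines. This is a purely illustrative toy, not any real NAT implementation: all names, addresses, and port choices below are made up for the example. The box rewrites the private source address/port of outbound traffic to a shared public address and remembers the mapping so replies can be delivered.

```python
# Toy sketch of a NAT "box in the middle" (illustrative only).
# Outbound packets get their private (addr, port) rewritten to a
# shared public address; the remembered mapping routes replies back.

PUBLIC_ADDR = "192.0.2.1"  # hypothetical public address of the NAT box

class Nat:
    def __init__(self):
        self.out = {}         # (priv_addr, priv_port) -> public_port
        self.back = {}        # public_port -> (priv_addr, priv_port)
        self.next_port = 40000

    def outbound(self, src_addr, src_port):
        key = (src_addr, src_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return PUBLIC_ADDR, self.out[key]

    def inbound(self, dst_port):
        # Returns None (drop) if no mapping exists -- which is exactly
        # why unsolicited inbound traffic can't reach private hosts.
        return self.back.get(dst_port)

nat = Nat()
pub = nat.outbound("10.0.0.5", 12345)   # -> ("192.0.2.1", 40000)
reply_dest = nat.inbound(pub[1])        # -> ("10.0.0.5", 12345)
```

Note that the inbound lookup failing for unmapped ports is the same property the thread discusses: arbitrary new protocols don't get automatic host-to-host reachability through such a box.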
Routing issues are EXACTLY the same in IP and IPv6, the only difference is the width of the addresses, which worsens the poor scaling properties of IP with current routing protocols.
The claim that routing in IPv6 will be magically fixed is exactly what prompted me to lose interest in IPv6 completely, and tag the IPv6 crew as clowns. --vadim
"Sean M. Doran" <smd@clock.org> writes:
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
I would probably be tarred as a fan of IPv6, and this realization is news to me. What I do think is clear is that NAT has some very immediate short-term benefits. What I am much less clear about is what happens in the long term. NAT "fixes" some immediate problems by pushing those problems elsewhere (e.g., your observation later that higher layers had better not violate certain assumptions). Whether the problems that crop up elsewhere are easier to solve than the current ones (e.g., CIDR-style forced renumbering) is IMO an open question.
The technical goal is that end to end services will work, period, in all cases. This is possible provided that the higher order protocols do not make invalid assumptions about the transport layer. Most importantly, just as CIDR requires that protocol implementations respect that IP addresses may change over time, NAT as THE new fundamental scaling technology requires that protocol implementations respect that IP addresses may change over space as well.
OK. So IPSec and most other security protocols are botched? Fundamentally, security likes the idea that it trusts no one other than the originator of data and the ultimate destination of data. That means no one in between should be able to examine the data, much less modify any of it. That includes NATs rewriting addresses. IPSec (and DNSSEC) do not allow addresses to be rewritten in packets. Full stop. Thomas
At 01:23 PM 11/3/97 -0500, Thomas Narten wrote:
Fundamentally, security likes the idea that it trusts no one other than the originator of data and the ultimate destination of data. That means no one in between should be able to examine the data, much less modify any of it. That includes NATs rewriting addresses. IPSec (and DNSSEC) do not allow addresses to be rewritten in packets. Full stop.
Not to be contentious, but there are valid reasons why "addresses" should be very visible to the network and potentially subject to modification. Just offhand, the ability to prevent malicious attacks and hunt down fraud are valid reasons on their own for visibility to network operations. I agree 100% when it comes to payload, but network addresses serve the network as much as the packet. To the extent that we start deploying networks with more functionality (such as mail relaying and web caching), the same logic applies to DNS names. /John
I agree 100% when it comes to payload, but network addresses serve the network as much as the packet. To the extent that we start deploying networks with more functionality (such as mail relaying and web caching), then the same logic applies to DNS names.
One big problem we have today is that transport addresses have embedded within them network addresses. To cryptographically protect transport-level connections in practice means that network level addresses (i.e., those in the IP header) cannot be safely modified. Sure, we can say "that is broken and must be changed", but doing so will not be painless or free and begs the question as to whether the total cost of doing this exceeds the benefits NAT brings. It is questions like this that make me question whether we fully understand how scalable/viable NAT really is for the long term. Thomas
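The embedding Thomas describes is concrete: the TCP checksum is computed over a pseudo-header that includes the source and destination IP addresses (per RFC 793), so a NAT that rewrites the IP header must also fix up the transport checksum. A short sketch of that pseudo-header computation (the segment bytes here are a made-up toy, not real traffic):

```python
# Why transport addresses embed network addresses: the TCP checksum
# covers a pseudo-header containing both IP addresses, so rewriting
# an address changes the checksum the endpoints would verify.
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:                       # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, len(segment)))  # proto 6 = TCP
    return ones_complement_sum(pseudo + segment)

seg = b"\x30\x39\x00\x50" + b"\x00" * 16      # toy 20-byte TCP segment
before = tcp_checksum("10.0.0.5",  "198.51.100.7", seg)
after  = tcp_checksum("192.0.2.1", "198.51.100.7", seg)  # NAT rewrote src
assert before != after    # the NAT must recompute, or the segment is dropped
```

If the transport header is cryptographically protected end to end, the NAT can no longer perform that fix-up, which is exactly the tension with IPSec raised above.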
In message <9711031919.AA19222@cichlid.raleigh.ibm.com>, Thomas Narten writes:
I agree 100% when it comes to payload, but network addresses serve the network as much as the packet. To the extent that we start deploying networks with more functionality (such as mail relaying and web caching), then the same logic applies to DNS names.
One big problem we have today is that transport addresses have embedded within them network addresses. To cryptographically protect transport-level connections in practice means that network level addresses (i.e., those in the IP header) cannot be safely modified.
I don't think that this is correct. IPsec relies on IP-in-IP tunnelling, thus the transport-level identifiers can be modified without affecting the payload. Since I don't think we have any boxes doing signatures on single packets, I don't see a problem here. Unless you are modifying the addresses presented to the application inside the NATed network during the session interval, you will be able to use the transport-level identifiers as a session tag for the decryption. If you have a payload that is encrypted and signed, there is fundamentally no reason for the application to know anything other than a magic cookie return address.
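The tunnelling argument can be illustrated schematically. In this toy (which simplifies the real IPsec tunnel-mode packet format to a dict), the protected inner packet is an opaque blob, and a NAT rewrites only the outer header the network routes on:

```python
# Toy illustration of the tunnel-mode argument: the protected inner
# packet is opaque, and a NAT touches only the outer header.
# Field layout is deliberately simplified -- not real IPsec/ESP format.

def encapsulate(outer_src: str, outer_dst: str, protected: bytes) -> dict:
    return {"src": outer_src, "dst": outer_dst, "payload": protected}

def nat_rewrite_src(pkt: dict, new_src: str) -> dict:
    # Only the outer header changes; the protected payload is untouched,
    # so its signature still verifies at the far tunnel endpoint.
    return {"src": new_src, "dst": pkt["dst"], "payload": pkt["payload"]}

inner = b"opaque-encrypted-and-signed-inner-packet"
pkt = encapsulate("10.0.0.5", "198.51.100.7", inner)
out = nat_rewrite_src(pkt, "192.0.2.1")
assert out["payload"] == inner          # end-to-end protection intact
```

Thomas's objection still applies where the protected data itself binds the addresses the NAT rewrites; the sketch only shows the case where it does not.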
Sure, we can say "that is broken and must be changed", but doing so will not be painless or free and begs the question as to whether the total cost of doing this exceeds the benefits NAT brings. It is questions like this that make me question whether we fully understand how scalable/viable NAT really is for the long term.
--- Jeremy Porter, Freeside Communications, Inc. jerry@fc.net PO BOX 80315 Austin, Tx 78708 | 1-800-968-8750 | 512-458-9810 http://www.fc.net
Yo Jeremy! On Mon, 3 Nov 1997, Jeremy Porter wrote:
If you have a payload that is encrypted and signed, there is fundementally no reason for the application to know anything other than a magic cookie return address.
SSH keeps track, forever, of the remote IP address/key pair to prevent man-in-the-middle and trojan horse attacks. The authors mention in their material that it is an important defense. Check out http://www.cs.hut.fi/ssh for further info. RGDS GARY --------------------------------------------------------------------------- Gary E. Miller Rellim 2680 Bayshore Pkwy, #202 Mountain View, CA 94043-1009 gem@rellim.com Tel:+1(650)964-1186 Fax:+1(650)964-1176
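The defense Gary describes amounts to trust-on-first-use pinning of an (address, key) pair. A minimal sketch of the idea follows; this is illustrative pseudologic, not SSH's actual code, and the addresses and fingerprints are made up:

```python
# Sketch of trust-on-first-use host-key pinning (illustrative, not
# SSH's real implementation): remember the key seen for an address on
# first contact, and refuse to proceed if it later changes -- the
# classic defense against man-in-the-middle substitution.

known_hosts: dict[str, str] = {}   # address -> key fingerprint

def check_host(addr: str, fingerprint: str) -> bool:
    if addr not in known_hosts:
        known_hosts[addr] = fingerprint   # first contact: pin the key
        return True
    return known_hosts[addr] == fingerprint

first  = check_host("203.0.113.9", "ab:cd")   # first use: pinned, True
same   = check_host("203.0.113.9", "ab:cd")   # same key: True
forged = check_host("203.0.113.9", "ff:ff")   # changed key: False
```

Note the relevance to this thread: a NAT that presents different addresses to different vantage points undermines exactly this kind of address-keyed pinning, since the same host no longer has one stable address to pin against.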
It might be useful to take into account where the explosion in IP-addressable devices is supposed to come from. Embedded devices don't usually need globally unique addresses; i.e., my house might have a globally unique address, but my toaster won't. And my VCR can just go through NAT to update its TV schedule at night. Web servers no longer need globally unique addresses for every virtual website. These addresses will become available again in the next couple of years. Web browsers certainly don't need globally unique addresses. My guess is that IPv4 will tide us over until photonic routing changes the rules of the game in 5-10 years anyway. Dirk On Mon, Nov 03, 1997 at 01:23:18PM -0500, Thomas Narten wrote:
"Sean M. Doran" <smd@clock.org> writes:
The thing that amazes me about people who are fans of IPv6 is that they have realized that NAT is THE fundamental scaling technology for the Internet.
I would probably be tarred as a fan of IPv6, and this realization is news to me.
What I do think is clear is that NAT has some very immediate short-term benefits. What I am much less clear about is what happens in the long term. NAT "fixes" some immediate problems by pushing those problems elsewhere (e.g., your observation later that higher layers had better not violate certain assumptions). Whether the problems that crop up elsewhere are easier to solve than the current ones (e.g., CIDR-style forced renumbering) is IMO an open question.
The technical goal is that end to end services will work, period, in all cases. This is possible provided that the higher order protocols do not make invalid assumptions about the transport layer. Most importantly, just as CIDR requires that protocol implementations respect that IP addresses may change over time, NAT as THE new fundamental scaling technology requires that protocol implementations respect that IP addresses may change over space as well.
OK. So IPSec and most other security protocols are botched? Fundamentally, security likes the idea that it trusts no one other than the originator of data and the ultimate destination of data. That means no one in between should be able to examine the data, much less modify any of it. That includes NATs rewriting addresses. IPSec (and DNSSEC) do not allow addresses to be rewritten in packets. Full stop.
Thomas
participants (13)
-
Alex Bligh
-
Brett Frankenberger
-
Dirk Harms-Merbitz
-
Gary E. Miller
-
Jay R. Ashworth
-
Jeremy Porter
-
Jerry Scharf
-
John Curran
-
Phil Howard
-
Randy Bush
-
Sean M. Doran
-
Thomas Narten
-
Vadim Antonov