Greg A. Woods wrote:
So are you making a case to allow RFC1918 source addresses out into the network?
Huh? No, I thought I was saying very much the opposite! I don't want my upstream provider to use RFC1918 on inter-router links, but they do anyway. I'd like them to filter those addresses too, but they won't.
I do agree they should be filtered out. At what point should we draw the line and say who can, and who cannot, use RFC1918 addresses on links? My first thought would be that any link carrying transit traffic from more than one AS, or any link between ASes, should always use fully routable addresses. Any better ideas?
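For what it's worth, the check such a border filter has to apply to each source address is simple to state; the hard part is getting providers to deploy it. Here is a minimal sketch in Python, using the standard ipaddress module (the sample addresses at the bottom are made up for illustration):

    # Sketch of an ingress check: drop packets whose source address falls in
    # RFC1918 space.  A real filter lives in the router or firewall config;
    # this just shows the test itself.
    import ipaddress

    RFC1918 = [ipaddress.ip_network("10.0.0.0/8"),
               ipaddress.ip_network("172.16.0.0/12"),
               ipaddress.ip_network("192.168.0.0/16")]

    def should_drop(source):
        """True if a packet with this source address should be filtered
        at an AS boundary."""
        src = ipaddress.ip_address(source)
        return any(src in net for net in RFC1918)

    for addr in ("192.168.1.5", "172.20.9.1", "198.51.100.7"):
        print(addr, "drop" if should_drop(addr) else "pass")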
If you do all your internal routing over ATM or FR virtual circuits then you won't need to (and in fact cannot) use IP numbers for those circuits -- it all looks like the physical layer from IP's perspective (the theory being that if you don't need IPs for inter-router links, you won't be burning precious unique IPs and won't feel the pressure to use RFC1918 numbers instead). I'm certainly no expert at this, but from the outside I've seen it done quite successfully. It sure cuts down on the hop count visible from traceroute, too!
The FR cloud will look like one hop as far as I can see. But none of my RFC1918 links are FR or ATM. They are plain DS1/24*N (aside from the internal aliases, but those aren't even links).
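To put the traceroute point another way: only devices doing IP forwarding decrement the TTL, so however many FR or ATM switches sit inside the cloud, they add nothing to the hop count. A toy sketch in Python (the path below is invented) of that accounting:

    # Toy illustration: traceroute sees one hop per TTL decrement, i.e. one per
    # IP router.  Layer-2 switches in an FR/ATM cloud never touch the IP header.
    path = [("edge-router-a", "ip"),
            ("fr-switch-1",   "layer2"),
            ("fr-switch-2",   "layer2"),
            ("fr-switch-3",   "layer2"),
            ("edge-router-b", "ip")]

    def visible_hops(path):
        """Count the hops traceroute would show for this path."""
        return sum(1 for _name, layer in path if layer == "ip")

    print(visible_hops(path))   # prints 2; the whole FR cloud is invisible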
It's damn near impossible to debug from the outside, of course, but sometimes that's desirable! ;-)
I remember the first place I put up a firewall: I blocked pretty much everything, including ping (from outside) and traceroute (from outside). The reason was to conform to corporate policy regarding confidentiality of facilities and resources, to guard against competitors snooping around. Even so much as seeing how many IPs would answer ping was considered proprietary company information. It was my goal to limit access to just those resources required for the company's business. I think I did it pretty well. I only got one complaint about it, and that was from Randy Bush.
If you're proposing another set of addresses be reserved for uses like this, then I'd be in favor of it with you. Using RFC1918 is certainly not the best way to do this, but using allocated space is no better as long as allocations are tight.
Using any other set of reserved addresses would have exactly the same problem as using RFC1918 addresses has. The only two viable options are to either use globally unique addresses, or not to use any IP routing internally at all.
I do see another possibility. I would call these "public overload" addresses. By public, I mean they would be allowed to transit as source addresses. By overload, I mean more than one site could use them at the same time, although they should be unique within an administrative scope, much as RFC1918 space is. As to the impact that might have on the net, I cannot say. There could very well be more impact than RFC1918 has, so it's probably not a good idea. I just see it as a possibility.
People don't know how to separate their internet DNS from their intranet DNS. Or maybe they don't want to put the money into that kind of structure. If BIND could be modified to deliver different results depending on the source of the request, or the interface it arrived on, then it might become easy for people to set up DNS to avoid this.
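That selection is the whole trick of a split-horizon setup: if the query arrived from an inside address (or on the inside interface), answer from the internal data, otherwise answer from the public data. A minimal sketch of just that decision in Python; the zone data and the "inside" ranges are invented for illustration:

    # Sketch of the decision a split-horizon nameserver makes.  A real server
    # would do this in its configuration, not in a script.
    import ipaddress

    INSIDE_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                   ipaddress.ip_network("192.168.0.0/16")]

    INTERNAL_VIEW = {"www.example.com": "10.1.2.3"}       # what inside clients see
    EXTERNAL_VIEW = {"www.example.com": "198.51.100.25"}  # what the world sees

    def answer(qname, client_addr):
        """Pick the answer set based on where the query came from."""
        client = ipaddress.ip_address(client_addr)
        inside = any(client in net for net in INSIDE_NETS)
        view = INTERNAL_VIEW if inside else EXTERNAL_VIEW
        return view.get(qname, "NXDOMAIN")

    print(answer("www.example.com", "10.5.5.5"))      # internal answer
    print(answer("www.example.com", "203.0.113.40"))  # external answer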
Yes, it can be done, but even I am not yet using the latest software, which makes this much easier, on all the machines I manage.
I haven't seen how to do it in the newest BIND. I tried some tricks but haven't managed to accomplish it.

-- 
Phil Howard  KA9WGN   phil@intur.net   phil@ipal.net