In message <Pine.LNX.4.44.0301031853520.19798-100000@www.everquick.net>, "E.B. Dreger" writes:
EL> Date: Fri, 3 Jan 2003 13:44:53 -0500
EL> From: Edward Lewis

EL> The DNS protocol is not 8-bit safe, much less any
EL> implementations of it.  This is because ASCII upper case
EL> characters are down cased in comparisons.  I.e., the
My point is there's no need to force chars <= 0x7f if DNS servers are properly implemented. If they're not properly implemented, why not, and whose fault is that? Catering to bad or broken implementations instead of following standards is not a good way to ensure interoperability.
DNS labels are encoded as a one-octet length followed by that number of octets, with no restrictions on the content of those octets. Show me where an RFC says something to the effect of "labels and <any type of> RR MUST NOT contain octets > 0x7f" that rescinds RFC 1035.
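A minimal sketch of that encoding in C (encode_name() and the fixed 255-octet output buffer are made up here for illustration, not taken from any real resolver):

/*
 * Minimal sketch: encode a dotted name into RFC 1035 wire-format
 * labels -- a one-octet length followed by that many octets,
 * terminated by the zero-length root label.  Octet values are passed
 * through untouched; only the lengths are checked (63 octets per
 * label, 255 per name).  The caller is assumed to supply a buffer of
 * at least 255 octets.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static int encode_name(const char *name, uint8_t *out)
{
    size_t pos = 0;
    const char *p = name;

    while (*p) {
        const char *dot = strchr(p, '.');
        size_t len = dot ? (size_t)(dot - p) : strlen(p);

        if (len == 0 || len > 63)      /* RFC 1035: 1..63 octets per label */
            return -1;
        if (pos + len + 2 > 255)       /* RFC 1035: at most 255 octets per name */
            return -1;

        out[pos++] = (uint8_t)len;     /* one-octet length... */
        memcpy(out + pos, p, len);     /* ...then that many octets, any values */
        pos += len;

        p = dot ? dot + 1 : p + len;
    }
    out[pos++] = 0;                    /* zero-length root label ends the name */
    return (int)pos;
}

int main(void)
{
    uint8_t buf[255];
    int n = encode_name("example.com", buf);   /* hypothetical test name */

    for (int i = 0; i < n; i++)
        printf("%02x ", buf[i]);               /* 07 65 78 ... 03 63 6f 6d 00 */
    printf("\n");
    return 0;
}

The hard limits RFC 1035 places on this format are length limits, not limits on the values of the octets themselves.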
Yes, comparisons are case-insensitive. So what? strcasecmp() works on ASCII strings. Now it must work on <new encoding x>. Why not let <new encoding x> be UTF-8, something programmers should support already? Maybe MS-style Unicode encoding? Why add yet another encoding?!
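As a minimal sketch, here is what that comparison looks like if one reads RFC 1035 literally -- fold only ASCII 'A'..'Z' and compare every other octet verbatim (dns_label_eq() and the UTF-8 test labels are made up for illustration):

/*
 * Minimal sketch: label comparison that folds only ASCII upper case
 * and compares all other octets bit-for-bit.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

static uint8_t dns_tolower(uint8_t c)
{
    return (c >= 'A' && c <= 'Z') ? (uint8_t)(c + ('a' - 'A')) : c;
}

static int dns_label_eq(const uint8_t *a, const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (dns_tolower(a[i]) != dns_tolower(b[i]))
            return 0;
    return 1;
}

int main(void)
{
    /* "MüLLER" vs "müller" in UTF-8: the 0xC3 0xBC octets compare verbatim. */
    const uint8_t x[] = { 'M', 0xC3, 0xBC, 'L', 'L', 'E', 'R' };
    const uint8_t y[] = { 'm', 0xC3, 0xBC, 'l', 'l', 'e', 'r' };

    printf("%s\n", dns_label_eq(x, y, sizeof x) ? "equal" : "different");
    return 0;
}

Under that rule, octets above 0x7f pass through the comparison untouched, whatever encoding they happen to be in.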
I'm sorry, but this is incorrect in many different dimensions.  The subject was discussed exhaustively in the IETF's IDN working group; I refer you to its archive for detailed discussions.  Among many other things, your assertion about the simplicity of name comparisons is wrong; see draft-hoffman-stringprep-07.txt for a discussion of that issue.  As for 8-bit clean DNS -- well, apart from the many possible ways to encode things, there's the issue of the many applications that aren't 8-bit clean, including (per the RFC 822 spec) SMTP.  If "just use 8-bit clean DNS" were sufficient, we'd have been there several years ago.  See http://www.ietf.org/html.charters/idn-charter.html for many more pointers.

		--Steve Bellovin, http://www.research.att.com/~smb (me)
		http://www.wilyhacker.com (2nd edition of "Firewalls" book)
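For what it's worth, a minimal sketch of one part of the problem stringprep deals with (the strings are hypothetical, chosen only to illustrate): the same displayed name can arrive as different UTF-8 byte sequences, so an octet-wise or strcasecmp()-style comparison is not enough on its own.

/*
 * Minimal sketch, hypothetical strings: "café" encoded precomposed
 * (U+00E9) versus decomposed ('e' plus combining U+0301).  The byte
 * strings differ even though Unicode treats them as canonically
 * equivalent.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char nfc[] = "caf\xC3\xA9";      /* c a f <C3 A9>    (5 octets) */
    const char nfd[] = "cafe\xCC\x81";     /* c a f e <CC 81>  (6 octets) */

    printf("strcmp()  -> %d\n", strcmp(nfc, nfd));                 /* nonzero */
    printf("lengths   -> %zu vs %zu\n", strlen(nfc), strlen(nfd)); /* 5 vs 6  */
    return 0;
}

Defining and applying that kind of equivalence is the sort of work stringprep/nameprep exists to specify; a strcasecmp() analogue cannot do it.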