Re: The End-To-End Internet (was Re: Blocking MX query)
----- Original Message -----
From: "Owen DeLong" <owen@delong.com>
I am confused... I don't understand your comment.
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.

We see it now alleged that the opposite is true: that a laptop, say, like mine, which runs Linux and postfix, and does not require a smarthost to deliver mail to a remote server, *is a bad actor* *precisely because it does that* (in attempting to send mail directly to a domain's MX server) *from behind a NAT router*, and possibly different ones at different times.

I find these conflicting reports very conflicting. Either the end-to-end principle *is* the Prime Directive... or it is *not*.

Cheers,
-- jra
--
Jay R. Ashworth                  Baylink                       jra@baylink.com
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates     http://baylink.pitas.com         2000 Land Rover DII
St Petersburg FL USA               #natog                      +1 727 647 1274
On Sep 4, 2012, at 14:22, Jay Ashworth wrote:
I find these conflicting reports very conflicting. Either the end-to-end principle *is* the Prime Directive... or it is *not*.
Just because something is of extremely high importance does not mean it still can't be overridden when there's good enough reason. In this case, in the majority of "random computer on the internet" IP blocks, the ratio of spambots to legitimate mail senders is so far off balance that a whitelisting approach to allowing outbound port 25 traffic is not unreasonable. Unlike the bad kinds of NAT, this doesn't also indiscriminately block thousands of other uses; it exclusively affects email traffic, in a way which is trivial for the legitimate user to work around while stopping the random infected hosts in their tracks.

Many providers also block traffic on ports like 137 (NetBIOS) on "consumer" space for similar reasons: the malicious or unwanted uses vastly outweigh the legitimate ones. The reason bad NATs get dumped on is that there are better solutions both known and available on the market.

If you have an idea for a way to allow your laptop to send messages directly while still stopping or minimizing the ability of the thousands of zombies sharing an ISP with you from doing the same, the world would love to hear it.

---
Sean Harlow
sean@seanharlow.info
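[On a Linux edge device, the whitelisting policy Sean describes might be sketched with iptables roughly as follows. This is an illustrative sketch only; the relay address is a documentation-range stand-in, not anyone's actual configuration.]

```shell
# Sketch: allow forwarded outbound SMTP only to a whitelisted relay.
# 198.51.100.25 is a documentation address standing in for the ISP smarthost.
iptables -N SMTP_OUT
iptables -A FORWARD -p tcp --dport 25 -j SMTP_OUT
iptables -A SMTP_OUT -d 198.51.100.25 -j ACCEPT          # permitted relay
iptables -A SMTP_OUT -j REJECT --reject-with tcp-reset   # everything else
# Submission ports 465/587 are untouched, so legitimate users can still
# reach any provider offering authenticated submission.
```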
On Tue, Sep 4, 2012 at 2:22 PM, Jay Ashworth <jra@baylink.com> wrote:
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.
That's what firewalls *are for*, Jay.

They intentionally break end-to-end for communications classified by the network owner as undesirable. Whether a particular firewall employs NAT or not is largely beside the point here. Either way, the firewall is *supposed* to break some of the end-to-end communication paths.

Regards,
Bill Herrin
--
William D. Herrin ................ herrin@dirtside.com  bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
----- Original Message -----
From: "William Herrin" <bill@herrin.us>
That's what firewalls *are for* Jay. They intentionally break end-to-end for communications classified by the network owner as undesirable. Whether a particular firewall employs NAT or not is largely beside the point here. Either way, the firewall is *supposed* to break some of the end to end communication paths.
Correct, Bill.

Hopefully, everyone else here who thinks DNAT is the anti-Christ heard the "largely beside the point" part of your assertion, with which I agree.

Cheers,
-- jra
--
Jay R. Ashworth                  Baylink                       jra@baylink.com
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates     http://baylink.pitas.com         2000 Land Rover DII
St Petersburg FL USA               #natog                      +1 727 647 1274
On Tue, Sep 04, 2012 at 03:45:32PM -0400, William Herrin wrote:
That's what firewalls *are for* Jay. They intentionally break end-to-end for communications classified by the network owner as undesirable. Whether a particular firewall employs NAT or not is largely beside the point here. Either way, the firewall is *supposed* to break some of the end to end communication paths.
Which has had two basic results:

First, we've raised at least two generations of programmers who cannot write a network-facing service able to stand up in so much as a stiff breeze. "Well, it's behind the firewall, so no one will be able to see it."

Second, we've killed -- utterly and completely -- countless promising technologies, and forced the rest to somehow figure out either how to pretend to be HTTP or how to tunnel themselves. That's just sad.

But even then, we're not talking about an end user choosing not to permit certain kinds of inbound connectivity. We're talking about carriers inspecting and selectively interfering with (and in some cases outright manipulating) communication in transit. That's just plain wrong.
On Tue, Sep 4, 2012 at 3:45 PM, William Herrin <bill@herrin.us> wrote:
On Tue, Sep 4, 2012 at 2:22 PM, Jay Ashworth <jra@baylink.com> wrote:
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.
That's what firewalls *are for* Jay. They intentionally break end-to-end for communications classified by the network owner as undesirable. Whether a particular firewall employs NAT or not is largely beside the point here. Either way, the firewall is *supposed* to break some of the end to end communication paths.
Exactly - we're talking about a *(subtle?)* difference here.

1) Breaking the E2E model because your security policy (effectively) dictates it. For the record, this is fine, as it is your decision for your network.

2) Being forced to break that model by deficiencies in the underlying protocol/address-family. This is, shall we say, sub-optimal.

/TJ
On 9/4/2012 2:22 PM, Jay Ashworth wrote:
----- Original Message -----
From: "Owen DeLong" <owen@delong.com>
I am confused... I don't understand your comment.
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.
We see it now alleged that the opposite is true: that a laptop, say, like mine, which runs Linux and postfix, and does not require a smarthost to deliver mail to a remote server *is a bad actor* *precisely because it does that* (in attempting to send mail directly to a domain's MX server) *from behind a NAT router*, and possibly different ones at different times.
I find these conflicting reports very conflicting. Either the end-to-end principle *is* the Prime Directive... or it is *not*.
The end-to-end design principle pushes application functions to endpoints instead of placing these functions in the network itself. This principle requires that endpoints be *capable* of creating connections to each other. Network system design must support these connections being initiated by either side - which is where NAT implementations usually fail.

There is no requirement that all endpoints be *permitted* to connect to and use any service of any other endpoint. The end-to-end design principle does not require a complete lack of authentication or authorization. I can refuse connections to port 25 on my endpoint (mail server) from hosts that do not conform to my requirements (e.g. those that do not have forward-confirmed reverse DNS) without violating the end-to-end design principle in any way.

Thus it is a false chain of conclusions to say that:

- end-to-end is violated by restricting connections to/from certain hosts
  [therefore]
- the end-to-end design principle is not important
  [therefore]
- NAT is good

...which I believe is the argument that was being made?

Ref - http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
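[As an aside, the FCrDNS refusal David describes is directly expressible in a typical MTA. A minimal sketch for Postfix, using its stock restriction parameters; treat it as illustrative, not a recommended production policy:]

```
# main.cf sketch: refuse SMTP clients without forward-confirmed reverse DNS.
# reject_unknown_client_hostname fires when the client IP has no PTR record,
# the PTR name does not resolve, or it resolves to a different address.
smtpd_client_restrictions =
    permit_mynetworks,
    reject_unknown_client_hostname
unknown_client_reject_code = 550
```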
On 09/04/2012 01:07 PM, David Miller wrote:
There is no requirement that all endpoints be *permitted* to connect to and use any service of any other endpoint. The end-to-end design principle does not require a complete lack of authentication or authorization.
I can refuse connections to port 25 on my endpoint (mail server) from hosts that do not conform to my requirements (e.g. those that do not have forward-confirmed reverse DNS) without violating the end-to-end design principle in any way.
The thing that has never sat well with me with ISP blanket port 25 blocking is that the fate sharing is not correct.

If I have a mail server and I refuse to take incoming connections from dynamic "home" IP blocks, the fate sharing is correct: I'm only hurting myself if there's collateral damage. When ISPs have blanket port 25 blocks, the two parties to the intended conversation never get a say: things just break mysteriously as far as both parties are concerned, but the ISP isn't hurt at all. So they have no incentive to drop their false positive rate.

That's not good.

Mike
On 9/4/12, Jay Ashworth <jra@baylink.com> wrote:
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.

Both true, and NAT inherently breaks the end-to-end principle for all the applications. Blocking port 25 traffic also breaks the possibility of end-to-end communications on that one port, but not for the SMTP protocol: SMTP end-to-end is preserved, as long as the SMTP relay provided does not introduce further restrictions.
We see it now alleged that the opposite is true: that a laptop, say, like mine, which runs Linux and postfix, and does not require a smarthost to deliver mail to a remote server *is a bad actor* *precisely because it does that* (in attempting to send mail directly to a domain's MX server) *from behind a NAT router*, and possibly different ones at different times.
Ding ding ding... behind a NAT router. The end-to-end principle is already broken. The 1:many NAT router prevents your host from being specifically identified, in order to efficiently and adequately identify, report, and curtail abuse. You can't "break" the end-to-end principle in cases where it has already been broken.

And selectively breaking end-to-end in limited circumstances is OK. You choose to break it when the damage can be mitigated and the concerns that demand breaking it are strong enough.

The end-to-end principle as you suggest primarily pertains to the Internet protocols, IP and TCP. I believe you are trying to apply the principle in a way inappropriate for the layer you are applying it to.

At the SMTP application layer, end-to-end internet connectivity means you can send e-mail to any e-mail address and receive e-mail from any e-mail address. For HTTP, that would mean you can retrieve a page from any host, and any remote HTTP client can retrieve a page from your hosts; that doesn't necessarily imply that the transaction will be allowed, but if it is refused -- it is for an administrative reason, not due to a design flaw. NAT would fall under design flaw, because it breaks end-to-end connectivity such that there is no longer an administrative choice that can be made to restore it (other than redesign with NAT removed).

At the transport layer, end-to-end means you can establish connections on various ports to any peer on the internet, and any peer can connect to all ports which you allow. It doesn't necessarily mean that all ports are allowed; a remote host, or a firewall under their control, deciding to block your connection is not a violation of end-to-end.

At the internet layer, end-to-end means you can send any datagram to any host on the internet and it will be delivered to that host, and any host can send a datagram to you. It doesn't mean that none of your packets will be discarded on the way because some specific application or port has been banned.

At the link layer, there is no end-to-end connectivity; it is at IP that the notion first arises.
I find these conflicting reports very conflicting. Either the end-to-end principle *is* the Prime Directive... or it is *not*.
Cheers,
-- jra
--
Jay R. Ashworth                  Baylink                       jra@baylink.com
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates     http://baylink.pitas.com         2000 Land Rover DII
St Petersburg FL USA               #natog                      +1 727 647 1274
-- -JH
Jimmy Hess wrote:
NAT would fall under design flaw, because it breaks end-to-end connectivity, such that there is no longer an administrative choice that can be made to restore it (other than redesign with NAT removed).
The end-to-end transparency can be restored easily, if an administrator so wishes, with a UPnP-capable NAT and a modified host transport layer.

That is, the administrator assigns a set of port numbers to a host behind the NAT and sets up a port mapping:

   (global IP, global port) <-> (local IP, global port)

Then, if the transport layer of the host is modified to perform the reverse translation (information for the translation can be obtained through UPnP):

   (local IP, global port) <-> (global IP, global port)

Now, NAT is transparent to the application layer.

The remaining restrictions are that only TCP and UDP are supported by UPnP (see draft-ohta-e2e-nat-00.txt for a specialized NAT box to allow more general transport layers) and that the set of port numbers available to the application layer is limited (you may not be able to run an SMTP server at port 25).

The point of the end-to-end transparency is:

   The function in question can completely and correctly be implemented
   only with the knowledge and help of the application standing at the
   end points of the communication system.

quoted from "End-To-End Arguments in System Design", the original paper on the end-to-end argument, written by Saltzer et al. Thus:

   The NAT function can completely and correctly be implemented with
   the knowledge and help of the host protocol stack.

Masataka Ohta
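[A toy sketch of the two translations Ohta describes, to make the port-preserving mapping concrete. All addresses, port ranges, and function names here are illustrative assumptions, not part of any actual implementation:]

```python
# Sketch of the proposed scheme: the NAT holds a static port-range mapping,
# and a modified host stack reverses it, so the application layer sees
# (global IP, global port) end to end.

GLOBAL_IP = "203.0.113.1"   # NAT's public address (documentation range)
LOCAL_IP = "192.168.0.10"   # host behind the NAT
ASSIGNED_PORTS = range(7000, 7010)  # global ports assigned to this host

def nat_inbound(dst_ip, dst_port):
    """NAT box: rewrite (global IP, global port) -> (local IP, global port)."""
    if dst_ip == GLOBAL_IP and dst_port in ASSIGNED_PORTS:
        return (LOCAL_IP, dst_port)  # the port number is preserved
    raise ValueError("no mapping for %s:%d" % (dst_ip, dst_port))

def host_stack_reverse(local_ip, port):
    """Modified host transport layer: present the global address upward,
    so applications never see the private address."""
    if local_ip == LOCAL_IP and port in ASSIGNED_PORTS:
        return (GLOBAL_IP, port)
    return (local_ip, port)

# A packet addressed to 203.0.113.1:7003 reaches the host, whose stack
# reports the original global endpoint to the application unchanged:
assert host_stack_reverse(*nat_inbound("203.0.113.1", 7003)) == ("203.0.113.1", 7003)
```

The restriction Ohta concedes is visible in the sketch: only ports in the assigned range round-trip, so a service on an unassigned port (e.g. 25) cannot be offered.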
On Thu, 06 Sep 2012 13:08:29 +0900, Masataka Ohta said:
The end to end transparency can be restored easily, if an administrator wishes so, with UPnP capable NAT and modified host transport layer.
How does the *second* host behind the NAT that wants to use global port 7719 do it?
(2012/09/06 13:15), valdis.kletnieks@vt.edu wrote:
On Thu, 06 Sep 2012 13:08:29 +0900, Masataka Ohta said:
The end to end transparency can be restored easily, if an administrator wishes so, with UPnP capable NAT and modified host transport layer.
How does the *second* host behind the NAT that wants to use global port 7719 do it?
In the previous mails, I wrote:
The remaining restrictions are that ... and that a set of port numbers available to the application layer is limited (you may not be able to run a SMTP server at port 25).
and Jimmy wrote:
At the transport layer, end-to-end means you can establish connections on various ports to any peer on the internet, and any peer can connect to all ports on which you allow. It doesn't necessarily mean that all ports are allowed; a remote host, or a firewall under their control, deciding to block your connection is not a violation of end-to-end.
Masataka Ohta
On Sep 5, 2012, at 21:08 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Jimmy Hess wrote:
NAT would fall under design flaw, because it breaks end-to-end connectivity, such that there is no longer an administrative choice that can be made to restore it (other than redesign with NAT removed).
The end to end transparency can be restored easily, if an administrator wishes so, with UPnP capable NAT and modified host transport layer.
This is every bit as much BS as it was the first 6 times you pushed it.
That is, the administrator assigns a set of port numbers to a host behind NAT and sets up port mapping.
(global IP, global port) <-> (local IP, global port)
then, if transport layer of the host is modified to perform reverse translation (information for the translation can be obtained through UPnP):
(local IP, global port) <-> (global IP, global port)
Now, NAT is transparent to application layer.
Never mind the fact that all the hosts trying to reach you have no way to know what port to use. http://www.foo.com fed into a browser gives the browser no way to determine that it needs to contact 192.0.200.50 on port 8099 instead of port 80.
The remaining restrictions are that only TCP and UDP are supported by UPnP (see draft-ohta-e2e-nat-00.txt for a specialized NAT box to allow more general transport layers) and that a set of port numbers available to the application layer is limited (you may not be able to run a SMTP server at port 25).
You're demanding an awful lot of changes to the entire internet to partially restore IPv4 transparency when the better solution is to deploy IPv6 and have real full transparency.
The point of the end to end transparency is:
The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system.
That is one purpose. A more accurate definition of the greater purpose of end-to-end transparency would be:

An application can expect the datagram to arrive at the remote destination without any modifications not specified in the basic protocol requirements (e.g. TTL decrements, mac layer header rewrites, reformatting for different lower-layer media, etc.).

An application should be able to expect the layer 3 and above addressing elements to be unaltered, and to be able to provide "contact me on" style messages in the payload based on its own local knowledge of its addressing.
quoted from "End-To-End Arguments in System Design", the original paper on the end to end argument written by Saltzer et. al.
Thus,
The NAT function can completely and correctly be implemented with the knowledge and help of the host protocol stack.
Masataka Ohta
It could be argued, if one considers "contact me on" style messages to be valid, that the function cannot be completely and correctly implemented in the presence of NAT.

Moreover, since NAT provides no benefit other than address compression, and the kind of additional effort on NAT of which you speak would be a larger development effort than IPv6 at this point, why bother?

Owen
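[The classic "contact me on" payload is FTP's PORT command, which embeds the client's own address in the application data. A minimal illustrative sketch, with example addresses, of why this fails behind plain NAT:]

```python
def ftp_port_command(ip, port):
    """Encode (ip, port) as an RFC 959 PORT argument: h1,h2,h3,h4,p1,p2."""
    return "PORT " + ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

# A host behind NAT only knows its private address, so it advertises an
# endpoint the remote server cannot reach:
print(ftp_port_command("192.168.0.10", 50000))  # PORT 192,168,0,10,195,80
# Making this work requires an ALG in the NAT rewriting the payload, or a
# modified host stack substituting the global address, as discussed above.
```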
On Wed, Sep 5, 2012 at 9:39 PM, Owen DeLong <owen@delong.com> wrote:
On Sep 5, 2012, at 21:08 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Jimmy Hess wrote:
NAT would fall under design flaw, because it breaks end-to-end connectivity, such that there is no longer an administrative choice that can be made to restore it (other than redesign with NAT removed).
The end to end transparency can be restored easily, if an administrator wishes so, with UPnP capable NAT and modified host transport layer.
This is every bit as much BS as it was the first 6 times you pushed it.
Yep.
Owen DeLong wrote:
then, if transport layer of the host is modified to perform reverse translation (information for the translation can be obtained through UPnP):
(local IP, global port) <-> (global IP, global port)
Now, NAT is transparent to application layer.
Never mind the fact that all the hosts trying to reach you have no way to know what port to use.
Quote from <draft-ohta-e2e-nat-00.txt>:

   A server port number different from well known ones may be specified
   through mechanisms to specify an address of the server, which is the
   case of URLs.
http://www.foo.com fed into a browser has no way for the browser to determine that it needs to contact 192.0.200.50 on port 8099 instead of port 80.
See RFC6281 and draft-ohta-urlsrv-00.txt. But, http://www.foo.com:8099 works just fine.
The remaining restrictions are that only TCP and UDP are supported by UPnP (see draft-ohta-e2e-nat-00.txt for a specialized NAT box to allow more general transport layers) and that a set of port numbers available to the application layer is limited (you may not be able to run a SMTP server at port 25).
You're demanding an awful lot of changes to the entire internet to
All that is necessary are local changes on the end systems of those who want the end-to-end transparency. There are no changes on the Internet.
This is every bit as much BS as it was the first 6 times you pushed it.
As you love BS so much, you had better read your own mails.

Masataka Ohta
On Thursday 06 September 2012 14:01:50 Masataka Ohta wrote:
All that necessary is local changes on end systems of those who want the end to end transparency.
There is no changes on the Internet.
You're basically redefining the term "end-to-end transparency" to suit your own agenda. Globally implementing what is basically an application layer protocol in order to facilitate the functioning of a lower layer protocol (i.e. IPv4) is patent nonsense. The purpose of each layer is to facilitate the one it encapsulates, not the other way around.

What you advocate is not end-to-end transparency but point-to-point opacity, hinging on a morass of hacks that constitute little more than an abuse of existing technologies in an attempt to fulfil an unscalable goal. Fortunately, it is exactly that fact which makes all of your drafts and belligerent evangelising utterly pointless; you can continue to make noise and attempt to argue by redefinition all you like, but the world will continue to forge ahead with the deployment of IPv6, and the *actual* meaning of the end-to-end principle will remain as it is.

Regards,
Oliver
Oliver wrote:
All that necessary is local changes on end systems of those who want the end to end transparency.
There is no changes on the Internet.
You're basically redefining the term "end-to-end transparency" to suit your own
Already in RFC3102, which restricts port number ranges, it is stated that:

   This document examines the general framework of Realm Specific IP
   (RSIP).  RSIP is intended as a alternative to NAT in which the end-
   to-end integrity of packets is maintained.  We focus on
   implementation issues, deployment scenarios, and interaction with
   other layer-three protocols.

It's you who tries to change the meaning of "end to end transparency".

Masataka Ohta
This has been experimental with no forward progress since 2001. Any sane person would conclude that the experiment failed to garner any meaningful support.

Is there any continuing active work on this experiment? Any running code?

Didn't think so.

Owen

On Sep 6, 2012, at 23:23 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Oliver wrote:
All that necessary is local changes on end systems of those who want the end to end transparency.
There is no changes on the Internet.
You're basically redefining the term "end-to-end transparency" to suit your own
Already in RFC3102, which restrict port number ranges, it is stated that:
This document examines the general framework of Realm Specific IP (RSIP). RSIP is intended as a alternative to NAT in which the end- to-end integrity of packets is maintained. We focus on implementation issues, deployment scenarios, and interaction with other layer-three protocols.
It's you who tries to change the meaning of "end to end transparency".
Masataka Ohta
On Friday 07 September 2012 15:23:30 Masataka Ohta wrote:
Oliver wrote:
All that necessary is local changes on end systems of those who want the end to end transparency.
There is no changes on the Internet.
You're basically redefining the term "end-to-end transparency" to suit your own
Already in RFC3102, which restrict port number ranges, it is stated that:
This document examines the general framework of Realm Specific IP (RSIP). RSIP is intended as a alternative to NAT in which the end- to-end integrity of packets is maintained. We focus on implementation issues, deployment scenarios, and interaction with other layer-three protocols.
Just because something is documented in RFC does not automatically make it a standard, nor does it necessarily make anyone care. I refer you to RFC1149. Although, since you have such a hard-on for RFCs, you should check out RFC2460 - unlike 3102, it's standards-track and quite widely implemented.
It's you who tries to change the meaning of "end to end transparency".
Denial: not just a river in Egypt.

If the best rebuttal you can come up with is an experimental, unused RFC and a one-liner that basically amounts to "NO U", I suggest you do everyone a favour and crawl back into the hole you came from. I realise that it must be a difficult and slow process coming to the realisation that everything you've advocated for and espoused is unmitigated garbage, but whilst you deal with that internal struggle, please save the rest of us from having to waste our time deconstructing the last vestiges of your folly.

Regards,
Oliver
Oliver wrote:
You're basically redefining the term "end-to-end transparency" to suit your own
Already in RFC3102, which restrict port number ranges, it is stated that:
This document examines the general framework of Realm Specific IP (RSIP). RSIP is intended as a alternative to NAT in which the end- to-end integrity of packets is maintained. We focus on implementation issues, deployment scenarios, and interaction with other layer-three protocols.
Just because something is documented in RFC does not automatically make it a standard, nor does it necessarily make anyone care.
That's not a valid argument against text in the RFC, proofread by the RFC editor, as evidence of the established terminology of the Internet community.
It's you who tries to change the meaning of "end to end transparency".
Denial: not just a river in Egypt.
Invalid denial. Masataka Ohta
On 09/09/2012 23:24, Masataka Ohta wrote:
Oliver wrote:
Just because something is documented in RFC does not automatically make it a standard, nor does it necessarily make anyone care.
That's not a valid argument against text in the RFC proof read by the RFC editor as the evidence of established terminology of the Internet community.
you may want to read rfc 1796, and then retract what you said because it sounds silly. Nick
Nick Hilliard wrote:
Just because something is documented in RFC does not automatically make it a standard, nor does it necessarily make anyone care.
That's not a valid argument against text in the RFC proof read by the RFC editor as the evidence of established terminology of the Internet community.
you may want to read rfc 1796, and then retract what you said because it sounds silly.
Anything written in RFC1796 should be ignored, because RFC1796, an informational, not standards-track, RFC, states so.

Or, is it time to retract your silliness?

Masataka Ohta
On Tue, 11 Sep 2012 05:51:53 +0900, Masataka Ohta said:
Anything written in RFC1796 should be ignored, because RFC1796, an informational, not standard track, RFC, states so.
On the other hand, if you're relying on the fact that 1796 is informational in order to ignore it, then you're following its guidance even though it's not a standard. Insisting on being pedantic on its status will merely leave you wondering who shaves the barber.
Or, is it time to retract your silliness?
Standard or not, we have Christian Huitema, John Postel, and Steve Crocker telling you something about RFCs and how they actually work. Which is more likely, that all 3 of them were wrong, or that you're the one that's confused?
(2012/09/11 20:52), valdis.kletnieks@vt.edu wrote:
On Tue, 11 Sep 2012 05:51:53 +0900, Masataka Ohta said:
Anything written in RFC1796 should be ignored, because RFC1796, an informational, not standard track, RFC, states so.
On the other hand, if you're relying on the fact that 1796 is informational
No, I don't. It's Jimmy, Eliot and you who are relying on a non-standards-track RFC to deny RFC3102 and all the non-standards-track RFCs, which is the silly paradox.
Standard or not, we have Christian Huitema, John Postel, and Steve Crocker
If you have some respect for them, don't involve them in your silly paradox.

Masataka Ohta
On 9/11/12, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
(2012/09/11 20:52), valdis.kletnieks@vt.edu wrote:
On Tue, 11 Sep 2012 05:51:53 +0900, Masataka Ohta said:
No, I don't. It's Jimmy, Eliot and you who are relying on a non standard track RFC to deny RFC3102 and all the non standard track RFCs, which is the silly paradox.
Straw man. We don't rely on a non-standards-track RFC to "deny" RFC3102 having significance; furthermore, we don't indicate that all non-standards-track RFCs are of no significance; he's just being nice about it. RFC1796 just happens to contain a useful explanation.

If you read 3102 in full, without ignoring the disclaimers, you can very easily see that RSIP is not considered a viable standard in its present form, but possibly someone /might/ find it suitable for experimental use. RFC3102 denies itself; please read the IESG note at the top of the document, where fundamental problems are admitted with regards to interoperability and transparency.

"This memo defines an Experimental Protocol for the Internet community. It does not specify an Internet standard of any kind."

Which means that RSIP has not received technical review or acceptance as standards have. The protocol doesn't have to work for an Experimental or Informational RFC to be published. A non-standards-track RFC might still contain useful information, but 3102 is a little more authoritative than someone's blog entry. There are other non-standards-track RFCs that are more important; take RFC 1149, for example. Just being an RFC alone doesn't make a document important or not, reliable or not, any more than a news article is important just because it appeared on a certain bulletin board.

For now, and in its current form, I will discount the relevance or usefulness of the experimental protocol described in RFC3102.
Standard or not, we have Christian Huitema, John Postel, and Steve Crocker
If you have some respect to them, don't involve them with your silly paradox.
-- -JH
On Sun, Sep 9, 2012 at 6:24 PM, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Oliver wrote:
You're basically redefining the term "end-to-end transparency" to suit your own
Already in RFC3102, which restrict port number ranges, it is stated that:
This document examines the general framework of Realm Specific IP (RSIP). RSIP is intended as a alternative to NAT in which the end- to-end integrity of packets is maintained. We focus on implementation issues, deployment scenarios, and interaction with other layer-three protocols.
Just because something is documented in RFC does not automatically make it a standard, nor does it necessarily make anyone care.
That's not a valid argument against text in the RFC proof read by the RFC editor as the evidence of established terminology of the Internet community.
In case Nick's comment wasn't obvious enough:

RFC 1796: "Not All RFCs Are Standards ... It is a regrettably well spread misconception that publication as an RFC provides some level of recognition. It does not, or at least not any more than the publication in a regular journal."

RFC 3102: "This memo defines an Experimental Protocol for the Internet community. It does not specify an Internet standard of any kind."

As to your original point that software could be constructed that restores end-to-end through a NAT device by some kind of dynamic but published assignment of incoming ports to specific hosts behind the NAT... that's not really true. End-to-end is generally described as a layer 3 phenomenon. Dinking around with ports means you're at layer 4, which means that only specific pre-programmed transports can pass, even if you would like all protocols to be able to.

There is another technology, also called NAT and described in RFC 1631, which translates layer 3 addresses 1:1: exactly one address inside for one address outside. While it's possible to talk about end-to-end with that technology, we are for practical, operational purposes just shy of -never- talking about or using that kind of NAT.

Regards,
Bill Herrin
--
William D. Herrin ................ herrin@dirtside.com  bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
William Herrin wrote:
In case Nick's comment wasn't obvious enough:
Anything written in RFC 1796 should be ignored, because RFC 1796, an informational, not standards-track, RFC, states so. It's so obvious.
RFC 1796:
It is a regrettably well spread misconception that publication as an RFC provides some level of recognition. It does not, or at least not any more than the publication in a regular journal."
Your silliness, too, is appreciated.
End-to-end is generally described as a layer 3 phenomenon.
Read the original paper on it: http://groups.csail.mit.edu/ana/Publications/PubPDFs/End-to-End%20Arguments%... to find that the major example of the paper is file transfer, an application.
we are for practical, operational purposes just shy of -never- talking about or using that kind of NAT.
For practical operational purposes, it is enough that the FTP PORT command works transparently. Masataka Ohta
On 9/6/12, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Owen DeLong wrote:
You're demanding an awful lot of changes to the entire internet to ...

All that is necessary is local changes on the end systems of those who want end-to-end transparency.
Achieving "end to end", while breaking interoperability and introducing a level of complexity and points of failure that no one will accept, is no good. I refer you back to RFC 1925, number (6). If you had to modify the implementation on endpoints that want to communicate end-to-end, then by definition you don't have transparency. The inability to communicate end-to-end with unmodified endpoints makes it non-transparent, and is itself a break of the principle. UPnP is not robust enough for the suggested application either. The RFC 3102 you mention doesn't have acceptance; the concept of RSIP was never proven tenable: that it actually works, scales, and can be implemented reliably with real applications on real networks in the first place. Achieving true 'end to end' with such a scheme would require alterations to many protocol standards which didn't happen, and there would be many interoperability breaks.
There are no changes needed on the Internet. Masataka Ohta
-- -JH
On Wed, Sep 05, 2012 at 09:39:44PM -0700, Owen DeLong wrote:
Never mind the fact that all the hosts trying to reach you have no way to know what port to use.
Despite my scepticism of the overall project, I find the above claim a little hard to accept. RFC 2052, which defined SRV in an experiment, came out in 1996. SRV was moved to the standards track in 2000. I've never heard an argument why it won't work, and we know that SRV records are sometimes in use. Why couldn't that mechanism be used more widely? Best, A -- Andrew Sullivan Dyn Labs asullivan@dyn.com
On Thu, Sep 6, 2012 at 11:14 AM, Andrew Sullivan <asullivan@dyn.com> wrote:
RFC 2052, which defined SRV in an experiment, came out in 1996. SRV was moved to the standards track in 2000. I've never heard an argument why it won't work, and we know that SRV records are sometimes in use. Why couldn't that mechanism be used more widely?
Hi Andrew, Because the developer of the next killer app knows exactly squat about the DNS and won't discover anything about the DNS that can't be had via getaddrinfo() until long after it's too late to redefine the protocol in terms of seeking SRV records. Leaving SRV out of getaddrinfo() means that SRVs will be no more than lightly used for the duration of the current networking API. The last iteration of the API survived around 20 years of mainstream use, so this one probably has another 15 to go. Also, there are efficiency issues associated with seeking SRVs first and then addresses, the same kind of efficiency issues as with reverse lookups, which lead high-volume software like web servers to not do reverse lookups. But those pale in comparison to the first problem. Regards, Bill Herrin -- William D. Herrin ................ herrin@dirtside.com bill@herrin.us 3005 Crane Dr. ...................... Web: <http://bill.herrin.us/> Falls Church, VA 22042-3004
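Herrin's gap is concrete: getaddrinfo() returns only addresses, so an SRV-aware application must implement the record selection itself. A minimal sketch of the RFC 2782 ordering logic such a client would need; the record values below are hypothetical:

```python
import random

def order_srv(records):
    """Order SRV records per RFC 2782: ascending priority, then
    weighted-random selection within each priority group.
    Each record is a (priority, weight, port, target) tuple."""
    ordered = []
    groups = {}
    for rec in records:
        groups.setdefault(rec[0], []).append(rec)
    for prio in sorted(groups):
        group = list(groups[prio])
        while group:
            # Pick a threshold in [0, sum of weights]; the first record
            # whose running weight reaches it is selected next.
            threshold = random.randint(0, sum(r[1] for r in group))
            running = 0
            for rec in group:
                running += rec[1]
                if running >= threshold:
                    ordered.append(rec)
                    group.remove(rec)
                    break
    return ordered

# Hypothetical records: a primary server and a lower-priority backup.
records = [(10, 0, 5061, "backup.example.net."), (0, 5, 5060, "primary.example.net.")]
print(order_srv(records)[0])  # the priority-0 record is always tried first
```

None of this is exotic, which is rather the point: every SRV-using application has to carry it, because the standard resolver API never will.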
On Thu, Sep 06, 2012 at 01:49:06PM -0400, William Herrin wrote:
the DNS and won't discover anything about the DNS that can't be had via getaddrinfo() until long after it's too late to redefine the protocol in terms of seeking SRV records.
Oh, sure, I get that. One of the problems I've had with the "end to end NAT" argument is exactly that I can't see how it's any more deployable than IPv6, for exactly this reason. But the claim upthread was (I thought) that the application _can't_ know about this stuff, not that it's hard today. Because of the 20-year problem, I think now would be an excellent time to start thinking about how to make usable all those nice features we already have in the DNS. Maybe by the time I die, we'll have a useful system! Best, Andrew "living in constant, foolish, failed hope" Sullivan -- Andrew Sullivan Dyn Labs asullivan@dyn.com
Andrew Sullivan wrote:
the DNS and won't discover anything about the DNS that can't be had via getaddrinfo() until long after it's too late to redefine the protocol in terms of seeking SRV records.
Oh, sure, I get that. One of the problems I've had with the "end to end NAT" argument is exactly that I can't see how it's any more deployable than IPv6, for exactly this reason.
The easiest part of the deployment is to modify end systems.
Because of the 20-year problem, I think now would be an excellent time to start thinking about how to make usable all those nice features we already have in the DNS.
Apple did it. See RFC6281. Masataka Ohta
On Sep 6, 2012, at 23:30 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Andrew Sullivan wrote:
the DNS and won't discover anything about the DNS that can't be had via getaddrinfo() until long after it's too late to redefine the protocol in terms of seeking SRV records.
Oh, sure, I get that. One of the problems I've had with the "end to end NAT" argument is exactly that I can't see how it's any more deployable than IPv6, for exactly this reason.
The easiest part of the deployment is to modify end systems.
Then why is IPv6 deployment happening faster in the internet core than at the edge? The real world seems to defy your claims. Owen
Owen DeLong wrote:
Then why is IPv6 deployment happening faster in the internet core than at the edge?
The real world seems to defy your claims.
Which world are you talking about? Martian?
This has been experimental with no forward progress since 2001.
Obviously because it is a new protocol requiring new gateways, which is not the case with UPnP capable NAT. Moreover, it has nothing to do with the definition of the end to end transparency. Masataka Ohta
On Sep 6, 2012, at 08:14 , Andrew Sullivan <asullivan@dyn.com> wrote:
On Wed, Sep 05, 2012 at 09:39:44PM -0700, Owen DeLong wrote:
Never mind the fact that all the hosts trying to reach you have no way to know what port to use.
Despite my scepticism of the overall project, I find the above claim a little hard to accept. RFC 2052, which defined SRV in an experiment, came out in 1996. SRV was moved to the standards track in 2000. I've never heard an argument why it won't work, and we know that SRV records are sometimes in use. Why couldn't that mechanism be used more widely?
If browsers started implementing it, it could. I suppose the more accurate/detailed statement would be: Without modifying every client on the internet, there is no way for the clients trying to reach you to reliably be informed of this port number situation. If you're going to touch every client, it's easier to just do IPv6. Owen
On 9/6/12 8:27 PM, Owen DeLong wrote:
Despite my scepticism of the overall project, I find the above claim a little hard to accept. RFC 2052, which defined SRV in an experiment, came out in 1996. SRV was moved to the standards track in 2000. I've never heard an argument why it won't work, and we know that SRV records are sometimes in use. Why couldn't that mechanism be used more widely?
If browsers started implementing it, it could.
This is currently being discussed in the httpbis working group as part of the http 2.0 discussion. Also, I'll note that at least one browser has implemented XMPP without the mandatory SRV record, and it's next to useless for XMPP (in fact it seems to only work with a handful of broken XMPP implementations), so look for SRV in at least one browser in the next year or so, I'd guess.
I suppose the more accurate/detailed statement would be:
Without modifying every client on the internet, there is no way for the clients trying to reach you to reliably be informed of this port number situation.
If you're going to touch every client, it's easier to just do IPv6.
Well, this depends on who you think "you" is. The browser gang regularly touches many MANY (but not all) clients. Eliot
Well, this depends on who you think "you" is. The browser gang regularly touches many MANY (but not all) clients.
Not everything on the internet is accessed using a browser. Is adding SRV to browsers a good thing? Yes. Is end-to-end transparent addressing a good thing? Yes. Does one have anything to do with the other? Only in the delusional mind of Masataka. Real transparent addressing will come with IPv6. IPv4 does not scale. It's time to move forward. Owen
Eliot Lear wrote:
On 9/6/12 8:27 PM, Owen DeLong wrote:
If you're going to touch every client, it's easier to just do IPv6.
Well, this depends on who you think "you" is. The browser gang regularly touches many MANY (but not all) clients.
Though I merely stated "the easiest part of the deployment is to modify end systems", according to Owen's delusion, confirmed by private communications (I can't understand why he can't do it in public), "client", seemingly, also means middle NAT boxes, even though they are still fine as long as they are "client"s or servers supporting UPnP. Yes, the easiest part of the deployment is to modify end systems: to modify the protocol stacks and browsers of the end systems. Masataka Ohta
On Thu, 06 Sep 2012 11:14:58 -0400, Andrew Sullivan said:
Despite my scepticism of the overall project, I find the above claim a little hard to accept. RFC 2052, which defined SRV in an experiment, came out in 1996. SRV was moved to the standards track in 2000. I've never heard an argument why it won't work, and we know that SRV records are sometimes in use. Why couldn't that mechanism be used more widely?
My PS3 may want to talk to the world, but I have no control over Comcast's DNS.
In message <85250.1346959671@turing-police.cc.vt.edu>, valdis.kletnieks@vt.edu writes:
On Thu, 06 Sep 2012 11:14:58 -0400, Andrew Sullivan said:
Despite my scepticism of the overall project, I find the above claim a little hard to accept. RFC 2052, which defined SRV in an experiment, came out in 1996. SRV was moved to the standards track in 2000. I've never heard an argument why it won't work, and we know that SRV records are sometimes in use. Why couldn't that mechanism be used more widely?
My PS3 may want to talk to the world, but I have no control over Comcast's DNS.
What point are you trying to make? Comcast's servers support SRV as do all general purpose name servers. For HTTP at least you need to be backwards compatible so there is no reason not to add SRV support.
-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Fri, 07 Sep 2012 08:30:12 +1000, Mark Andrews said:
In message <85250.1346959671@turing-police.cc.vt.edu>, valdis.kletnieks@vt.edu writes:
My PS3 may want to talk to the world, but I have no control over Comcast's DNS.
What point are you trying to make? Comcast's servers support SRV as do all general purpose name servers. For HTTP at least you need to be backwards compatible so there is no reason not to add SRV support.
Sure, Comcast's servers will happily support an SRV entry for my PS3. However, Comcast's business processes don't support a way for me to request said SRV record be listed. Heck, I don't even get a static IP with my current service package. ;) Now *I* have the technical chops to talk to the guys at dyndns.org or other providers and get an SRV entry created under some domain name pointing back at my IP address. However, Joe Sixpack doesn't really have that option. And unless you figure out a scalable and universal way for Joe Sixpack's Xbox or PS3 or whatever to request an SRV entry saying that the PS3 wants to do service "foobar" on port 34823, you can't use SRV like that. A better proposal would probably be having the NAT itself run a 'portmap' type service on a well known port like 111. Except that still doesn't do a very good job of disambiguating two instances of "foobar" behind a NAT...
On Sep 6, 2012, at 23:44, Valdis.Kletnieks@vt.edu wrote:
However, Joe Sixpack doesn't really have that option. And unless you figure out a scalable and universal way for Joe Sixpack's Xbox or PS3 or whatever to request an SRV entry saying that the PS3 wants to do service "foobar" on port 34823, you can't use SRV like that.
I think you missed the point on this one. Even if your PS3 has a public IP with no limitations on ports, how exactly are others going to find it to connect to it? I see three options here: 1. You have to give them the IP address, in which case it's not exactly rocket science to give them the port as well. The same Joe Sixpack who couldn't find the port couldn't likely figure out his IP either, so the PS3 would have to be able to identify its own public-facing IP and tell him, in which case it could also tell him the port. 2. DNS, which, while many don't have this ability, any that do should have no problem setting an SRV record. Obviously not all clients support the use of SRV records, but that's not the subject of this particular tangent. 3. Intermediary directory server hosted at a known location. Pretty much the standard solution for actual PS3 titles as well as almost all other games since the late '90s. Rather than telling the user, the PS3 tells the central server its IP and port plus any useful metadata about the service being offered, and the user just needs to give his friend a name or other identifier to find it in the list. None of these options are impacted by being behind a NAT as long as they have the ability to open a port via UPnP or equivalent, so if in an ideal world all client software understood SRV records, this particular negative of NAT would be of minimal impact. Of course the real world is nowhere close to ideal, and personally SIP phones and Jabber clients are the only things I've ever observed widely using SRV records, so at least for now I can't just do something like "_http._tcp.seanharlow.info 86400 IN SRV 0 5 8080 my.home.fake." and host my personal site on my home server (Mozilla has had a bug open on this for over ten years, Chrome for three, and Webkit closed their bug as WONTFIX two years ago, so I don't expect this to change any time soon). 
At this point we're coming back around to the already raised point about if we're going to have to change a lot of things, why not just put that effort in to doing it right with IPv6 rather than trying to keep address conservation via DNAT alive? --- Sean Harlow sean@seanharlow.info
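For what it's worth, the browser-side work Sean wishes for is small: given his example record, a client only needs to extract the port and target before connecting. A sketch of parsing the zone-file presentation format, using his record verbatim:

```python
def parse_srv(rr_text):
    """Parse an SRV record in zone-file presentation format,
    e.g. "owner ttl IN SRV priority weight port target.",
    into a (priority, weight, port, target) tuple."""
    _owner, _ttl, _cls, rrtype, prio, weight, port, target = rr_text.split()
    if rrtype != "SRV":
        raise ValueError("not an SRV record")
    return int(prio), int(weight), int(port), target

# The record from the message above: HTTP service on port 8080.
rec = "_http._tcp.seanharlow.info 86400 IN SRV 0 5 8080 my.home.fake."
prio, weight, port, target = parse_srv(rec)
print(target, port)  # my.home.fake. 8080
```

A conforming client would then connect to `my.home.fake` on port 8080 instead of the default port 80, which is exactly the indirection the browser vendors have declined to implement.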
On Sep 6, 2012, at 22:31 , Sean Harlow <sean@seanharlow.info> wrote:
On Sep 6, 2012, at 23:44, Valdis.Kletnieks@vt.edu wrote:
However, Joe Sixpack doesn't really have that option. And unless you figure out a scalable and universal way for Joe Sixpack's Xbox or PS3 or whatever to request an SRV entry saying that the PS3 wants to do service "foobar" on port 34823, you can't use SRV like that.
I think you missed the point on this one. Even if your PS3 has a public IP with no limitations on ports, how exactly are others going to find it to connect to it?
I see three options here:
1. You have to give them the IP address, in which case it's not exactly rocket science to give them the port as well. The same Joe Sixpack who couldn't find the port couldn't likely figure out his IP either, so the PS3 would have to be able to identify its own public-facing IP and tell him, in which case it could also tell him the port. 2. DNS, which, while many don't have this ability, any that do should have no problem setting an SRV record. Obviously not all clients support the use of SRV records, but that's not the subject of this particular tangent.
Anyone can set up free DNS from a variety of providers and get a domain name for ~$10/year. I'm not sure why you think there is anyone who can't get DNS. If you can operate a web browser and come up with $10/year or so, you can have forward DNS. The inability to influence Comcast's nameservers would only affect reverse lookups (and SRV works on forward lookups, not reverse, IIRC).
3. Intermediary directory server hosted at a known location. Pretty much the standard solution for actual PS3 titles as well as almost all other games since the late '90s. Rather than telling the user, the PS3 tells the central server its IP and port plus any useful metadata about the service being offered and the user just needs to give his friend a name or other identifier to find it in the list.
Which becomes ugly and unnecessary with full transparency and useful DNS, which we get with IPv6 even without SRV records. To make SRV records meaningful, OTOH, virtually every client needs an update.
None of these options are impacted by being behind a NAT as long as they have the ability to open a port via UPnP or equivalent, so if in an ideal world all client software understood SRV records this particular negative of NAT would be of minimal impact. Of course the real world is nowhere close to ideal and personally SIP phones and Jabber clients are the only things I've ever observed widely using SRV records, so at least for now I can't just do something like "_http._tcp.seanharlow.info 86400 IN SRV 0 5 8080 my.home.fake." and host my personal site on my home server (Mozilla has had a bug open on this for over ten years, Chrome for three, and Webkit closed their bug as WONTFIX two years ago so I don't expect this to change any time soon). At this point we're coming back around to the already raised point about if we're going to have to change a lot of things, why not just put that effort in to doing it right with IPv6 rather than trying to keep address conservation via DNAT alive?
+1 -- Address transparency is a good thing. Owen
Sean Harlow wrote:
None of these options are impacted by being behind a NAT as long as they have the ability to open a port via UPnP or equivalent, so if in an ideal world all client software understood SRV records this particular negative of NAT would be of minimal impact.
My point is that the impact can be minimized if:

1) a set of port numbers is statically allocated to a host behind NAT, without UPnP or PCP, just as a static address would be allocated to a host, which means there is no security concern w.r.t. dynamic assignment. Dynamic DNS update is not necessary, either. UPnP or PCP can still be used to collect information for reverse translation.

2) reverse translation can be performed by the network and/or transport layer without involving applications, which makes modifications to application programs completely unnecessary. I have already confirmed that the FTP PORT command works transparently.
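Point (1) amounts to a fixed arithmetic mapping from inside hosts to external port ranges. A toy sketch of such a static allocation, with made-up base and block-size parameters (nothing here comes from a standard):

```python
def port_block(host_index, base=10000, block_size=256):
    """Map the Nth inside host to a fixed, contiguous block of
    external ports on the NAT's public address. The base and
    block_size values are illustrative assumptions."""
    lo = base + host_index * block_size
    hi = lo + block_size - 1
    if hi > 65535:
        raise ValueError("port space exhausted")
    return lo, hi

# Host 3 behind the NAT statically owns external ports 10768-11023,
# so inbound traffic needs no dynamic (UPnP/PCP) negotiation at all.
print(port_block(3))  # (10768, 11023)
```

The obvious trade-off is capacity: with a /256 block per host, one public IPv4 address covers only a couple hundred inside hosts, and each host gets a small slice of the port space.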
Of course the real world is nowhere close to ideal and personally SIP phones and Jabber clients are the only things I've ever observed widely using SRV records,
As we can explicitly specify port numbers in URLs, support for SRV is not essential. But SRV will become more commonly used as PCP is deployed. Masataka Ohta
In message <108454.1346989445@turing-police.cc.vt.edu>, valdis.kletnieks@vt.edu writes:
On Fri, 07 Sep 2012 08:30:12 +1000, Mark Andrews said:
In message <85250.1346959671@turing-police.cc.vt.edu>, valdis.kletnieks@vt.edu writes:
My PS3 may want to talk to the world, but I have no control over Comcast's DNS.
What point are you trying to make? Comcast's servers support SRV as do all general purpose name servers. For HTTP at least you need to be backwards compatible so there is no reason not to add SRV support.
Sure, Comcast's servers will happily support an SRV entry for my PS3.
However, Comcast's business processes don't support a way for me to request said SRV record be listed. Heck, I don't even get a static IP with my current service package. ;)
There are plenty of companies that will serve whatever you want them to serve.
Now *I* have the technical chops to talk to the guys at dyndns.org or other providers and get an SRV entry created under some domain name pointing back at my IP address. However, Joe Sixpack doesn't really have that option. And unless you figure out a scalable and universal way for Joe Sixpack's Xbox or PS3 or whatever to request an SRV entry saying that the PS3 wants to do service "foobar" on port 34823, you can't use SRV like that.
There is NOTHING stopping Sony adding code to the PS3 to perform dynamic updates to add the records. We have a well established protocol to do this securely. Hundreds of millions of records get updated daily using this protocol in the corporate environment. This is NOTHING Joe Sixpack can't do with a smidgen of help from product vendors. Home router vendors already have code to do this.

domain name for the PS3
account name
password

The account name and password form the TSIG information to secure the dynamic update.
A better proposal would probably be having the NAT itself run a 'portmap' type service on a well known port like 111. Except that still doesn't do a very good job of disambiguating two instances of "foobar" behind a NAT... -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
On Fri, 07 Sep 2012 16:01:10 +1000, Mark Andrews said:
There is NOTHING stopping Sony adding code to the PS3 to perform dynamic updates to add the records. We have a well established protocol to do this securely. Hundreds of millions of records get updated daily using this protocol in the corporate environment. This is NOTHING Joe Sixpack can't do with a smidgen of help from product vendors. Home router vendors already have code to do this.
domain name for the PS3
account name
password

The account name and password form the TSIG information to secure the dynamic update.
And my point was merely that you can't actually use SRV for this use case until the above is actually deployed, rather than in the "nothing stopping SONY" hand-waving state.
It would be really nice if people making statements about the end to end principle would talk about the end to end principle. http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf The abstract of the paper states the principle: This paper presents a design principle that helps guide placement of functions among the modules of a distributed computer system. The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples discussed in the paper include bit error recovery, security using encryption, duplicate message suppression, recovery from system crashes, and delivery acknowledgement. Low level mechanisms to support these functions are justified only as performance enhancements. One of the authors of the paper has since restated it in a way that is significantly less useful, which is that the only place anything intelligent should be done in a network is in the end system. If you believe that argument, then WiFi networks should not retransmit lost packets (and as a result would become far less useful) and the Internet should not use routing protocols - it should only use source routing. So, yes, I think the "Rise of the Stupid Network" is a very interesting paper and site, but it needs to be handled carefully. The argument argues for simplicity and transparency; when a function at a lower layer does something an upper layer - not just the application, but any respectively higher and lower layers - doesn't expect, it can be difficult to debug the behavior. However, it is not a hard-and-fast "the network should never do things that the end system doesn't expect"; the paper makes it clear that there is a trade-off, and if the trade-off makes sense (retransmitting at the link layer in addition to the end to end retransmission, in the case of WiFi) it doesn't preclude the behavior. 
It merely suggests that one think about such things (retransmitting in LAPB turned out to be a bad idea way back when). Complexities of various types are unavoidable; to quote a fourteenth century logician, "a satisfactory syllogism contains no unnecessary complexity". Yes, I think that stateful network address translation violates the end to end principle. But it doesn't violate it because everyone can't talk with everyone directly; it violates it because a change is made in a packet that subverts an end system's intent and as a result randomly breaks things (for example, an ICMP packet-too-large response has to be specially handled by the NAT to make PMTU work). I would argue that network-imposed policies like traffic filters violate the end to end principle no more than network-imposed routing (including not-routing) policies do. If you can't get somewhere or a given address isn't instantiated with a host, that's not particularly surprising. What might be surprising and difficult to diagnose would be something that sometimes allows packets through and sometimes doesn't, without any clear reason. I suspect everyone on this list will agree that the KISS principle, aka end-to-end, is pretty important, and gets irritated with vendors (cough) that push them towards complex solutions. A host directly sending mail to a remote MTA is not automatically a bad actor for any reason related to KISS. There are two issues, however, that you might think about. My employer tells me that they discard about 98% of email traffic headed to me; a study of my own email history says that the email from outside that passes that filter, and which is what I expect to receive, comes through fewer than 1000 immediate SMTP predecessors to the first Cisco MTA; running the same survey on my junk folder (which is only 30 days, not 18 years) shows about 5000 SMTP predecessors, and the 1000 and the 5000 are disjoint sets. 
So an SMTP connection to a remote MTA is not a bad thing automatically, but it raises security eyebrows - and should - because it is similar to how spam and other attacks are transmitted. In addition, at least historically, in many cases those MUAs directly connecting to remote MTAs try or tried to use them as open relays, and it was difficult for the relay to authenticate random systems. Having an MUA give its traffic to a first hop MTA using SSL or some other form of service authentication/authorization improves the security of the service. That can be overcome using S/MIME, of course, given a global PKI, but DKIM depends on the premise that the originator has somehow been authenticated and shown to be authorized to send email. On Sep 4, 2012, at 11:22 AM, Jay Ashworth wrote:
----- Original Message -----
From: "Owen DeLong" <owen@delong.com>
I am confused... I don't understand your comment.
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.
We see it now alleged that the opposite is true: that a laptop, say, like mine, which runs Linux and postfix, and does not require a smarthost to deliver mail to a remote server *is a bad actor* *precisely because it does that* (in attempting to send mail directly to a domain's MX server) *from behind a NAT router*, and possibly different ones at different times.
I find these conflicting reports very conflicting. Either the end-to-end principle *is* the Prime Directive... or it is *not*.
Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII St Petersburg FL USA #natog +1 727 647 1274
---------------------------------------------------- The ignorance of how to use new knowledge stockpiles exponentially. - Marshall McLuhan
participants (18)
- Andrew Sullivan
- Cameron Byrne
- David Miller
- Eliot Lear
- Fred Baker (fred)
- Izaac
- Jay Ashworth
- Jimmy Hess
- Mark Andrews
- Masataka Ohta
- Michael Thomas
- Nick Hilliard
- Oliver
- Owen DeLong
- Sean Harlow
- TJ
- valdis.kletnieks@vt.edu
- William Herrin