It would be really nice if people making statements about the end-to-end principle would talk about the end-to-end principle:

http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf

The abstract of the paper states the principle:

"This paper presents a design principle that helps guide placement of functions among the modules of a distributed computer system. The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples discussed in the paper include bit error recovery, security using encryption, duplicate message suppression, recovery from system crashes, and delivery acknowledgement. Low level mechanisms to support these functions are justified only as performance enhancements."

One of the authors of the paper has since restated it in a way that is significantly less useful: that the only place anything intelligent should be done in a network is in the end system. If you believe that argument, then WiFi networks should not retransmit lost packets (and as a result would become far less useful), and the Internet should not use routing protocols - it should only use source routing.

So, yes, I think the "Rise of the Stupid Network" is a very interesting paper and site, but it needs to be handled carefully. The argument is for simplicity and transparency: when a function at a lower layer does something an upper layer doesn't expect - not just the application, but any pair of respectively higher and lower layers - it can be difficult to debug the behavior. However, it is not a hard-and-fast "the network should never do things that the end system doesn't expect"; the paper makes it clear that there is a trade-off, and if the trade-off makes sense (retransmitting at the link layer in addition to the end-to-end retransmission, in the case of WiFi), it doesn't preclude the behavior. It merely suggests that one think about such things (retransmitting in LAPB turned out to be a bad idea way back when). Complexities of various types are unavoidable; to quote a fourteenth-century logician, "a satisfactory syllogism contains no unnecessary complexity".

Yes, I think that stateful network address translation violates the end-to-end principle. But it doesn't violate it because everyone can't talk with everyone directly; it violates it because a change is made in a packet that subverts an end system's intent and as a result randomly breaks things (for example, an ICMP packet-too-large response has to be specially handled by the NAT to make PMTU discovery work).

I would argue that network-imposed policies like traffic filters violate the end-to-end principle no more than network-imposed routing (including not-routing) policies do. If you can't get somewhere, or a given address isn't instantiated with a host, that's not particularly surprising. What would be surprising and difficult to diagnose is something that sometimes allows packets through and sometimes doesn't, without any clear reason.

I suspect everyone on this list will agree that the KISS principle, aka end to end, is pretty important, and will get irritated with vendors (cough) that push them towards complex solutions. A host directly sending mail to a remote MTA is not automatically a bad actor for any reason related to KISS.
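To make the PMTU example above concrete, here is a rough sketch - made-up structure names, no checksum fix-ups, no timers, Python only because it is compact - of the special handling a NAT has to do when an ICMP "fragmentation needed" error arrives from the outside. Nothing here is something the end systems asked for; it exists only to undo the translation the NAT already imposed on their packets.

# Sketch only: a NAT must rewrite both the outer header of an inbound ICMP
# "fragmentation needed" error (type 3, code 4) and the copy of the original
# packet header quoted inside it, or the inside host's TCP stack can never
# match the error to a connection and PMTU discovery silently fails.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Header:          # simplified composite of IP + first 8 transport bytes
    src: str
    dst: str
    sport: int
    dport: int

@dataclass
class IcmpFragNeeded:
    outer: Header      # router -> the NAT's public address
    mtu: int           # next-hop MTU reported by the router
    quoted: Header     # header of the too-big packet, as seen outside

# NAT state: (public address, public port) -> (inside address, inside port)
nat_table = {("192.0.2.1", 40001): ("10.0.0.5", 52344)}

def translate_inbound_icmp(msg: IcmpFragNeeded) -> Optional[IcmpFragNeeded]:
    """Rewrite the error so the inside host recognizes it, or drop it."""
    # The quoted packet was outbound, so its *source* is our public mapping.
    key = (msg.quoted.src, msg.quoted.sport)
    if key not in nat_table:
        return None                     # no state -> the error just vanishes
    inside_addr, inside_port = nat_table[key]
    msg.outer.dst = inside_addr         # deliver the error to the real host
    msg.quoted.src = inside_addr        # restore the pre-translation header
    msg.quoted.sport = inside_port      # so the host can find its socket
    return msg                          # (a real NAT also fixes checksums)

If the NAT gets this wrong, or simply has no state for the quoted packet, the sending host never learns the smaller MTU and large transfers stall - exactly the "randomly breaks things" failure mode, and nothing the end system can see or fix.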
There are two issues with sending mail directly to a remote MTA, however, that you might think about.

My employer tells me that they discard about 98% of email traffic headed to me. A study of my own email history says that the email from outside that passes that filter, and which is what I expect to receive, comes through fewer than 1000 immediate SMTP predecessors to the first Cisco MTA; running the same survey on my junk folder (which covers only 30 days, not 18 years) turns up about 5000 SMTP predecessors, and the 1000 and the 5000 are disjoint sets. So an SMTP connection to a remote MTA is not automatically a bad thing, but it raises security eyebrows - and it should - because it is similar to how spam and other attacks are transmitted.

In addition, at least historically, MUAs connecting directly to remote MTAs often tried to use them as open relays, and it was difficult for the relay to authenticate random systems. Having an MUA give its traffic to a first-hop MTA using SSL or some other form of service authentication/authorization improves the security of the service. That difficulty can be overcome using S/MIME, of course, given a global PKI, but DKIM depends on the premise that the originator has somehow been authenticated and shown to be authorized to send email.
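As an illustration of that last point - placeholder hostname, port, and credentials, and only a sketch of the model, not a statement about any particular product - here is what "hand the message to an authenticated first-hop MTA over an encrypted submission channel" looks like from an MUA's side, using Python's standard smtplib:

# Sketch of MUA-to-first-hop-MTA submission: TLS for privacy/integrity,
# a login step so the MTA knows who is allowed to inject mail, after which
# the MTA - not the laptop - relays toward the destination domain's MX.

import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.com"
msg["To"] = "friend@example.net"
msg["Subject"] = "submission vs. direct-to-MX"
msg.set_content("Handed to an authenticated first-hop MTA, not sent to the MX.")

context = ssl.create_default_context()
with smtplib.SMTP("smtp.example.com", 587) as session:   # submission port
    session.starttls(context=context)                    # encrypt the session
    session.login("user@example.com", "app-password")    # authenticate the MUA
    session.send_message(msg)                            # the MTA relays onward

The first-hop MTA can then apply its own policy and take responsibility for the message (for example, by adding a DKIM signature), which is precisely the kind of authentication/authorization the receiving side cannot get from a random direct-to-MX connection.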
On Sep 4, 2012, at 11:22 AM, Jay Ashworth wrote:

----- Original Message -----
From: "Owen DeLong" <owen@delong.com>
I am confused... I don't understand your comment.
It is regularly alleged, on this mailing list, that NAT is bad *because it violates the end-to-end principle of the Internet*, where each host is a full-fledged host, able to connect to any other host to perform transactions.
We see it now alleged that the opposite is true: that a laptop, say, like mine, which runs Linux and postfix, and does not require a smarthost to deliver mail to a remote server *is a bad actor* *precisely because it does that* (in attempting to send mail directly to a domain's MX server) *from behind a NAT router*, and possibly different ones at different times.
I find these conflicting reports very conflicting. Either the end-to-end principle *is* the Prime Directive... or it is *not*.
Cheers,
-- jra

--
Jay R. Ashworth                  Baylink                       jra@baylink.com
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates       http://baylink.pitas.com       2000 Land Rover DII
St Petersburg FL USA                #natog                      +1 727 647 1274
----------------------------------------------------
The ignorance of how to use new knowledge stockpiles exponentially.
  - Marshall McLuhan