Matthew Kaufman wrote:
End-to-end requires that people writing the software at the end learn about buffer overruns (and other data-driven access violations) or program using tools that prevent such things. It is otherwise an excellent idea.
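As a minimal C sketch of the kind of data-driven overrun meant here (the function names and buffer size are hypothetical, purely illustrative):

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 16

    /* Unsafe: copies caller-controlled input with no length check,
     * so anything longer than NAME_LEN-1 bytes overruns the buffer. */
    void greet_unsafe(const char *input)
    {
        char name[NAME_LEN];
        strcpy(name, input);            /* overrun if input is too long */
        printf("hello, %s\n", name);
    }

    /* Safer: bound the copy to the buffer size and terminate explicitly. */
    void greet_safe(const char *input)
    {
        char name[NAME_LEN];
        strncpy(name, input, NAME_LEN - 1);
        name[NAME_LEN - 1] = '\0';      /* strncpy may not null-terminate */
        printf("hello, %s\n", name);
    }

    int main(void)
    {
        greet_safe("a deliberately over-long string that would smash the stack in greet_unsafe");
        return 0;
    }

Tools that enforce bounds (or languages that do it for you) remove the first pattern entirely; otherwise the programmer has to remember the second every time.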
There is supposedly some magic going into this in the next "Service Pack" of the aforementioned major exploding Pinto. Not sure if it's just flipping the joke of a firewall on by default or something more comprehensive/destructive like a non-executable stack. Or a completely new invention like bug-free code :-)
Unfortunately, the day that someone decided their poorly-designed machine and operating system would be safer sitting behind a "firewall" pretty much marked the end of universal end-to-end connectivity, and I don't see it coming back for a long long time. Probably not on this Internet. IPv6 or not.
Last I checked, most "firewalls" don't make these machines safe; they might make them safer, so only two out of three pieces of malware hit them. That doesn't really help much.
Combine that with ISP pricing models (helped by registry policy) that encourage <=1 IP address per household, and the subsequent boom in NAT boxes, and the fate is probably sealed.
Here I've observed the opposite trend: most ISPs are getting rid of NAT because it's failure-prone and expensive to implement and keep running.

Pete