"Alex \"Mr. Worf\" Yuriev" <alex@netaxs.com> writes:
It is called the KISS principle. In computer security it is also called minimizing possible risk.
Cool, as I just recently said, I love learning brand new things from people older and wiser than me.
Well, then he is *WRONG*.
Dr Perlman's excellent book, _Interconnections_, ISBN 0-201-56332-0, Addison-Wesley, 1992, is probably the definitive text on routing up to that date, and given what has happened subsequently (désolé, Christian :) ) I would look forward to a new version which takes recent developments in integrated IS-IS for IP (and in IP itself), BGP with all its many new features, IDRP, and the specification of NIMROD into account, since Dr Perlman is not only very informative but also a fun read.
I think it is funny that network operators say "It must be done at the application layer because otherwise my network won't scale" while people that deal with applied crypto say "Are you nuts? Why do you want to make every application utilize its own cryptographic method? You are creating a weakness".
The latter group are lazy. The former group has lazy people too, but it also has people who use rather neat encryption on the physical transmission layer to prevent people with vampire taps, microwave interceptors, or little satellite dishes from eavesdropping on or forging traffic.

However, security at the physical layer, as with security at the next two or so layers up through transport, is non-transitive. You do not become more secure by scrambling the bits inside your IP datagram than by scrambling the bits inside your TCP segment, and you make it harder to scale using NATs and ALGs, which are fundamental building blocks of the growing Internet. You also make it less possible to be fair, using Van Jacobson's excellent pleas to implement profile enforcement towards the edges, or indeed to implement anything like RED/WRED/WFQ along parts of the path between two endpoints scrambling everything inside IP datagrams. Moreover, scrambling *all* communications or authenticating *all* communications (thus preventing things like Vixie's and Cisco's various web-query-redirection software from working correctly) is an insane waste of CPU time.

Consequently, encryption and authentication *is* something you want explicitly turned on by applications, and if possible it should generate sane (if not necessarily transparently descriptive) TCP headers. Implementation details are up to you host-people types. Throwing it back at network operators is silly. Our market is in getting traffic around, not in solving people's security problems (as if any solution proposed or implemented by any ISP or set of ISPs would really seriously be believed to be secure anyway).
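To put the "explicitly turned on by applications" point in concrete terms, here is a minimal sketch, in anachronistically modern Python and with a placeholder host name, of an application deciding for itself that a particular connection needs encryption and authentication, while leaving everything below the payload alone:

    import socket
    import ssl

    # The application, not the network, decides this connection needs
    # confidentiality and peer authentication.
    context = ssl.create_default_context()      # verifies the server certificate

    raw = socket.create_connection(("www.example.com", 443))   # placeholder host
    tls = context.wrap_socket(raw, server_hostname="www.example.com")

    tls.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
    print(tls.recv(4096).decode(errors="replace"))
    tls.close()

The TCP and IP headers stay in the clear, so NATs, ALGs, and the queue-management machinery along the path can still do their jobs.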
Face it, the security of any chain is equal to the security of its weakest link, and currently that link is not host security.
Yah, multiuser systems are really secure.
The lesson of Kerberos and SSH and so forth is that ES-to-ES security is useless if one of the ESes itself is compromised.
Since when is that the case for Kerberos? Only if you compromise the KDC do you break the security of the model.
Ticket stealing is an old game. All you need is the right file permissions and decent timing (it's usually a big window). Ran Atkinson mentioned that K5 doesn't give him particularly warm fuzzies. I'm sure you could ask him some of his reasons, and get a lucid explanation of them. He's bright, even if I don't find myself liking the IPSEC model very much.
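For the curious, the "right file permissions" part is mundane. A throwaway sketch along these lines (the /tmp/krb5cc_* pattern is the conventional MIT krb5 default cache name, so treat that detail as an assumption about your site) will tell you whether anyone on a shared host has left a credential cache readable by others:

    import glob
    import os
    import stat

    # Look for Kerberos credential caches readable by group or others.
    # /tmp/krb5cc_<uid> is the conventional MIT krb5 default; adjust the
    # pattern for your site.
    for path in glob.glob("/tmp/krb5cc_*"):
        st = os.stat(path)
        if st.st_mode & (stat.S_IRGRP | stat.S_IROTH):
            print("readable ticket cache:", path, "owned by uid", st.st_uid)

Anyone who can read such a cache before the tickets in it expire can point KRB5CCNAME at a copy and borrow the owner's identity for the duration; no KDC compromise required.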
While disabling LSR does not make network equipment more robust, it prevents a series of very interesting attacks including DOS attacks against that equipment.
Which of those denials-of-service are not implementation dependent, where the implementation generally is now fully end-of-line Cisco 7000+RP or RSP-only equipment?
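(For reference, the knob being argued about is a one-liner; on classic Cisco IOS it is, if memory serves,

    no ip source-route

in global configuration, after which the box discards IP packets carrying source-route options instead of forwarding them.)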
Sure it is. Or rather, it is useful to be able to infer a number of path characteristics between two communicating endpoints for such things as flow control and route selection.
So why don't we all have at least SNMP access to the routers of those networks? A lot of people would surely want to know what is going on inside there?
Mostly because SNMP is a bad misdesign compared to earlier efforts that predated its standardization, and digging out what you really need to know is sometimes more challenging with SNMP than by observing TCP timings and using hacks like provoking ICMP TTL-exceeded messages.
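For anyone who has not played that particular game: the TTL-exceeded hack is just traceroute. A rough sketch (UDP probes to a high port, a raw ICMP socket to hear the replies, so it wants root; the destination address is a placeholder) looks something like this:

    import socket
    import time

    def probe(dest, max_hops=30, port=33434, timeout=2.0):
        # Classic traceroute: send UDP probes with increasing TTL and read
        # the ICMP time-exceeded messages the routers along the path return.
        icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        icmp.settimeout(timeout)
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for ttl in range(1, max_hops + 1):
            udp.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            start = time.time()
            udp.sendto(b"", (dest, port))
            try:
                _, (hop, _) = icmp.recvfrom(512)
                print(ttl, hop, "%.1f ms" % ((time.time() - start) * 1000.0))
                if hop == dest:
                    break
            except socket.timeout:
                print(ttl, "*")

    probe("192.0.2.1")    # placeholder address

Everything it learns, per-hop round-trip times and where along the path the delay builds up, is inferred from the traffic itself; no community strings or router logins required.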
I think the answer to this question is simple - such an ability would conflict with a security policy.
I have trouble understanding why inferring things about the path, for instance the amount of congestion, the round-trip times, the behaviour of queues, and so forth, gets in the way of security policies, but people who believe that are, of course, more than welcome to use ALGs and firewalling techniques to prevent access to their networks.

Remember, the point is that only people authorized to use your network should have access to the things you seem to think are possibly heavy secrets. If you don't want random people finding these things out, you don't let random people use your network. It's quite simple. However, if you do let random people use your network, you probably should not cripple your own use of that connectivity by doing stupid things in the name of some slight additional warm-fuzzy security.

Sean.