On Sun, 22 Sep 2002, William Allen Simpson wrote:
> Sorry, any security requires a *SECRET*.
The only thing security really requires is *trust*. Secret keys do no good if the platform is compromised, and elaborate protections are useless if the people who are allowed access are untrustworthy. No matter what you do, it always boils down to the trustworthiness of the physical implementations and the people. Technological tricks merely reshape the communication space by shifting the vulnerable points around. That is often useful, but it can by no means eliminate the need for inherently trusted devices and people at the endpoints.

--vadim

PS. As a side note: the "shocking" discovery that ObL's guys didn't really use steganography and other modern tricks much, and still have a world-wide network which is very hard to compromise or penetrate (all those mountains of cool high-tech gadgetry the NSA has notwithstanding), is a good illustration. They rely on the "first principle" of building trusted systems, i.e. building a network of personal loyalties and face-to-face communications, instead of fooling around with techno fixes.

PPS. I'm really amazed at how people can consider any opaque system trustworthy. Most computer users naively trust their secrets to effectively every one of the thousands of Microsoft engineers who could easily plant trapdoors. The same goes for trusting Intel. How hard would it be for a CPU designer to plant an obscure bug that causes a switch to privileged mode? It is hard _not_ to create trapdoors like that by mistake, even in much simpler designs (check the 30-year-old report on Multics security).
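To make the PPS concrete, here is a deliberately toy sketch in C of how small such a hardware trapdoor could be. This models a made-up machine, not any real CPU; the names (struct cpu, MAGIC_OPCODE, decode_and_execute) are invented for illustration. The point is that one extra comparison, buried in decode logic, silently grants privileged mode:

  /* Hypothetical sketch of a CPU trapdoor (toy machine, not a
   * real design): an undocumented opcode value flips the
   * supervisor bit, regardless of current privilege. */
  #include <stdint.h>
  #include <stdio.h>

  struct cpu {
      uint32_t pc;
      int supervisor;      /* 1 = privileged mode */
  };

  /* Undocumented trigger value, known only to the designer. */
  #define MAGIC_OPCODE 0xDEADBEEFu

  static void decode_and_execute(struct cpu *c, uint32_t opcode)
  {
      /* ... normal instruction decoding would go here ... */

      /* The trapdoor: one comparison, indistinguishable at a
       * glance from the legitimate checks around it. */
      if (opcode == MAGIC_OPCODE)
          c->supervisor = 1;

      c->pc += 4;
  }

  int main(void)
  {
      struct cpu c = { .pc = 0, .supervisor = 0 };

      decode_and_execute(&c, 0x00000013u);   /* an ordinary instruction */
      printf("supervisor after normal op: %d\n", c.supervisor);

      decode_and_execute(&c, MAGIC_OPCODE);  /* the hidden trigger */
      printf("supervisor after magic op:  %d\n", c.supervisor);
      return 0;
  }

A reviewer wading through thousands of lines of real decode logic would have a hard time telling such a check from a legitimate one, whether it was planted or introduced by mistake, which is exactly the problem with trusting opaque systems.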