Re: [Infowarrior] - NSA Said to Have Used Heartbleed Bug for Years
On 4/16/2014 4:34 PM, Jason Iannone wrote:
I can't cite chapter and verse, but I seem to remember this zeroing problem was solved decades ago by introducing a bit which says this chunk of memory or disk is new (to this process) and not zeroed; if there's any attempt to actually access it, either read it back as if it were filled with zeros, or zero it first.
Isn't that a result of the language? Low-level languages give that power to the author rather than assuming any responsibility themselves. Hacker News had a fairly in-depth discussion regarding the nature of C, with some convincing opinions as to why it's not exactly the right tool to build this sort of system with. The gist: forcing the author of a monster like OpenSSL to manage memory by hand is a problem.
I dropped out of the discussion because I couldn't get a foothold, but I would like to know this: if the hardware (as has been suggested) or the OS does any of this, how do diagnostic routines in, or running under, the OS work?

--
Requiescas in pace o email
Ex turpi causa non oritur actio

Two identifying characteristics of System Administrators:
Infallibility, and the ability to learn from their mistakes.
(Adapted from Stephen Pinker)
On Wed, Apr 16, 2014 at 4:12 PM, Larry Sheldon <LarrySheldon@cox.net> wrote:
If the hardware (as has been suggested) or the OS does any of this, how do diagnostic routines in, or running under, the OS work?
The OS does it, when allocating memory to userland programs.

For memory: before memory is allocated to a new process, it is cleared. If the same block of memory is re-allocated to (or within) that process, then it is generally NOT cleared. That is, if you request some memory within a process there's no guarantee that it'll be zeroed out (unless you specifically request it to be), but there is a guarantee that anything in that memory is something that your own process put there.

For kernel-level code, this does NOT happen by default (again, depending on which exact function you call). So within the kernel you can allocate a block of memory and end up with random userland data in it - but if you think that's a problem, then you probably don't understand where the kernel fits within the bigger picture. (Hint: at a minimum, it can read any memory anywhere in the system.)

There is obviously a cost to clearing that memory, which is why it's normally done only when absolutely necessary (e.g., allocating a new page to a userland process) and not when it isn't (e.g., allocating to the kernel).

For disk, physical space normally isn't assigned by the filesystem until you actually write to a block. Writing obviously overwrites what was there previously, so reading it back only gives you your own data. If you read back an area of a file that you haven't yet written to (presuming the filesystem supports it), then you've got what's called a "sparse" file, and as no block on disk has yet been allocated for that space, the OS simply returns you a pile of zeros. Those zeros never actually existed on the disk; they are just a logical concept for any blocks that have not yet been written to.

None of these controls stops someone with root access from accessing memory or disk - root generally has access to interfaces like /dev/mem, /proc/<pid>/mem, and the raw disk devices, so it can read anything.

  Scott
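A minimal C sketch of the userland rules described above: a large first-time allocation typically arrives as fresh, kernel-zeroed pages, while a block freed and re-allocated within the same process may still hold the old bytes. Whether the second malloc() actually returns the stale data depends on the allocator (the sketch assumes a glibc-style free list), so treat it as an illustration rather than a guarantee; calloc() is the "specifically request it to be zeroed" case.

/* Sketch, not a guarantee: allocator behavior varies.  New pages
 * from the kernel arrive zeroed; memory recycled within the same
 * process may not be cleared, but it only ever holds this
 * process's own old data, never another process's. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* A large first-time allocation is usually satisfied by fresh
     * pages from the kernel (e.g. via mmap), which arrive zeroed. */
    unsigned char *fresh = malloc(1 << 20);
    if (!fresh) return 1;
    printf("fresh:  %02x %02x %02x\n", fresh[0], fresh[1], fresh[2]);
    free(fresh);

    /* Put a "secret" in a small block and free it.  Typical
     * allocators keep the block on a free list without clearing it. */
    char *secret = malloc(64);
    if (!secret) return 1;
    strcpy(secret, "SECRET");
    free(secret);

    /* A same-sized request often gets the same block back, still
     * holding our own stale bytes ('S' is 0x53 if it does). */
    unsigned char *reused = malloc(64);
    if (!reused) return 1;
    printf("reused: %02x %02x %02x\n", reused[0], reused[1], reused[2]);
    free(reused);

    /* calloc() explicitly requests zeroed memory. */
    unsigned char *zeroed = calloc(1, 64);
    if (!zeroed) return 1;
    printf("zeroed: %02x %02x %02x\n", zeroed[0], zeroed[1], zeroed[2]);
    free(zeroed);
    return 0;
}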
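And a companion sketch of the sparse-file behavior: seeking past the end of a file and writing leaves a hole that occupies no disk blocks, and reading inside the hole returns zeros that the kernel synthesizes on the fly. This assumes a POSIX system and a filesystem that supports holes (most Linux filesystems do); the filename sparse.demo is arbitrary. Comparing st_size with st_blocks * 512 shows how little space was actually allocated.

/* Sketch of sparse files on a POSIX system: the hole before the
 * written block never exists on disk, yet reads back as zeros. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("sparse.demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return 1;

    /* Seek 1 MiB past the start and write a little data.  Only the
     * final block gets allocated; everything before it is a hole. */
    if (lseek(fd, 1024 * 1024, SEEK_SET) < 0) return 1;
    if (write(fd, "end", 3) != 3) return 1;

    /* Reading inside the hole returns zeros the OS makes up. */
    unsigned char buf[16];
    if (pread(fd, buf, sizeof buf, 4096) != (ssize_t)sizeof buf) return 1;
    printf("hole bytes: %02x %02x %02x\n", buf[0], buf[1], buf[2]);

    /* st_size is about 1 MiB, but st_blocks (512-byte units) shows
     * how little is actually allocated: the difference is the hole. */
    struct stat st;
    if (fstat(fd, &st) == 0)
        printf("size=%lld bytes, allocated=%lld bytes\n",
               (long long)st.st_size,
               (long long)st.st_blocks * 512);

    close(fd);
    unlink("sparse.demo");
    return 0;
}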