Re: [[Infowarrior] - NSA Said to Have Used Heartbleed Bug for Years]
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead in any but a classified system running up around "secret" or higher. (I know of a system in Silicon Valley where they would bring us core dumps to print because their system was down so hard. The dump program would take about a third of a box of fanfold and stack it, still blank, as I recall, in the stacker.)

Seems like the law of the land was "If you did not set the value, you can make no assumptions about it."

--
Requiescas in pace o email
Ex turpi causa non oritur actio

Two identifying characteristics of System Administrators:
Infallibility, and the ability to learn from their mistakes.
(Adapted from Stephen Pinker)
Larry Sheldon writes:
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead in any but a classified system running up around "secret" or higher. (I know of a system in Silicon Valley where they would bring us core dumps to print because their system was down so hard.
In 2005, Stanford researchers "found that with careful design and implementation, secure deallocation can be accomplished with minimal overhead (roughly 1% for most workloads)".

https://www.usenix.org/legacy/events/sec05/tech/full_papers/chow/chow.pdf

This is for the RAM case rather than the disk case; maybe disk is worse because writes are more expensive.

--
Seth David Schoen <schoen@loyalty.org>     | No haiku patents
http://www.loyalty.org/~schoen/            | means I've no incentive to
FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150   |   -- Don Marti
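As a concrete illustration of the technique in the paper Seth cites, here is a minimal user-space sketch of zero-on-free. The names scrub_malloc/scrub_free are hypothetical, not from the paper; real implementations do the scrubbing inside the allocator or the kernel, which is where the roughly 1% overhead figure was measured.

/*
 * Minimal sketch of "secure deallocation": remember each allocation's size
 * and scrub the memory before handing it back to the allocator.
 */
#include <stdlib.h>

struct hdr { size_t len; };            /* size header kept in front of the payload */

void *scrub_malloc(size_t len)
{
    struct hdr *h = malloc(sizeof *h + len);
    if (h == NULL)
        return NULL;
    h->len = len;
    return h + 1;                      /* caller sees only the payload */
}

void scrub_free(void *p)
{
    if (p == NULL)
        return;
    struct hdr *h = (struct hdr *)p - 1;

    /* Zero through a volatile pointer so the compiler can't drop the wipe
     * as a dead store; explicit_bzero()/memset_s() do the same job where
     * they are available. */
    volatile unsigned char *v = p;
    for (size_t i = 0; i < h->len; i++)
        v[i] = 0;

    free(h);
}

int main(void)
{
    char *session_key = scrub_malloc(32);
    /* ... use the buffer for something sensitive ... */
    scrub_free(session_key);           /* contents are wiped before reuse */
    return 0;
}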
In article <534C68F4.305@cox.net> you write:
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead ...
It must be quite a while. Unix systems have routinely cleared the RAM and disk allocated to programs since the earliest days. Pre-VM OS/360 may not have. R's, John
On 04/14/2014 05:50 PM, John Levine wrote:
In article <534C68F4.305@cox.net> you write:
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead ...
It must be quite a while. Unix systems have routinely cleared the RAM and disk allocated to programs since the earliest days.
When you say "clear the disk allocated to programs" what do you mean exactly?
On Mon, Apr 14, 2014 at 07:47:46PM -0700, Doug Barton wrote:
It must be quite a while. Unix systems have routinely cleared the RAM and disk allocated to programs since the earliest days.
When you say "clear the disk allocated to programs" what do you mean exactly?
"On a clear disc, you can seek forever" - with apologies to Barbara S. /bill
On Mon, Apr 14, 2014 at 7:47 PM, Doug Barton <dougb@dougbarton.us> wrote:
On 04/14/2014 05:50 PM, John Levine wrote:
In article <534C68F4.305@cox.net> you write:
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead ...
It must be quite a while. Unix systems have routinely cleared the RAM and disk allocated to programs since the earliest days.
When you say "clear the disk allocated to programs" what do you mean exactly?
Is that like "sudo rm -rf /bin" ? ;P Matt
From: Doug Barton [mailto:dougb@dougbarton.us]
When you say "clear the disk allocated to programs" what do you mean exactly?
Seriously? When files are deleted, their sectors are simply released to the free space pool without erasing their contents. Allocation of disk sectors without clearing them gives users/programs access to file contents previously stored by other users/programs.

As to why this is a problem, well, as they write in some math textbooks, the answer is trivial and left as an exercise to the reader. Well, usually trivial.

matthew black
california state university, long beach

-----Original Message-----
From: Doug Barton [mailto:dougb@dougbarton.us]
Sent: Monday, April 14, 2014 7:48 PM
To: nanog@nanog.org
Subject: Re: [[Infowarrior] - NSA Said to Have Used Heartbleed Bug for Years]

On 04/14/2014 05:50 PM, John Levine wrote:
In article <534C68F4.305@cox.net> you write:
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead ...
It must be quite a while. Unix systems have routinely cleared the RAM and disk allocated to programs since the earliest days.
When you say "clear the disk allocated to programs" what do you mean exactly?
On 04/15/2014 09:56 AM, Matthew Black wrote:
From: Doug Barton [mailto:dougb@dougbarton.us]
When you say "clear the disk allocated to programs" what do you mean exactly?
Seriously? When files are deleted, their sectors are simply released to the free space pool without erasing their contents. Allocation of disk sectors without clearing them gives users/programs access to file contents previously stored by other users/programs.
As to why this is a problem, well, as they write in some math textbooks, the answer is trivial and left as an exercise to the reader. Well, usually trivial.
matthew black california state university, long beach
Bruce Schneier gave a plug for bleachbit - it does a reasonable job of trying to clean things up for you.
-----Original Message-----
From: Doug Barton [mailto:dougb@dougbarton.us]
Sent: Monday, April 14, 2014 7:48 PM
To: nanog@nanog.org
Subject: Re: [[Infowarrior] - NSA Said to Have Used Heartbleed Bug for Years]
On 04/14/2014 05:50 PM, John Levine wrote:
In article <534C68F4.305@cox.net> you write:
On 4/14/2014 9:38 AM, Matthew Black wrote:
Shouldn't a decent OS scrub RAM and disk sectors before allocating them to processes, unless that process enters processor privileged mode and sets a call flag? I recall digging through disk sectors on RSTS/E to look for passwords and other interesting stuff over 30 years ago.
I have been out of the loop for quite a while but my strongly held belief is that such scrubbing would be an enormous (and intolerable) overhead ...
It must be quite a while. Unix systems have routinely cleared the RAM and disk allocated to programs since the earliest days.
When you say "clear the disk allocated to programs" what do you mean exactly?
-- Glen Wiley KK4SFV
On Tue, Apr 15, 2014 at 6:56 AM, Matthew Black <Matthew.Black@csulb.edu>wrote:
Seriously? When files are deleted, their sectors are simply released to the free space pool without erasing their contents. Allocation of disk sectors without clearing them gives users/programs access to file contents previously stored by other users/programs.
No worthwhile filesystem will allow you to read a block of disk that you haven't already written to. Once you've written to it, any existing data that was there is overwritten. The same isn't true for block-level access, but as a rule that requires admin access, and once you have that all bets are off... Scott
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.

Sort of the complement of copy-on-write, you do it by lazy evaluation.

For a newly allocated disk sector for example you don't have to actually zero it on the disk when allocated, you just return a block of zeros if a process tries to read it (i.e., the kernel mediates.)

Typically you allocate disk space (other than by writing blocks) by doing a seek forward, maybe write a block, and then seek backwards to access the virgin space in between.

But anything in that virgin space can be marked in kernel memory as having to read back as zeros, no need to read anything at all from the actual disk.

In fact, there's no need to actually allocate the block on disk, which is why we have the notion of files with "holes"; see for example the 'tar' man or info page for discussion of dealing with file holes when archiving. That is, whether to try to remember them as holes per se or to actually write all the zeros.

It's important because tar can (it's a command line option) create an archive which has expanded such a hole into actual blocks of zeros, an archive which might really be over a TB, or it might only be a few blocks: header info plus a note that the rest is just to be treated as a TB hole. Or of course the tar archive could only appear to be a TB file but really be a big hole itself.

This is not at all limited to tar, it's just some place it came up very explicitly for the reasons I described.

Maybe that (lazy evaluation of returning zeros) never got widely implemented but I thought it showed up around BSD 4.3 ca. 1984.

I think some of the models being presented here are somewhat oversimplified.

--
        -Barry Shein

The World              | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD        | Dial-Up: US, PR, Canada
Software Tool & Die    | Public Access Internet     | SINCE 1989     *oo*
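Barry's "seek forward, write a block, seek back and read the virgin space" sequence is easy to demonstrate. The short program below is only an illustration of the behaviour he describes (the file name is arbitrary): the read from inside the hole comes back as zeros supplied by the kernel, not as whatever previously occupied those disk blocks, and on most filesystems the hole is never allocated on disk at all.

/*
 * Demonstration of reading a file "hole": write one block a megabyte in,
 * then read from the unwritten space before it.  The kernel returns zeros;
 * no previous occupant's data is ever visible, and the hole typically
 * consumes no disk blocks at all (a sparse file).
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("hole-demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return 1;

    char block[512];
    memset(block, 'X', sizeof block);

    /* Write one block 1 MiB into the file; everything before it is a hole. */
    if (pwrite(fd, block, sizeof block, 1024 * 1024) != (ssize_t)sizeof block)
        return 1;

    /* Read from inside the hole: zeros, not old disk contents. */
    unsigned char probe[16];
    if (pread(fd, probe, sizeof probe, 4096) != (ssize_t)sizeof probe)
        return 1;

    for (size_t i = 0; i < sizeof probe; i++)
        printf("%02x ", probe[i]);          /* prints sixteen 00 bytes */
    printf("\n");

    close(fd);
    unlink("hole-demo.dat");
    return 0;
}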
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.

Isn't that a result of the language? Low level languages give that power to the author rather than assuming any responsibility. Hacker News had a fairly in-depth discussion regarding the nature of C with some convincing opinions as to why it's not exactly the right tool to build this sort of system with. The gist: forcing the author of a monster like OpenSSL to manage memory is a problem.

On Tue, Apr 15, 2014 at 11:37 AM, Barry Shein <bzs@world.std.com> wrote:
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.
Sort of the complement of copy-on-write, you do it by lazy evaluation.
For a newly allocated disk sector for example you don't have to actually zero it on the disk when allocated, you just return a block of zeros if a process tries to read it (i.e., the kernel mediates.)
Typically you allocate disk space (other than by writing blocks) by doing a seek forward, maybe write a block, and then seek backwards to access the virgin space in between.
But anything in that virgin space can be marked in kernel memory as having to read back as zeros, no need to read anything at all from the actual disk.
In fact, there's no need to actually allocate the block on disk which is why we have the notion of files with "holes", see for example the 'tar' man or info page for discussion of dealing with file holes when archiving. That is, whether to try to remember them as holes per se or to actually write all the zeros.
It's important because it can create (it's a command line option) an actual TB tar file which has expanded that hole into blocks of zeros, a tar archive (e.g.) which might really be over a TB, or it might only be a few blocks, header info plus info that the rest is just to be treated as a TB hole. Or of course the tar archive could only appear to be a TB file but really be a big hole itself.
This is not at all limited to tar, it's just some place it came up very explicitly for the reasons I described.
Maybe that (lazy-evaluation of returning zeros) never got widely implemented but I thought it showed up around BSD 4.3 ca 1984.
I think some of the models being presented here are somewhat oversimplified.
--
        -Barry Shein

The World              | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD        | Dial-Up: US, PR, Canada
Software Tool & Die    | Public Access Internet     | SINCE 1989     *oo*
Jason Iannone wrote:
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.
That's true of virtual memory pages which are new to the process. But that isn't what malloc() usually returns, otherwise all malloc()ed allocations would require a 4KB virtual memory page.

Rather, malloc() and free() in the C library manage a pool of allocations, and when that pool empties more virtual memory is allocated by using the brk() system call to ask the operating system for a new 4KB page of virtual memory -- which the operating system does set to zero, using the hardware to (perhaps lazily) set the page to zero if such a hardware feature is available.

As a result the memory returned by malloc() often isn't zeroed and may contain data previously used by the process. Therefore a process can leak information between a program and its use of libraries.

Because there is no inter-process information leakage this isn't seen as a problem in the traditional Unix view of security. You may have differing views if your program is a daemon servicing a multitude of networked users. Thus the interest in alternative malloc() and free() implementations.

--
Glen Turner <http://www.gdt.id.au/~gdt/>
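Glen's distinction is visible from a few lines of C. The sketch below is an illustration only: a fresh anonymous mapping from the kernel is guaranteed to read as zeros, while a recycled malloc() chunk may still show bytes the process wrote earlier. The exact heap behaviour depends on the allocator (glibc is assumed here), and reading a freshly malloc()ed buffer before writing it is precisely what real code must not rely on.

/*
 * Kernel-supplied memory vs. allocator-recycled memory.
 * Part 1 is guaranteed zero-filled; part 2 is allocator-dependent and is
 * shown only to illustrate the intra-process leak described above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* 1. A fresh anonymous page straight from the kernel: always zeros. */
    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;
    printf("fresh kernel page:   %02x %02x %02x %02x\n",
           page[0], page[1], page[2], page[3]);

    /* 2. Heap memory recycled by malloc()/free(): no zeroing guarantee. */
    char *secret = malloc(64);
    if (secret == NULL)
        return 1;
    strcpy(secret, "pretend-this-is-a-session-key");
    free(secret);                       /* contents are NOT scrubbed */

    unsigned char *reused = malloc(64); /* often the very same chunk back */
    if (reused == NULL)
        return 1;

    /* The first few bytes are usually clobbered by free-list pointers;
     * later bytes may still hold the old string. */
    printf("recycled heap chunk: ");
    for (int i = 0; i < 32; i++)
        printf("%02x ", reused[i]);
    printf("\n");

    munmap(page, 4096);
    free(reused);
    return 0;
}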
On April 17, 2014 at 10:03 gdt@gdt.id.au (Glen Turner) wrote:
Jason Iannone wrote:
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.
Actually those were my words trying to describe kernel management of disk blocks, sparse files, etc, not user space. -b
BAE did this cute poster on the attack model:

https://image-store.slidesharecdn.com/6f0027d2-c58c-11e3-af1f-12313d0148e5-o...

On 4/16/2014 7:50 PM, Barry Shein wrote:
On April 17, 2014 at 10:03 gdt@gdt.id.au (Glen Turner) wrote:
Jason Iannone wrote:
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.
Actually those were my words trying to describe kernel management of disk blocks, sparse files, etc, not user space.
-b
-- ------------- Personal Email - Disclaimers Apply
On Wed, Apr 16, 2014 at 9:39 PM, TGLASSEY <tglassey@earthlink.net> wrote:
BAE did this cute poster on the attack model
https://image-store.slidesharecdn.com/6f0027d2-c58c-11e3-af1f-12313d0148e5-original.jpeg?goback=%2Egde_1271127_member_5862330295302262788
I'm guessing accuracy probably wasn't their primary concern, but... the SSL handshake shown is wrong.

Obviously it's over-simplified, and that's to be expected, but to claim that the client generates a session key and then "Encrypts it with the servers private key" and sends it over the wire is outright wrong.

The session key in and of itself is *never* transmitted over the wire (encrypted or not). Exactly what is sent depends on the exact algorithm, but presuming they are describing RSA key exchange then it's the "pre-master secret", which is then used by both the client and the server (along with other information they have exchanged) to both independently generate the session key.

Semantics perhaps, but...

Scott
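For the curious, here is roughly what Scott is describing, sketched with OpenSSL's EVP API. This is an illustration, not the poster's or OpenSSL's handshake code: "server_pub.pem" is a hypothetical file holding the server's public key, and error handling is abbreviated. In a TLS 1.2 RSA key exchange the client sends a 48-byte pre-master secret encrypted under the server's public key; both sides then derive the session keys locally from it and the exchanged random values, so the session keys themselves never cross the wire.

/*
 * Client side of a TLS 1.2 RSA key exchange, heavily abbreviated.
 * What goes on the wire is the RSA-encrypted pre-master secret,
 * never the session keys themselves.
 */
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/rand.h>
#include <openssl/rsa.h>

int main(void)
{
    /* 48-byte pre-master secret: 2 version bytes + 46 random bytes
     * (RFC 5246, section 7.4.7.1; 0x03,0x03 = TLS 1.2). */
    unsigned char pre_master[48] = { 0x03, 0x03 };
    if (RAND_bytes(pre_master + 2, sizeof(pre_master) - 2) != 1)
        return 1;

    /* Load the server's PUBLIC key (file name is hypothetical). */
    FILE *fp = fopen("server_pub.pem", "r");
    if (fp == NULL)
        return 1;
    EVP_PKEY *pub = PEM_read_PUBKEY(fp, NULL, NULL, NULL);
    fclose(fp);
    if (pub == NULL)
        return 1;

    /* Encrypt the pre-master secret under that public key; this ciphertext
     * is what the ClientKeyExchange message carries. */
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(pub, NULL);
    size_t outlen = 0;
    if (ctx == NULL ||
        EVP_PKEY_encrypt_init(ctx) <= 0 ||
        EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_PADDING) <= 0 ||
        EVP_PKEY_encrypt(ctx, NULL, &outlen, pre_master, sizeof pre_master) <= 0)
        return 1;

    unsigned char *wire = OPENSSL_malloc(outlen);
    if (wire == NULL ||
        EVP_PKEY_encrypt(ctx, wire, &outlen, pre_master, sizeof pre_master) <= 0)
        return 1;

    printf("ClientKeyExchange carries %zu encrypted bytes; both ends now\n"
           "derive the session keys locally from the pre-master secret and\n"
           "the exchanged client/server randoms.\n", outlen);

    OPENSSL_free(wire);
    EVP_PKEY_CTX_free(ctx);
    EVP_PKEY_free(pub);
    return 0;
}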
On April 16, 2014 at 15:34 jason.iannone@gmail.com (Jason Iannone) wrote:
I can't cite chapter and verse but I seem to remember this zeroing problem was solved decades ago by just introducing a bit which said this chunk of memory or disk is new (to this process) and not zeroed but if there's any attempt to actually access it then read it back as if it were filled with zeros, or alternatively zero it.
Those were my words. I was talking about kernel memory/disk management. And then Jason Iannone...
Isn't that a result of the language? Low level languages give that power to the author rather than assuming any responsibility. Hacker News had a fairly in-depth discussion regarding the nature of C with some convincing opinions as to why it's not exactly the right tool to build this sort of system with. The gist: forcing the author of a monster like OpenSSL to manage memory is a problem.
This is a potentially huge discussion with many dimensions.

A library like OpenSSL is intended to fit into a huge software ecosystem, much of which is already written in C. Writing it in another language (other than perhaps C++) would require a cross-language API or similar (e.g., IPC), which introduces other issues.

So, oftentimes you use a three-prong plug because you are faced with three-prong receptacles, and rebuilding the entire building to a new standard just isn't practical even if you believe the result presents a potential shock hazard.

And, if I may editorialize, there's a reason most of that ecosystem is built in C; it's not only legacy. Other languages have their own shortcomings; you can't just consider one aspect.

--
        -Barry Shein

The World              | bzs@TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD        | Dial-Up: US, PR, Canada
Software Tool & Die    | Public Access Internet     | SINCE 1989     *oo*
participants (13)
- Barry Shein
- bmanning@vacation.karoshi.com
- Doug Barton
- Glen Turner
- Glen Wiley
- Jason Iannone
- John Levine
- Larry Sheldon
- Matthew Black
- Matthew Petach
- Scott Howard
- Seth David Schoen
- TGLASSEY