On Saturday, 27 September, 2014 20:49, Jimmy Hess said:
On Sat, Sep 27, 2014 at 8:10 PM, Jay Ashworth <jra@baylink.com> wrote:
I haven't an example case, but it is theoretically possible.
Qmail-smtpd has a buffer overflow vulnerability related to integer overflow which can only be reached when compiled on a 64-bit platform. x86_64 did not exist when the code was originally written.
If memory serves, the author never acknowledged the vulnerability and declined to pay a bounty or fix the bug, stating that nobody allows gigabytes of RAM per SMTP process.
However.... you see, there you have a lingering bug that can be exposed under the right environment.... (Year 2030... computers have Petabytes of RAM... why would you seriously limit any one process to less than a terabyte....?)
-> http://www.guninski.com/where_do_you_want_billg_to_go_today_4.html
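To make the mechanics concrete, here is a minimal hypothetical sketch -- not qmail's actual stralloc code, just the same class of signed 32-bit length arithmetic -- of how such a bug is only reachable on a 64-bit platform:

/*
 * Hypothetical sketch (NOT qmail's actual code) of the class of bug
 * described above: signed 32-bit length arithmetic that can only be
 * driven to overflow on a 64-bit platform.
 */
#include <stdlib.h>
#include <string.h>

struct buf {
    char *s;
    int   len;   /* bytes used      (signed 32-bit length, qmail style) */
    int   a;     /* bytes allocated (signed 32-bit)                     */
};

/* Append n bytes of src, growing the buffer as needed. */
int buf_append(struct buf *b, const char *src, int n)
{
    /* Once len has been driven near INT_MAX (which needs ~2 GB already
     * buffered, i.e. a 64-bit address space), len + n typically wraps
     * negative, the grow branch below is skipped, and the memcpy writes
     * past the end of the allocation. */
    if (b->len + n > b->a) {
        int want = b->len + n + 32;
        char *p = realloc(b->s, (size_t)want);
        if (!p)
            return -1;
        b->s = p;
        b->a = want;
    }
    memcpy(b->s + b->len, src, (size_t)n);
    b->len += n;
    return 0;
}

Driving len anywhere near INT_MAX requires the process to have already buffered roughly 2 GB of input, which is exactly the situation a 32-bit address space (or an explicit per-process limit) makes unreachable.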
** The person making the change in this instance is referred to as "you". This is not to imply that it is any of you personally, but simply because it is easier to write with that pronoun than to settle on an appropriate third-person descriptive. I have a far less charitable descriptive in mind, but that is just me :)

In this case, however, you have implemented a change and it has changed the vulnerability profile. Presumably at one point you were running Qmail-smtpd on x86 (32-bit). You introduced a change (x86 -> x64) which changed the vulnerability profile. Perhaps you implemented two changes -- one to an x64 kernel, and a second to an x64 userland. Whatever the case, there was still a change, and it was only because of the change that the vulnerability manifested. Had you not made the change, you would not have introduced the vulnerability.

That your assessment of the change in vulnerability profile resulting from your change was defective is the whole point of this discussion. If you had been rational about the change from x86 to x64 and from a 32-bit to a 64-bit userland, you would have limited all processes to the same per-process address space they had in the x86 model, in order to prevent the introduction of vulnerabilities such as this one, and only permitted larger address spaces for the processes that actually required them (i.e., the ones that were part of the justification for making the change). A sketch of one way to impose such a limit follows at the end of this message.

If the change was "thrust upon you" with no justification, then the "thruster" should be terminated (with extreme prejudice, if possible). Notwithstanding, if you are responsible for the safety and security of the system that changed, you should go stand in the corner for failing to perform your job adequately, or perhaps be promoted to management, where you will no longer be a threat.

And the aside about "(Year 2030... computers have Petabytes of RAM... why would you seriously limit any one process to less than a terabyte....?)" implies an intent to implement further change. Before you make the change to "computers having petabytes of RAM" and "not limiting any process to less than a terabyte of per-process address space", I would hope that you would assess the impact of that change -- especially since you now have additional information with which to generate a more accurate assessment of its impact, and of the mitigations you should put in place to avoid being either terminated with extreme prejudice or promoted to management (unless, of course, those are your goals). One of the reasons for "limiting a process to less than a terabyte of process address space" is precisely that failing to do so may lead to the manifestation of vulnerabilities such as the one discussed here.

My original proposition still holds perfectly:

(1) The vulnerability profile of a system is fixed at system commissioning.
(2) Vulnerabilities do not get created nor destroyed except through implementation of change.
(3) If there is no change to a system, then there can be no change in its vulnerabilities.

Of course, you must set the boundaries of "system" correctly. Choosing the wrong boundary may cause you to mislead yourself as to what a particular change may impact.
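As for the sketch referred to above: one way to keep the x86-era per-process address space after moving to x64 is a small wrapper that caps RLIMIT_AS before exec'ing the daemon. This is a hypothetical illustration (the wrapper name and the 3 GB figure are my own choices, not a recommendation), and it is essentially what a ulimit -v or a daemontools-style softlimit in the run script would accomplish -- which is, as I recall, roughly the mitigation the author assumed was already in place.

/* Hypothetical wrapper: cap the address space at a 32-bit-era size
 * before exec'ing the daemon, so that 64-bit-only code paths stay
 * unreachable.  The 3 GB cap is an illustrative figure only. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct rlimit rl;

    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }

    rl.rlim_cur = rl.rlim_max = 3UL * 1024 * 1024 * 1024;  /* ~3 GB cap   */
    if (setrlimit(RLIMIT_AS, &rl) != 0) {                  /* limit VM    */
        perror("setrlimit");
        return 1;
    }

    execvp(argv[1], &argv[1]);   /* e.g.: ./aslimit qmail-smtpd ...       */
    perror("execvp");
    return 1;
}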