Unfortunately, that page contains, near the top, the ludicrous and impossible assertion:
""Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities", by Clark, Fry, Blaze and Smith makes clear that ignoring these devices is foolhardy; unmaintained systems become more vulnerable, with time."
It is impossible for unchanged/unmaintained systems to develop more vulnerabilities with time. Perhaps what these folks mean is that "vulnerabilities which existed from the time the system was first developed become better known over time".
That's not actually true. A running process, instantiated from a file of object code, derives its behavior from a number of interfaces: to the kernel, the libc, the filesystem of the machine it's run on, and even network interfaces off the machine.
While the device itself may be -- or seem to be -- immutable, unless you're certain that *nothing around it changed either*, you cannot rule out that things became vulnerabilities *as deployed* which were not so previously.
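
Here's a minimal sketch of that dependence, assuming a glibc-based Linux box (gnu_get_libc_version() is glibc-specific; uname(2) is POSIX). The same unchanged binary reports -- and binds to -- whatever kernel and libc happen to be underneath it at run time:

    /* Minimal sketch: this binary's behavior is not fully contained in
     * its object code.  Which libc it binds to and what the kernel
     * reports are resolved at run time, on whatever machine it happens
     * to execute on. */
    #include <stdio.h>
    #include <gnu/libc-version.h>   /* glibc-specific */
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        uname(&u);                                      /* kernel interface  */
        printf("kernel: %s %s\n", u.sysname, u.release);
        printf("libc  : %s\n", gnu_get_libc_version()); /* library interface */
        /* Re-run the identical, unchanged binary after a kernel or libc
         * update and the output -- and potentially the security-relevant
         * behavior of every call the program makes -- differs. */
        return 0;
    }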
Ok, yes, that's partially equivalent to what you said, at least in the case of, say, consumer routers booting from flash. But for code running on real OSs, it's possible for a vulnerability to "appear" because something under or beside the code got updated, turning something which could not be attacked into something which could.
I haven't an example case, but it is theoretically possible.
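
One way it could play out in practice -- a hypothetical sketch, though the underlying library change is real: around glibc 2.13, an optimized memcpy began copying backwards on some x86-64 CPUs, and unchanged binaries that (incorrectly) relied on forward copying for overlapping buffers started corrupting data. Whether any given instance becomes exploitable depends on what the corrupted bytes feed, but nothing in *this* code changed:

    /* Hypothetical sketch: unchanged code whose correctness -- and
     * potentially its safety -- depends on libc's memcpy implementation.
     * memcpy with overlapping buffers is undefined behavior; it merely
     * *happened* to work while glibc copied low-to-high. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[32] = "AAAA:len=16:BBBBBBBBBBBB";
        /* Overlapping copy: shift the record left by 4 bytes.  Correct
         * only if memcpy copies low-to-high, which no standard
         * guarantees; memmove is the right call here. */
        memcpy(buf, buf + 4, 20);
        buf[20] = '\0';
        /* If these bytes feed a length field or a parser downstream, a
         * latent bug can become attack surface after a mere libc
         * update -- with this program's object code untouched. */
        printf("%s\n", buf);
        return 0;
    }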
No, it is not. If you have "changed something" then the system is not unchanged. If you have updated libc, applied a patch to something, or changed the network adapter and therefore the drivers, you have changed the "vulnerability profile" of the box (or, more correctly, of everything in the path of direct connection to the change). This is why there is change control: so that when you make a change you can assess its impact on *everything* it affects.

A running process on a commodity OS on a given box will not spontaneously have vulnerabilities created or destroyed save and except by a change made within its "sphere of influence" (meaning, in that case, anything inside "the box"). By definition an "unmaintained" system is not subjected to change, and therefore its vulnerability profile cannot change. For example, if I build a Linux machine running Slackware 1.0 (I had one of those running for many years) that performs a specific task, and that box is not maintained (that is, no changes are made to any part of the software or hardware comprising it), then its vulnerability profile does not change despite the passage of 15 years. It cannot spontaneously acquire a new vulnerability five years after commissioning that did not exist when it was commissioned, nor can one that existed at commissioning go away.

By the same token, external change -- for example, upgrading the transit of the network to which the box is connected from an ISDN line to a DS3 -- does not change the vulnerability profile of the box itself, and cannot create or destroy a vulnerability within it. Such an external change may affect the *likelihood of exploitation* of a pre-existing vulnerability, but it cannot create a new vulnerability nor cause an existing one to go away.

This is why XP (for example) becomes *more* secure when Microsoft support ends, not *less*. It is no longer subject to change, and therefore its vulnerability profile has become fixed rather than a perpetually moving target. From the moment the box is "no longer subject to change", its vulnerability profile is fixed, right up until it is decommissioned (or decommissions itself by, for example, letting the magic smoke escape).

This then leads to the assessment of the value of "vendor support". Vendor support that is predicated on a perpetual requirement to make change for change's sake (such as Microsoft updates) is a negative value proposition and is unmaintainable: it is impossible to perform the required re-assessments at the fantastic rate of change imposed by such systems. On the other hand, vendor support which does not require perpetual change has a value, and that value is the difference between the present value of the series of recurring "support" fees and the cost of replacement of the affected component when a change has been assessed to be necessary -- either to remove a pre-existing vulnerability (which may no longer be possible, or may be excessively expensive, to mitigate) or to gain new features (or hardware warranty service, as in the specific case of SmartNet contracts). The decision to implement the change (and thus an evaluable shift in the vulnerability profile, which becomes fixed at the time the change is made and continues unchanged right up until the next change occurs) is subject to rational change-management processes and rational analysis.
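
A back-of-envelope sketch of that support-value comparison, with purely illustrative numbers (the fee, discount rate, horizon, and replacement cost below are assumptions, not drawn from any real contract):

    /* Back-of-envelope sketch: value "vendor support" that does not
     * require perpetual change as the difference between the present
     * value of the recurring fee stream and the cost of replacing the
     * component when a change is assessed to be necessary.
     * All figures are hypothetical. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double annual_fee  = 1500.0; /* assumed recurring support fee    */
        double discount    = 0.05;   /* assumed discount rate            */
        double replacement = 8000.0; /* assumed cost to replace outright */
        int    years       = 7;      /* assumed horizon until a change
                                        is assessed as necessary         */

        double pv_fees = 0.0;        /* present value of the fee stream  */
        for (int t = 1; t <= years; t++)
            pv_fees += annual_fee / pow(1.0 + discount, t);

        printf("PV of %d years of support fees: %.2f\n", years, pv_fees);
        printf("Replacement cost:               %.2f\n", replacement);
        printf("Support has positive value:     %s\n",
               pv_fees < replacement ? "yes" : "no");
        return 0;
    }

(Compile with cc -lm.) If the discounted fees exceed the replacement cost, the rational move is to let support lapse and replace the component when a change actually becomes necessary.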
Systems predicated on massive continual change, the impact of which cannot be assessed (such as most "Microsoft" products, or Adobe Flash, or <there are several others>), lead to a complacent dependence on the vendor, due to a complete inability to rationally assess the vulnerabilities contained in such systems or in anything "within the sphere of direct influence" of them. This has the effect of imposing "dependence" and "lock-in" to the vendor. Commodity computing hardware vendors (Dell is an example) do the same thing, deliberately imposing uncontrollable change along with deliberate requirements to implement non-consequent change (i.e., deliberately removing compatibility in order to force upgrades to new versions of Microsoft Windows), in order to "lock" their customers into their products and institute dependent behaviour. Interestingly, the behaviour of such vendors and their customers is remarkably similar to the behavioural traits displayed by crack dealers and their addicts.