This is, amongst other things, an epidemiological problem. We've known through practical experience since 1989 that worms can spread at the speed of light, and so neither an auto-update process nor BCP 38 filtering alone will stop infection.

There may be ways, like MUD, to slow an infection, but even MUD won't be enough in circumstances where device Bob is attacking Sally on an authorized port. MUD might have prevented Bob from being infected in the first place, but not if the infection came via USB key, for instance. (A sketch of what a MUD policy actually says appears at the end of this note.)

Where it is critical, one may wish to go "up stack" with an auditing function, in the form of application-layer gateway functionality that examines the semantics of a transaction and lets the good ones through. But that in itself carries risks in several dimensions: the first being that the auditor itself may be compromised; the second that the auditor may misinterpret the semantics and consequently slow the pace of deployment of new code. From an SP/consumer perspective, I expect this case will be rare, but it is worth asking what protections are necessary for a device that regulates insulin.

Along these lines I've written a very DRAFTY draft called draft-lear-network-helps-01 that discusses these sorts of situations. That draft needs work, and perhaps co-authors.
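To make the MUD point concrete: a MUD file is a policy the manufacturer publishes describing whom the device is supposed to talk to, which the network can then enforce. Here is a minimal sketch, loosely following the JSON layout of the MUD specification; the URL, host names, ACL names, and port are all illustrative, not taken from any real device.

    {
      "ietf-mud:mud": {
        "mud-version": 1,
        "mud-url": "https://lightbulb.example.com/mudfile.json",
        "last-update": "2016-11-08T00:00:00+01:00",
        "cache-validity": 48,
        "is-supported": true,
        "systeminfo": "Example IoT light bulb",
        "from-device-policy": {
          "access-lists": { "access-list": [ { "name": "from-bulb" } ] }
        }
      },
      "ietf-access-control-list:acls": {
        "acl": [ {
          "name": "from-bulb",
          "type": "ipv4-acl-type",
          "aces": { "ace": [ {
            "name": "allow-controller",
            "matches": {
              "ipv4": { "protocol": 6,
                        "ietf-acldns:dst-dnsname": "controller.example.com" },
              "tcp": { "destination-port": { "operator": "eq", "port": 443 } }
            },
            "actions": { "forwarding": "accept" }
          } ] }
        } ]
      }
    }

An enforcement point that retrieves this file lets the bulb speak TLS to its controller and drops everything else, which is precisely why it cannot help once the attack traffic rides an authorized port.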
Eliot

On 11/8/16 6:05 AM, Ronald F. Guilmette wrote:

In message <20161108035148.2904B5970CF1@rock.dv.isc.org>, Mark Andrews <marka@isc.org> wrote:
> * Deploying regulation in one country means that it is less likely to be a source of bad traffic. Manufacturers are lazy. With sensible regulation in a single country, everyone else benefits, as manufacturers will use a single code base when they can.

I said that too, although not as concisely.
> * Automated updates do reduce the number of machines vulnerable to known issues. There are risks, but they are nowhere near as bad as not doing automated updates.

I still maintain, based upon the abundant evidence, that the generalized hope that timely and effective updates for all manner of devices will be available throughout the practical lifetime of any such IoT thingies is a mirage. We will just never be there, in practice. And thus, manufacturers should be encouraged, by force of law if necessary, to design software with a belt-and-suspenders margin of safety built in from the first day of shipping.
You don't send out a spacecraft, or a medical radiation machine, without such additional constraints built in from day one. You don't send out such things and say, "Oh, we can always send out a firmware update later on if there is an issue."
From a software perspective, building extra layers of constraints is not that hard to do, and people have been doing this kind of thing for decades already. It's called engineering. The problem isn't anybody's ability or inability to do safety engineering in the firmware of IoT things. The only problem is providing the proper motivation to cause it to happen.
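To illustrate, here is a deliberately simplified, hypothetical example in C of the belt-and-suspenders idea: an independent layer in the firmware enforces hard limits that the network-facing code can never override. Every name and number below is invented for illustration, not taken from any real device.

    /* Hypothetical "belt and suspenders" safety layer for the insulin
     * example above.  All identifiers and limits are invented. */
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_DOSE_MILLIUNITS   500U   /* hard per-request ceiling */
    #define MAX_DAILY_MILLIUNITS 5000U   /* hard per-day ceiling     */

    static uint32_t dispensed_today_mu;   /* reset once per day */

    /* Gate every dose request, no matter where it came from.  A rejected
     * request fails safe: the pump simply does nothing. */
    bool dose_request_allowed(uint32_t requested_mu)
    {
        if (requested_mu == 0U || requested_mu > MAX_DOSE_MILLIUNITS)
            return false;
        /* Written as a subtraction so the check cannot overflow. */
        if (dispensed_today_mu > MAX_DAILY_MILLIUNITS - requested_mu)
            return false;
        dispensed_today_mu += requested_mu;
        return true;
    }

However thoroughly the radio stack is compromised, an attacker still has to get past those few lines, and those few lines can be reviewed, tested, and certified once, before the product ever ships.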
Regards,
rfg