First real-world SCADA attack in US
On an Illinois water utility:

http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security

Cheers,
-- jra
--
Jay R. Ashworth                  Baylink                       jra@baylink.com
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates     http://baylink.pitas.com         2000 Land Rover DII
St Petersburg FL USA      http://photo.imageinc.us             +1 727 647 1274
I wonder if they are using private IP addresses.

-as

On 21 Nov 2011, at 13:32, Jay Ashworth wrote:
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
Cheers, -- jra
LOL. I see what you did there.....

-Hammer-
"I was a normal American nerd" -Jack Herer

On 11/21/2011 01:17 PM, Arturo Servin wrote:
I wonder if they are using private IP addresses.
-as
On 21 Nov 2011, at 13:32, Jay Ashworth wrote:
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
Cheers, -- jra
I checked the SCADA boxes used in our "smart" building. They are all using 127.0.0.1

Is that a security risk?

--
Leigh Porter

On 21 Nov 2011, at 19:20, "Arturo Servin" <arturo.servin@gmail.com> wrote:
I wonder if they are using private IP addresses.
-as
On 21 Nov 2011, at 13:32, Jay Ashworth wrote:
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
Cheers, -- jra
______________________________________________________________________ This email has been scanned by the Symantec Email Security.cloud service. For more information please visit http://www.symanteccloud.com ______________________________________________________________________
Might I suggest using 127.0.0.2 if you want less spam :P

Pretty scary that folks:
1. Have their SCADA gear on public networks, not behind VPNs and firewalls.
2. Allow their hardware vendor to keep a list of usernames / passwords.
2b. Obviously don't change these very often. When's the last time they really "called support" and refreshed the password with the hw vendor.... Probably when they installed the gear... Sheesh..

Perhaps the laws people suggest we need to protect ourselves should be added to. If you are the operator of a network and due to complete insanity leave yourself wide open to attack, you are just as guilty as the bad guys... But then again I don't want to go to jail for leaving my car door open and having someone steal my car, so nix that idea.

Ryan Pavely
Director Research And Development
Net Access Corporation
http://www.nac.net/

On 11/21/2011 2:48 PM, Leigh Porter wrote:
I checked the SCADA boxes used in our "smart" building. They are all using 127.0.0.1
Is that a security risk?
----- Original Message -----
From: "Ryan Pavely" <paradox@nac.net>
Perhaps the laws people suggest we need to protect ourselves should be added to. If you are the operator of a network and due to complete insanity leave yourself wide open to attack, you are just as guilty as the bad guys... But then again I don't want to go to jail for leaving my car door open and having someone steal my car, so nix that idea.
There is a difference, there, Ryan, both in degree of danger, and in duty of care.

If you leave your car open, the odds that someone will steal it *and use it to plow into a crowd of people* are pretty low; the odds that someone breaking into a SCADA network means to cause harm to the unsuspecting public are probably a bit higher.

Also, the people running that SCADA network *get paid* to do so in a fashion which does not cause undue risk to the general public, be they customers of the utility or not; this is also not true of your stolen car.

So I don't think there's all that much danger of "making laws to protect the public from attacked SCADA networks not secured in accordance with generally accepted best practices" being generalized into "you're going to jail if someone steals your car, even if they *do* use it as a weapon". Even as stupid and grandstanding as our Congress is.

Cheers, -- jra
Am 21.11.2011 um 21:22 schrieb Ryan Pavely:
But then again I don't want to go to jail for leaving my car door open and having someone steal my car, so nix that idea.
Oh, but you are. (Not sure about criminal liability, but definitely civil.)

--
Stefan Bethke <stb@lassitu.de>
Fon +49 151 14070811
On 21 Nov 2011, at 20:23, "Ryan Pavely" <paradox@nac.net> wrote:
Might I suggest using 127.0.0.2 if you want less spam :P
Pretty scary that folks have 1. Their scada gear on public networks, not behind vpns and firewalls.
Do people really do that? Just dump a /24 of routable space on a network and use it? Fifteen years ago perhaps, but now, really? Or are these legacy installations with Cisco routers that don't do 'ip classless' and that everybody has forgotten about?
2. Allow their hardware vendor to keep a list of usernames / passwords.
Yeah I can believe this. That's if they bothered changing the passwords at all.
2b. Obviously don't change these very often. When's the last time they really "called support" and refreshed the password with the hw vendor.... Probably when they installed the gear... Sheesh..
I am curious now as to what you would find port scanning for port 23 on some space owned by utility companies. Now, I'm not about to do this, but it would be interesting.

Does anybody know what really happened here? Were they just using some ancient VHF radio link to an unmanned pumping station that somebody hacked with an old TCM3105 or AM2911 modem chip and a ham radio?

--
Leigh
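Leigh's port-23 curiosity reduces to a simple TCP reachability check. A minimal sketch in Python, for use only against hosts you own and are authorized to test (the host name below is hypothetical):

```python
import socket

def telnet_exposed(host: str, port: int = 23, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to `port` succeeds, i.e. something is
    listening there; for port 23 that is very likely a telnet daemon, which on
    old SCADA gear often still has vendor-default credentials."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: nothing exposed.
        return False

# Against your own gear only, e.g.:
# if telnet_exposed("pump-station.example.net"):
#     print("telnet reachable from here; put this behind a VPN")
```

The point isn't the scan itself; it's that any box answering on port 23 from public address space is one default password away from being "hacked".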
On 11/21/11 4:09 PM, Leigh Porter wrote:
On 21 Nov 2011, at 20:23, "Ryan Pavely"<paradox@nac.net> wrote:
Might I suggest using 127.0.0.2 if you want less spam :P
Pretty scary that folks have 1. Their scada gear on public networks, not behind vpns and firewalls.

Do people really do that? Just dump a /24 of routable space on a network and use it? Fifteen years ago perhaps, but now, really? Or are these legacy installations with Cisco routers that don't do 'ip classless' and that everybody has forgotten about?
2. Allow their hardware vendor to keep a list of usernames / passwords.

Yeah I can believe this. That's if they bothered changing the passwords at all.
2b. Obviously don't change these very often. When's the last time they really "called support" and refreshed the password with the hw vendor.... Probably when they installed the gear... Sheesh..

I am curious now as to what you would find port scanning for port 23 on some space owned by utility companies. Now, I'm not about to do this, but it would be interesting.
Does anybody know what really happened here? Were they just using some ancient VHF radio link to an unmanned pumping station that somebody hacked with an old TCM3105 or AM2911 modem chip and a ham radio?
-- Leigh
Probably nowhere near that sophisticated. More like somebody owned the PC running Windows 98 being used as an operator interface to the control system. Then they started poking buttons on the pretty screen.

Somewhere there is a terrified 12 year old.

Please don't think I am saying infrastructure security should not be improved - it really does need help. But I really doubt this was anything truly interesting.

--
Mark Radabaugh
Amplex
mark@amplex.net 419.837.5015
On Nov 21, 2011, at 4:30 PM, Mark Radabaugh wrote:
Probably nowhere near that sophisticated. More like somebody owned the PC running Windows 98 being used as an operator interface to the control system. Then they started poking buttons on the pretty screen.
Somewhere there is a terrified 12 year old.
Please don't think I am saying infrastructure security should not be improved - it really does need help. But I really doubt this was anything truly interesting.
That's precisely the problem: it does appear to have been an easy attack. (My thoughts are at https://www.cs.columbia.edu/~smb/blog/2011-11/2011-11-18.html)

--Steve Bellovin, https://www.cs.columbia.edu/~smb
Steven Bellovin wrote:
On Nov 21, 2011, at 4:30 PM, Mark Radabaugh wrote:
Probably nowhere near that sophisticated. More like somebody owned the PC running Windows 98 being used as an operator interface to the control system. Then they started poking buttons on the pretty screen.
Somewhere there is a terrified 12 year old.
Please don't think I am saying infrastructure security should not be improved - it really does need help. But I really doubt this was anything truly interesting.
That's precisely the problem: it does appear to have been an easy attack. (My thoughts are at https://www.cs.columbia.edu/~smb/blog/2011-11/2011-11-18.html)
--Steve Bellovin, https://www.cs.columbia.edu/~smb
Umm hmm. And here's another one poking around: http://pastebin.com/Wx90LLum

"I'm not going to expose the details of the box. No damage was done to any of the machinery; I don't really like mindless vandalism. It's stupid and silly. On the other hand, so is connecting interfaces to your SCADA machinery to the Internet. I wouldn't even call this a hack, either, just to say. This required almost no skill and could be reproduced by a two year old with a basic knowledge of Simatic."

--Michael
"First"

https://ciip.wordpress.com/2009/06/21/a-list-of-reported-scada-incidents/

On 22/11/11 04:32, Jay Ashworth wrote:
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
Cheers, -- jra
----- Original Message -----
From: "Mark Foster" <blakjak@blakjak.net>
"First"
Hey; I don't write em; I just quote em. :-)
https://ciip.wordpress.com/2009/06/21/a-list-of-reported-scada-incidents/
The Willows CA incident is the only one in the first part of that list that a) was an actual attack, b) actually had results, and c) was in the US; but yeah, I was unsurprised to find out they were wrong in their characterization.

Cheers, - jra
On 11/21/11 10:32 AM, Jay Ashworth wrote:
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
Cheers, -- jra

Having worked on plenty of industrial and other control systems I can safely say security on the systems is generally very poor. The vulnerabilities have existed for years but are just now getting attention. This is a problem that doesn't really need a bunch of new legislation. It's an education / resource issue. The existing methods that have been used for years with reasonable success in the IT industry can 'fix' this problem.
Industrial control systems are normally only replaced when they are so old that parts can no longer be obtained. PCs started to be widely used as operator interfaces about the time Windows 95 came out. A lot of those Win95 boxes are still running and have been connected to the network over the years.

And... if you can destroy a pump by turning it off and on too often then somebody engineered the control and drive system incorrectly. Operators (and processes) do stupid things all the time. As the control systems engineer you're supposed to deal with that so that things don't go boom.

--
Mark Radabaugh
Amplex
mark@amplex.net 419.837.5015
Having worked on plenty of industrial and other control systems I can safely say security on the systems is generally very poor. The vulnerabilities have existed for years but are just now getting attention. This is a problem that doesn't really need a bunch of new legislation. It's an education / resource issue. The existing methods that have been used for years with reasonable success in the IT industry can 'fix' this problem.
Industrial control systems are normally only replaced when they are so old that parts can no longer be obtained. PCs started to be widely used as operator interfaces about the time Windows 95 came out. A lot of those Win95 boxes are still running and have been connected to the network over the years.
And... if you can destroy a pump by turning it off and on too often then somebody engineered the control and drive system incorrectly. Operators (and processes) do stupid things all the time. As the control systems engineer you're supposed to deal with that so that things don't go boom.
-- Mark Radabaugh Amplex
mark@amplex.net 419.837.5015
===============================================
There are still industrial control machines out there running MS-DOS. As you said, not replaced until you can't get parts anymore.

Chuck
On 11/21/11 4:38 PM, Charles Mills wrote:
Having worked on plenty of industrial and other control systems I can safely say security on the systems is generally very poor. The vulnerabilities have existed for years but are just now getting attention. This is a problem that doesn't really need a bunch of new legislation. It's an education / resource issue. The existing methods that have been used for years with reasonable success in the IT industry can 'fix' this problem.
Industrial control systems are normally only replaced when they are so old that parts can no longer be obtained. PCs started to be widely used as operator interfaces about the time Windows 95 came out. A lot of those Win95 boxes are still running and have been connected to the network over the years.
And... if you can destroy a pump by turning it off and on too often then somebody engineered the control and drive system incorrectly. Operators (and processes) do stupid things all the time. As the control systems engineer you're supposed to deal with that so that things don't go boom.
-- Mark Radabaugh Amplex
mark@amplex.net 419.837.5015
===============================================
There are still industrial control machines out there running MS-DOS.
As you said, not replaced until you can't get parts anymore.

Chuck

Oh yeah.... just not too many of those MS-DOS machines have TCP stacks :-)
I still get calls to work on machines I designed in 1999. It's a real pain finding a computer that can run the programming software. A lot of the software was written for 386 or slower machines and used timing loops to control the RS-232 ports. Modern processors really screw that software up.

--
Mark Radabaugh
Amplex
mark@amplex.net 419.837.5015
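Mark's timing-loop problem is easy to demonstrate. A sketch of the two delay styles (in Python for illustration, not the DOS-era languages the original software used):

```python
import time

def delay_by_iterations(n: int) -> float:
    """DOS-era style: burn a fixed number of loop iterations.
    How long this actually takes depends entirely on CPU speed, which is
    why code calibrated on a 386 misbehaves on a modern processor.
    Returns the elapsed wall-clock time so the effect can be measured."""
    start = time.monotonic()
    x = 0
    for _ in range(n):
        x += 1
    return time.monotonic() - start

def delay_by_clock(seconds: float) -> float:
    """Speed-independent style: spin until a monotonic-clock deadline
    passes. Correct on any CPU, fast or slow."""
    start = time.monotonic()
    deadline = start + seconds
    while time.monotonic() < deadline:
        pass
    return time.monotonic() - start
```

The iteration count that produced, say, one bit-time of RS-232 delay on a 386 finishes orders of magnitude faster today, so every bit-banged serial transfer falls apart; the clock-based version doesn't care what it runs on.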
Having worked on plenty of industrial and other control systems I can safely say security on the systems is generally very poor. The vulnerabilities have existed for years but are just now getting attention.
+1

Just for context, let me tell everyone about an operational characteristic of one such system (sold by a Fortune 10 (almost Fortune 5 ;) company for not a small amt. of $) that might be surprising: the hostname of the server system cannot be longer than eight characters.

The software gets so many things so very, very wrong I wonder how it is there are not more exploits!

~JasonG
On Mon, Nov 21, 2011 at 4:51 PM, Jason Gurtz <jasongurtz@npumail.com> wrote:
Having worked on plenty of industrial and other control systems I can safely say security on the systems is generally very poor. The vulnerabilities have existed for years but are just now getting attention.
+1
Just for context, let me tell everyone about an operational characteristic of one such system (Sold by a Fortune 10 (almost Fortune 5 ;) company for not a small amt. of $) that might be surprising; the hostname of the server system cannot be longer than eight characters.
The software gets so many things so very very wrong I wonder how it is there are not more exploits!
Siemens, Honeywell... essentially all of the large named folks have just horrendous security postures when it comes to any facilities/SCADA-type systems. They all believe that their systems are deployed on stand-alone networks, and that in the worst case there is a firewall/VPN between their 'management' site and the actually deployed system(s).

You think your SCADA network is "secure", but what about your management company's network? What about actual AAA for any of the changes made? Can you patch the servers/software on-demand, or must you wait for the vendor to supply you with the patch set?

Folks running SCADA systems (this includes alarm systems for buildings, or access systems! HVAC in larger complexes, etc) really, really ought to start with RFP requirements that include strong security measures, before outfitting a building you'll be in for 'years'.

-chris
On Mon, Nov 21, 2011 at 3:35 PM, Mark Radabaugh <mark@amplex.net> wrote:
On 11/21/11 10:32 AM, Jay Ashworth wrote: education / resource issue. The existing methods that have been used for years with reasonable success in the IT industry can 'fix' this problem.
The "existing normal methods" used by much of the IT industry fail way too often, and therefore some measure of regulation is in order when the matter is critical public infrastructure -- it's simply not in the public interest to let agencies fail, or use the slipshod, half-measure techniques that are commonly practiced by some of the IT industry. They should be required to engage in practices that can be proven to mitigate risks to a known, controllable quantity.

The weakness of typical IT security is probably OK when the only danger of compromise is that an intruder might get some sensitive information, or IT might need to go to the tapes. That just won't do when the result of compromise is industrial equipment forced outside of safe parameters, resulting in deaths, or a city's water supply shut down, resulting in deaths.

Hard perimeter and mushy interior, with OS updates just to address known issues and malware scanners to "try and catch" things, just won't do. ..."an OS patch introduces a serious crash bug" is also a type of security issue. Patching doesn't necessarily improve security; it only helps with issues you know about, and might introduce issues you don't know about. Enumerating badness is simply not reliable, and patch-patch-patch is simply an example of that -- when security really matters, don't attach it to a network, especially not one that might eventually be internet connected, directly or not. Connection to a management LAN that has any PC on it that is or was ever internet connected "counts" as an internet connection.
Industrial control systems are normally only replaced when they are so old that parts can no longer be obtained. PCs started to be widely used as operator interfaces about the time Windows 95 came out. A lot of those Win95 boxes are still running and have been connected to the network over the years.
The "Windows 95" part is fine. The "connected to the network" part is not fine. -- -JH
----- Original Message -----
From: "Jimmy Hess" <mysidia@gmail.com>
On Mon, Nov 21, 2011 at 3:35 PM, Mark Radabaugh <mark@amplex.net> wrote:
On 11/21/11 10:32 AM, Jay Ashworth wrote: education / resource issue. The existing methods that have been used for years with reasonable success in the IT industry can 'fix' this problem.
Careful with the attribution; you're quoting Mark, not me.
The weakness of typical IT security is probably OK, when the only danger of compromise is that an intruder might get some sensitive information, or IT might need to go to the tapes.
That just won't do, when the result of compromise is, industrial equipment is forced outside of safe parameters, resulting in deaths, or a city's water supply is shut down, resulting in deaths.
(72 character hard wrap... please.)
Hard perimeter and mushy interior with OS updates just to address known issues, and malware scanners to "try and catch" things just won't do.
Precisely. The case in point example these days is traffic light controllers.

I know from traffic light controllers; when I was a kid, that was my dad's beat for the City of Boston. Being a geeky kid, I drilled the guys in the signal shop, the few times I got to go there (Saturdays, and such).

The old design for traffic signal controllers was that the relays that drove each signal/group were electrically interlocked: the relay that made N/S able to engage its greens *got its power from* the relay that made E/W red; if there wasn't a red there, you *couldn't* make the other direction green.

These days, I'm not sure that's still true: I can *see* the signal change propagate across a row of 5 LED signals from one end to the other. Since I don't think the speed of electricity is slow enough to do that (it's probably on the order of 5ms light to light), I have to assume that it's processor delay as the processor runs a display list to turn on output transistors that drive the LED light heads.

That implies to me that it is *physically* possible to get opposing greens (which we refer to, in technical terms, as "traffic fatalities") out of the controller box... in exactly the same way that it didn't used to be.

That's unsettling enough that I'm going to go hunt down a signal mechanic and ask.

Cheers, -- jra
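The electrical interlock described above can be modeled in a couple of lines. This is only a sketch of the concept, with made-up phase names, not any real controller's logic:

```python
def green_possible(direction: str, red_relay_up: dict) -> bool:
    """Model of the old electromechanical interlock: the green relay for one
    direction draws its coil power *through* the opposing direction's red
    relay contacts. A green with no opposing red is therefore physically
    impossible -- not merely forbidden by software."""
    opposing = {"NS": "EW", "EW": "NS"}[direction]
    return red_relay_up[opposing]

# If the E/W red relay is not energized, N/S green simply has no power:
# green_possible("NS", {"NS": False, "EW": False}) -> False
```

Contrast this with a processor walking a display list and switching output transistors: there, the only thing standing between the intersection and opposing greens is the correctness of the code, which is the whole worry.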
On Mon, Nov 21, 2011 at 11:16:14PM -0500, Jay Ashworth wrote:
That implies to me that it is *physically* possible to get opposing greens (which we refer to, in technical terms as "traffic fatalities") out of the controller box... in exactly the same way that it didn't used to be.
Not necessarily. Microwave ovens have an interlock system that has 3 sequentially timed microswitches. The first two cut power to the oven, and the third one shorts out the power supply in case the previous two failed, blowing a fuse. The switches are operated by 2 "fingers" placed on the door so that if the door is bent enough to not seal properly, the switches will be activated in the wrong order, causing the shorting switch to operate. This can also happen if you slam the door closed too hard.

This is all nice in theory; in practice the microswitches are so flimsy nowadays that I'd not be too surprised if the shorting switch did not succeed in blowing a fuse - and the other two will easily weld together even in normal use (I have seen this happen. Swap the switches and fuse and the oven works again.)

The traffic lights can also have some kind of fault-detection logic that sees they are in an illegal state and latches them into a fault mode. IMHO this is stupid extra complexity when relays are obviously 100% correct and reliable for this function, but it seems to be all the rage nowadays to use some kind of "proven correct" software system for safety critical logic. It is so much sexier than mechanical or electro-mechanical interlocks.

Anybody who has seen what kind of bizarre malfunctions failed electrolytics cause in consumer electronics will probably not feel very comfortable trusting traffic lights whose safety relies on software that is proven correct. OTOH, the risk is astronomically small compared to someone just running the red lights.

Jussi Peltola
On Tue, 22 Nov 2011 07:11:43 +0200, Jussi Peltola said:
Anybody who has seen what kind of bizarre malfunctions failed electrolytics cause in consumer electronics will probably not feel very comfortable trusting traffic lights whose safety relies on software that is proven correct.
Beware of bugs in the above code; I have only proved it correct, not tried it. -- Donald Knuth :)
On Mon, Nov 21, 2011 at 11:16:14PM -0500, Jay Ashworth wrote:
Precisely. The case in point example these days is traffic light controllers.
I know from traffic light controllers; when I was a kid, that was my dad's beat for the City of Boston. Being a geeky kid, I drilled the guys in the signal shop, the few times I got to go there (Saturdays, and such).
The old design for traffic signal controllers was that the relays that drove each signal/group were electrically interlocked: the relay that made N/S able to engage its greens *got its power from* the relay that made E/W red; if there wasn't a red there, you *couldn't* make the other direction green.
These days, I'm not sure that's still true: I can *see* the signal change propagate across a row of 5 LED signals from one end to the other. Since I don't think the speed of electricity is slow enough to do that (it's probably on the order of 5ms light to light), I have to assume that it's processor delay as the processor runs a display list to turn on output transistors that drive the LED light heads.
That implies to me that it is *physically* possible to get opposing greens (which we refer to, in technical terms as "traffic fatalities") out of the controller box... in exactly the same way that it didn't used to be.
That's unsettling enough that I'm going to go hunt down a signal mechanic and ask.
The typical implementation in a modern controller is to have a separate conflict monitor unit that will detect when conflicting greens (for example) are displayed, and trigger a (also separate) flasher unit that will cause the signal to display a flashing red in all directions (sometimes flashing yellow for one higher volume route). So the controller would output conflicting greens if it failed or was misprogrammed, but the conflict monitor would detect that and restore the signal to a safe (albeit flashing, rather than normal operation) state. -- Brett
----- Original Message -----
From: "Brett Frankenberger" <rbf+nanog@panix.com>
The typical implementation in a modern controller is to have a separate conflict monitor unit that will detect when conflicting greens (for example) are displayed, and trigger a (also separate) flasher unit that will cause the signal to display a flashing red in all directions (sometimes flashing yellow for one higher volume route).
So the controller would output conflicting greens if it failed or was misprogrammed, but the conflict monitor would detect that and restore the signal to a safe (albeit flashing, rather than normal operation) state.
"... assuming the *conflict monitor* hasn't itself failed."

There, FTFY.

Moron designers.

Cheers, -- jra
On Tue, Nov 22, 2011 at 10:16:56AM -0500, Jay Ashworth wrote:
----- Original Message -----
From: "Brett Frankenberger" <rbf+nanog@panix.com>
The typical implementation in a modern controller is to have a separate conflict monitor unit that will detect when conflicting greens (for example) are displayed, and trigger a (also separate) flasher unit that will cause the signal to display a flashing red in all directions (sometimes flashing yellow for one higher volume route).
So the controller would output conflicting greens if it failed or was misprogrammed, but the conflict monitor would detect that and restore the signal to a safe (albeit flashing, rather than normal operation) state.
"... assuming the *conflict monitor* hasn't itself failed."
There, FTFY.
Moron designers.
Yes, but then you're two failures deep -- you need a controller failure, in a manner that creates an unsafe condition, followed by a failure of the conflict monitor. Lots of systems are vulnerable to multiple failure conditions. Relays can have interesting failure modes also. You can only protect for so many failures deep. -- Brett
On 11/22/2011 5:59 AM, Brett Frankenberger wrote:
The typical implementation in a modern controller is to have a separate conflict monitor unit that will detect when conflicting greens (for example) are displayed, and trigger a (also separate) flasher unit that will cause the signal to display a flashing red in all directions (sometimes flashing yellow for one higher volume route). So the controller would output conflicting greens if it failed or was misprogrammed, but the conflict monitor would detect that and restore the signal to a safe (albeit flashing, rather than normal operation) state.

-- Brett
Indeed. All solid-state controllers, microprocessor or not, are required to have a completely independent conflict monitor that watches the actual HV outputs to the lamps and, in the event of a fault, uses electromechanical relays to disconnect the controller and connect the reds to a separate flasher circuit.

The people building these things and writing the requirements do understand the consequences of failure.

Matthew Kaufman
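The conflict-monitor behavior Brett and Matthew describe can be sketched as follows. The phase names and return values are hypothetical; a real monitor is independent hardware watching the high-voltage lamp outputs, not software state:

```python
# Phase pairs that must never show green at the same time (assumed example).
CONFLICTING_GREENS = [("NS", "EW")]

def conflict_monitor(lamp_outputs: dict) -> dict:
    """Independent watchdog: inspect the *actual* lamp outputs, and if any
    conflicting pair of greens appears, disconnect the controller and latch
    the fail-safe state (all-red flash via a separate flasher circuit)."""
    greens = {phase for phase, state in lamp_outputs.items() if state == "green"}
    for a, b in CONFLICTING_GREENS:
        if a in greens and b in greens:
            return {"controller": "disconnected", "display": "all-red flash"}
    return {"controller": "connected", "display": "normal"}
```

Note the design choice: the monitor never tries to "fix" the controller's outputs, it only forces the intersection into a known-safe degraded mode, which is why a misprogrammed controller yields a flashing red rather than opposing greens.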
Here is the latest folks,

"DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system in Springfield, Illinois."

http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html

Andrew
andrew.wallace wrote:
Here is the latest folks,
"DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system in Springfield, Illinois."
http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html
Andrew
And "In addition, DHS and FBI have concluded that there was no malicious traffic from Russia or any foreign entities, as previously reported."

I'd bet we'll soon be hearing more from this loldhs pr0f character in .ro.

--Michael
This might be of interest to those wishing to dive deeper into the subject.

Telecommunications Handbook for Transportation Professionals: The Basics of Telecommunications, by the Federal Highway Administration.

http://ops.fhwa.dot.gov/publications/telecomm_handbook/

I'm still digging through it to see what they say about network security.

--
Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474
On Tue, Nov 22, 2011 at 04:00:52PM -0800, Joe Hamelin wrote:
This might be of interest to those wishing to dive deeper into the subject.
Telecommunications Handbook for Transportation Professionals: The Basics of Telecommunications by the Federal Highway Administration.
http://ops.fhwa.dot.gov/publications/telecomm_handbook/
I'm still digging through it to see what they say about network security. Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474
They don't. Not at all. The most they do say is that on one system, one class of users has RW access to data, while another has RO access.

This quote:

"Firewall" - is a term used to describe a software application designed to prevent unauthorized access to the initial entry point of a system.

is indicative of the level at which the doc is written, and of the intended audience. Worse yet, the dfn. is _*WRONG*_.

I work for a state highway department; we take network security a whole lot more seriously than *that*.

73 DE
--
Mike Andrews, W5EGO
mikea@mikea.ath.cx
Tired old sysadmin
On Tue, 22 Nov 2011 13:32:23 -1000, Michael Painter said:
http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html
And "In addition, DHS and FBI have concluded that there was no malicious traffic from Russia or any foreign entities, as previously reported."
It's interesting to read the rest of the text while doing some deconstruction: "There is no evidence to support claims made in the initial Fusion Center report ... that any credentials were stolen, or that the vendor was involved in any malicious activity that led to a pump failure at the water plant." Notice that they're carefully framing it as "no evidence that credentials were stolen" - while carefully tap-dancing around the fact that you don't need to steal credentials in order to totally pwn a box via an SQL injection or a PHP security issue, or to log into a box that's still got the vendor-default userid/passwords on them. You don't need to steal the admin password if Google tells you the default login is "admin/admin" ;) "No evidence that the vendor was involved" - *HAH*. When is the vendor *EVER* involved? The RSA-related hacks of RSA's customers are conspicuous by their uniqueness. And I've probably missed a few weasel words in there...
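The "no credentials were stolen" point can be made concrete with a tiny sketch: a box still running widely published factory defaults can be entered without stealing anything. (All account names and defaults below are illustrative assumptions, not anything from the DHS report.)

```python
# Hypothetical audit sketch: flag configured accounts that match
# publicly known vendor defaults. Nobody needs to "steal" these.

# A few widely published factory defaults (illustrative only)
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def weak_logins(configured_accounts):
    """Return the configured (user, password) pairs that match
    publicly known vendor defaults, sorted for stable output."""
    return sorted(set(configured_accounts) & KNOWN_DEFAULTS)

# A fictional SCADA box whose installer never changed the defaults:
accounts = [("admin", "admin"), ("operator", "s3cret")]
print(weak_logins(accounts))  # -> [('admin', 'admin')]
```

A real audit would of course test the device itself rather than a config list, but the point stands: "admin/admin" straight out of the manual is not a stolen credential.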
On Nov 22, 2011, at 7:51:59 PM, Valdis.Kletnieks@vt.edu wrote:
They do state categorically that "After detailed analysis, DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system of the Curran-Gardner Public Water District in Springfield, Illinois." I'm waiting to see Joe Weiss's response. --Steve Bellovin, https://www.cs.columbia.edu/~smb
On Nov 22, 2011, at 8:08:58 PM, Steven Bellovin wrote:
They do state categorically that "After detailed analysis, DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system of the Curran-Gardner Public Water District in Springfield, Illinois."
I'm waiting to see Joe Weiss's response.
See http://www.wired.com/threatlevel/2011/11/scada-hack-report-wrong/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
On Nov 22, 2011, at 8:08:58 PM, Steven Bellovin wrote:
I'm waiting to see Joe Weiss's response.
See http://www.wired.com/threatlevel/2011/11/scada-hack-report-wrong/
--Steve Bellovin, https://www.cs.columbia.edu/~smb
"Weiss expressed frustration over the conflicting reports." Somewhat related...New broom at DHS. From SANS NewsBites Vol.13, Num.93: "Good News! Yesterday, Mark Weatherford took over as Deputy Undersecretary for Cyber Security at the U.S. Department of Homeland Security. For the first time in many years, the U.S. cybersecurity program will be run by a technologist rather than by a lawyer. There are good reasons to believe that this change will herald an era of greater balance in national cybersecurity leadership between NSA and DHS."
Note to self: when my OPC/Modbus code goes to hell and wipes out an HVAC unit, blame cyber terrorists and crappy vendors, and provide a random shady IP address. This was sad when it was possibly an unprotected network with poor password procedures, horrible protection code in the logic, etc. etc. Now it's even worse. Sigh. Ryan Pavely Director Research And Development Net Access Corporation http://www.nac.net/ On 11/22/2011 6:32 PM, Michael Painter wrote:
andrew.wallace wrote:
Here is the latest folks,
"DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system in Springfield, Illinois."
http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html
Andrew
"There is no evidence to support claims made in initial reports -- which were based on raw, unconfirmed data and subsequently leaked to the media." http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html From what I'm seeing and hearing, the report by the fusion centre was private, and facts were still being *fusioned* when somebody decided to leak it to the media. What we had was a half-baked report not meant for public consumption. What needs to be looked at is locking out certain people who think it's OK to leak reports from these state resources. Andrew
----- Original Message -----
From: "Matthew Kaufman" <matthew@matthew.at>
Indeed. All solid-state controllers, microprocessor or not, are required to have a completely independent conflict monitor that watches the actual HV outputs to the lamps and, in the event of a fault, uses electromechanical relays to disconnect the controller and connect the reds to a separate flasher circuit.
The people building these things and writing the requirements do understand the consequences of failure.
If you mean "an independent conflict monitor which, *in the event there is NO discernible fault*, *connects* the controller to the lamp outputs... so that in the event the monitor itself fails, gravity or springs will return those outputs to the flasher circuit", then I'll accept that latter assertion. Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII St Petersburg FL USA http://photo.imageinc.us +1 727 647 1274
On Tue, Nov 22, 2011 at 06:14:54PM -0500, Jay Ashworth wrote:
----- Original Message -----
From: "Matthew Kaufman" <matthew@matthew.at>
Indeed. All solid-state controllers, microprocessor or not, are required to have a completely independent conflict monitor that watches the actual HV outputs to the lamps and, in the event of a fault, uses electromechanical relays to disconnect the controller and connect the reds to a separate flasher circuit.
The people building these things and writing the requirements do understand the consequences of failure.
If you mean "an independent conflict monitor which, *in the event there is NO discernible fault*, *connects* the controller to the lamp outputs... so that in the event the monitor itself fails, gravity or springs will return those outputs to the flasher circuit", then I'll accept that latter assertion.
That protects against a conflicting output from the controller at the same time the conflict monitor completely dies (assuming its death is in a manner that removes voltage from the relays). It doesn't protect against the case of conflicting output from the controller which the conflict monitor fails to detect. (Which is one of the cases you seemed to be concerned about before.) -- Brett
On Tue, Nov 22, 2011 at 5:23 PM, Brett Frankenberger <rbf+nanog@panix.com> wrote:
On Tue, Nov 22, 2011 at 06:14:54PM -0500, Jay Ashworth wrote: in a manner that removes voltage from the relays). It doesn't protect against the case of conflicting output from the controller which the conflict monitor fails to detect. (Which is one of the cases you seemed to be concerned about before.)
Reliable systems have triple redundancy. And indeed... hardwired safety is a lot better than relying on software. But it's not like transistors/capacitors don't fail either, so whether solid state or not, a measure of added protection is in order beyond a single monitor.

There should be a "conflict monitor test path" involving a third circuit that intentionally creates a safe "test" conflict at pre-defined sub-millisecond intervals, by generating a conflict in a manner the monitor is supposed to detect but which won't actually produce current through the light, and checking for absence of a test signal on green. If the test fails, the test circuit should intentionally blow a pair of fuses, breaking the test circuit's connections to the controller and conflict monitor.

In addition, the test circuit should generate a pair of clock signals of its own, as a side effect that is only possible with correct test outcomes, to be verified by both the conflict monitor and the controller. If the correct clock indicating successful test outcomes is not detected by either the conflict monitor or the controller, both systems should independently force a fail, using different methods.

So you have 3 circuits, and any one circuit can detect the most severe potential failure of any pair of the other circuits.
-- -JH
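For flavor, the redundant-monitoring idea above can be sketched in software as a simple 2-out-of-3 majority vote. This is a deliberately simplified model, assuming nothing about how real NEMA conflict monitors or the three-circuit scheme described above are actually built:

```python
# Simplified software analogy of redundant conflict monitoring.
# Assumption: a "conflict" is any two crossing approaches green at
# once; three independent monitors vote, and the system fails safe
# (drops to the flasher) unless a majority call the state safe.

CROSSING_PAIRS = [
    {"north", "east"}, {"north", "west"},
    {"south", "east"}, {"south", "west"},
]

def conflicting(greens):
    """True if any pair of crossing approaches is green simultaneously."""
    return any(pair <= greens for pair in CROSSING_PAIRS)

def fail_safe_vote(verdicts):
    """verdicts: three independent booleans, True == 'state is safe'.
    Stay in normal operation only if at least 2 of 3 monitors agree."""
    return sum(verdicts) >= 2

# A monitor stuck at 'safe' cannot mask a real conflict on its own:
greens = {"north", "east"}                 # conflicting greens
verdicts = [not conflicting(greens),       # healthy monitor: unsafe
            not conflicting(greens),       # healthy monitor: unsafe
            True]                          # failed monitor, stuck 'safe'
assert fail_safe_vote(verdicts) is False   # system drops to flash
```

In real hardware the "vote" is done electromechanically (relays, springs, gravity) precisely so that a dead voter defaults to the safe state rather than the last commanded one.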
----- Original Message -----
From: "Jimmy Hess" <mysidia@gmail.com>
So you have 3 circuits, and any one circuit can detect the most severe potential failure of any pair of the other circuits.
Just so. Byzantine monitoring, just like a Byzantine clock. Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII St Petersburg FL USA http://photo.imageinc.us +1 727 647 1274
On Tue, Nov 22, 2011 at 8:35 AM, Mark Radabaugh <mark@amplex.net> wrote:
Having worked on plenty of industrial and other control systems I can safely say security on the systems is generally very poor. The vulnerabilities have existed for years but are just now getting attention. This is a problem that doesn't really need a bunch of new legislation. It's an education / resource issue. The existing methods that have been used for years with reasonable success in the IT industry can 'fix' this problem.
I agree, it is mostly an education and resources issue. But the environment of control networks is slightly different from the IT industry, IMHO.

1) Control network people have been living in a kind of isolation for too long and haven't realized that their networks are connected to the Big Bad Internet (or at least an intranet) now, so the threat model has changed completely.

2) There aren't many published cases of successful (or even unsuccessful) attacks on control networks. As a result, the risk of an attack is considered to have large potential loss but *very* low probability of occurring, plus a high cost of countermeasures => ignoring..

3) Interconnections between control networks and "normal" LANs are a kind of grey area (especially taking into account that both types of networks are run by different teams of engineers). It is very hard to get any technical/security requirements etc - usually none exist. And as the whole system is as secure as its weakest element... the result is easily predictable.

4) Any changes in a control network are to be done in a much more conservative way. All that "apply the patch.. oh, damn, it crashed.. rollback" doesn't work there. In addition (from my experience, which might not be statistically reliable), the testing/lab resources are usually much more limited for control networks.

5) As the life cycle of hw&sw is much longer than in the IT industry, it is very hard to meet the security requirements w/o significant changes to the existing control network (inc. procedures/policies) - but see #4 above..

So there is a gap - those control networks are 10 (20?) years behind the Internet in terms of security. This gap can be filled, but not immediately. The good news is that such stories as the one we are discussing could help scare the decision makers.. oops, sorry, I was going to say 'raise the level of security awareness' -- SY, Jen Linkova aka Furry
On Mon, Nov 21, 2011 at 10:32 AM, Jay Ashworth <jra@baylink.com> wrote:
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
Cheers, -- jra
I can say from experience working on one rural sewage treatment plant that IT security is not even in their consciousness. I have also seen major medical software companies that have the same admin password on all install sites and don't see a problem with it. Trying to explain the consequence of this is almost impossible. It's very very scary.
If NSA had no signals information prior to the attack, this should be a wake up call for the industry. Andrew
On Mon, 21 Nov 2011 14:24:48 PST, "andrew.wallace" said:
If NSA had no signals information prior to the attack, this should be a wake up call for the industry.
Actually, it should be a wake up call whether or not NSA had signals information. However, it's pretty obvious that the entire SCADA segment is pretty much bound and determined to keep hitting the snooze button as long as possible - they've known they have an endemic security problem for just about the same number of years the telecom segment has known they will need to deploy IPv6. ;)

And let's think about this for a moment - given that there's *no* indication that the attack was an organized effort from a known group, and it could quite possibly be just a bored 12 year old in Toledo, Ohio, why should the NSA have any signals info before the attack?

Let's think it through a bit more. Even if the NSA *did* have info beforehand that pointed at a kid in Toledo, they can't easily release that info before the fact, for several reasons: (a) they're not supposed to be surveilling US citizens, so having intel on a kid in Toledo would be embarrassing at the least, and (b) revealing they have the intel would almost certainly leak out the details of where, when, and how they got said info - and the NSA would almost certainly be willing to sacrifice somebody else's water pump rather than reveal how they got the info.

Bottom line - the fact that the NSA didn't say something beforehand means they either didn't know, or didn't wish to tell. So why are you bringing the NSA into it?
Subject: First real-world SCADA attack in US
On an Illinois water utility:
http://www.msnbc.msn.com/id/45359594/ns/technology_and_science-security
"that which does not kill us makes us stronger" --Friedrich Nietzsche
participants (24)
- -Hammer-
- andrew.wallace
- Arturo Servin
- Brett Frankenberger
- Charles Mills
- Christopher Morrow
- George Bonser
- Jason Gurtz
- Jay Ashworth
- Jay Nakamura
- Jen Linkova
- Jimmy Hess
- Joe Hamelin
- Jussi Peltola
- Leigh Porter
- Mark Foster
- Mark Radabaugh
- Matthew Kaufman
- Michael Painter
- Mike Andrews
- Ryan Pavely
- Stefan Bethke
- Steven Bellovin
- Valdis.Kletnieks@vt.edu