CDPwn: 5 new zero-day Cisco exploits
https://www.armis.com/cdpwn/

What's the impact on your network? Is everything under control?

Jean
I really thought more Cisco devices were deployed among NANOG members. I guess these devices are no longer used, or maybe I misunderstood the severity of this CVE.

Happy NANOG #78

Cheers
Jean

On 2020-02-07 09:21, Jean | ddostest.me via NANOG wrote:
CDPwn: 5 new zero-day Cisco exploits
What's the impact on your network? Is everything under control?
Jean
On Monday, 10 February, 2020 11:50, "Jean | ddostest.me via NANOG" <nanog@nanog.org> said:
I really thought more Cisco devices were deployed among NANOG members.
I guess these devices are no longer used, or maybe I misunderstood the severity of this CVE.
The phones / cameras side of it seems very much like an Enterprise problem. I'm not sure what the split here is between people operating Enterprise networks and Service Providers, but I'd expect a skew towards the latter.

There is some SP kit on the vulnerable list too, but in my experience CDP there is used to validate L2 topologies amongst SP kit only, and is disabled on customer-facing ports. So maybe a "we *do* have CDP turned off everywhere we don't need it, right?" sanity-check, but not necessarily a rush to patch.

I'd have expected greater consternation had this hit vanilla IOS/XE boxes that are likely to be in managed CPE roles, such as the ISR and ASR1K. There I can see the potential for CDP to be enabled customer-facing, either for diagnostics with the customer, or for the voice / data VLAN stuff outlined in the article.

Regards,
Tim.
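As a minimal sketch of that sanity-check, assuming device configs have already been dumped to plain-text files (the path, file naming and parsing below are illustrative, not exact), something like this Python would flag boxes where CDP may still be running:

#!/usr/bin/env python3
"""Rough sanity-check: flag IOS-style configs where CDP may still be on.

Assumes configs were dumped to text files (e.g. by RANCID/Oxidized);
the glob path and the parsing are illustrative only.
"""
import glob

for path in glob.glob("configs/*.cfg"):  # hypothetical config dump location
    with open(path) as f:
        config = f.read()

    if "no cdp run" in config:
        continue  # CDP globally disabled on this box; nothing more to check

    # CDP defaults to on for most IOS platforms, so without a global
    # 'no cdp run' we list interfaces lacking an explicit 'no cdp enable'.
    suspect, current = [], None
    for line in config.splitlines():
        stripped = line.strip()
        if line.startswith("interface "):
            if current:
                suspect.append(current)
            current = line.split(None, 1)[1]
        elif current and stripped == "no cdp enable":
            current = None  # explicitly disabled on this interface
        elif current and stripped == "!":
            suspect.append(current)  # block ended without 'no cdp enable'
            current = None
    if current:
        suspect.append(current)

    if suspect:
        print(f"{path}: CDP possibly enabled on: {', '.join(suspect)}")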
On Mon, 10 Feb 2020 at 13:52, Jean | ddostest.me via NANOG <nanog@nanog.org> wrote:
I really thought more Cisco devices were deployed among NANOG members.
I guess these devices are no longer used, or maybe I misunderstood the severity of this CVE.
Network devices are incredibly fragile, and mostly work because no one is motivated to bring the infrastructure down. Getting any arbitrary vendor's box down, if you have access to it on L2, is usually so easy that you accidentally find ways to do it. There are various L3 packets of death where existing infra can be crashed with a single packet, almost everyone has no (or ridiculously broken) iACLs and control-plane protection, yet business does not seem to suffer from it.

If anything, you'd probably see lower availability if you did upgrade your devices just because there is a known issue, due to the new production-affecting issues that come with new code.

--
++ytti
I remember a Cisco device with an ACL that was leaking. It was a 20-line ACL, with a few lines to drop some packets based on UDP ports. When under heavy stress, near line rate, we would see some of these packets going through the ACL. I told my peers that the ACL was leaking. They didn't believe me, so I showed them the NetFlow records. We were very surprised to see that: we thought that drop means drop.
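For what it's worth, that cross-check is easy to script. A minimal sketch, assuming the flows seen behind the ACL were exported to a CSV with proto / port / address columns (the file name, column names and port list below are hypothetical):

#!/usr/bin/env python3
"""Cross-check exported flow records against what an ACL should drop.

Any flow observed behind the ACL whose destination UDP port the ACL
claims to drop is a leak. The CSV export and columns are hypothetical.
"""
import csv

DROPPED_UDP_PORTS = {19, 123, 1900}  # example ports the ACL should drop

with open("flows_behind_acl.csv") as f:  # hypothetical NetFlow export
    for row in csv.DictReader(f):
        if row["proto"] == "17" and int(row["dst_port"]) in DROPPED_UDP_PORTS:
            print(f"leak: {row['src_addr']} -> {row['dst_addr']}:{row['dst_port']}")

On 2020-02-10 08:40, Saku Ytti wrote: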
On Mon, 10 Feb 2020 at 13:52, Jean | ddostest.me via NANOG <nanog@nanog.org> wrote:
I really thought more Cisco devices were deployed among NANOG members.
I guess these devices are no longer used, or maybe I misunderstood the severity of this CVE.

Network devices are incredibly fragile, and mostly work because no one is motivated to bring the infrastructure down. Getting any arbitrary vendor's box down, if you have access to it on L2, is usually so easy that you accidentally find ways to do it. There are various L3 packets of death where existing infra can be crashed with a single packet, almost everyone has no (or ridiculously broken) iACLs and control-plane protection, yet business does not seem to suffer from it.

If anything, you'd probably see lower availability if you did upgrade your devices just because there is a known issue, due to the new production-affecting issues that come with new code.
On 10/02/2020 13:40, Saku Ytti wrote:
There are various L3 packets of death where existing infra can be crashed with a single packet, almost everyone has no (or ridiculously broken) iACLs and control-plane protection, yet business does not seem to suffer from it.
The cynic in me would suggest that we haven't had a World War in a while; business is far too good.

--
Tom
Disclaimer: I do not work for any vendor right now, and I don't sell any product that might benefit from scaring anyone, so this is just some whining about a real issue that someone needs to do something about.

I worked for the CDP vendor for a long time, and I concur with what Saku is saying: the L3 packet-of-death threat is very real, and I'd like to take this opportunity (the buzz around CDPwn) to say a thing or two about these 'soft, mushy and vulnerable' code stacks we have running all over the world, under our feet, waiting for someone with the right incentive to take advantage of them.

IMHO, skilled software developers, and in parallel software exploiters / reverse engineers, haven't been paying attention to these 'infrastructure' boxes (for now). Maybe it is because they always had other pieces of the technology stack to work with, and those other stacks were much more rewarding to spend time on; I'm quite sure Node.js or Kubernetes, for example, will have far more vulnerability researchers looking at them than CDP/LLDP/SNMP code.

That is a serious sustainability issue on our hands, and the risk is very high when it comes to the infrastructure security of nations, especially in a world where miscreants are no longer script kiddies but actual nation-sponsored soldiers; even MBS is doing it in person. The moment some miscreants from some oppressive regime decide to do damage, not necessarily remote code execution as many might think, but more the 'L3 packet of death' kind of situation Saku mentioned earlier, they will have a lot to play with: the attack surface is huge, it is green, and it is ripe for the picking.

In my lifetime I've looked at so many DDTS descriptions, and I saw nothing but an unwritten disclaimer of "I can easily be used for DDoS", and that is the case even where *SIRT did their brief analysis of these bugs. So again, if some miscreants found it in themselves to look at bugs with the right 'optics', we are going to be in an interesting situation.

Luckily, we haven't seen a CDPwn/STPwn/BGPwn/NTPwn/*.*Pwn worm or ransomware yet, but we also have no reason to think it's not possible. To make matters worse, the code these babies are running is ancient (in every possible way), many of the libraries used to develop it are glibc-ish in nature, patching those babies is not easy, and their software architecture makes them even more fragile than any piece of cheap IP camera out on the internet or on enterprise networks.

So yeah, iACLs, CoPP and all sorts of basic precautions are needed, but I'm thinking something more needs to be done, especially if these ancient code stacks are being imported into new-age 'IoT' devices, multiplying the attack surface by a factor of too many.

~A

On Mon, Feb 10, 2020 at 5:42 AM Saku Ytti <saku@ytti.fi> wrote:
On Mon, 10 Feb 2020 at 13:52, Jean | ddostest.me via NANOG <nanog@nanog.org> wrote:
I really thought more Cisco devices were deployed among NANOG members.
I guess these devices are no longer used, or maybe I misunderstood the severity of this CVE.

Network devices are incredibly fragile, and mostly work because no one is motivated to bring the infrastructure down. Getting any arbitrary vendor's box down, if you have access to it on L2, is usually so easy that you accidentally find ways to do it. There are various L3 packets of death where existing infra can be crashed with a single packet, almost everyone has no (or ridiculously broken) iACLs and control-plane protection, yet business does not seem to suffer from it.

If anything, you'd probably see lower availability if you did upgrade your devices just because there is a known issue, due to the new production-affecting issues that come with new code.
--
++ytti
On Tue, 11 Feb 2020 at 09:09, Ahmed Borno <amaged@gmail.com> wrote:
So yeah, iACLs, CoPP and all sorts of basic precautions are needed, but I'm thinking something more needs to be done, especially if these ancient code stacks are being imported into new-age 'IoT' devices, multiplying the attack surface by a factor of too many.
I can't see the situation getting better. Why should a vendor invest in high-quality code? The cultural shift will certainly cost something, it's not zero cost, and what is the upside? If IOS and JunOS realistically were significantly less buggy, many of us would stop buying support, because we either know how to configure these boxes ourselves or can get help faster, for free, from the community. We largely need the support because the software quality is so bad that _everyone_ finds new bugs all the time, and we don't have the source code to fix them as a community. So I suspect significantly better-quality software would, at least initially, cost more to produce, and it would reduce support revenue.

I also think the way we develop needs to be fundamentally rethought. We need to stop believing "I am the person who can write working C; it's the other people who are incompetent." At some point we should ask: are the tools we are using the right tools? Can we move complexity from humans to computers at compile time, to create more guarantees of correctness? MSFT claims 70% of their bugs are memory-safety issues, which could be solved almost perfectly programmatically by making the compiler and language smarter, not by making the person more resistant to mistakes. I think ANET, at least in part, essentially writes their own DSL which compiles to C++. A solution like this for any large, long-lived project probably pays dividends in quality quickly, because you can address a lot of the systematic errors at compilation time and in the DSL design.

--
++ytti
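To make the compile-time idea concrete, here is a toy sketch in Python (not any vendor's actual DSL; all names and the rendered syntax are invented for illustration) of a config "DSL" that refuses to construct invalid state, so whole classes of error surface when the artifact is generated rather than in production:

"""Toy sketch: push correctness checks to build ('compile') time with a
tiny invented config DSL that cannot represent invalid state. This is an
illustration of the principle, not anyone's real DSL."""
from dataclasses import dataclass
import ipaddress

@dataclass(frozen=True)
class Interface:
    name: str
    address: str      # e.g. "192.0.2.1/31"
    ingress_acl: str  # mandatory field: "no iACL" is unrepresentable

    def __post_init__(self):
        # Reject bad state when the object is built, not when it hits a router.
        ipaddress.ip_interface(self.address)  # raises ValueError on garbage
        if not self.ingress_acl.strip():
            raise ValueError(f"{self.name}: empty ingress ACL")

def render(intf: Interface) -> str:
    # Only ever reachable with a validated object.
    return (f"interface {intf.name}\n"
            f" ip address {intf.address}\n"
            f" ip access-group {intf.ingress_acl} in\n")

print(render(Interface("ge-0/0/0", "192.0.2.1/31", "EDGE-IN")))
# Interface("ge-0/0/1", "not-an-ip", "EDGE-IN")  # ValueError before anything ships

A DSL compiling to C++ buys the same property at larger scale: the systematic mistakes get rejected by the generator instead of shipping as bugs.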
On 2/11/2020 2:04 AM, Saku Ytti wrote:
On Tue, 11 Feb 2020 at 09:09, Ahmed Borno <amaged@gmail.com> wrote:
So yeah, iACLs, CoPP and all sorts of basic precautions are needed, but I'm thinking something more needs to be done, especially if these ancient code stacks are being imported into new-age 'IoT' devices, multiplying the attack surface by a factor of too many.

I can't see the situation getting better. Why should a vendor invest in high-quality code? The cultural shift will certainly cost something, it's not zero cost, and what is the upside? If IOS and JunOS realistically were significantly less buggy, many of us would stop buying support, because we either know how to configure these boxes ourselves or can get help faster, for free, from the community. We largely need the support because the software quality is so bad that _everyone_ finds new bugs all the time, and we don't have the source code to fix them as a community. So I suspect significantly better-quality software would, at least initially, cost more to produce, and it would reduce support revenue.
Yeah, things need to get better, and soon. At least, some things...

Was I too subtle just now?

--
Harlan Stenn <stenn@nwtime.org>
http://networktimefoundation.org - be a member!
I remember my conversation with an executive one day, where I was enlightened on corporate greed. I asked why there was no investment in quality code, and I was schooled. The exec said: one dollar spent on fixing bugs returns zero dollars, but one dollar spent on new features brings in three dollars ;)

These vendors have tried different DSLs throughout the years, and it is way too difficult for them to execute on that shift; too many things are at stake, I mean their business, not the end customer... of course.

They are not even being 'creative'. They could easily pull some tricks like the one you mentioned (compiler-built sanity) in their system configuration logic, like "you can't turn up an interface without an iACL applied"... but then why would they :)

Sorry for the sad tone. I just wish network operators would find a way to challenge these vendors and call out their less-than-optimal quality.

~A
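That "no interface up without an iACL" rule doesn't even need vendor cooperation; operators can lint for it before deploying. A minimal sketch (IOS-style parsing; the file handling and matching are illustrative only):

#!/usr/bin/env python3
"""Sketch of 'compiler-built sanity' as an operator-side lint: reject any
config where a non-shutdown interface has no ingress ACL applied.
IOS-style parsing; details are illustrative only."""
import re
import sys

def lint(config_text):
    errors = []
    name, shutdown, has_iacl = None, False, False

    def flush():
        # Called at each block boundary: record the previous interface
        # if it was left enabled without an ingress ACL.
        if name and not shutdown and not has_iacl:
            errors.append(f"{name}: enabled but no ingress ACL applied")

    for line in config_text.splitlines():
        stripped = line.strip()
        if line.startswith("interface "):
            flush()
            name, shutdown, has_iacl = line.split(None, 1)[1], False, False
        elif name and stripped == "shutdown":
            shutdown = True
        elif name and re.match(r"ip access-group \S+ in$", stripped):
            has_iacl = True
    flush()
    return errors

if __name__ == "__main__":
    problems = lint(open(sys.argv[1]).read())
    for p in problems:
        print("REJECT:", p)
    sys.exit(1 if problems else 0)

On Tue, Feb 11, 2020 at 2:05 AM Saku Ytti <saku@ytti.fi> wrote: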
On Tue, 11 Feb 2020 at 09:09, Ahmed Borno <amaged@gmail.com> wrote:
So yeah, iACLs, CoPP and all sorts of basic precautions are needed, but I'm thinking something more needs to be done, especially if these ancient code stacks are being imported into new-age 'IoT' devices, multiplying the attack surface by a factor of too many.

I can't see the situation getting better. Why should a vendor invest in high-quality code? The cultural shift will certainly cost something, it's not zero cost, and what is the upside? If IOS and JunOS realistically were significantly less buggy, many of us would stop buying support, because we either know how to configure these boxes ourselves or can get help faster, for free, from the community. We largely need the support because the software quality is so bad that _everyone_ finds new bugs all the time, and we don't have the source code to fix them as a community. So I suspect significantly better-quality software would, at least initially, cost more to produce, and it would reduce support revenue.

I also think the way we develop needs to be fundamentally rethought. We need to stop believing "I am the person who can write working C; it's the other people who are incompetent." At some point we should ask: are the tools we are using the right tools? Can we move complexity from humans to computers at compile time, to create more guarantees of correctness? MSFT claims 70% of their bugs are memory-safety issues, which could be solved almost perfectly programmatically by making the compiler and language smarter, not by making the person more resistant to mistakes. I think ANET, at least in part, essentially writes their own DSL which compiles to C++. A solution like this for any large, long-lived project probably pays dividends in quality quickly, because you can address a lot of the systematic errors at compilation time and in the DSL design.
--
++ytti
On Tue, 11 Feb 2020 at 16:09, Ahmed Borno <amaged@gmail.com> wrote:
Sorry for the sad tone. I just wish network operators would find a way to challenge these vendors and call out their less-than-optimal quality.
It's hard; TINA. We can talk about white label, but at the end of the day that box is just as proprietary as the rest of them, because you can't buy BRCM and make it open. It's like Linux in the 90s: GPUs and NICs were not supported, because vendors thought the specs were their secret sauce. When some vendor finally releases full specs on GitHub, including a P4 compiler target for their chip, and will sell the chip on their web site one unit at a time for x USD, we may start to see some real progress: we can start building open-source NOSes with data planes.

Maybe INTC could start the revolution with Tofino: ship PCI cards with Tofino and a few 100GE ports (with local switching support) and open it up entirely. Maybe JNPR could ship Trio PCI cards; why not, it's not like they have a lot to lose, considering their terrible market performance.

--
++ytti
Being realistic, as you mentioned, these vendors do not have the right incentive.

That's one thing operators can do, and maybe it should be a recurring theme at NANOG: calling on vendors to put some sanity and logic into how iACLs and CoPP are handled. They could do a lot if they cared to spend some $ on being creative; maybe a BoF for this specific topic.

Creativity in the form of ways to avoid the fragile stacks and the L3 packet of death: they could even separate the management plane from the control plane if they were serious about it, they could enforce iACLs on management interfaces, and they could have logic to validate packets before they are processed. To be fair, this needs to happen in the existing install base too, not only on new gear. I am trying to say that if they can't hire skilled programmers, then they should show innovation around the most vulnerable part of their code: the trusting nature of protocols.

P.S.: How many junior network engineers care to turn on authentication on L2 segments?

~A

On Tue, Feb 11, 2020 at 6:24 AM Saku Ytti <saku@ytti.fi> wrote:
On Tue, 11 Feb 2020 at 16:09, Ahmed Borno <amaged@gmail.com> wrote:
Sorry for the sad tone. I just wish network operators would find a way to challenge these vendors and call out their less-than-optimal quality.

It's hard; TINA. We can talk about white label, but at the end of the day that box is just as proprietary as the rest of them, because you can't buy BRCM and make it open. It's like Linux in the 90s: GPUs and NICs were not supported, because vendors thought the specs were their secret sauce. When some vendor finally releases full specs on GitHub, including a P4 compiler target for their chip, and will sell the chip on their web site one unit at a time for x USD, we may start to see some real progress: we can start building open-source NOSes with data planes.

Maybe INTC could start the revolution with Tofino: ship PCI cards with Tofino and a few 100GE ports (with local switching support) and open it up entirely. Maybe JNPR could ship Trio PCI cards; why not, it's not like they have a lot to lose, considering their terrible market performance.
--
++ytti
Large operators have very little to gain from calling out the equipment suppliers. In my personal experience, large operators are already getting custom code builds based on their exact requirements, which include disabling many of the "standard" features they don't use.

Sent from my iPhone
On Feb 11, 2020, at 9:48 AM, Ahmed Borno <amaged@gmail.com> wrote:
Being realistic, as you mentioned, these vendors do not have the right incentive.
That's one thing operators can do, and maybe it should be a recurring theme at NANOG: calling on vendors to put some sanity and logic into how iACLs and CoPP are handled. They could do a lot if they cared to spend some $ on being creative; maybe a BoF for this specific topic.

Creativity in the form of ways to avoid the fragile stacks and the L3 packet of death: they could even separate the management plane from the control plane if they were serious about it, they could enforce iACLs on management interfaces, and they could have logic to validate packets before they are processed. To be fair, this needs to happen in the existing install base too, not only on new gear. I am trying to say that if they can't hire skilled programmers, then they should show innovation around the most vulnerable part of their code: the trusting nature of protocols.

P.S.: How many junior network engineers care to turn on authentication on L2 segments?
~A
On Tue, Feb 11, 2020 at 6:24 AM Saku Ytti <saku@ytti.fi> wrote:
On Tue, 11 Feb 2020 at 16:09, Ahmed Borno <amaged@gmail.com> wrote:
Sorry for the sad tone. I just wish network operators would find a way to challenge these vendors and call out their less-than-optimal quality.

It's hard; TINA. We can talk about white label, but at the end of the day that box is just as proprietary as the rest of them, because you can't buy BRCM and make it open. It's like Linux in the 90s: GPUs and NICs were not supported, because vendors thought the specs were their secret sauce. When some vendor finally releases full specs on GitHub, including a P4 compiler target for their chip, and will sell the chip on their web site one unit at a time for x USD, we may start to see some real progress: we can start building open-source NOSes with data planes.

Maybe INTC could start the revolution with Tofino: ship PCI cards with Tofino and a few 100GE ports (with local switching support) and open it up entirely. Maybe JNPR could ship Trio PCI cards; why not, it's not like they have a lot to lose, considering their terrible market performance.
--
++ytti
participants (7)

- Ahmed Borno
- Harlan Stenn
- Jean | ddostest.me
- Saku Ytti
- sronan@ronan-online.com
- tim@pelican.org
- Tom Hill