Can somebody explain these ransomwear attacks?
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me. Mike
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups. So Executives roll the dice to see if service can be restored as quickly as possible, keeping shareholders and customers as happy as possible. On Thu, Jun 24, 2021 at 2:44 PM Michael Thomas <mike@mtcc.com> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.
Mike
On 6/24/21 2:55 PM, JoeSox wrote:
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups. So Executives roll the dice to see if service can be restored as quickly as possible, keeping shareholders and customers as happy as possible.
But if you pay without finding how they got in, they could turn around and do it again, or sell it on the dark web, right? Mike
On Thu, Jun 24, 2021 at 2:44 PM Michael Thomas <mike@mtcc.com <mailto:mike@mtcc.com>> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.
Mike
A lot of the payments for Ransomware come from Insurance Companies under "Business Interruption Insurance". It in fact may be more cost effective to pay the ransom than to pay for continued business interruption.

Of course along with paying the ransom, a full forensic audit of the systems/network is conducted. The vector for many of these attacks is via a worm triggered by someone opening an attachment on an email or downloading compromised software from the Internet. Short of not allowing email attachments or blocking Internet access, the best method is to properly train users to not click on attachments or visit "untrusted" sites, but nothing is perfect.

Shane

On Thu, Jun 24, 2021 at 6:01 PM Michael Thomas <mike@mtcc.com> wrote:
On 6/24/21 2:55 PM, JoeSox wrote:
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups. So Executives roll the dice to see if service can be restored as quickly as possible, keeping shareholders and customers as happy as possible.
But if you pay without finding how they got in, they could turn around and do it again, or sell it on the dark web, right?
Mike
On Thu, Jun 24, 2021 at 2:44 PM Michael Thomas <mike@mtcc.com> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.
Mike
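[Editor's note: Shane's point about attachment vectors is where much of the practical mitigation lives. As a purely illustrative sketch (the extension list and the quarantine helper are assumptions, not anything Shane describes), a mail-gateway-style attachment check might look like this in Python:]

import os
from email import message_from_bytes

# Example blocklist only; real gateways use much richer policy than file extensions.
BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".xlsm", ".iso", ".lnk"}

def risky_attachments(raw_message: bytes) -> list:
    """Return filenames of attachments whose extension is on the (example) blocklist."""
    msg = message_from_bytes(raw_message)
    flagged = []
    for part in msg.walk():
        filename = part.get_filename()
        if not filename:
            continue
        ext = os.path.splitext(filename.lower())[1]
        if ext in BLOCKED_EXTENSIONS:
            flagged.append(filename)
    return flagged

# Usage sketch: quarantine instead of delivering when anything is flagged.
# if risky_attachments(raw_bytes):
#     quarantine(raw_bytes)   # hypothetical helper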
I think a big problem may be that the ransom is actually very cost effective and probably the lowest line item cost in many of these situations where large revenue streams are interrupted and time=money (and maybe also health or life). The original thought that it should be handled like standard DR and tighten up security may apply to very small businesses though where they could afford to try to ignore the ransom request and rebuild more securely hoping the criminals will move on and not come back for revenge.
On Jun 24, 2021, at 3:08 PM, Shane Ronan <shane@ronan-online.com> wrote:
A lot of the payments for Ransomware come from Insurance Companies under "Business Interruption Insurance". It in fact may be more cost effective to pay the ransom than to pay for continued business interruption.
Of course along with paying the ransom, a full forensic audit of the systems/network is conducted. The vector for many of these attacks is via a worm triggered by someone opening an attachment on an email or downloading compromised software from the Internet. Short of not allowing email attachments or blocking Internet access, the best method is to properly train users to not click on attachments or visit "untrusted" sites, but nothing is perfect.
Shane
On Thu, Jun 24, 2021 at 6:01 PM Michael Thomas <mike@mtcc.com <mailto:mike@mtcc.com>> wrote:
On 6/24/21 2:55 PM, JoeSox wrote:
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups. So Executives roll the dice to see if service can be restored as quickly as possible, keeping shareholders and customers as happy as possible.
But if you pay without finding how they got in, they could turn around and do it again, or sell it on the dark web, right?
Mike
On Thu, Jun 24, 2021 at 2:44 PM Michael Thomas <mike@mtcc.com <mailto:mike@mtcc.com>> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.
Mike
On Thu, Jun 24, 2021 at 5:41 PM Brandon Svec via NANOG <nanog@nanog.org> wrote:
I think a big problem may be that the ransom is actually very cost effective and probably the lowest line item cost in many of these situations where large revenue streams are interrupted and time=money (and maybe also health or life).
A big problem is that with organizations' existing Disaster Recovery (DR) methods, the time and cost to recover from any event, including downtime, will be some amount -- likely a high one -- and criminals' ransom demands will presumably be set at as high a price as they think they can get, but still orders of magnitude less than the cost to recover / repair / restore, and the downtime may be less.

The ransom price becomes the perceived cost of paying from the perspective of the organization faced with the decision, but the actual cost to the whole world of them paying a ransom is much higher and will be borne by others (and/or themselves, if they are unlucky) in the future, when their having paid the criminals encourages and causes more and more of that nefarious activity.

I would call that a regulatory issue regarding commerce and payments, not one that can be addressed by technology.

No matter how much companies improve their DR process to cost less and take less time, a recovery is bound to still involve some downtime and cost a large enough amount that motivated criminals will be able to price a ransom below it.

I do wonder for a moment about companies paying ransoms: do they somehow manage to get the crooks' W-9 and verify their identity, as required when an organization makes a payment to any third party -- or do those paying ransoms somehow circumvent the mandatory tax reporting and withholdings? Because it seems like making a payment to an unnamed / unidentified / unverifiable party ought to be a crime, or make the payor be considered an accomplice in the crooks' evasion of the taxing authority.

I always think: have the governments impose penalties, e.g. "If you make a payment for a ransom, then a penalty of $10k plus 10000% of the ransom will be due." Or make it a more severely penalized crime to send any digital payment for a transaction above some threshold X, say $1000, without proof of identity and the physical location of all payees -- and make sure it gets enforced strictly against anyone paying a ransom. Make ransoms not payable without larger repercussions, and perhaps the crooks will have to find a new profession.
The original thought that it should be handled like standard DR and tighten up security may apply to very small businesses though where they could afford to try to ignore the ransom request and rebuild more securely hoping the criminals will move on and not come back for revenge.
-- -Jim
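[Editor's note: for scale, 10000% of the ransom is 100 times the ransom, so Jim's hypothetical penalty dwarfs the payment itself. A tiny worked example in Python with a made-up ransom figure:]

def proposed_penalty(ransom_usd):
    # Jim's hypothetical rule: $10k flat, plus 10000% (i.e. 100x) of the ransom paid.
    return 10_000 + 100 * ransom_usd

ransom = 250_000  # illustrative figure, not from the thread
print(f"ransom paid:  ${ransom:,}")
print(f"penalty due:  ${proposed_penalty(ransom):,}")  # 10,000 + 25,000,000 = 25,010,000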
Hi Jim,

Very nice text from you and you seem to offer good hints on how to stop it long term. The reality is that the USA is going in the direct opposite direction to what you describe. The payment to ransomware gangs is now tax-deductible.

"Extorted by ransomware gangs? The payments may be tax-deductible". Published June 21st.
https://www.cbsnews.com/news/ransomware-payments-may-be-tax-deductible/

Again from cbsnews. Not sure if we can rely on them to report accurate news?

Jean

-----Original Message-----
From: NANOG <nanog-bounces+jean=ddostest.me@nanog.org> On Behalf Of Jim
Sent: June 25, 2021 8:26 AM
To: Brandon Svec <bsvec@teamonesolutions.com>
Cc: nanog@nanog.org
Subject: Re: Can somebody explain these ransomwear attacks?

On Thu, Jun 24, 2021 at 5:41 PM Brandon Svec via NANOG <nanog@nanog.org> wrote:
I think a big problem may be that the ransom is actually very cost effective and probably the lowest line item cost in many of these situations where large revenue streams are interrupted and time=money (and maybe also health or life).
A big problem is that with organizations' existing Disaster Recovery (DR) methods, the time and cost to recover from any event, including downtime, will be some amount -- likely a high one -- and criminals' ransom demands will presumably be set at as high a price as they think they can get, but still orders of magnitude less than the cost to recover / repair / restore, and the downtime may be less.

The ransom price becomes the perceived cost of paying from the perspective of the organization faced with the decision, but the actual cost to the whole world of them paying a ransom is much higher and will be borne by others (and/or themselves, if they are unlucky) in the future, when their having paid the criminals encourages and causes more and more of that nefarious activity.

I would call that a regulatory issue regarding commerce and payments, not one that can be addressed by technology.

No matter how much companies improve their DR process to cost less and take less time, a recovery is bound to still involve some downtime and cost a large enough amount that motivated criminals will be able to price a ransom below it.

I do wonder for a moment about companies paying ransoms: do they somehow manage to get the crooks' W-9 and verify their identity, as required when an organization makes a payment to any third party -- or do those paying ransoms somehow circumvent the mandatory tax reporting and withholdings? Because it seems like making a payment to an unnamed / unidentified / unverifiable party ought to be a crime, or make the payor be considered an accomplice in the crooks' evasion of the taxing authority.

I always think: have the governments impose penalties, e.g. "If you make a payment for a ransom, then a penalty of $10k plus 10000% of the ransom will be due." Or make it a more severely penalized crime to send any digital payment for a transaction above some threshold X, say $1000, without proof of identity and the physical location of all payees -- and make sure it gets enforced strictly against anyone paying a ransom. Make ransoms not payable without larger repercussions, and perhaps the crooks will have to find a new profession.
The original thought that it should be handled like standard DR and tighten up security may apply to very small businesses though where they could afford to try to ignore the ransom request and rebuild more securely hoping the criminals will move on and not come back for revenge.
-- -Jim
The payment to ransomware gangs is now tax-deductible.
It's not new. In the US, losses due to theft have been at least partly deductible for a very long time. By IRS definitions (https://www.irs.gov/publications/p547), blackmail and extortion both qualify as theft, and it's fairly safe to say those apply to all ransomware attacks.

Everything can be broken, and nothing will ever be 100% secure. If you strive to make sure the cost to break in is massively larger than the value of what could be extracted, you'll generally be ahead of the game.

On Fri, Jun 25, 2021 at 8:39 AM Jean St-Laurent via NANOG <nanog@nanog.org> wrote:
Hi Jim,
Very nice text from you and you seem to offer good hints on how to stop it long term.
The reality is that the USA is going in the direct opposite direction to what you describe.
The payment to ransomware gangs is now tax-deductible.
"Extorted by ransomware gangs? The payments may be tax-deductible". Published June 21st. https://www.cbsnews.com/news/ransomware-payments-may-be-tax-deductible/
Again from cbsnews. Not sure if we can rely on them to report accurate news?
Jean
-----Original Message----- From: NANOG <nanog-bounces+jean=ddostest.me@nanog.org> On Behalf Of Jim Sent: June 25, 2021 8:26 AM To: Brandon Svec <bsvec@teamonesolutions.com> Cc: nanog@nanog.org Subject: Re: Can somebody explain these ransomwear attacks?
On Thu, Jun 24, 2021 at 5:41 PM Brandon Svec via NANOG <nanog@nanog.org> wrote:
I think a big problem may be that the ransom is actually very cost
effective and probably the lowest line item cost in many of these situations where large revenue streams are interrupted and time=money (and maybe also health or life).
A big problem is that with organizations' existing Disaster Recovery (DR) methods, the time and cost to recover from any event, including downtime, will be some amount -- likely a high one -- and criminals' ransom demands will presumably be set at as high a price as they think they can get, but still orders of magnitude less than the cost to recover / repair / restore, and the downtime may be less.

The ransom price becomes the perceived cost of paying from the perspective of the organization faced with the decision, but the actual cost to the whole world of them paying a ransom is much higher and will be borne by others (and/or themselves, if they are unlucky) in the future, when their having paid the criminals encourages and causes more and more of that nefarious activity.

I would call that a regulatory issue regarding commerce and payments, not one that can be addressed by technology.

No matter how much companies improve their DR process to cost less and take less time, a recovery is bound to still involve some downtime and cost a large enough amount that motivated criminals will be able to price a ransom below it.

I do wonder for a moment about companies paying ransoms: do they somehow manage to get the crooks' W-9 and verify their identity, as required when an organization makes a payment to any third party -- or do those paying ransoms somehow circumvent the mandatory tax reporting and withholdings? Because it seems like making a payment to an unnamed / unidentified / unverifiable party ought to be a crime, or make the payor be considered an accomplice in the crooks' evasion of the taxing authority.

I always think: have the governments impose penalties, e.g. "If you make a payment for a ransom, then a penalty of $10k plus 10000% of the ransom will be due." Or make it a more severely penalized crime to send any digital payment for a transaction above some threshold X, say $1000, without proof of identity and the physical location of all payees -- and make sure it gets enforced strictly against anyone paying a ransom. Make ransoms not payable without larger repercussions, and perhaps the crooks will have to find a new profession.
The original thought that it should be handled like standard DR and
tighten up security may apply to very small businesses though where they could afford to try to ignore the ransom request and rebuild more securely hoping the criminals will move on and not come back for revenge.
-- -Jim
I agree with you that 100% secure is not achievable. The goal is to make your business so difficult to hack that it is no longer economically viable for terrorists to attack it in the first place. That's the best insurance you can give to your business.

Jean
The goal is to make your business so difficult to hack that it is no longer economically viable for terrorists to attack it in the first place.
That’s the best insurance you can give to your business.
And yet, so often their system is vulnerable owing to ineptness, cluelessness, or laziness. For example, when the City of Baltimore's system got locked up, the attacker exploited a vulnerability for which MS had issued a patch *2 years earlier* (if memory serves).

Anne

--
Anne P. Mitchell, Attorney at Law
CEO Get to the Inbox by SuretyMail, GetToTheInbox.com
Dean of Cyberlaw and Cyber Security, Lincoln Law School
Email Marketing Deliverability and Best Practices Expert
Board of Directors, Denver Internet Exchange
Former Counsel: MAPS Anti-Spam Blacklist
Chair Emeritus, Asilomar Microcomputer Workshop
On Fri, 2021-06-25 at 10:05 -0400, Tom Beecher wrote:
Everything can be broken, and nothing will ever be 100% secure. If you strive to make sure the cost to break in is massively larger than the value of what could be extracted, you'll generally be ahead of the game.
Easy to say.

IMHO the only workable long-term defence is heterogeneity - supported by distribution, redundancy and just taking the simple things seriously.

Business has spent the last few decades discarding heterogeneity and the bigger they are, the more comprehensively they have discarded it. Companies that are floor to ceiling and wall to wall Windows. Centralised updates, centralised networking, centralised storage, centralised ops teams, and (typically) a culture of sharing. A relentless prioritising of convenience over security. For goodness sake, even the NSA had the attitude that "if you are this side of the drawbridge you must be OK"!

We need to start building systems that are not seamless, that are not highly interchangeable, that are not fully interconnected, and we have to include our human systems in that approach.

Regards, K.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Karl Auer (kauer@biplane.com.au)
http://www.biplane.com.au/kauer
On 6/25/21 8:39 AM, Karl Auer wrote:
On Fri, 2021-06-25 at 10:05 -0400, Tom Beecher wrote:
Everything can be broken, and nothing will ever be 100% secure. If you strive to make sure the cost to break in is massively larger than the value of what could be extracted, you'll generally be ahead of the game. Easy to say.
IMHO the only workable long-term defence is heterogeneity - supported by distribution, redundancy and just taking the simple things seriously.
Business has spent the last few decades discarding heterogeneity and the bigger they are, the more comprehensively they have discarded it. Companies that are floor to ceiling and wall to wall Windows. Centralised updates, centralised networking, centralised storage, centralised ops teams, and (typically) a culture of sharing. A relentless prioritising of convenience over security. For goodness sake, even the NSA had the attitude that "if you are this side of the drawbridge you must be OK"!
We need to start building systems that are not seamless, that are not highly interchangeable, that are not fully interconnected, and we have to include our human systems in that approach.
How does one go about that in real life? You certainly want your servers patched with the latest security updates. For all intents and purposes there is just Windows and Linux. I suppose you could throw in some hardware diversity with ARM or MIPS. Routers are definitely in better shape on that front as there are lots of choices and at least Cisco has tons of different BU's that compete with each other with different software and hardware. Mike
On Fri, 2021-06-25 at 15:18 -0700, Michael Thomas wrote:
On 6/25/21 8:39 AM, Karl Auer wrote:
We need to start building systems that are not seamless, that are not highly interchangeable, that are not fully interconnected, and we have to include our human systems in that approach. How does one go about that in real life?
I don't know. I'm trying to figure it out too. I just know that the less diverse an ecosystem, the more vulnerable it is to destruction. Heterogeneity (and change, by the way, i.e. being a moving target) mitigates the risks of a monoculture. Homogeneous, centrally managed, massively networked systems bring many benefits, but we are now seeing the sorts of weaknesses they bring, too.

Regards, K.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Karl Auer (kauer@biplane.com.au)
http://www.biplane.com.au/kauer
On Fri, Jun 25, 2021 at 5:28 AM Jim <mysidia@gmail.com> wrote:
A big problem is that with organizations' existing Disaster Recovery (DR) methods, the time and cost to recover from any event, including downtime, will be some amount -- likely a high one -- and criminals' ransom demands will presumably be set at as high a price as they think they can get, but still orders of magnitude less than the cost to recover / repair / restore, and the downtime may be less.
I think you're right. DR methods are a *huge* part of the problem. I manage DR systems for a number of companies including a large unnamed healthcare provider. A year ago they were still running Exchange 2007. No, that's not a typo. Cryptolocker strolled right into the network via file attachment and somehow made it past the non-existent 3rd-party AV software that totally wasn't integrated into Exchange because it cost too much. It spread across the network and started encrypting around 1 AM on a Friday morning. Due to the way this particular strain worked, it missed several of the monitoring tools that would have alerted my company to the massive file encryption that was happening and it managed to completely encrypt 21 offices and all their patient data. At 6 AM my monitoring system alerted me to a problem. By about 6:30 I realized the scope of the problem, disabled all the site-to-site VPNs, dropped the 1 or 2 infected workstations off the network and the encryption stopped. We do local snapshots every 15 minutes, local backups twice daily, local disconnected backups several times per week, and off-site write-only backups multiple times per day. After I figured out when cryptolocker launched, I ran a few commands from our config management server and had every office restored and running in about 28 minutes and the internal techs for the company were dispatched to swap out the infected workstations.

The first rule I follow is: Windows *never* touches bare metal. I amended that last year to: Windows *never* touches bare metal, including workstations.

People *really* need to work on their backups and DR plans. You don't need some expensive 3rd-party cloud solution coupled with expensive VMWare licenses to do it.

The other part of the problem is the insurance companies. It might surprise you to learn that particular company has been cryptolocker'd 8 times in the last 15 years. They've never lost more than a few minutes of data and recovery times are measured in minutes. This line has literally been thrown around a few times: "We don't need to spend $xxx,xxx to upgrade to current software versions. We have a $5,000,000 cyber insurance policy."

The insurance company issued the policy after *port scanning* their public IPs and finding no ports open. Our only 'ding' we got was that the routers responded to pings and the insurance company thought they shouldn't. Insurance failed to do any sort of competent audit (i.e. NIST 800-171). If they did, they would have found the techs "solve" problems by making people local admins or domain admins and that their primary line-of-business app actually requires 'local admin' to run 'properly'.

While they finally replaced Exchange 2007 in 2020 by switching to GMail (not for security, but because it made work-from-home easier), they still run about 1/3 of their systems on Windows 7 with a few Windows 8 and 8.1 machines here and there. They even still have 2 Windows XP machines. Their upgrade policy is currently "If the machine dies, you can replace it with something newer". Their oldest machine is around 15 years old.

Incompetent insurance companies combined with incompetent IT staff and under-funded IT departments are the nexus of the problem.

-A
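[Editor's note: Aaron doesn't say what his "few commands from our config management server" actually were, but the core move -- roll each dataset back to the last snapshot taken before the encryption started -- is easy to sketch. A minimal illustration in Python assuming ZFS-style snapshots; the dataset names and timestamp are hypothetical placeholders, not Aaron's setup:]

import subprocess
from datetime import datetime, timezone

# Hypothetical per-office datasets; a real list would come from config management.
DATASETS = ["tank/office01/files", "tank/office02/files"]

# When the mass encryption is believed to have started (from log analysis) -- made-up value.
INFECTION_START = datetime(2020, 6, 19, 1, 0, tzinfo=timezone.utc)

def snapshots(dataset):
    # Return (name, creation_time) pairs for the dataset's snapshots, oldest first.
    out = subprocess.run(
        ["zfs", "list", "-H", "-p", "-t", "snapshot", "-o", "name,creation", "-r", dataset],
        capture_output=True, text=True, check=True).stdout
    pairs = []
    for line in out.splitlines():
        name, created = line.split("\t")
        pairs.append((name, datetime.fromtimestamp(int(created), tz=timezone.utc)))
    return sorted(pairs, key=lambda p: p[1])

for ds in DATASETS:
    clean = [s for s in snapshots(ds) if s[1] < INFECTION_START]
    if not clean:
        print(f"{ds}: no snapshot predates the infection -- fall back to offline backups")
        continue
    name, when = clean[-1]
    print(f"{ds}: rolling back to {name} ({when:%Y-%m-%d %H:%M} UTC)")
    # Destructive: discards everything newer than the chosen snapshot.
    subprocess.run(["zfs", "rollback", "-r", name], check=True)

[The same idea applies to any snapshot-capable storage; the point is that the "last known clean point" is chosen from the infection timeline, not guessed.]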
Incompetent insurance companies combined with incompetent IT staff and under-funded IT departments are the nexus of the problem.
Nah, it's even simpler. It's just dollars all around. Always is.
From this company's point of view, the cost to RECOVER from the problems is so much smaller than it would be to prevent the problems from happening to begin with, so they are happy to let you guys handle it. From the insurance company's point of view, they are collecting premiums, but no claims are being filed, so they have no incentive to do anything differently.
Sometimes those of us who know stuff and can fix things are just too darn good at it for anyone's good. :) On Fri, Jun 25, 2021 at 11:03 AM Aaron C. de Bruyn via NANOG < nanog@nanog.org> wrote:
On Fri, Jun 25, 2021 at 5:28 AM Jim <mysidia@gmail.com> wrote:
A big problem is that with organizations' existing Disaster Recovery (DR) methods, the time and cost to recover from any event, including downtime, will be some amount -- likely a high one -- and criminals' ransom demands will presumably be set at as high a price as they think they can get, but still orders of magnitude less than the cost to recover / repair / restore, and the downtime may be less.
I think you're right. DR methods are a *huge* part of the problem. I manage DR systems for a number of companies including a large unnamed healthcare provider. A year ago they were still running Exchange 2007. No, that's not a typo. Cryptolocker strolled right into the network via file attachment and somehow made it past the non-existent 3rd-party AV software that totally wasn't integrated into Exchange because it cost too much. It spread across the network and started encrypting around 1 AM on a Friday morning. Due to the way this particular strain worked, it missed several of the monitoring tools that would have alerted my company to the massive file encryption that was happening and it managed to completely encrypt 21 offices and all their patient data. At 6 AM my monitoring system alerted me to a problem. By about 6:30 I realized the scope of the problem, disabled all the site-to-site VPNs, dropped the 1 or 2 infected workstations off the network and the encryption stopped. We do local snapshots every 15 minutes, local backups twice daily, local disconnected backups several times per week, and off-site write-only backups multiple times per day. After I figured out when cryptolocker launched, I ran a few commands from our config management server and had every office restored and running in about 28 minutes and the internal techs for the company were dispatched to swap out the infected workstations.
The first rule I follow is: Windows *never* touches bare metal. I amended that last year to: Windows *never* touches bare metal, including workstations.
People *really* need to work on their backups and DR plans. You don't need some expensive 3rd-party cloud solution coupled with expensive VMWare licenses to do it.
The other part of the problem is the insurance companies. It might surprise you to learn that particular company has been cryptolocker'd 8 times in the last 15 years. They've never lost more than a few minutes of data and recovery times are measured in minutes. This line has literally been thrown around a few times: "We don't need to spend $xxx,xxx to upgrade to current software versions. We have a $5,000,000 cyber insurance policy."
The insurance company issued the policy after *port scanning* their public IPs and finding no ports open. Our only 'ding' we got was that the routers responded to pings and the insurance company thought they shouldn't. Insurance failed to do any sort of competent audit (i.e. NIST 800-171). If they did, they would have found the techs "solve" problems by making people local admins or domain admins and that their primary line-of-business app actually requires 'local admin' to run 'properly'.
While they finally replaced Exchange 2007 in 2020 by switching to GMail (not for security, but because it made work-from-home easier), they still run about 1/3 of their systems on Windows 7 with a few Windows 8 and 8.1 machines here and there. They even still have 2 Windows XP machines. Their upgrade policy is currently "If the machine dies, you can replace it with something newer". Their oldest machine is around 15 years old.
Incompetent insurance companies combined with incompetent IT staff and under-funded IT departments are the nexus of the problem.
-A
On Fri, Jun 25, 2021 at 10:43 AM Tom Beecher <beecher@beecher.cc> wrote:
Incompetent insurance companies combined with incompetent IT staff and
under-funded IT departments are the nexus of the problem.
Nah, it's even simpler. It's just dollars all around. Always is.
Agreed.
From this company's point of view, the cost to RECOVER from the problems is so much smaller than it would be to prevent the problems from happening to begin with, so they are happy to let you guys handle it. From the insurance company's point of view, they are collecting premiums, but no claims are being filed, so they have no incentive to do anything differently.
I'm sure that'll change drastically if any of these conditions are true:
* A claim is filed
* An audit is required
* Ransomware surges throughout 2021 and payouts go through the roof

I think it's reasonable to expect at least one of those things will happen in the next year.

-A
On Fri, Jun 25, 2021 at 21:33, Aaron C. de Bruyn via NANOG <nanog@nanog.org> wrote:
On Fri, Jun 25, 2021 at 10:43 AM Tom Beecher <beecher@beecher.cc> wrote:
Incompetent insurance companies combined with incompetent IT staff and
under-funded IT departments are the nexus of the problem.
Nah, it's even simpler. It's just dollars all around. Always is.
Agreed.
From this company's point of view, the cost to RECOVER from the problems is so much smaller than it would be to prevent the problems from happening to begin with, so they are happy to let you guys handle it. From the insurance company's point of view, they are collecting premiums, but no claims are being filed, so they have no incentive to do anything differently.
I'm sure that'll change drastically if any of these conditions are true:
* A claim is filed
* An audit is required
* Ransomware surges throughout 2021 and payouts go through the roof
I think it's reasonable to expect at least one of those things will happen in the next year.
-A
Or they do business in the EU where huge fines are becoming the norm. The ransomware does not matter but the implied data breach does.
On 6/25/21 5:25 AM, Jim wrote:
On Thu, Jun 24, 2021 at 5:41 PM Brandon Svec via NANOG <nanog@nanog.org> wrote:
I think a big problem may be that the ransom is actually very cost effective and probably the lowest line item cost in many of these situations where large revenue streams are interrupted and time=money (and maybe also health or life).

A big problem is that with organizations' existing Disaster Recovery (DR) methods, the time and cost to recover from any event, including downtime, will be some amount -- likely a high one -- and criminals' ransom demands will presumably be set at as high a price as they think they can get, but still orders of magnitude less than the cost to recover / repair / restore, and the downtime may be less.

The ransom price becomes the perceived cost of paying from the perspective of the organization faced with the decision, but the actual cost to the whole world of them paying a ransom is much higher and will be borne by others (and/or themselves, if they are unlucky) in the future, when their having paid the criminals encourages and causes more and more of that nefarious activity.
Well, the cost of the DR fire drill is proportionate to how automated, etc, it is. If you think that the odds of a DR event are really low you want to make it possible but not necessarily cheap. If it happens all of the time, you want to optimize for speed and efficiency. The object here is to break their business model, at least for you. Even if you go through one DR they aren't likely to go back again rather than finding another sucker. Mike
On 6/24/21 3:08 PM, Shane Ronan wrote:
A lot of the payments for Ransomware come from Insurance Companies under "Business Interruption Insurance". It in fact may be more cost effective to pay the ransom than to pay for continued business interruption.
Of course along with paying the ransom, a full forensic audit of the systems/network is conducted. The vector for many of these attacks is via a worm triggered by someone opening an attachment on an email or downloading compromised software from the Internet. Short of not allowing email attachments or blocking Internet access, the best method is to properly train users to not click on attachments or visit "untrusted" sites, but nothing is perfect.
I wonder if this is preying on the hard-on-the-outside, soft-on-the-inside firewall model? At this point I'm not sure how you can justify that, because so many people are using their own equipment. It's not just the operational side of the business they can target, after all. Mike
Here are some facts showing why it's important not to pay them.

80% of ransomware victims suffer repeat attacks, according to new report
https://www.cbsnews.com/news/ransomware-victims-suffer-repeat-attacks-new-re...
published June 17th 2021

Don't pay them. Just clean your mess. 😊

Jean

From: NANOG <nanog-bounces+jean=ddostest.me@nanog.org> On Behalf Of Michael Thomas
Sent: June 24, 2021 5:59 PM
To: JoeSox <joesox@gmail.com>
Cc: nanog@nanog.org
Subject: Re: Can somebody explain these ransomwear attacks?

On 6/24/21 2:55 PM, JoeSox wrote:
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups. So Executives roll the dice to see if service can be restored as quickly as possible, keeping shareholders and customers as happy as possible.

But if you pay without finding how they got in, they could turn around and do it again, or sell it on the dark web, right?

Mike

On Thu, Jun 24, 2021 at 2:44 PM Michael Thomas <mike@mtcc.com <mailto:mike@mtcc.com>> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.

Mike
NEW ZEALAND HEALTH EXPERIENCE AND DISCUSSION

Some of you may be aware that one of our major hospitals was taken off line with 680 compromised servers. Discussion on one local list is that the systems have been open for some time and the ransom hackers didn't open the systems, they have just caused them to be cleaned up and locked.

I was in one of our other hospitals this week. I was presented with Windows 2000 systems. These people don't seem to understand the concepts of a dated DLL stack, combined with inter system networking. They don't leave me with the impression that we've been presenting object level compromise data for decades now. They don't seem to understand that we've made that public facing for, what I would have thought, fairly obvious reasons. By 'we', I don't mean any special, crazy, conspiracy theory, tin foil hat wearing groups, I mean just plain old everyday computer geeks who write software.

In the NZ hospital case, it looks to me, and I don't know, this is just pure speculation, like someone is going around global hospitals and making them clean up stuff that they should have been upgrading. I personally accept that there are groups around the world with vested interests to have access to our hospital systems, if for no other reason than just to see who's coming and going... you never know when that might make a cool media story, eh?

I keep reading how this is a training issue of staff in hospitals who shouldn't be clicking on email attachments. It's a comment that just strikes me as bonkers. It's not a training issue at all, other than training management that systems have to be patched, updated, and upgraded.

Call me crazy, but you can't go around telling kids that IT has great jobs, ask them (make them) pay for education, and then not actually give them jobs to do the work that clearly has to be done. Yes, you can call this a conspiracy theory, but I venture that when old people cry out for young people to learn IT so they can make better health systems, and then 'investors' don't actually upgrade to those 'new systems' and just leave the doors wide open to personal information, at some point some folk are going to get their noses out of joint.... a fairly obvious theory that too many in management are just discounting as conspiracy until things get broken.... then they blame the user for using email.

Going back a number of years our whole social services system was found to be wide open because a vendor couldn't make their software work without giving it a 'few more permissions'. Couple that kind of thinking with decades old, compromised, DLL stacks... interests who like to just quietly watch... and a lack of good, reasonably paid IT work... and I have one question.... "Can somebody explain these ransomwear attacks?" ...I don't know... can I?

HTH
D

On 2021-06-25 22:39, Jean St-Laurent via NANOG wrote:
Here are some facts showing why it's important not to pay them.
80% OF RANSOMWARE VICTIMS SUFFER REPEAT ATTACKS, ACCORDING TO NEW REPORT
https://www.cbsnews.com/news/ransomware-victims-suffer-repeat-attacks-new-re...
published June 17th 2021
Don’t pay them. Just clean your mess. 😊
Jean
FROM: NANOG <nanog-bounces+jean=ddostest.me@nanog.org> ON BEHALF OF Michael Thomas SENT: June 24, 2021 5:59 PM TO: JoeSox <joesox@gmail.com> CC: nanog@nanog.org SUBJECT: Re: Can somebody explain these ransomwear attacks?
On 6/24/21 2:55 PM, JoeSox wrote:
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups.
So Executives roll the dice to see if service can be restored as quickly as possible, keeping shareholders and customers as happy as possible.
But if you pay without finding how they got in, they could turn around and do it again, or sell it on the dark web, right?
Mike
On Thu, Jun 24, 2021 at 2:44 PM Michael Thomas <mike@mtcc.com> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.
Mike
-- Don Gould 5 Cargill Place Richmond 8013 Christchurch, New Zealand Mobile/Telegram: + 64 21 114 0699 www.bowenvale.co.nz
On 6/25/21 11:59 PM, Valdis Klētnieks wrote:
On Thu, 24 Jun 2021 14:55:12 -0700, JoeSox said:
It gets tricky when 'your' company will lose money $$$ while you wait a month to restore from your cloud backups.

If that's a concern, you've *already* totally screwed the pooch regarding DR planning.
So what is the industry standard, if there is one, for DR? Shouldn't this just be considered another hit by the Chaos Monkey? Mike
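[Editor's note: treating ransomware as "another hit by the Chaos Monkey" implies exercising the recovery path on a schedule instead of hoping it works. A rough Python sketch of such a drill, where restore_backup() and verify_restore() are hypothetical stand-ins for whatever backup tooling is actually in use:]

import random
import time

# Hypothetical inventory; in practice this would come from the backup system's API.
PROTECTED_SYSTEMS = ["fileserver-01", "mail-01", "erp-db", "office07-nas"]

def restore_backup(system, target):
    # Placeholder: call your real backup tooling here; returns an identifier for the scratch copy.
    time.sleep(1)  # simulate a restore
    return f"{target}/{system}-scratch"

def verify_restore(scratch_copy):
    # Placeholder: boot the scratch copy, compare checksums, run app-level smoke tests.
    return True

def run_drill():
    victim = random.choice(PROTECTED_SYSTEMS)  # the 'chaos' part: pick a system at random
    started = time.monotonic()
    scratch = restore_backup(victim, target="dr-scratch")
    ok = verify_restore(scratch)
    minutes = (time.monotonic() - started) / 60
    print(f"drill: {victim} -> {scratch}, verified={ok}, took {minutes:.1f} min")
    # Fail loudly if the restore is broken or slower than your recovery objective.
    assert ok, f"restore of {victim} failed -- fix it before a real incident finds it for you"

if __name__ == "__main__":
    run_drill()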
Ransomwear - the latest fashion idea.

"Pay me money or I will continue to wear these clothes"

I reckon I could make a killing just by stepping out in a knee-length macrame skirt...

Regards, K.

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Karl Auer (kauer@biplane.com.au)
http://www.biplane.com.au/kauer
On 6/24/21 4:57 PM, Karl Auer wrote:
Ransomwear - the latest fashion idea.
"Pay me money or I will continue to wear these clothes"
I reckon I could make a killing just by stepping out in a knee-length macrame skirt...
Lol. Thanks, I knew that didn't look right. Maybe with a crop top to complete the ensemble. Mike
On 6/25/21 12:15 AM, Michael Thomas wrote:
On 6/24/21 4:57 PM, Karl Auer wrote:
Ransomwear - the latest fashion idea.
"Pay me money or I will continue to wear these clothes"
I reckon I could make a killing just by stepping out in a knee-length macrame skirt...
Lol. Thanks, I knew that didn't look right. Maybe with a crop top to complete the ensemble.
-------------------------------------------------------------------- No, no, no... Some things can't be unthought! ;) scott
In my humble opinion, the hidden assumption beneath this question seems to be incorrect. Ransomware is not a single event with assumed similarity to the kinds of failures we regularly see in our network world. The key abstract differences might be summed up as follows:

A. First and foremost, a ransomware attack is not a single failure, the way a failing NAS or a power outage might be. In fact, it takes an enormous amount of time just to be remotely sure how this thing got into your network in the first place. Because simply bringing your backup network (i.e. your backup solution and its storage) online not only presents you with the ability to revert all files to their saved backups - but, more importantly, may allow the ransomware to encrypt your backups, too. It's not a single event. First, you must be sure you have plugged the holes and eliminated the threat before you can even begin considering connecting your backups. Think of it this way: ransomware is a program, running on some computer, just looking for more files to encrypt. Without properly removing this threat first (how do you find which computers have it in the first place?), every new disk connected somewhere on the network will, with 99% probability, be promptly encrypted.

B. Usually (and you may suspect as much), other hacking initiatives are also involved. Recently, we see data theft accompanying ransomware efforts, mainly with high-stakes events (i.e. not that random phishing email that your neighbor clicked on, believing he has relatives stuck in Nigeria without money since 1985). Simply bringing your backups online is rushing to action without fully evaluating the threat, and hackers/APTs "just love" rushed, not fully thought-through actions. Once again, it is a far, far more complicated question than just bringing the backups online and starting to copy the files over. Without proper *security* (not network!) action, you are more likely giving the bad guys access to more stuff than simply recovering your operation.

C. High-stakes ransomware events (i.e. not the same neighbor from above) are complex security events, not just losing some data. The tools used to gain initial access are not the ransomware tools themselves. Moreover, some APTs deploy surveillance/hacking tools during the peak events too (such as discovery, your IT/security folks' initial response, the ransom negotiations themselves, hiring outside specialists, etc.) to (a) maximize their profit from the operation and (b) try to avoid law enforcement. Those might be (and usually are) completely silent tools (such as diskless viruses) whose whole purpose is monitoring your response and giving the bad guys as much surveillance power, for their advantage, as they can possibly use.

In short, serious ransomware events are multi-faceted, nothing like the outages we at the network level are accustomed to. Sure, there are many similarities, and in some cases there may even be complete likeness, but those are usually smaller events. An additional difference is that our outages are, 99.9999% of the time, lacking malice, while ransomware events are not - and you may think to yourself, ah, it's such a small, theoretical question, but it isn't - the most important practical consideration is that a network outage is not *actively* trying to hide its tracks (remember the question of how you find the PC running this software and clean it up?). I never met a power outage that was constantly deleting log files. Especially not after everything had presumably come back up.
So, yes - we should never pay the crooks, but that is, unfortunately, a very simplified outlook. I wish we could always follow that simple solution, but our life is unfortunately much more complicated.

Ah... and one more thing. Gladly, it is not our (network folks') lives that are complicated. It's the system/DBA/security folks' lives. But I don't want to get cocky. We got SDN :-)

Alex.

On Thu, Jun 24, 2021 at 17:44, Michael Thomas <mike@mtcc.com> wrote:
Not exactly network but maybe, but certainly operational. Shouldn't this just be handled like disaster recovery? I haven't looked into this much, but it sounds like the only way to stop it is to stop paying the crooks. There is also the obvious problem that if they got in, something (or someone) is compromised that needs to be cleaned which sounds sort of like DR again to me.
Mike
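[Editor's note: one practical corollary of Alex's "program just looking for more files to encrypt" (and of the monitoring Aaron mentions earlier) is that mass encryption is noisy: a burst of recently modified files whose contents suddenly look like random data. A crude, illustrative Python detector along those lines; the watched path and thresholds are assumptions, not a reference to any tool in this thread:]

import math
import os
import time

WATCH_ROOT = "/srv/shares"   # hypothetical file share to watch
WINDOW_SECONDS = 300         # only look at files modified in the last 5 minutes
ENTROPY_THRESHOLD = 7.5      # bits/byte; encrypted data is close to 8
ALERT_COUNT = 200            # this many fresh high-entropy files looks like mass encryption

def entropy(data):
    # Shannon entropy in bits per byte of a sample.
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def suspicious_files(root):
    cutoff = time.time() - WINDOW_SECONDS
    hits = 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    continue
                with open(path, "rb") as f:
                    sample = f.read(4096)
            except OSError:
                continue
            # Crude: legitimately compressed/encrypted files also score high; real tools
            # add rename-pattern, rate, and canary-file heuristics on top of this.
            if entropy(sample) >= ENTROPY_THRESHOLD:
                hits += 1
    return hits

if __name__ == "__main__":
    n = suspicious_files(WATCH_ROOT)
    if n >= ALERT_COUNT:
        print(f"ALERT: {n} recently modified high-entropy files under {WATCH_ROOT}")
        # This is the point where you page someone and drop the site-to-site VPNs,
        # much as Aaron describes doing by hand.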
Hi! On Fri, 25 Jun 2021 18:56:36 +0300, "Alex K." <nsp.lists@gmail.com> may have written:
Ah ... and one more thing. Gladly, it is not our (network folks) life's complicated. It's system/DBA/and security folks, lifes. But I don't want to get cocky. We got SDN :-)
Yet. Probably. Ransomware gangs /do/ target infrastructure - currently known to be DNS servers (Microsoft), hypervisors, backups, etc. I wouldn't assume that they wouldn't try attacking the network itself today or in the future. -- Mike Meredith, University of Portsmouth Hostmaster, Security, and Chief Systems Engineer
participants (16)
- Aaron C. de Bruyn
- Alex K.
- Anne P. Mitchell, Esq.
- Baldur Norddahl
- Brandon Svec
- Don Gould
- Jean St-Laurent
- Jim
- JoeSox
- Karl Auer
- Michael Thomas
- Mike Meredith
- scott
- Shane Ronan
- Tom Beecher
- Valdis Klētnieks