On 10/04/2018 03:13 PM, Scott Weeks wrote:
--- eric.kuhnke@gmail.com wrote: From: Eric Kuhnke <eric.kuhnke@gmail.com>
many contractors *do* have sensitive data on their networks with a gateway out to the public Internet. ----------------------------------------
I could definitely imagine that happening.
scott
I always loved the early "HIPAA" systems at the doctor's office, where neither the web browser nor the email client was restricted, and they ran XP. Those didn't even need a hardware feature to exploit...

Even on a server, though, given Spectre or an equivalent (remember, this could be exploited from JavaScript in a browser, or PHP, or...), if apps were present on a machine with both kinds of info/connections, we don't even need custom chips; the path is there in cache-management/pipeline-management bugs.

I once ran into a cute bug in a PowerPC chip (the 405EP, used in some older switches as the management processor) where I had to mark all I/O buffers non-cacheable (yes, this is a good idea anyhow, but the chip documentation said that an invalidate/flush in the right places took care of it, and I really needed the speed later during packet parsing. And no, copying the packets was prohibitive...). Anyhow, with a 30 (or so) Mbit stream coming into RAM, about every 30 seconds the ethertype came in as 0 instead of 0x0800 (the responsible bug was in cache management, and the errata item describing it required 5 separate steps involving both processor and I/O access to that address or one in that cache line). At least this system wasn't multiuser... A friend who read the errata item said (and I agree) it looks like a Rube Goldberg sequence. (Yes, I'm dating myself.)

As far as I know, 10 years later, the bug has never been fixed in the masks (of course, most PPC (and embedded MIPS) designs are now going to ARM chips. Don't know how much better that is; some of the speed-demon versions of those have a version of Spectre.)

-- Pete
I have found that the article below provides some interesting analysis on the matter, which is informative as opposed to many articles that simply restate what others have already said. https://www.servethehome.com/bloomberg-reports-china-infiltrated-the-supermi... Thanks ~ Bryce Wilson, AS202313
You just need to fire any contractor that allows a server with sensitive data out to an unknown address on the Internet. Security 101. Steven Naslund
From: Eric Kuhnke <eric.kuhnke@gmail.com>
many contractors *do* have sensitive data on their networks with a gateway out to the public Internet. ----------------------------------------
I could definitely imagine that happening.
scott
Important distinction: you fire any contractor who does it *repeatedly* after communicating the requirements for securing your data. Zero tolerance for genuine mistakes (we all make them) just leads to high contractor turnover and no conceivable security improvement; a revolving door of mediocre contractors is a much larger attack surface than a small set of contractors you actively work with to improve security.

~ a

On Mon, Oct 8, 2018, at 4:53 AM, Naslund, Steve wrote:
You just need to fire any contractor that allows a server with sensitive data out to an unknown address on the Internet. Security 101.
Steven Naslund
From: Eric Kuhnke <eric.kuhnke@gmail.com>
many contractors *do* have sensitive data on their networks with a gateway out to the public Internet. ----------------------------------------
I could definitely imagine that happening.
scott
Hey,
Important distinction; You fire any contractor who does it *repeatedly* after communicating the requirements for securing your data.
Zero tolerance for genuine mistakes (we all make them) just leads to high contractor turnover and no conceivable security improvement; a revolving door of mediocre contractors is a much larger attack surface than a small set of contractors you actively work with to improve security.
+1. Changing people is a cop-out, and often blame-shifting. Believing you have better people than your competitor is dangerous. Creating an environment where humans can succeed is far harder than creating an environment where humans systematically fail. -- ++ytti
Allowing an internal server with sensitive data out to "any" is a serious mistake and so basic that I would fire that contractor immediately (or, better yet, impose huge monetary penalties). As long as your security policy defaults to "deny all" outbound, that should not be difficult to accomplish. Maybe if a couple of contractors feel the pain, they will straighten up. The requirements for securing government sensitive data are communicated very clearly in contractual documents. A genuine mistake can get you in very deep trouble within the military and should apply to contractors as well. I can tell you that "oh well, it's just a mistake" gets used far too often, and it's why your personal data is getting compromised over and over again by all kinds of entities. For example, with tokenization there is no reason at all for any retailer to be storing your credit card data (card number, CVV, exp date) at all (let alone unencrypted), but it keeps happening over and over. There need to be consequences, especially for contractors, in the age of cyber warfare. Steven Naslund Chicago IL
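To make the "deny all outbound, permit by exception" idea concrete, here is a minimal sketch of such a policy as a small Python check. The networks, hostnames and ports are hypothetical placeholders, not anything from this thread; a real deployment would express this in the firewall itself rather than in application code.

    import ipaddress

    # Explicit outbound exceptions; everything else from the server zone is denied.
    # Entries are illustrative placeholders only.
    ALLOWED_OUTBOUND = [
        ("10.1.0.0/16", "patches.vendor.example", 443),  # vendor patching
        ("10.1.0.0/16", "ntp.internal.example", 123),    # internal time source
    ]

    def outbound_allowed(src_ip: str, dst_host: str, dst_port: int) -> bool:
        """Return True only if the flow matches an explicit exception."""
        src = ipaddress.ip_address(src_ip)
        for net, host, port in ALLOWED_OUTBOUND:
            if src in ipaddress.ip_network(net) and (dst_host, dst_port) == (host, port):
                return True
        return False  # default deny: unknown destinations never get out

    # A sensitive server talking to an unknown Internet address is simply refused:
    print(outbound_allowed("10.1.2.3", "unknown-host.example", 443))       # False
    print(outbound_allowed("10.1.2.3", "patches.vendor.example", 443))     # True

The point of the sketch is only that the exception list is short, explicit and auditable; anything not on it fails closed.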
Important distinction; You fire any contractor who does it *repeatedly* after communicating the requirements for securing your data.
Zero-tolerance for genuine mistakes (we all make them) just leads to high contractor turnaround and no conceivable security improvement; A a rotating door of mediocre contractors is a much larger >attack surface than a small set of contractors you actively work with to improve security.
On Wed, Oct 10, 2018 at 02:21:40PM +0000, Naslund, Steve wrote:
For example, with tokenization there is no reason at all for any retailer to be storing your credit card data (card number, CVV, exp date) at all (let alone unencrypted) but it keeps happening over and over.
It's been a while since I've had to professionally worry about this, but as I recall, compliance with the PCI [Payment Card Industry] Data Security Standard prohibits EVER storing the CVV. Companies which do may find themselves banned from being able to process card payments if they're found out (which is unlikely). - Brian
Yet this data gets compromised again and again; the CVV was compromised in at least four cases I am personally aware of. As long as the processors are getting the money, do you really think they are going to kick out someone like Macy's or Home Depot? After all, it is really only an inconvenience to you, and neither of them cares much about that. Steve
It's been a while since I've had to professionally worry about this, but as I recall, compliance with the PCI [Payment Card Industry] Data Security Standard prohibits EVER storing the CVV. Companies which do may find themselves banned from being able to process card payments if they're found out (which is unlikely). - Brian
They actually profit from fraud, and my theory is that that's why issuers have mostly ceased allowing consumers to generate one-time-use card numbers via portal or app, even though they claim it's simply because "you're not responsible for fraud." When a stolen credit card is used, the consumer disputes the resulting fraudulent charges. The dispute makes it to the merchant account issuer, who then takes back the money their merchant had collected, and generally adds insult to injury by charging the merchant a chargeback fee for having to deal with the issue (Amex is notable for not doing this). The fee is often as high as $20, so the merchant loses whatever merchandise or service they sold, loses the money, and pays the merchant account bank a fee on top of that.

Regarding the CVV: PCI permits it to be stored "temporarily", but under conditions far more restrictive than those for the card number. Suffice it to say, it should not be possible for an intrusion to obtain it, and we know how that goes....

These days, JavaScript inserted on the payment page of a compromised site to steal the card in real time is becoming a more common occurrence than actually breaching an application or database. Websites have so much third-party garbage loaded into them now (analytics, social media, PPC ads, etc.) that it's nearly impossible to know what should or shouldn't be present, or whether a given block of JS is sending the submitted card in parallel to some other entity. There are technologies like Subresource Integrity to ensure the correct code is served by a given page, but that doesn't stop someone from replacing the page, etc.

On 10/10/18, 10:41 AM, "NANOG on behalf of Naslund, Steve" <nanog-bounces@nanog.org on behalf of SNaslund@medline.com> wrote:

Yet this data gets compromised again and again; the CVV was compromised in at least four cases I am personally aware of. As long as the processors are getting the money, do you really think they are going to kick out someone like Macy's or Home Depot? After all, it is really only an inconvenience to you, and neither of them cares much about that.

Steve

>It's been a while since I've had to professionally worry about this,
>but as I recall, compliance with the PCI [Payment Card Industry] Data
>Security Standard prohibits EVER storing the CVV. Companies which
>do may find themselves banned from being able to process card
>payments if they're found out (which is unlikely).
> - Brian
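A side note on the Subresource Integrity mention above: the integrity value a page pins a script to is just a base64-encoded hash of the exact bytes served. A minimal sketch of computing one (the script contents and URL are made up for illustration):

    import base64
    import hashlib

    def sri_value(script_bytes: bytes) -> str:
        # SRI commonly uses SHA-384; the attribute form is "<algo>-<base64 digest>".
        digest = hashlib.sha384(script_bytes).digest()
        return "sha384-" + base64.b64encode(digest).decode()

    # Hypothetical checkout script; any byte change invalidates the pinned hash.
    js = b'document.getElementById("pay").addEventListener("click", submitCard);'
    print(f'<script src="https://cdn.example/checkout.js" '
          f'integrity="{sri_value(js)}" crossorigin="anonymous"></script>')

As the post says, this only helps while the page that references the script is itself intact; an attacker who can rewrite the payment page can simply drop or change the integrity attribute.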
Having gone through this, I know that it's all on you, which is why no one really cares. You have to notice a fraudulent charge (in most cases), you have to dispute it, you have to prove it was not you that made the charge, and if they agree, then they change all of your numbers, at which point you have to contact everyone that might be auto-charging your accounts. It is a super pain in the neck. So many merchants have been compromised that it seems to be having less and less impact on their reputation.
They actually profit from fraud; and my theory is that that's why issuers have mostly ceased allowing consumers to generate one time use card numbers via portal or app, even though they claim it's >simply because "you're not responsible for fraud." When a stolen credit card is used, the consumer disputes the resulting fraudulent charges. The dispute makes it to the merchant account issuer, who >then takes back the money their merchant had collected, and generally adds insult to injury by charging the merchant a chargeback fee for having to deal with the issue (Amex is notable for not doing >this). The fee is often as high as $20, so the merchant loses whatever merchandise or service they sold, loses the money, and pays the merchant account bank a fee on top of that.
Regarding CVV; PCI permits it being stored 'temporarily', but with specific conditions on how that are far more restrictive than the card number. Suffice it to say, it should not be possible for an >intrusion to obtain it, and we know how that goes....
These days javascript being inserted on the payment page of a compromised site, to steal the card in real time, is becoming a more common occurrence than actually breaching an application or database. >Websites have so much third party garbage loaded into them now, analytics, social media, PPC ads, etc. that it's nearly impossible to know what should or shouldn't be present, or if a given block of JS >is sending the submitted card in parallel to some other entity. There's technologies like subresource integrity to ensure the correct code is served by a given page, but that doesn't stop someone from >replacing the page, etc.
Well, once you have the expiry date (which is the most prevalent piece of data not encoded with the CHD), the CVV is only 3 digits; we saw people using parallelized guessing across acquirers around the world to find the correct value. With the delays in the reporting pipeline, they have the time to completely abuse that CHD/date/CVV before getting caught.

For chipless markets (you know who you are), I'm way more worried about PIN pads carrying Track 1 + Track 2 unencrypted over serial, USB, Bluetooth, or custom wireless connections... (I snooped serial, USB and Bluetooth for a PIN pad PA-DSS project.) And with the PA-DSS spec being dropped by 2020, it will become worse.

----- Alain Hebert ahebert@pubnix.net PubNIX Inc. 50 boul. St-Charles P.O. Box 26770 Beaconsfield, Quebec H9W 6G7 Tel: 514-990-5911 http://www.pubnix.net Fax: 514-990-9443

On 10/10/18 10:32, Brian Kantor wrote:
On Wed, Oct 10, 2018 at 02:21:40PM +0000, Naslund, Steve wrote:
> For example, with tokenization there is no reason at all for any retailer to be storing your credit card data (card number, CVV, exp date) at all (let alone unencrypted) but it keeps happening over and over.
It's been a while since I've had to professionally worry about this, but as I recall, compliance with the PCI [Payment Card Industry] Data Security Standard prohibits EVER storing the CVV. Companies which do may find themselves banned from being able to process card payments if they're found out (which is unlikely). - Brian
The entire point of the CVV has become useless. Recently my wife was talking to an airline ticket agent on the phone (American Airlines), and one of the things they ask for on the phone is the CVV. If you are going to read all of that out over the phone with all the other data, you are completely vulnerable to fraud. It would be trivial to implement a system where you make a charge over the phone like that and get a text asking you to authorize it, instead of asking for a CVV.

After all this time it is stupid to have the same data being used over and over. We have had SecurID and other token/PIN systems in the IT world forever. I have a token on my iPhone right now that I use for certain logins to systems. The hardware tokens cost very little (especially compared to the credit card companies' revenue). The soft tokens are virtually free. A token should be useful for one and only one transaction. You would be vulnerable only from the time you read your token to someone (or something) until the charge hit your account. You would also not have to worry about a call center agent or waiter stealing that data, because it could only be used once (and if it is not their employer, it would become apparent really quickly). Recurring transactions should be unique tokens for a set amount range from a particular entity (i.e. 12 transactions, one per month, not more than $500 each, Comcast only). For example, my reusable token given to my cable company should not be usable by anyone else.

Why hasn't this been done yet? Simple: there is no advantage to the retailers and processors. There have been some one-time-use numbers for stuff like that, but they are inconvenient for the user, so they won't be that popular. The entire system is archaic and dates back to the time of imprinting on paper. Tokenized transactions exist today between some entities and the processors, but it is time to extend that all the way from the card holder to the processor.

Steven Naslund Chicago IL
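A minimal sketch of what such a single-use, merchant-scoped token could look like, purely to illustrate the idea; the field layout, HMAC construction and in-memory nonce store are assumptions made for this example, not any existing payment scheme.

    import hashlib
    import hmac
    import os

    ISSUER_SECRET = os.urandom(32)  # per-card secret held only by the issuer (illustrative)
    redeemed_nonces = set()         # tokens already spent

    def issue_token(card_id: str, merchant: str, max_amount: int) -> str:
        """Issue a token usable once, only at `merchant`, up to `max_amount`."""
        nonce = os.urandom(8).hex()
        msg = f"{card_id}|{merchant}|{max_amount}|{nonce}".encode()
        mac = hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()[:16]
        return f"{merchant}|{max_amount}|{nonce}|{mac}"

    def redeem(card_id: str, token: str, merchant: str, amount: int) -> bool:
        """Accept the charge only if the token is genuine, in scope, and unspent."""
        t_merchant, t_max, nonce, mac = token.split("|")
        msg = f"{card_id}|{t_merchant}|{t_max}|{nonce}".encode()
        expected = hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()[:16]
        if not hmac.compare_digest(mac, expected):
            return False
        if merchant != t_merchant or amount > int(t_max) or nonce in redeemed_nonces:
            return False
        redeemed_nonces.add(nonce)  # single use: replaying a stolen token fails
        return True

    tok = issue_token("card-1234", "comcast", 500)
    print(redeem("card-1234", tok, "other-shop", 50))  # False - wrong merchant
    print(redeem("card-1234", tok, "comcast", 120))    # True
    print(redeem("card-1234", tok, "comcast", 120))    # False - already spent

A waiter or phone agent who copies such a token ends up with something that is either already spent or bound to a merchant they cannot impersonate, which is the property being asked for above.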
Well,
Once you get the Expiry Date (which is the most prevalent data that is not encoded with the CHD)
CVV is only 3 digits, we saw ppl using parallelizing tactics to find the correct sequence using acquirers around the world.
With the delays in the reporting pipeline, they have the time to completely abuse that CHD/Date/CVV before getting caught.
On October 10, 2018 at 15:55 SNaslund@medline.com (Naslund, Steve) wrote:
The entire point of the CVV has become useless. Recently my wife was talking to an airline ticket agent on the phone (American Airlines) and one of the things they ask for on the phone is the CVV. If you are going to read that all out over the phone with all the other data you are completely vulnerable to fraud. It would be trivial to implement a system where you make a charge over the phone like that and get a text asking you to authorize it instead of asking for a CVV.
I'm pretty sure the "entire point" of inventing CVV was to prove you physically have the card. For example, someone dumpster-diving a restaurant, particularly in the old imprint days when this was dreamed up, wouldn't have the CVV, or at least not from that source.

Many merchant contracts' fees are based on whether you do sales on physical cards (lower) or not, as with online sales. I don't know off-hand how that's affected by verifying the CVV online; I suspect it's mostly used online to avoid certain kinds of fraud, for all the other reasons. We're very careful with CVVs as per contract agreement and they don't go near the database; they're only used during the verification and gone when the app fork exits.

Credit card fraud is, to the processors, a game of percentages and cost/benefit. Sure, one could have the CVV without the card; these days a big hazard is service people (e.g., restaurant staff) who can trivially snap both sides of your card with their phone, since they often take your card away and come back later with the receipts and your card. In Europe and probably elsewhere it's very common for them to process your card with a hand-held device right in front of you, which would make that more difficult.

But any proposal to improve cc security has to reflect the cost/benefit across millions of transactions. If one isn't working with that data then they're only guessing.

-- -Barry Shein Software Tool & Die | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: +1 617-STD-WRLD | 800-THE-WRLD The World: Since 1989 | A Public Information Utility | *oo*
On October 10, 2018 at 17:58 SNaslund@medline.com (Naslund, Steve) wrote:
It only proves that you have seen the card at some point. Useless.
Steven Naslund Chicago IL
I'm pretty sure the "entire point" of inventing CVV was to prove you physically have the card.
It's not useless, it protects against what it protects. Like dumpster-diving in the imprint days, or if someone gets hold of all the credit card numbers + expirations (+ names, maybe) from your database. If you don't store CVVs (which is forbidden by contract) they won't have CVVs, and sites which require them won't accept transactions. It's kind of like a PIN, but yes, too easily stolen. A friend used to write "ASK FOR PHOTO ID" in the signature portion of his credit cards and (I saw this) cashiers would look at it, look at his signature as if they were comparing, and say OK, thank you! -- -Barry Shein Software Tool & Die | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: +1 617-STD-WRLD | 800-THE-WRLD The World: Since 1989 | A Public Information Utility | *oo*
"Naslund, Steve" <SNaslund@medline.com> writes:
It only proves that you have seen the card at some point. Useless.
It doesn't even prove that much. There is nothing preventing a rogue online shop from storing and reusing the CVV you give them. Or selling your complete card details including zip code, CVV and whatever. In practice, the CVV is just 3 more digits in the card number. No security whatsoever in that. Bjørn
(this is probably OT now...)
I'm pretty sure the "entire point" of inventing CVV was to prove you physically have the card.
Except that it doesn't serve that purpose. Anyone who ever had your card in their hands (e.g. waiters) can just write that down and use it later, hence defeating the purpose of "physically having the card". (Call me paranoid, but I usually use a black pen to make the numbers unreadable because of this, after my card (both sides) has been photocopied a number of times...) This has always been an amusing topic. At the end of the day it's a financial risk management call from the banks -- as long as they lose less money on the current system than the cost of fraud, things will not change. Of course, they try to push those costs onto others as much as possible, but that doesn't change the bottom line. Robert
Robert Kisteleki wrote:
(this is probably OT now...)
I'm pretty sure the "entire point" of inventing CVV was to prove you physically have the card.
Except that it doesn't serve that purpose. Anyone who ever had your card in their hands (e.g. waiters) can just write that down and use it later hence defeating the purpose of "physically having the card".
But waiters don't know your ZIP code, which is the other thing needed for online verification (in the U.S.). 3D Secure is good enough. It will probably be mandatory for payment processors sometime in the future. In the meantime, it just costs the industry less to cover fraud losses. -- S.C.
On October 11, 2018 at 13:41 sc@ottie.org (Scott Christopher) wrote:
Robert Kisteleki wrote:
(this is probably OT now...)
I'm pretty sure the "entire point" of inventing CVV was to prove you physically have the card.
Except that it doesn't serve that purpose. Anyone who ever had your card in their hands (e.g. waiters) can just write that down and use it later hence defeating the purpose of "physically having the card".
But waiters don't know your ZIP code which is the other thing needed for online verification (in the U.S.)
So be wary if they ask you for photo ID, which likely has your ZIP code! But asking for photo ID is a good thing for legitimate card holders, as it could reduce fraudulent in-person use of stolen cards. What a mess.
3D Secure is good enough. It will probably be mandatory for payment processors sometime in the future. In the meantime, it just costs the industry less to cover fraud losses.
-- S.C.
-- -Barry Shein Software Tool & Die | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: +1 617-STD-WRLD | 800-THE-WRLD The World: Since 1989 | A Public Information Utility | *oo*
Once upon a time, bzs@theworld.com <bzs@theworld.com> said:
But asking for photo id is a good thing for legitimate card holders, could reduce fraudulent in-person use of stolen cards.
Requiring an ID is also a violation of the merchant agreements, at least for VISA and MasterCard (not sure about American Express), unless ID is otherwise required by law (like for age-limited products). I've walked out of stores that required an ID. -- Chris Adams <cma@cmadams.net>
On 11/Oct/18 21:31, Chris Adams wrote:
Requiring an ID is also a violation of the merchant agreements, at least for VISA and MasterCard (not sure about American Express), unless ID is otherwise required by law (like for age-limited products). I've walked out of stores that required an ID.
It has always been curious to me how/why the U.S., with one of the largest economies in the world, still does most card-based transactions as a swipe in lieu of a PIN-based approach.

In South Africa (and most of southern Africa), all banks make the use of PINs mandatory, for all types of cards. With the rest of Africa using credit cards more recently, I imagine they are also PIN-based.

Europe also uses PINs, as far as I have experienced.

Asia-Pac was swipe-based for a long time when I lived there, but I know places like Malaysia and Singapore have started a major PIN-based transaction drive in the past 3 years.

3D Secure for the online version of the transaction also means your card number and CVV are less susceptible to fraud via restaurants and the like. Of course, this is not fool-proof, as both the merchant and bank need to support and mandate this, which is not well done at a global level.

Mark.
There are two parts to the problem. The first is the assumption of risk: the current model of operation in the US (like in other western economies) puts the onus of risk of misuse of the card on specific actors. When you change the basis from signature (fraud) to chip+PIN (leak of knowledge), you have to change the legal basis. Remember, this is an economy where WRITING CHEQUES is still normal. Clearly, the legal basis of money transactions in the US is hugely complicated by savings and loans, credit unions, banks, state and federal law, taxes. We all have some of this worldwide; they have a LOT.

Secondly, the cost basis. Who pays? In most of the world the regulator forced cost onto specific players because they could, and forced people to tool up because they could. But the costs did have to get met. Some people paid more than others. In the US, for reasons not entirely unlike the first set, *making* people do things with cost incursion is remarkably difficult. Making the Walmart brothers re-fit every terminal, when they can go down to DC and buy votes to stop it happening; making Bank of America spend money re-working its core finance models to suit online chip+PIN when it can go down to Walmart and lean on the owners to go down to DC and buy votes...

Seriously: it's not lack of clue. It's lack of intestinal political fortitude, and a very strange regulatory and federal/state model.

On Thu, Nov 8, 2018 at 4:11 PM Mark Tinka <mark.tinka@seacom.mu> wrote:
On 11/Oct/18 21:31, Chris Adams wrote:
Requiring an ID is also a violation of the merchant agreements, at least for VISA and MasterCard (not sure about American Express), unless ID is otherwise required by law (like for age-limited products). I've walked out of stores that required an ID.
It has always been curious to me how/why the U.S., with one of the largest economies in the world, still do most card-based transactions as a swipe in lieu of a PIN-based approach.
In South Africa (and most of southern Africa), all banks make the use of PIN's mandatory, for all types of cards. With the rest of Africa using credit cards more recently, I imagine they are also PIN-based.
Europe also use PIN's, as far as I have experienced.
Asia-Pac was swipe-based for a long time when I lived there, but I know places like Malaysia and Singapore have started a major PIN-based transaction drive in the past 3 years.
3D Secure for the online version of the transaction also means your card number and CVV number are less susceptible to fraud via restaurants and the like. Of course, this is not fool-proof, as both the merchant and bank need to support and mandate this, which is not well-done at a global level.
Mark.
On 8/Nov/18 11:16, George Michaelson wrote:
There are two parts of the problem. The first is the assumption of risk: the current model of operation in the US (like in other western economies) puts the onus of risk of misuse of the card on specific actors. When you change the basis from signature (fraud) to chip+pin (leak of knowledge) you have to change the legal basis. Remember, this is an economy where WRITING CHEQUES is still normal. Clearly, the legal basis of money transactions in the US is hugely complicated by savings and loan, credit unions, banks, state and federal law, taxes. We all have some of this worldwide, they have a LOT.
Secondly, the cost basis. Who pays? In most of the world the regulator forced cost onto specific players because they could, and forced people to tool up because they could. But, the costs did have to get met. Some people paid more than others. In the US, for reasons not entirely unlike the first set, *making* people do things with cost incursion is remarkably difficult. Making the Walmart brothers re-fit every terminal, when they can go down to DC and buy votes to stop it happening, Making Bank of America spend money re-working its core finance models to suit online chip+pin when it can go down to Walmart and lean on the owners to go down to DC and buy votes...
Seriously: it's not lack of clue. It's lack of intestinal political fortitude, and a very strange regulatory and federal/state model.
Shame, but I can see how this makes sense as to why things are the way they are.

Speaking of "cost" as a motivator, in South Africa most of the banks are now using extra fees as a way to force users to do their banking online (phone, laptop, app, etc.). If you want to walk into a bank to deposit money, withdraw money, make a transfer, etc., you pay for that service over and above, while the process costs you zero (0) when done online. This has led to banks renovating banking halls: where there were once 23 tellers, you now have 1 service usher, 1 teller, 2 support agents and 20 self-service computers.

I hope the U.S. does catch up. If we were swipe-based here, we'd all be broke :-). I know a number of major merchants in the U.S. now use PINs, and I always stick to those when I travel there.

Mark.
Mark Tinka wrote:
I hope the U.S. does catch-up. If we were swipe-based here, we'd all be broke :-). I know a number of major merchants in the U.S. now use PIN's, and I always stick to those when I travel there.
In the U.S., PIN codes are required for EFTPOS transactions (called debit) over interbank networks like Pulse, STAR, etc.

Swipe-and-sign (and now just swipe for small amounts) is for Visa, Mastercard, Discover transactions (called credit).

Skimming and card fraud is actually uncommon in the U.S. these days, and the police are very effective at combating it. It's just cheaper for the industry to eat fraud losses than to "upgrade" systems. The transition to chip-based cards was a debacle.

-- S.C.
Once upon a time, Scott Christopher <sc@ottie.org> said:
Swipe-and-sign (and now just swipe for small amounts) is for Visa, Mastercard, Discover transactions (called credit)
Signatures are no longer required for chip card transactions in the US, except I think for transactions where the auth is done on the amount before an added tip (restaurants).
Skimming and card fraud is actually uncommon in the U.S. these days, and the police are very effective at combating it. It's just cheaper for the industry to eat fraud losses than to "upgrade" systems. The transition to chip-based cards was a debacle.
Skimming is still highly active at gas pumps, where chip support was pushed off (current requirement I believe is late 2020, but may be delayed again). The skimmers get more creative all the time; they're getting inside pumps (possibly with help of low-paid station attendants, but also because of poor physical security) and installing the skimmer hardware out of sight. The hardware has Bluetooth, so the bad guys just pull up and get gas and someone in the car can retrieve the data (from multiple pumps even). -- Chris Adams <cma@cmadams.net>
Well,

Older pump station installations (and maybe new ones) use RS-232/442 to communicate in clear text with their controller inside the building. Easy to tap to skim Track 1/Track 2 of the CHD, which is good for duping cards.

Now, to get the physical CVV you need a physical skimmer installed on top of the pump, which is where your Bluetooth comes into action. With those you can dupe cards and make "card not present" transactions (aka Internet). It is a risk/reward thing.

PS: Laziness is pretty much the greatest threat. EU/CAN/etc. are all CHIP while some other economies still refuse to spend that extra $1 per card :(

----- Alain Hebert ahebert@pubnix.net PubNIX Inc. 50 boul. St-Charles P.O. Box 26770 Beaconsfield, Quebec H9W 6G7 Tel: 514-990-5911 http://www.pubnix.net Fax: 514-990-9443

On 11/08/18 22:50, Chris Adams wrote:
Once upon a time, Scott Christopher <sc@ottie.org> said:
Swipe-and-sign (and now just swipe for small amounts) is for Visa, Mastercard, Discover transactions (called credit)

Signatures are no longer required for chip card transactions in the US, except I think for transactions where the auth is done on the amount before an added tip (restaurants).
Skimming and card fraud is actually uncommon in the U.S. these days, and the police are very effective at combating it. It's just cheaper for the industry to eat fraud losses than to "upgrade" systems. The transition to chip-based cards was a debacle.

Skimming is still highly active at gas pumps, where chip support was pushed off (current requirement I believe is late 2020, but may be delayed again).
The skimmers get more creative all the time; they're getting inside pumps (possibly with help of low-paid station attendants, but also because of poor physical security) and installing the skimmer hardware out of sight. The hardware has Bluetooth, so the bad guys just pull up and get gas and someone in the car can retrieve the data (from multiple pumps even).
On 11/08/2018 07:50 PM, Chris Adams wrote:
Signatures are no longer required for chip card transactions in the US, except I think for transactions where the auth is done on the amount before an added tip (restaurants).
Signatures are required for chip card transactions above a certain dollar amount, with that dollar amount varying from merchant to merchant. I ran into this at the Sprint store when I used a chip card to pay $800+ for my company's overdue wireless bill, and I had to apply pen to paper by hand. And I didn't do my usual response to "sign here": draw a triangle and put "yield" in it.
Once upon a time, Stephen Satchell <list@satchell.net> said:
On 11/08/2018 07:50 PM, Chris Adams wrote:
Signatures are no longer required for chip card transactions in the US, except I think for transactions where the auth is done on the amount before an added tip (restaurants).
Signatures are required for chip card transactions above a certain dollar amount, with that dollar amount varying from merchant to merchant. I ran into this at the Sprint store when I used a chip card to pay $800+ for my company's overdue wireless bill, and I had to apply pen to paper by hand. And I didn't do my usual response to "sign here": draw a triangle and put "yield" in it.
That's just because Sprint wanted it, not the credit card company. For example with VISA, the signature is "optional" for chip transactions, no matter the amount, but the retailer can still require it if they want (because they want to annoy customers I guess?). https://www.theverge.com/2018/1/12/16884814/visa-chip-emv-signatures-north-a... -- Chris Adams <cma@cmadams.net>
I have a low-cost/high-interest-rate account at one of the Canadian banks, and each "assisted" transaction is $5.

Frank

-----Original Message----- From: NANOG <nanog-bounces@nanog.org> On Behalf Of Mark Tinka Sent: Thursday, November 08, 2018 3:35 AM To: George Michaelson <ggm@algebras.org> Cc: North American Network Operators' Group <nanog@nanog.org> Subject: Re: CVV (was: Re: bloomberg on supermicro: sky is falling)

<snip>

Speaking of "cost" as a motivator, in South Africa most of the banks are now using extra fees as a way to force users to do their banking online (phone, laptop, app, etc.). If you want to walk into a bank to deposit money, withdraw money, make a transfer, etc., you pay for that service over and above, while the process costs you zero (0) when done online. This has led to banks renovating banking halls: where there were once 23 tellers, you now have 1 service usher, 1 teller, 2 support agents and 20 self-service computers.

I hope the U.S. does catch up. If we were swipe-based here, we'd all be broke :-). I know a number of major merchants in the U.S. now use PINs, and I always stick to those when I travel there.

Mark.
This is a confusing and off-topic discussion with respect to network engineering. But for completeness:

Payments systems are architected around fraud rates, not around isolated security requirements or engineering mandates, as I think most network engineers can understand.

The fraud rates in the US for credit card transactions were historically very, very low, and being a large jurisdiction with a single national law enforcement branch (the FBI), enforcement was effective. Compare this to Europe in the 1980s, when credit cards were accepted in very few places. This was for two reasons: 1) the fraud rates were much, much higher, which created chargebacks for merchants that they preferred not to eat; 2) trans-national enforcement was virtually nonexistent. Interpol had ~zero time to deal with credit card fraud, so the best European fraud rings always operated from a different country than where they perpetrated the fraud.

When chip-and-PIN was introduced, the point was actually twofold: A) security; B) shifting liability to the consumer. Somewhat famously, even after chip-and-PIN was proven compromised, UK banks continued to make consumers liable for all fraudulent transactions that were 'PIN used'. This was very, very good for the adoption of credit cards in Europe, but it was very, very bad for a few people. Banks, as usual, didn't care and made some decent money.

So why did the US get chip-and-signature? Target. International fraud rings finally got wise to the ripe opportunity that was the soft underbelly of the US economy and figured out ways to perpetrate massive, trans-national fraud in the US. And as soon as that happened, the US got chips. The signature-vs-PIN part is mostly about the fact that there are *still* low rates of fraud here as tracked by chargeback rates, and as a result there's no real need to pay the cost of support to set everyone up with a PIN.

And that's what security is always all about: cost tradeoffs. People in countries where everyone has a PIN have eaten that cost already, and had to because the fraud rates were high enough to justify it. People in the US do not have PINs that they know, and setting those up costs money, and maintaining people's access to them costs money. So if that's not worth it, it doesn't get done. Nor should it.

I generally find it amusing when people from other countries mock the US for not having PINs. This is just another way of saying "my country has high fraud rates and yours appears not to." :-) You can see this in the comment below: "If we were swipe-based here, we'd all be broke :-)". The payments systems are architected to minimize cost and maximize adoption, and they are usually at (or moving towards) some locally optimal point. The US is no exception in that.

Now, the checking/chequing system is a whole other, embarrassing beast, and mocking that one is just the correct thing to do. :-)

Anyway, let's talk about networks, no?

cheers, t

On Thu, Nov 8, 2018, 19:07 Frank Bulk <frnkblk@iname.com> wrote:
I have a low-cost/high interest rate account at one of the Canadian bank and each "assisted" transaction is $5.
Frank
-----Original Message----- From: NANOG <nanog-bounces@nanog.org> On Behalf Of Mark Tinka Sent: Thursday, November 08, 2018 3:35 AM To: George Michaelson <ggm@algebras.org> Cc: North American Network Operators' Group <nanog@nanog.org> Subject: Re: CVV (was: Re: bloomberg on supermicro: sky is falling)
<snip.
Speaking of "cost" as a motivator, in South Africa, most of the banks are now using extra fees as a way to force users to do their banking online (phone, laptop, app, e.t.c.). If you want to walk into a bank to deposit money, withdraw money, make a transfer, e.t.c., you pay for that service over and above, while the process costs you zero (0) when done online. This has led to banks now renovating banking halls into where there was once 23 tellers, you now have 1 service usher, 1 teller, 2 support agents and 20 self-service computers.
I hope the U.S. does catch-up. If we were swipe-based here, we'd all be broke :-). I know a number of major merchants in the U.S. now use PIN's, and I always stick to those when I travel there.
Mark.
Todd Underwood writes:
[interesting and plausible reasoning about why no chip&PIN in US] anyway, let's talk about networks, no?
This topic is obviously "a little" off-topic, but I find some contributions (like yours) relevant for understanding adoption dynamics (or not) of proposed security mechanisms on the Internet (RPKI, route filtering in general, DNSSEC etc.). In general the regulatory environment in the Internet is quite different from that of the financial sector. But I guess credit-card security trade-offs are still made mostly by private actors. (Maybe they sometimes discuss BGP security on their mailing lists :-) -- Simon.
On 9/Nov/18 02:22, Todd Underwood wrote:
I generally find it amusing when people from other countries mock the US for not having PINs. This is just another way of saying "my country has high fraud rates and yours appears not to." :-) You can see this in the comment below: "If we were swipe-based here, we'd all be broke :-)". The payments systems are architected to minimize cost and maximize adoption, and they are usually at (or moving towards) some locally optimal point. The US is no exception in that.
That was me - and "low" (fraud rates) is not "zero" (fraud rates). Personally, I don't want to add to the statistic. The inconvenience isn't worth the bragging right :-)... Mark.
On Nov 8, 2018, at 1:11 AM, Mark Tinka <mark.tinka@seacom.mu> wrote:
It has always been curious to me how/why the U.S., with one of the largest economies in the world, still does most card-based transactions as a swipe in lieu of a PIN-based approach.
That was true a few years ago, but it’s been at least a year since I’ve seen a swipe anywhere. The change happened quite quickly. It’s all been chip, or chip-and-pin, for at least a year. -Bill
On 9/Nov/18 20:26, Bill Woodcock wrote:
That was true a few years ago, but it’s been at least a year since I’ve seen a swipe anywhere. The change happened quite quickly. It’s all been chip, or chip-and-pin, for at least a year.
In the last 2 years, I've seen the rise of PIN-based transactions in the U.S., and this is great. But between San Diego, San Jose, San Francisco, Chicago, Hawaii and Seattle for my 2017/2018 U.S. visits, there are just about as many merchants supporting PINs as there are that don't. Mark.
On October 11, 2018 at 10:17 robert@ripe.net (Robert Kisteleki) wrote:
(this is probably OT now...)
I'm pretty sure the "entire point" of inventing CVV was to prove you physically have the card.
Except that it doesn't serve that purpose. Anyone who ever had your card in their hands (e.g. waiters) can just write that down and use it later, hence defeating the purpose of "physically having the card". (Call me paranoid, but I usually use a black pen to make the numbers unreadable because of this, after my card (both sides) has been photocopied a number of times...)
What you're saying is they don't work as well as you might hope, not that they don't serve that purpose. If you snatched 5M credit card numbers and expiration dates but, as required by contract, there were no CVVs in that db, how well would that work with sites which require a CVV for a transaction? Not well at all. So there's a purpose.

Also, traditionally one's signature is on the back right next to that CVV for a merchant to compare against, which leaves forgery a mere exercise in, well, forgery, since the example one has to reasonably match is right there. Which doesn't mean signatures don't work; it's just not much protection against anyone who can reasonably forge a signature. But many people can't or won't try; it discourages minor criminals like your boyfriend using your card surreptitiously while you were sleeping. They're also some reasonable evidence that the transaction was done in person with the card in hand. I know some merchant contracts wouldn't allow forgiveness (who eats the fraud) for charges without a signature where their contract claims they only do in-person purchases, which gets them a lower rate. There is a concern for merchant fraud also in all this; unfortunately that's very tempting.

BUT IT'S ALL WORSE THAN THAT! When I had a book of checks stolen (and reported), several turned up used in major big-box stores with information like driver's license number, date of birth, etc. neatly written on them, though none of that info was mine. I doubt they went to the trouble of counterfeiting a driver's license; it's possible, but this was small-time fraud. My suspicion was they were in cahoots with the cashier; simplest explanation, the cashier was a friend who probably got a cut. So anything in the presumed chain of events can often be suborned.
This has always been an amusing topic. At the end of the day it's a financial risk management call from the banks -- as long as they lose less money on the current system than the cost of fraud, things wiull not change. Of course, they try to push those costs onto others as much as possible, but that doesn't change the bottom line.
I agree with this. Quite a few years ago I was interviewed by a start-up manufacturer of a big parallel "mini" to head their OS effort. Something which came out in the conversation, which went on for hours! (very pleasant tho), was that a major credit card company had pledged in writing to buy $150M of their machines on day one of ship if they could run a set of their anti-fraud algorithms quickly enough (their spec) to be able to reject transactions in real time. The company had done forensics and I think the estimate was if they could have run those algorithms they would have saved them some big number like $50K/hour in fraud. But they couldn't run them fast enough to allow for reasonable transaction times. And then ya sit around the bar thinking you know how this or that startup is funded or why...that would not have been one of my guesses! -- -Barry Shein Software Tool & Die | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: +1 617-STD-WRLD | 800-THE-WRLD The World: Since 1989 | A Public Information Utility | *oo*
Sure, and with the exp date, CVV, and number printed on every card, you are open to compromise every time you stay in a hotel or go to a restaurant where you hand someone your card. Worse yet, the only option if you are compromised is to change all your numbers and take on the burden of notifying everyone, and that evening you hand your card to the waiter and the cycle starts over. The system is so monumentally stupid it's unbelievable. Steven Naslund Chicago IL
Well,
Once you get the Expiry Date (which is the most prevalent data that is not encoded with the CHD)
CVV is only 3 digits, we saw ppl using parallelizing tactics to find the correct sequence using acquirers around the world.
With the delays in the reporting pipeline, they have the time to completely abuse that CHD/Date/CVV before getting caught.
I understand that in some countries the common practice is that the waiter or clerk brings the card terminal to you or you go to it at the cashier's desk, and you insert or swipe it, so the card never leaves your hand. And you have to enter the PIN as well. This seems notably more secure against point-of-sale compromise. - Brian On Wed, Oct 10, 2018 at 04:01:07PM +0000, Naslund, Steve wrote:
Sure, and with the exp date, CVV, and number printed on every card, you are open to compromise every time you stay in a hotel or go to a restaurant where you hand someone your card. Worse yet, the only option if you are compromised is to change all your numbers and take on the burden of notifying everyone, and that evening you hand your card to the waiter and the cycle starts over. The system is so monumentally stupid it's unbelievable.
Steven Naslund
Chicago IL
This is common in India but then chip and pin has been mandatory for a good few years, as has 2fa (vbv / mastercard secure code) for online transactions. Waiters would earlier ask for people's pins so they could go back and enter it - back when a lot of the POS terminals were connected to POTS lines rather than battery operated + with a GSM sim. That's stopped now as people grew more aware. On 10/10/18, 9:49 PM, "NANOG on behalf of Brian Kantor" <nanog-bounces@nanog.org on behalf of Brian@ampr.org> wrote: I understand that in some countries the common practice is that the waiter or clerk brings the card terminal to you or you go to it at the cashier's desk, and you insert or swipe it, so the card never leaves your hand. And you have to enter the PIN as well. This seems notably more secure against point-of-sale compromise. - Brian On Wed, Oct 10, 2018 at 04:01:07PM +0000, Naslund, Steve wrote: > Sure and with the Exp Date, CVV, and number printed on every card you are open to compromise every time you stay in the hotel or go to a restaurant where you hand someone your card. Worse yet, the only option if you are compromised is to change all your numbers and put the burden on your of notifying everyone and that evening you hand your card to the waiter and the cycle starts over. The system is so monumentally stupid it’s unbelievable. > > Steven Naslund > > Chicago IL
True and that should be mandatory but does not solve the telephone agent problem. Steven Naslund Chicago IL
I understand that in some countries the common practice is that the waiter or clerk brings the card terminal to you or you go to it at the cashier's desk, and you insert or swipe it, so the card never leaves your hand. And you have to enter the PIN as well. This seems notably more secure against point-of-sale compromise. - Brian
IVR credit card PIN entry is a thing For example - https://www.hdfcbank.com/personal/making-payments/security-measures/ivr-3d-s... On 10/10/18, 9:57 PM, "NANOG on behalf of Naslund, Steve" <nanog-bounces@nanog.org on behalf of SNaslund@medline.com> wrote: True and that should be mandatory but does not solve the telephone agent problem. Steven Naslund Chicago IL > I understand that in some countries the common practice is that the > waiter or clerk brings the card terminal to you or you go to it at the > cashier's desk, and you insert or swipe it, so the card never leaves > your hand. And you have to enter the PIN as well. This seems > notably more secure against point-of-sale compromise. > - Brian
It is good but has several inherent problems (other than almost no one using it). Your card number is static and so is your PIN. If they get compromised, you are done. A changing token/PIN resolves the static number problem completely; compromise of a used token has no impact whatsoever. Steven Naslund Chicago IL
IVR credit card PIN entry is a thing
For example - https://www.hdfcbank.com/personal/making-payments/security-measures/ivr-3d-s...
On Wed Oct 10, 2018 at 09:17:37AM -0700, Brian Kantor wrote:
I understand that in some countries the common practice is that the waiter or clerk brings the card terminal to you or you go to it at the cashier's desk, and you insert or swipe it, so the card never leaves your hand. And you have to enter the PIN as well. This seems notably more secure against point-of-sale compromise.
A PIN is more secure, but the device is wireless and may have been compromised. All POS terminals (that I've seen) are now PIN-based in the UK. Internet use still asks for the CVV, sadly, though Verified by Visa is still occasionally used, but that only protects the places you probably already trust. There have been cards with an OTP display, but they didn't become popular.

I try and use Apple Pay where possible. Apple assures us that their account code and one-time security codes prevent the attacker acquiring the card number/PIN/CVV, and any captured data cannot be used to make another transaction. Really, everything should do at least this.

brandon
On Wed, Oct 10, 2018 at 10:22 AM Naslund, Steve <SNaslund@medline.com> wrote:
Allowing an internal server with sensitive data out to "any" is a serious mistake and so basic that I would fire that contractor immediately (or, better yet, impose huge monetary penalties). As long as your security policy defaults to "deny all" outbound, that should not be difficult to accomplish.
Hi Steve,

I respectfully disagree. Deny-all-permit-by-exception incurs a substantial manpower cost, both in terms of increasing the number of people needed to do the job and in terms of reducing the quality of the people willing to do the job: deny-all is a more painful environment to work in, and most of us have other options. As with all security choices, that cost has to be balanced against the risk-cost of an incident which would otherwise have been contained by the deny-all rule.

Indeed, the most commonplace security error is spending more resources securing something than the risk-cost of an incident. By voluntarily spending the money you've basically done the attacker's damage for them!

Except with the most sensitive of data, an IDS which alerts security when an internal server generates unexpected traffic can establish risk-costs much lower than the direct and indirect costs of a deny-all rule. Thus rejecting the deny-all approach as part of a balanced and well-conceived security plan is not inherently an error and does not necessarily recommend firing anyone.

Regards, Bill Herrin -- William Herrin ................ herrin@dirtside.com bill@herrin.us Dirtside Systems ......... Web: <http://www.dirtside.com/>
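To make the cost-balancing argument concrete, here is a back-of-the-envelope sketch in the style of an annualized-loss-expectancy comparison. Every number below is invented for illustration; the point is only the shape of the comparison, not the values.

    def annual_loss_expectancy(incidents_per_year: float, loss_per_incident: float) -> float:
        """Classic ALE: expected incidents per year times expected loss per incident."""
        return incidents_per_year * loss_per_incident

    # Invented numbers purely for illustration.
    baseline      = annual_loss_expectancy(0.5, 200_000)   # no outbound control at all
    with_ids      = annual_loss_expectancy(0.3, 150_000)   # default-allow + IDS alerting
    with_deny_all = annual_loss_expectancy(0.1, 100_000)   # default-deny outbound

    ids_cost      = 40_000    # tooling plus alert triage
    deny_all_cost = 250_000   # extra staff and operational friction

    print("IDS net benefit:      ", (baseline - with_ids) - ids_cost)
    print("deny-all net benefit: ", (baseline - with_deny_all) - deny_all_cost)

With these made-up inputs the IDS option wins; raise the loss per incident by an order of magnitude (classified data, say) and deny-all wins instead, which is essentially the disagreement that plays out in the rest of this thread.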
You are free to disagree all you want with the default deny-all policy, but it is a DoD 5200.28-STD requirement and an NSA Orange Book TCSEC requirement. It is baked into all approved secure operating systems, including SELinux, so it is really not open for debate if you have to meet these requirements. Remember, we were talking about intelligence agency systems here, not the general public. It is SUPPOSED to be painful to open things to the Internet in those environments. It needs to take an affirmative act to do so. It is a simple matter of knowing what each and every connection outside the network is there for. It also reveals application vulnerabilities and compromises, as well as making it easy to identify apps that are compromised.

In several of the corporate networks I have worked on, they had differing policies for different network zones. For example, you might allow your users out to anywhere on the Internet (at least for common public protocols like HTTP/HTTPS) but not allow any servers out to the Internet except where they are in a DMZ offering public services, or to destinations required for support (like patching and remote updates). Seemed like a good, workable policy.

Steven Naslund Chicago IL
Hi Steve,
I respectfully disagree.
Deny-all-permit-by-exception incurs a substantial manpower cost both in terms of increasing the number of people needed to do the job and in terms of the reducing quality of the people willing to do the job: deny-all is a more painful environment to work in and most of us have other options. As with all security choices, that cost has to be balanced against the risk-cost of an incident which would otherwise have been contained by the deny-all rule.
Indeed, the most commonplace security error is spending more resources securing something than the risk-cost of an incident. By voluntarily spending the money you've basically done the attacker's damage for them!
Except with the most sensitive of data, an IDS which alerts security when an internal server generates unexpected traffic can establish risk-costs much lower than the direct and indirect costs of a deny-all rule.
Thus rejecting the deny-all approach as part of a balanced and well conceived security plan is not inherently an error and does not necessarily recommend firing anyone.
Regards, Bill Herrin
On Wed, Oct 10, 2018 at 11:25 AM Naslund, Steve <SNaslund@medline.com> wrote:
You are free to disagree all you want with the default deny-all policy but it is a DoD 5200.28-STD requirement and NSA Orange Book TCSEC requirement.
And yet I got my DoD system ATOed my way earlier this year by demonstrating to the security controls assessment team that the cost of default-deny-all exceeded the risk cost of default-allow with IDS alerts on unexpected traffic. Because not spending more on a security implementation than the amount by which it reduces the risk cost, is a CORE SECURITY PRINCIPLE while default-deny-all is merely a standard policy. Regards, Bill Herrin -- William Herrin ................ herrin@dirtside.com bill@herrin.us Dirtside Systems ......... Web: <http://www.dirtside.com/>
If there was a waiver issued for your ATO, it would have had to have been issued by a department head or the OSD and approved by the DoD CIO after the Director of DISA provides a recommendation, and it is mandatory that it be posted at https://gtg.csd.disa.mil. Please see this DoD Instruction: http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/831001p.pdf (the waiver process is on page 23). If it did not go through that process, then it is not approved, no matter what anyone told you. I know your opinion did not make it through that process. Want to tell us what system this is? Steven Naslund Chicago IL
And yet I got my DoD system ATOed my way earlier this year by demonstrating to the security controls assessment team that the cost of default-deny-all exceeded the risk cost of default-allow with IDS alerts on unexpected traffic.
Because not spending more on a security implementation than the amount by which it reduces the risk cost, is a CORE SECURITY PRINCIPLE while default-deny-all is merely a standard policy.
Regards, Bill Herrin
On Wed, Oct 10, 2018 at 1:06 PM Naslund, Steve <SNaslund@medline.com> wrote:
Want to tell us what system this is?
Yes, I want to give you explicit information about a government system in this public forum and you should encourage me to do so. I thought you said you had some skill in the security field? Regards, Bill Herrin -- William Herrin ................ herrin@dirtside.com bill@herrin.us Dirtside Systems ......... Web: <http://www.dirtside.com/>
Mr Herrin, you are asking us to believe one or all of the following:
1. You believe that it is good security policy to NOT have a default DENY ALL policy in place on firewalls for DoD and Intelligence systems handling sensitive data.
2. You managed to convince DoD personnel of that fact and actually got them to approve an Authorization to Operate such a system based on cost savings.
3. You are just trolling to start a discussion.
The reason I asked what system it is would be to question the authorities at DoD on who and why this was approved. If you don't want to disclose that, then you are either trolling or don't want anyone to look into it. It won't be hard to determine whether you actually had any government contracts, since that is public data. There are very few systems whose EXISTENCE is actually classified, but you were the one that cited it as an example supporting your policy. If you cannot name the system then it doesn't support your argument very well, does it? Completely unverifiable.
In any case I believe the smart people here on NANOG can accept or reject your security advice based on the factors above. I'm done talking about this one. Steven Naslund
Want to tell us what system this is?
Yes, I want to give you explicit information about a government system in this public forum and you should encourage me to do so. I thought you said you had some skill in the security field?
Regards, Bill Herrin
To be fair, the idea that your security costs shouldn't outweigh potential harm really shouldn't be controversial. You don't spend a billion dollars to protect a million dollars worth of product. That's hardly trolling. On Wed, Oct 10, 2018 at 10:54 AM Naslund, Steve <SNaslund@medline.com> wrote:
Mr Herrin, you are asking us to believe one or all of the following :
1. You believe that it is good security policy to NOT have a default DENY ALL policy in place on firewalls for DoD and Intelligence systems handling sensitive data.
2. You managed to convince DoD personnel of that fact and actually got them to approve an Authorization to Operate such a system based on cost savings.
3. You are just trolling to start a discussion.
The reason I asked what system it is would be to question the authorities at DoD on who and why this was approved. If you don't want to disclose that, then you are either trolling or don't want anyone to look into it. It won't be hard to determine whether you actually had any government contracts, since that is public data. There are very few systems whose EXISTENCE is actually classified, but you were the one that cited it as an example supporting your policy. If you cannot name the system then it doesn't support your argument very well, does it? Completely unverifiable.
In any case I believe the smart people here on NANOG can accept or reject your security advice based on the factors above. I'm done talking about this one.
Steven Naslund
Want to tell us what system this is?
Yes, I want to give you explicit information about a government system in this public forum and you should encourage me to do so. I thought you said you had some skill in the security field?
Regards, Bill Herrin
-- 09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0
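Mike's point can be made concrete with the usual annualized-loss-expectancy arithmetic; the dollar figures below are invented purely to show the comparison and are not drawn from any real system:

    # Sketch of the cost/risk comparison; all numbers are hypothetical.
    single_loss_expectancy = 1_000_000    # cost of one incident the control would prevent
    annual_rate_of_occurrence = 0.05      # expected incidents per year without the control
    control_annual_cost = 200_000         # yearly manpower/tooling cost of the control

    annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # 50,000
    if control_annual_cost > annualized_loss_expectancy:
        print("Control costs more than the risk it removes; look for a cheaper mitigation.")
    else:
        print("Control is cheaper than the expected loss; implement it.")

With these invented numbers the control costs four times the risk it removes, which is the "billion dollars to protect a million" situation in miniature.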
Remember we are talking about classified intelligence systems and large IT organization infrastructure (Google, Yahoo, Apple) here (in the original Supermicro post). That would be information whose unauthorized disclosure would cause grave or exceptionally grave harm (the definition of secret and top secret) to the national security of the United States. Seems like that warrants a default deny-all (which is DoD and NSA policy). I would argue that ANY datacenter server should be protected that way unless it is intended to be publicly accessible. Steven Naslund
To be fair, the idea that your security costs shouldn't outweigh potential harm really shouldn't be controversial. You don't spend a billion dollars to protect a million dollars worth of product.
That's hardly trolling.
If you're only talking about classified systems, sure. But it didn't sound to me like we were talking exclusively about those kinds of systems. On Wed, Oct 10, 2018 at 11:08 AM Naslund, Steve <SNaslund@medline.com> wrote:
Remember we are talking about classified intelligence systems and large IT organization infrastructure (Google, Yahoo, Apple) here (in the original Supermicro post).
That would be information whose unauthorized disclosure would cause grave or exceptionally grave harm (the definition of secret and top secret) to the national security of the United States. Seems like that warrants a default deny-all (which is DoD and NSA policy). I would argue that ANY datacenter server should be protected that way unless it is intended to be publicly accessible.
Steven Naslund
To be fair, the idea that your security costs shouldn't outweigh potential harm really shouldn't be controversial. You don't spend a billion dollars to protect a million dollars worth of product.
That's hardly trolling.
-- 09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0
On 10/10/18, Mike Hale <eyeronic.design@gmail.com> wrote:
To be fair, the idea that your security costs shouldn't outweigh potential harm really shouldn't be controversial. You don't spend a billion dollars to protect a million dollars worth of product.
The problem with that idea is that it's almost always implemented as "your security costs shouldn't outweigh _your_ potential harm." Regards, Lee
On Wed, Oct 10, 2018 at 10:54 AM Naslund, Steve <SNaslund@medline.com> wrote:
Mr Herrin, you are asking us to believe one or all of the following :
1. You believe that it is good security policy to NOT have a default DENY ALL policy in place on firewalls for DoD and Intelligence systems handling sensitive data.
2. You managed to convince DoD personnel of that fact and actually got them to approve an Authorization to Operate such a system based on cost savings.
3. You are just trolling to start a discussion.
The reason I asked what system it is would be to question the authorities at DoD on who and why this was approved. If you don't want to disclose that, then you are either trolling or don't want anyone to look into it. It won't be hard to determine whether you actually had any government contracts, since that is public data. There are very few systems whose EXISTENCE is actually classified, but you were the one that cited it as an example supporting your policy. If you cannot name the system then it doesn't support your argument very well, does it? Completely unverifiable.
In any case I believe the smart people here on NANOG can accept or reject your security advice based on the factors above. I'm done talking about this one.
Steven Naslund
Want to tell us what system this is?
Yes, I want to give you explicit information about a government system in this public forum and you should encourage me to do so. I thought you said you had some skill in the security field?
Regards, Bill Herrin
On Wed, Oct 10, 2018 at 1:53 PM Naslund, Steve <SNaslund@medline.com> wrote:
Mr Herrin, you are asking us to believe one or all of the following :
1. You believe that it is good security policy to NOT have a default DENY ALL policy in place on firewalls for DoD and Intelligence systems handling sensitive data.
Steve, I believe it's a good idea for every security control to trace to first principles, not just as conceived but as implemented. Default-deny-all is not a first principle. It often traces. Often is not always. Treating often as always is the sort of lazy error that leads users to work around non-sensible security implementations, demolishing the security they would have provided.
2. You managed to convince DoD personnel of that fact and actually got them to approve an Authorization to Operate such a system based on cost savings.
You mischaracterize it as "cost savings" but that's essentially correct. I spent six months going through the 1100 controls they laid on me and where I thought a control would be destructive I provided a thorough analysis of the anticipated mission impact for both the control as written and the proposed alternate mitigation. The impact is far more than a dollar sign. Make it hard to use and you sap the system's utility to the mission. Make it hard to manage and you increase the probability of error, decreasing the system availability. And so on. Won some of the arguments. Lost others. Built a better system with happier users for the effort. You can believe that or not as you choose. Regards, Bill Herrin -- William Herrin ................ herrin@dirtside.com bill@herrin.us Dirtside Systems ......... Web: <http://www.dirtside.com/>
From: NANOG <nanog-bounces@nanog.org> On Behalf Of Naslund, Steve Sent: Wednesday, October 10, 2018 1:06 PM
If there was a waiver issued for your ATO, it would have had to have been issued by a department head or the OSD and approved by the DoD CIO after the Director of DISA provides a recommendation, and it is mandatory that it be posted at https://gtg.csd.disa.mil. Please see this DoD Instruction http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/831001p.pdf (the waiver process is on page 23). If it did not go through that process, then it is not approved no matter what anyone told you. I know your opinion did not make it through that process.
That only applies to RMF systems where DSS is the AO on behalf of the DoD. For anything that falls outside DSS purview you can do whatever the COTR for the Cog is willing to sign off on. Even under RMF, MUSAs and isolated LANs have those requirements tailored out by default. IWANS and UWANS that don't have connectivity to anything but themselves are also NA for the firewall requirements. At the present, contractor systems that don't connect to a USG network aren't required to implement any of the STIGs other than base OS. I don't expect things to stay that way, but I haven't heard anything from DSS to indicate it'll be changing anytime in the near future. It's less difficult than it first appears to get ATO from a technical standpoint (the paperwork hell IA is buried under is an entirely different story, but I'm not them and have no desire to be). Jamie
Well... (I'm sorry but I cannot resist.) Seriously mate, trolling this list using "deny-all is bad m'kay" is not a good idea. ----- Alain Hebert ahebert@pubnix.net PubNIX Inc. 50 boul. St-Charles P.O. Box 26770 Beaconsfield, Quebec H9W 6G7 Tel: 514-990-5911 http://www.pubnix.net Fax: 514-990-9443 On 10/10/18 11:09, William Herrin wrote:
On Wed, Oct 10, 2018 at 10:22 AM Naslund, Steve <SNaslund@medline.com> wrote:
Allowing an internal server with sensitive data out to "any" is a serious mistake and so basic that I would fire that contractor immediately (or better yet impose huge monetary penalties). As long as your security policy is defaulted to "deny all" outbound that should not be difficult to accomplish. Hi Steve,
I respectfully disagree.
Deny-all-permit-by-exception incurs a substantial manpower cost both in terms of increasing the number of people needed to do the job and in terms of reducing the quality of the people willing to do the job: deny-all is a more painful environment to work in and most of us have other options. As with all security choices, that cost has to be balanced against the risk-cost of an incident which would otherwise have been contained by the deny-all rule.
Indeed, the most commonplace security error is spending more resources securing something than the risk-cost of an incident. By voluntarily spending the money you've basically done the attacker's damage for them!
Except with the most sensitive of data, an IDS which alerts security when an internal server generates unexpected traffic can establish risk-costs much lower than the direct and indirect costs of a deny-all rule.
Thus rejecting the deny-all approach as part of a balanced and well conceived security plan is not inherently an error and does not necessarily recommend firing anyone.
Regards, Bill Herrin
On Wed, Oct 10, 2018 at 02:21:40PM +0000, Naslund, Steve wrote:
Allowing an internal server with sensitive data out to "any" is a serious mistake and so basic that I would fire that contractor immediately (or better yet impose huge monetary penalties).
I concur, and have been designing/building/running based on this premise for a long time. It's usually not very difficult or painful when starting fresh; it can be much more so when modifying an already-operational environment. But even in the latter case, it's worth the effort and expense: it much more than pays for itself the first time it stops something from getting out.
The most difficult part of this process is often convincing people that it's sadly necessary. I say "sadly" because it wasn't always so, and that was a kinder, happier time. But that was then and this is now. And now the worst threat often comes from the inside.
It also has three perhaps-not-quite-obvious benefits.
First, it forces discipline. Things don't "just work", and that's a feature, not a bug. It requires thinking through what's required to make services functional and thus (hopefully) also thinking through what the potential consequences are. I'm no longer surprised by how many chief technology officers don't actually know what their technology is doing (to borrow a phrase from Ranum) and are puzzled when they find out. The clarity provided by this approach removes that puzzlement.
Second, it greatly reduces the extraneous noise that might make nefarious activity harder to spot. There's an entire market sector built around products that ferret out signal from noise; I find it easier not to allow the noise.
Third, every attack we see coming in, every byte of abuse we see arriving, is the consequence of someone else *not* implementing default-deny, and the collective cost of that across all operations is enormous. If we can avoid contributing to that, then we've done a small bit of good for everyone else. ---rsk
That would be one way, but a lot of the problem is unplanned cross-access. It's (relatively) easy to isolate network permissions and access at a single location, but once you have multi-site configurations it gets more complex. Especially when you have companies out there that consider VPN a reasonable way to handle secure data transfer cross-connects with vendors or clients. On 10/07/2018 10:53 PM, Naslund, Steve wrote:
You just need to fire any contractor that allows a server with sensitive data out to an unknown address on the Internet. Security 101.
Steven Naslund
From: Eric Kuhnke <eric.kuhnke@gmail.com>
many contractors *do* have sensitive data on their networks with a gateway out to the public Internet.
-- Daniel Taylor VP Operations Vocal Laboratories, Inc. dtaylor@vocalabs.com http://www.vocalabs.com/ (612)235-5711
On Mon, 08 Oct 2018 08:53:55 -0500, Daniel Taylor said:
Especially when you have companies out there that consider VPN a reasonable way to handle secure data transfer cross-connects with vendors or clients.
At some point, you get to balance any inherent security problems with the concept of using a VPN against the fact that while most VPN software has a reasonably robust point-n-drool interface to configure, most VPN alternatives are very much "some assembly required". Which is more likely? That some state-level actor finds a hole in your VPN software, or that somebody mis-configures your VPN alternative so it leaks keys and data all over the place?
The risks of VPN aren't in the VPN itself; they are in the continuous network connection architecture. 90%+ of VPN interconnects could be handled cleanly, safely, and reliably using HTTPS, without having to get internal network administration involved at all. And the risks of key exposure with HTTPS are exactly the same as the risks of having one end or the other of your VPN compromised. As it is, VPN means trusting the network admins at your peer company. On 10/08/2018 12:15 PM, valdis.kletnieks@vt.edu wrote:
On Mon, 08 Oct 2018 08:53:55 -0500, Daniel Taylor said:
Especially when you have companies out there that consider VPN a reasonable way to handle secure data transfer cross-connects with vendors or clients. At some point, you get to balance any inherent security problems with the concept of using a VPN against the fact that while most VPN software has a reasonably robust point-n-drool interface to configure, most VPN alternatives are very much "some assembly required".
Which is more likely? That some state-level actor finds a hole in your VPN software, or that somebody mis-configures your VPN alternative so it leaks keys and data all over the place?
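For what it's worth, the kind of HTTPS cross-connect Daniel describes can be as simple as a mutually authenticated TLS request per transfer; the sketch below assumes the Python requests library is available, and the hostname, certificate paths, and payload are placeholders rather than details of any real interconnect:

    # Sketch: a point-to-point transfer over mutually authenticated HTTPS,
    # as an alternative to a standing VPN. All names and paths are hypothetical.
    import requests

    resp = requests.post(
        "https://partner.example.com/ingest",    # partner's public HTTPS endpoint
        cert=("client.crt", "client.key"),       # our client certificate for mutual TLS
        verify="partner-ca.pem",                 # trust only the partner's CA
        json={"record_id": 1234, "payload": "..."},
        timeout=30,
    )
    resp.raise_for_status()

Each transfer stands alone, so neither side's internal network is reachable from the other; that is the architectural difference from a continuously connected VPN.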
participants (31)
- Alain Hebert
- Alfie Pates
- Bill Woodcock
- Bjørn Mork
- Brandon Butterworth
- Brian Kantor
- Bryce Wilson
- bzs@theworld.com
- Chris Adams
- Daniel Taylor
- David Hubbard
- Frank Bulk
- George Michaelson
- Jamie Bowden
- Lee
- Mark Tinka
- Mike Hale
- Naslund, Steve
- Pete Carah
- Randy Bush
- Rich Kulawiec
- Robert Kisteleki
- Saku Ytti
- Scott Christopher
- Scott Weeks
- Simon Leinen
- Stephen Satchell
- Suresh Ramasubramanian
- Todd Underwood
- valdis.kletnieks@vt.edu
- William Herrin