[In the message entitled "Re: Nato warns of strike against cyber attackers" on Jun 8, 13:33, Owen DeLong writes:]
I realize you're fond of punishing all of us to subsidize the ignorant, but I would rather see those with compromised machines pay the bill for letting their machines get compromised than have to subsidize their ignorant or worse behavior.
I'm fond of getting the issues addressed by getting the ISPs to be involved with the problem. If that means users get charged "clean up" fees instead of a "security" fee, that's fine. ISPs remain in the unique position of being able to identify the customer, the machine, and to verify the traffic. It can be done.

--
Sent from my iToilet

Why would you penalize with fees the end customer, who may not know that her system has been compromised because what she pays to Joe Antivirus/Security/Firewall/Crapware is not effective against Billy the nerd, the insecure-code programmer?

No doubt ISPs can do something, but without additional regulation, and safeguards so that they won't be sued for sniffing or filtering traffic, nothing will ever happen. Do we want more (or any) regulation? Who will oversee it?

On the other hand, think of the Internet as a vast ocean where the bad guys keep dumping garbage. You can't control or filter the constantly changing currents, and you can't inspect every water molecule, so what do you do to find and penalize the ones who drop, or let their systems drop, garbage into the ocean?

My .02
Jorge
On Tue, 08 Jun 2010 22:01:35 CDT, Jorge Amodio said:
On the other hand, think of the Internet as a vast ocean where the bad guys keep dumping garbage. You can't control or filter the constantly changing currents, and you can't inspect every water molecule, so what do you do to find and penalize the ones who drop, or let their systems drop, garbage into the ocean?
Bad analogy. There are some plumes of oil in the Gulf of Mexico that are getting mapped out very well by only a few ships. You don't have to examine every molecule to find parts-per-million oil, or to figure out whose oil rig the oil came from. And you don't need to look at every packet to find abusive traffic either - in most cases, simply letting the rest of the net do the work for you, and actually reading your abuse@ mailbox and dealing with the reports, is 95% of what's needed.
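To make that abuse@ point concrete, here is a toy sketch (Python, purely illustrative) of the "let the rest of the net do the work" idea: tally the source IPs named in incoming abuse reports and flag the ones reported independently enough times to be worth a ticket. Real reports are often structured (ARF, per RFC 5965), but plenty arrive as free-form text, so a regex first pass is a plausible first cut; the threshold of 3 is an arbitrary invention for illustration.

```python
import re
from collections import Counter

# Very naive IPv4 matcher; real abuse reports (ARF, RFC 5965) are
# structured, but many arrive as free-form text.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def tally_reported_ips(report_bodies):
    """Count how many reports name each source IP."""
    counts = Counter()
    for body in report_bodies:
        # Count each IP at most once per report, so one noisy
        # report can't dominate the tally.
        counts.update(set(IP_RE.findall(body)))
    return counts

def worth_a_ticket(counts, threshold=3):
    """IPs named in at least `threshold` independent reports."""
    return sorted(ip for ip, n in counts.items() if n >= threshold)
```

An operator would then map each flagged IP back to a customer (RADIUS/DHCP logs) and open a cleanup ticket; the point is only that the detection work has already been done by everyone else's complaints.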
On the other hand, think of the Internet as a vast ocean where the bad guys keep dumping garbage. You can't control or filter the constantly changing currents, and you can't inspect every water molecule, so what do you do to find and penalize the ones who drop, or let their systems drop, garbage into the ocean?
Bad analogy. There are some plumes of oil in the Gulf of Mexico that are getting mapped out very well by only a few ships. You don't have to examine every molecule to find parts-per-million oil, or to figure out whose oil rig the oil came from.
Maybe, but that is a particular case where you can point a finger at exactly who made the mess and hold them accountable and responsible for cleaning it up. But it's another example showing that companies make decisions based not on what is right or wrong to do, but on what is more or less profitable to do within a risk-management context.
And you don't need to look at every packet to find abusive traffic either - in most cases, simply letting the rest of the net do the work for you, and actually reading your abuse@ mailbox and dealing with the reports, is 95% of what's needed.
Agreed, but you still have no control over what happens on the other side of the ocean, and if you don't give the abuse@ guy a liability waiver, his hands may be tied by the legal department before he can do anything.

I'll give you another bad analogy: sure, we need to keep an eye on transport and distribution, but the only way to eradicate drugs (most unlikely, given the amount of $$$ they move) is to go after production and especially consumption; meanwhile, the only thing you can do is damage control and containment. As long as it is still so freakishly easy for the crooks to run a profitable criminal biz on the net, they will find the workaround to keep making money.

My point is: go hard after the crooks and fix the holes. Things like: why the heck are power-grid control systems accessible over the net from Hackertistan? And if there is a real reason for them to be on the net, put in the necessary amount of money and technology to make them as secure as possible.

Regards
Jorge
On Jun 8, 2010, at 8:01 PM, Jorge Amodio wrote:
Sent from my iToilet
Why would you penalize with fees the end customer, who may not know that her system has been compromised because what she pays to Joe Antivirus/Security/Firewall/Crapware is not effective against Billy the nerd, the insecure-code programmer?
So? If said end customer is operating a network-connected system without sufficient knowledge to properly maintain it and prevent it from doing mischief to the rest of the network, why should the rest of us subsidize her negligence? I don't see where making her pay is a bad thing.
No doubt ISPs can do something, but without additional regulation, and safeguards so that they won't be sued for sniffing or filtering traffic, nothing will ever happen. Do we want more (or any) regulation? Who will oversee it?
Those safeguards are already in place. There are specific exemptions in the law for data collection related to maintaining the service and you'd be very hard pressed to claim that identifying and correcting malicious activity is not part of maintaining the service.
On the other hand, think of the Internet as a vast ocean where the bad guys keep dumping garbage. You can't control or filter the constantly changing currents, and you can't inspect every water molecule, so what do you do to find and penalize the ones who drop, or let their systems drop, garbage into the ocean?
Your initial premise is flawed, so the conclusion is equally flawed. The internet may be a vast ocean where bad guys keep dumping garbage, but, if software vendors stopped building highly exploitable code and ISPs started disconnecting abusing systems rapidly, it would have a major effect on the constantly changing currents. If abuse departments were fully funded by cleanup fees charged to negligent users who failed to secure their systems properly, it would both incentivize users to do proper security _AND_ provide for more responsive abuse departments as issues are reduced and their budget scales linearly with the amount of abuse being conducted.

Owen
My .02 Jorge
I'm fond of getting the issues addressed by getting the ISPs to be involved with the problem. If that means users get charged "clean up" fees instead of a "security" fee, that's fine.
ISPs remain in the unique position of being able to identify the customer, the machine, and to verify the traffic. It can be done.
So? If said end customer is operating a network-connected system without sufficient knowledge to properly maintain it and prevent it from doing mischief to the rest of the network, why should the rest of us subsidize her negligence? I don't see where making her pay is a bad thing.
I see that you don't understand that.
The internet may be a vast ocean where bad guys keep dumping garbage, but, if software vendors stopped building highly exploitable code and ISPs started disconnecting abusing systems rapidly, it would have a major effect on the constantly changing currents. If abuse departments were fully funded by cleanup fees charged to negligent users who failed to secure their systems properly, it would both incentivize users to do proper security _AND_ provide for more responsive abuse departments as issues are reduced and their budget scales linearly with the amount of abuse being conducted.
The reality is that things change. Forty-three years ago, you could still buy a car that didn't have seat belts. Thirty years ago, most people still didn't wear seat belts. Twenty years ago, air bags began appearing in large volume in passenger vehicles. Throughout this period, cars have been de-stiffened with crumple zones, etc., in order to make them safer for passengers in the event of a crash. Mandatory child seat laws have been enacted at various times throughout. A little more than ten years ago, air bags were mandatory. Ten years ago, LATCH clips for child safety seats became mandatory. We now have side impact air bags, etc.

Generally speaking, we do not penalize car owners for owning an older car, and we've maybe only made them retrofit seat belts (but not air bags, crumple zones, etc.) into them, despite the fact that some of those big old boats can be quite deadly to other drivers in today's more easily-damaged cars. We've increased auto safety by mandating better cars, and by penalizing users who fail to make use of the safety features.

There is only so much "proper security" you can expect the average PC user to do. The average PC user expects to be able to check e-mail, view the web, edit some documents, and listen to some songs. The average car driver expects to be able to drive around and do things. You can try to mandate that the average car driver must change their own oil, just as you can try to mandate that the average computer user must do what you've naively referred to as "proper security", but the reality is that grandma doesn't want to get under her car, doesn't have the knowledge or tools, and would rather spend $30 at SpeedyLube.

If we cannot make security a similarly easy target for the end-user, rather than telling them to "take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer," then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.

I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure; our applications need to be less permissive, probably way less permissive, probably even sandboxed by default; our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations, to more comprehensive things such as mandatory virus scanning by e-mail providers, etc. There's a lot that could be done that most on the technology side of things have been unwilling to commit to.

We can make their Internet cars safer for them - but we largely haven't. Now we can all look forward to misguided government efforts to mandate some of this stuff.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
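[Editorial aside: the "simple things such as BCP38" mentioned above is ingress filtering - an access network drops outbound packets whose source address isn't in the prefixes delegated to that customer, so compromised hosts at least can't spoof. The core check is tiny; here is a toy sketch in Python using invented RFC 5737 documentation prefixes, not any real allocation:]

```python
import ipaddress

# Prefixes this customer port legitimately sources traffic from
# (illustrative RFC 5737 documentation ranges, not real allocations).
CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                     ipaddress.ip_network("198.51.100.0/25")]

def bcp38_permits(src_ip):
    """BCP38-style ingress check: forward only packets whose source
    address belongs to the customer's delegated prefixes; anything
    else is presumed spoofed and dropped at the edge."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)
```

On real hardware this lives as an ingress ACL or unicast RPF on the customer-facing interface, not host code; the point is only that the check is per-packet, purely local, and needs no cooperation from the rest of the net.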
No, but we can and do require cars to have functional brakes and minimum tread depths, and to be tested periodically. Obviously this is acceptable because the failure modes for cars are worse, but the proposed solution is less intrusive, being after the fact.

Excuse top-posting, on mobile.

"Joe Greco" <jgreco@ns.sol.net> wrote:
So? If said end customer is operating a network-connected system without sufficient knowledge to properly maintain it and prevent it from doing mischief to the rest of the network, why should the rest of us subsidize her negligence? I don't see where making her pay is a bad thing.
I see that you don't understand that.
The internet may be a vast ocean where bad guys keep dumping garbage, but, if software vendors stopped building highly exploitable code and ISPs started disconnecting abusing systems rapidly, it would have a major effect on the constantly changing currents. If abuse departments were fully funded by cleanup fees charged to negligent users who failed to secure their systems properly, it would both incentivize users to do proper security _AND_ provide for more responsive abuse departments as issues are reduced and their budget scales linearly with the amount of abuse being conducted.
The reality is that things change. Forty-three years ago, you could still buy a car that didn't have seat belts. Thirty years ago, most people still didn't wear seat belts. Twenty years ago, air bags began appearing in large volume in passenger vehicles. Throughout this period, cars have been de-stiffened with crumple zones, etc., in order to make them safer for passengers in the event of a crash. Mandatory child seat laws have been enacted at various times throughout. A little more than ten years ago, air bags were mandatory. Ten years ago, LATCH clips for child safety seats became mandatory. We now have side impact air bags, etc.
Generally speaking, we do not penalize car owners for owning an older car, and we've maybe only made them retrofit seat belts (but not air bags, crumple zones, etc) into them, despite the fact that some of those big old boats can be quite deadly to other drivers in today's more easily-damaged cars. We've increased auto safety by mandating better cars, and by penalizing users who fail to make use of the safety features.
There is only so much "proper security" you can expect the average PC user to do. The average PC user expects to be able to check e-mail, view the web, edit some documents, and listen to some songs. The average car driver expects to be able to drive around and do things. You can try to mandate that the average car driver must change their own oil, just as you can try to mandate that the average computer user must do what you've naively referred to as "proper security", but the reality is that grandma doesn't want to get under her car, doesn't have the knowledge or tools, and would rather spend $30 at SpeedyLube.

If we cannot make security a similarly easy target for the end-user, rather than telling them to "take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer," then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure; our applications need to be less permissive, probably way less permissive, probably even sandboxed by default; our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations, to more comprehensive things such as mandatory virus scanning by e-mail providers, etc. There's a lot that could be done that most on the technology side of things have been unwilling to commit to.
We can make their Internet cars safer for them - but we largely haven't. Now we can all look forward to misguided government efforts to mandate some of this stuff.
... JG
-- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
No, but we can and do require cars to have functional brakes and minimum tread depths, and to be tested periodically.
Obviously this is acceptable because the failure modes for cars are worse, but the proposed solution is less intrusive, being after the fact.
Grandma does not go check her tread depth or check her own brake pads and discs for wear. She lets the shop do that. I was hoping I didn't have to get pedantic and that people could differentiate between "I pay the shop a few bucks to do that for me" and "I take responsibility personally to drive my car in an appropriate fashion" (which includes things like "I take my car to the shop periodically for maintenance I don't have the skills to do myself"), but there we have it.

My point: We haven't designed computers for end users appropriately. It is not the fault of the end user that they're driving around the crapmobile we've provided for them. If you go to the store to get a new computer, you get a choice of crapmobiles all with engines by the same company, unless you go to the fruit store, in which case you get a somewhat less obviously vulnerable engine by a different company. The users don't know how to take apart the engines and repair them, and the engines aren't usefully protected sufficiently to ensure that they don't get fouled, so you have a Problem.

... JG
On Jun 9, 2010, at 5:28 AM, Joe Greco wrote:
No, but we can and do require cars to have functional brakes and minimum tread depths, and to be tested periodically.
Obviously this is acceptable because the failure modes for cars are worse, but the proposed solution is less intrusive, being after the fact.
Grandma does not go check her tread depth or check her own brake pads and discs for wear. She lets the shop do that. I was hoping I didn't have to get pedantic and that people could differentiate between "I pay the shop a few bucks to do that for me" and "I take responsibility personally to drive my car in an appropriate fashion" (which includes things like "I take my car to the shop periodically for maintenance I don't have the skills to do myself"), but there we have it.
Whether grandma measures the tread depth herself or takes it to the shop, the point is that grandma is expected to have tires with sufficient tread depth and working brakes when she operates the car. If not, she's liable. If she drives like the little old lady from Pasadena, she's liable for the accidents she causes.
My point: We haven't designed computers for end users appropriately. It is not the fault of the end user that they're driving around the crapmobile we've provided for them. If you go to the store to get a new computer, you get a choice of crapmobiles all with engines by the same company, unless you go to the fruit store, in which case you get a somewhat less obviously vulnerable engine by a different company. The users don't know how to take apart the engines and repair them, and the engines aren't usefully protected sufficiently to ensure that they don't get fouled, so you have a Problem.
The end user should be able to recover from the responsible manufacturer for the design flaws in the hardware/software they are driving. Agreed. That is how it works with cars, and that's how it should work with computers.

What I don't want to see, which is what you are advocating: I don't want to see the end users who do take responsibility, drive well-designed vehicles with proper seat belts and safety equipment, stay in their lane, and do not cause accidents held liable for the actions of others. Why should we penalize those who have done no wrong simply because they happen to be a minority?

Owen
Grandma does not go check her tread depth or check her own brake pads and discs for wear. She lets the shop do that. I was hoping I didn't have to get pedantic and that people could differentiate between "I pay the shop a few bucks to do that for me" and "I take responsibility personally to drive my car in an appropriate fashion" (which includes things like "I take my car to the shop periodically for maintenance I don't have the skills to do myself"), but there we have it.
Whether grandma measures the tread depth herself or takes it to the shop, the point is that grandma is expected to have tires with sufficient tread depth and working brakes when she operates the car. If not, she's liable. If she drives like the little old lady from Pasadena, she's liable for the accidents she causes.
There is no "shop" that the average computer owner should take their computer to, and unlike a car, anything that might seem to require some periodic maintenance is typically automated (OS updates, virus updates, etc). There are places like NerdForce that you can take your computer to, but you're likely to be sold a load of crap, and you can even take the same computer to five different services and get wildly differing results (and wildly differing bills). There's no standardization, and part of *that* is due to the way we've allowed end user operating systems to be designed.
My point: We haven't designed computers for end users appropriately. It is not the fault of the end user that they're driving around the crapmobile we've provided for them. If you go to the store to get a new computer, you get a choice of crapmobiles all with engines by the same company, unless you go to the fruit store, in which case you get a somewhat less obviously vulnerable engine by a different company. The users don't know how to take apart the engines and repair them, and the engines aren't usefully protected sufficiently to ensure that they don't get fouled, so you have a Problem.
The end user should be able to recover from the responsible manufacturer for the design flaws in the hardware/software they are driving. Agreed. That is how it works in cars, that's how it should work in computers.
It doesn't; look at that wonderful EULA. Want to fix that? Be my guest, seriously.
What I don't want to see, which is what you are advocating: I don't want to see the end users who do take responsibility, drive well-designed vehicles with proper seat belts and safety equipment, stay in their lane, and do not cause accidents held liable for the actions of others. Why should we penalize those who have done no wrong simply because they happen to be a minority?
I agree. On the other hand, what about those people who genuinely didn't do anything wrong, and their computer still got Pwned?
From this perspective: Our technology sucks.
... JG
What I don't want to see, which is what you are advocating: I don't want to see the end users who do take responsibility, drive well-designed vehicles with proper seat belts and safety equipment, stay in their lane, and do not cause accidents held liable for the actions of others. Why should we penalize those who have done no wrong simply because they happen to be a minority?
I agree. On the other hand, what about those people who genuinely didn't do anything wrong, and their computer still got Pwned?
Fiction. At the very least, if you connected a system to the network and it got Pwned, you were negligent in your behavior, if not malicious. Negligence is still wrong, even if not malice.

Owen
What I don't want to see, which is what you are advocating: I don't want to see the end users who do take responsibility, drive well-designed vehicles with proper seat belts and safety equipment, stay in their lane, and do not cause accidents held liable for the actions of others. Why should we penalize those who have done no wrong simply because they happen to be a minority?
I agree. On the other hand, what about those people who genuinely didn't do anything wrong, and their computer still got Pwned?
Fiction.
At the very least, if you connected a system to the network and it got Pwned, you were negligent in your behavior, if not malicious. Negligence is still wrong, even if not malice.
So, just so we're clear here, I go to Best Buy, I buy a computer, I bring it home, plug it into my cablemodem, and am instantly Pwned by the non-updated Windows version on the drive plus the incessant cable modem scanning, resulting in a bot infection... therefore I am negligent?

Do you actually think a judge would find that negligent, or is this just your own personal definition of negligence? Because I doubt that a judge, or even an ordinary person, could possibly consider it such.

... JG
On 6/9/2010 12:17, Joe Greco wrote:
What I don't want to see which you are advocating... I don't want to see the end users who do take responsibility, drive well designed vehicles with proper seat belts and safety equipment, stay in their lane, and do not cause accidents held liable for the actions of others. Why should we penalize those that have done no wrong simply because they happen to be a minority?
I agree. On the other hand, what about those people who genuinely didn't do anything wrong, and their computer still got Pwned?
Fiction.
At the very least, if you connected a system to the network and it got Pwned, you were negligent in your behavior, if not malicious. Negligence is still wrong, even if not malice.
So, just so we're clear here, I go to Best Buy, I buy a computer, I bring it home, plug it into my cablemodem, and am instantly Pwned by the non-updated Windows version on the drive plus the incessant cable modem scanning, resulting in a bot infection... therefore I am negligent?
Do you actually think a judge would find that negligent, or is this just your own personal definition of negligence? Because I doubt that a judge, or even an ordinary person, could possibly consider it such.
One can argue (and I will) that there is indeed some culpability because the buyer bought the cheapest version of everything and connected it to a negligent provider's system.

--
Somebody should have said: A democracy is two wolves and a lamb voting on what to have for dinner. Freedom under a constitutional republic is a well armed lamb contesting the vote.
Requiescas in pace o email
Ex turpi causa non oritur actio
Eppure si rinfresca
ICBM Targeting Information: http://tinyurl.com/4sqczs http://tinyurl.com/7tp8ml
So, just so we're clear here, I go to Best Buy, I buy a computer, I bring it home, plug it into my cablemodem, and am instantly Pwned by the non-updated Windows version on the drive plus the incessant cable modem scanning, resulting in a bot infection... therefore I am negligent?
Do you actually think a judge would find that negligent, or is this just your own personal definition of negligence? Because I doubt that a judge, or even an ordinary person, could possibly consider it such.
One can argue (and I will) that there is indeed some culpability because the buyer bought the cheapest version of everything and connected it to a negligent provider's system.
Really? Because the *cheapest* version of everything seems to run the same OS as the most *expensive* version of everything.

Best Buy -> Computers -> Desktop Computers -> Towers Only -> a Presario Sempron with Windows 7 Home Premium, $279.
Best Buy -> Computers -> Desktop Computers -> Desktop Packages -> a Dell Intel Core i5 package with Windows 7 Home Premium, $859.

So, since I mentioned Best Buy, but didn't mention anything about what was paid, I am hard pressed to imagine the basis for your claim, since the cheapest PC I was able to quickly locate runs the same OS as the most expensive PC I was able to quickly locate (it's of course possible that there are cheaper and more expensive models at BB, as well as gear that does not run W7HP).

Further, since the incumbent provider in many areas is also the *only* provider, I wonder what theory you use to hold the customer responsible for their choice of provider, or where they're supposed to get information on the "negligence" of a provider so that they can make informed choices of this sort.

And are you really suggesting that people should expect to get Pwned if they buy an inexpensive computer, but not if they buy a better one? I can understand you saying "they can expect the hard drive to fail sooner" or "the fans will burn out faster", because that seems to be borne out by actual real-world experience, but I wasn't aware that the security quality of Windows varied significantly based on the cost of the computer.

... JG
On Wed, 09 Jun 2010 12:32:54 CDT, Larry Sheldon said:
On 6/9/2010 12:17, Joe Greco wrote:
So, just so we're clear here, I go to Best Buy, I buy a computer, I bring it home, plug it into my cablemodem, and am instantly Pwned by the non-updated Windows version on the drive plus the incessant cable modem scanning, resulting in a bot infection... therefore I am negligent?
Do you actually think a judge would find that negligent, or is this just your own personal definition of negligence? Because I doubt that a judge, or even an ordinary person, could possibly consider it such.
One can argue (and I will) that there is indeed some culpability because the buyer bought the cheapest version of everything and connected it to a negligent provider's system.
And the average consumer can avoid the culpability in this scenario, how, exactly?

"If people place a nice chocky in their mouth, they don't want their cheeks pierced"
http://orangecow.org/pythonet/sketches/crunchy.htm
Once upon a time, Alexander Harrowell <a.harrowell@gmail.com> said:
No, but we can and do require cars to have functional brakes and minimum tread depths, and to be tested periodically.
Not in this state.

--
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
On 6/9/2010 08:08, Chris Adams wrote:
Once upon a time, Alexander Harrowell <a.harrowell@gmail.com> said:
No, but we can and do require cars to have functional brakes and minimum tread depths, and to be tested periodically.
Not in this state.
You might not have the state inspection rip-off, but I'll bet that if your state accepts federal highway money, you have mechanical condition standards that include tires, brakes, seat belts and a lot of other things.
On Wed, Jun 09, 2010, Larry Sheldon wrote:
You might not have the state inspection rip-off, but I'll bet that if your state accepts federal highway money, you have mechanical condition standards that include tires, brakes, seat belts and a lot of other things.
.. and a change in the minimum drinking age? Adrian (Before you go "That's not relevant to the discussion", think again. Hard.)
I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to go get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure; our applications need to be less permissive, probably way less permissive, probably even sandboxed by default; and our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations to more comprehensive things such as mandatory virus scanning by e-mail providers. There's a lot that could be done that most on the technology side of things have been unwilling to commit to.
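The source-address validation that BCP38 calls for can be sketched in a few lines. This is a minimal illustration only: the customer names and prefixes below are invented, and a real deployment lives in router ACLs or strict uRPF, not in application code.

```python
import ipaddress

# Hypothetical per-port customer assignments; names and prefixes are
# invented for illustration.
CUSTOMER_PREFIXES = {
    "cust-a": ipaddress.ip_network("192.0.2.0/25"),
    "cust-b": ipaddress.ip_network("198.51.100.0/24"),
}

def bcp38_permits(customer: str, source_ip: str) -> bool:
    """Allow a packet out of a customer port only if its source address
    lies inside the prefix routed to that customer; anything else is
    spoofed and should be dropped at the edge."""
    prefix = CUSTOMER_PREFIXES.get(customer)
    if prefix is None:
        return False  # unknown port: drop by default
    return ipaddress.ip_address(source_ip) in prefix
```

Under these assumed assignments, a packet sourced from 192.0.2.10 is allowed out of cust-a's port, while one sourced from 203.0.113.5 on the same port is refused as spoofed.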
Great comments Joe, and I agree with you that there is a lot more that can be done and should be done, but there is one main difference with your account of the auto industry: all those changes were pushed by evolving regulation and changes in the law and enforcement. Going back then to a previous question, do we want more/any regulation? Cheers Jorge
I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to go get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure; our applications need to be less permissive, probably way less permissive, probably even sandboxed by default; and our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations to more comprehensive things such as mandatory virus scanning by e-mail providers. There's a lot that could be done that most on the technology side of things have been unwilling to commit to.
Great comments Joe, and I agree with you that there is a lot more that can be done and should be done, but there is one main difference with your account of the auto industry: all those changes were pushed by evolving regulation and changes in the law and enforcement.
Oh, good, you GOT my point.
Going back then to a previous question, do we want more/any regulation ?
We're going to get it, I think, because collectively we're too stupid to self-regulate.

Locally, for example, we implement BCP38, we screen potential customers, and we have an abuse desk that will be happy to help. If you complain to us that you're getting packets from a customer here that contain the data octet 0x65, we'll put a stop to it (though you'll probably stop getting packets entirely), because we feel that it's being a good neighbour to not send things that we've been told are not wanted.

Most network providers are in the unfortunate position of having allowed themselves to get too swamped and/or don't care to begin with. Running a dirty network is the norm, just as running Windows (sorry Gates) is the norm, just as running Internet Explorer is something of a norm, just as running with Administrator privs is the norm, etc. We've allowed horrible practices to become the norm. It's exceedingly hard to fix a bad norm.

... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On 6/9/10 6:27 AM, Jorge Amodio wrote:
Going back then to a previous question, do we want more/any regulation ?
Laws and regulation exist because people can't behave civilly and be expected to respect the rights/boundaries/property of others.

CAN-SPAM exists because the e-mail marketing business refused to self-regulate and respect the wishes of consumers/administrators. FDCPA exists because the debt collectors couldn't resist the temptation to harass and intimidate consumers instead of behaving ethically.

It's just a matter of time, and really unavoidable. The thing is, these industries have no one to blame but themselves. In all cases, these laws/regulations only came into effect AFTER situations got out of control.

Lately, the courts have been ruling that companies like LimeWire are responsible for their products being used for piracy/downloading because they knew what was going on, but were turning a blind eye. Why not apply the same standards to ISPs? If it can be shown that you had knowledge of specific abuse coming from your network, but for whatever reason, opted to ignore it and turn a blind eye, then you are responsible.

When I see abuse from my network or am made aware of it, I isolate and drop on my edge the IPs in question, then investigate and respond. Most times, it takes me maybe 10-15 minutes to track down the user responsible, shut off their server or host, then terminate their stupid self. A little bit of effort goes a long way. But, if you refuse to put in the effort (I'm looking at you, GoDaddy Abuse Desk), then of course the problems won't go away.

-- Brielle Bruns The Summit Open Source Development Group http://www.sosdg.org / http://www.ahbl.org
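The "track down the user responsible" step described above is, at its core, a prefix lookup against the provider's assignment records. A hedged sketch of that lookup follows; the assignment table and customer names are invented, and a real ISP would pull this data from RADIUS/DHCP logs or an IPAM database rather than a hard-coded list.

```python
import ipaddress

# Invented sample assignment table: (prefix, account) pairs.
ASSIGNMENTS = [
    ("198.51.100.0/28", "hosting-cust-17"),
    ("198.51.100.16/28", "hosting-cust-42"),
    ("203.0.113.0/26", "dsl-pool-3"),
]

def responsible_customer(abuse_ip: str):
    """Map an IP from an abuse report back to the customer it was
    assigned to, so the account can be isolated and investigated."""
    ip = ipaddress.ip_address(abuse_ip)
    for prefix, customer in ASSIGNMENTS:
        if ip in ipaddress.ip_network(prefix):
            return customer
    return None  # not ours; the report is misdirected
```

With the sample table, an abuse report citing 198.51.100.20 resolves to "hosting-cust-42", whose host can then be dropped at the edge while the investigation proceeds.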
On Jun 9, 2010, at 8:26 AM, Brielle Bruns wrote:
On 6/9/10 6:27 AM, Jorge Amodio wrote:
Going back then to a previous question, do we want more/any regulation ?
Laws and regulation exist because people can't behave civilly and be expected to respect the rights/boundaries/property of others.
CAN-SPAM exists because the e-mail marketing business refused to self-regulate and respect the wishes of consumers/administrators.
Which is good, because it certainly eliminated most of the SPAM. -- NOT!
FDCPA exists because the debt collectors couldn't resist the temptation to harass and intimidate consumers instead of behaving ethically.
And of course, it has caused them all to do so, now, right? -- NOT!
It's just a matter of time, and really unavoidable. The thing is, these industries have no one to blame but themselves. In all cases, these laws/regulations only came into effect AFTER situations got out of control.
Software has been out of control for a long time, and I hope that the gov't will start by ruling the "not responsible for our negligence or the damage it causes" clauses of software licenses invalid. That would actually be a major positive step because it would allow consumers to sue software manufacturers for their defects and the damages they cause, leading to a radical change in the nature of how software developers approach responsibility for quality in their products. Right now, most consumer operating systems are "unsafe at any speed".
Lately, the courts have been ruling that companies like LimeWire are responsible for their products being used for piracy/downloading because they knew what was going on, but were turning a blind eye.
This is a positive step, IMHO, but, now companies like Apple and Micr0$0ft need to be held to similar standards.
Why not apply the same standards to ISPs? If it can be shown that you had knowledge of specific abuse coming from your network, but for whatever reason, opted to ignore it and turn a blind eye, then you are responsible.
I agree.
When I see abuse from my network or am made aware of it, I isolate and drop on my edge the IPs in question, then investigate and respond. Most times, it takes me maybe 10-15 minutes to track down the user responsible, shut off their server or host, then terminate their stupid self.
Yep.
A little bit of effort goes a long way. But, if you refuse to put in the effort (I'm looking at you, GoDaddy Abuse Desk), then of course the problems won't go away.
Agreed. Owen
On 6/9/2010 15:56, Owen DeLong wrote:
On Jun 9, 2010, at 8:26 AM, Brielle Bruns wrote:
On 6/9/10 6:27 AM, Jorge Amodio wrote:
Going back then to a previous question, do we want more/any regulation ?
Laws and regulation exist because people can't behave civilly and be expected to respect the rights/boundaries/property of others.
CAN-SPAM exists because the e-mail marketing business refused to self-regulate and respect the wishes of consumers/administrators.
Which is good, because it certainly eliminated most of the SPAM. -- NOT!
It is actually an outstanding example of something I spoke of here earlier. Without any exception that I know of, regulations are written to protect the entrenched. CAN-SPAM was written to protect spammers, not to prevent anything important to them.
On Jun 9, 2010, at 2:05 PM, Larry Sheldon wrote:
On 6/9/2010 15:56, Owen DeLong wrote:
On Jun 9, 2010, at 8:26 AM, Brielle Bruns wrote:
On 6/9/10 6:27 AM, Jorge Amodio wrote:
Going back then to a previous question, do we want more/any regulation ?
Laws and regulation exist because people can't behave civilly and be expected to respect the rights/boundaries/property of others.
CAN-SPAM exists because the e-mail marketing business refused to self-regulate and respect the wishes of consumers/administrators.
Which is good, because it certainly eliminated most of the SPAM. -- NOT!
It is actually an outstanding example of something I spoke of here earlier.
Without any exception that I know of, regulations are written to protect the entrenched. CAN-SPAM was written to protect spammers, not to prevent anything important to them.
Actually, as much as it would make so much more sense if that were the case, it simply isn't true. CAN-SPAM was written to be a compromise that was supposed to allow consumers to opt out of receiving SPAM and prevent SPAMMERs from sending unwanted messages. Sadly, of course, it hasn't done either one. Owen
Owen DeLong wrote:
Software has been out of control for a long time and I hope that the gov't will start by ruling the "not responsible for our negligence or the damage it causes" clauses of software licenses invalid.
The beauty of my "attractive nuisance" argument is that the EULA doesn't shield Microsoft from the damage their software causes to a 3rd party such as the ISP who has to deal with the botnet infections of their customers. jc
On Jun 9, 2010, at 11:05 PM, JC Dill wrote:
Owen DeLong wrote:
Software has been out of control for a long time and I hope that the gov't will start by ruling the "not responsible for our negligence or the damage it causes" clauses of software licenses invalid.
The beauty of my "attractive nuisance" argument is that the EULA doesn't shield Microsoft from the damage their software causes to a 3rd party such as the ISP who has to deal with the botnet infections of their customers. jc
Yep... Much the same as my suggestion merely involves applying the same product liability standards as every other industry faces to software. Owen
On 6/9/10 2:56 PM, Owen DeLong wrote:
On Jun 9, 2010, at 8:26 AM, Brielle Bruns wrote:
On 6/9/10 6:27 AM, Jorge Amodio wrote:
Going back then to a previous question, do we want more/any regulation ?
Laws and regulation exist because people can't behave civilly and be expected to respect the rights/boundaries/property of others.
CAN-SPAM exists because the e-mail marketing business refused to self-regulate and respect the wishes of consumers/administrators.
Which is good, because it certainly eliminated most of the SPAM. -- NOT!
FDCPA exists because the debt collectors couldn't resist the temptation to harass and intimidate consumers instead of behaving ethically.
And of course, it has caused them all to do so, now, right? -- NOT!
These may not solve all problems, but they do give victims (at least in the case of debt collectors) the ability to club them in the face in court a few times, to the tune of a thousand bucks or so an incident. Nothing is more satisfying than being able to offer a debt collector the option to settle for $X amount. :)
Lately, the courts have been ruling that companies like LimeWire are responsible for their products being used for piracy/downloading because they knew what was going on, but were turning a blind eye.
This is a positive step, IMHO, but, now companies like Apple and Micr0$0ft need to be held to similar standards.
Problem is, Microsoft and Apple, though being lax in their coding practices, can't entirely help it. Open Source software has the same problems, but do you really think that we should be charging Linus every time a Linux box is owned? There comes a point where a program is so large and expansive that holes/exploits are a fact of life.
Why not apply the same standards to ISPs? If it can be shown that you had knowledge of specific abuse coming from your network, but for whatever reason, opted to ignore it and turn a blind eye, then you are responsible.
I agree.
When I see abuse from my network or am made aware of it, I isolate and drop on my edge the IPs in question, then investigate and respond. Most times, it takes me maybe 10-15 minutes to track down the user responsible, shut off their server or host, then terminate their stupid self.
Yep.
A little bit of effort goes a long way. But, if you refuse to put in the effort (I'm looking at you, GoDaddy Abuse Desk), then of course the problems won't go away.
Agreed.
Now if only we could get certain providers to put some effort into it... -- Brielle Bruns The Summit Open Source Development Group http://www.sosdg.org / http://www.ahbl.org
Going back then to a previous question, do we want more/any regulation ?
Yes. All vulnerable industries should have their use of network communications regulated. This means all power stations, electricity line operators, dam gate operators, etc. They should all be required to meet a standard of practice for secure network communications, an air gap between SCADA networks and all other networks, and annual network inspections to ensure compliance.

If any organization operates an infrastructure which could be vulnerable to cyberattack that would damage the country in which they operate, that organization needs to be regulated to ensure that their networks cannot be exploited for cyberattack purposes. That is the correct and measured response, which does not involve the military except possibly in a security advisory role, and which is within the powers of governments.

I would expect that the increased awareness of network security that resulted would pay dividends in business and home use of networks. --Michael Dillon
I would expect that the increased awareness of network security that resulted would pay dividends in business and home use of networks.
I'd expect a lot of nice business for audit firms with the right government connections, and another checklist with a magic acronym that has everything to do with security theatre and nothing to do with either actual security or the reality of operating a network. But perhaps I'm jaded from dealing with current auditors. Regards, Tim.
Tim Franklin wrote:
and another checklist with a magic acronym that has everything to do with security theatre and nothing to do with either actual security or the reality of operating a network.

Checklists come in handy. In fact, if many were followed (BCP checklists, appropriate industry-standard firewall and system rules), the net would be a cleaner place. What I've seen in many responses is foot-dragging: "Ah, why bother, it won't do anything to stop it..." Without even trying.

It all begins with one's own network. The entire concept of peering was built on trust of the peer. Would you knowingly allow someone to share your hallway without taking precautionary measures, or at least keeping a vigilant eye? When you see something out of the norm, do you continue to allow it without saying anything, waiting for your neighbor to speak up? In doing so, how can you be assured the individual won't try to creep up on your property?
// JC Dill wrote: Yes, ISPs are going to have to "handle" the problem. But, IMHO the root cause of the problem starts in Redmond, and ISPs should sue Redmond for the lack of suitable security in their product, rendering it an attractive nuisance and requiring ISPs to clean up after Redmond's mess. It's not fair to expect ISPs to shoulder this burden, and it's not fair to pass on the cost to customers as a blanket surcharge (and it won't work from a business standpoint), as not all customers use Microsoft's virus-vector software. And it's not really fair to expect the end customer to shoulder this burden when it's Microsoft's fault for failing to properly secure their software. But end-user customers don't have the resources to sue Microsoft, and then there's that whole EULA problem. ISPs who are NOT a party to the EULA between Microsoft and the user, but who are impacted by Microsoft's shoddy security, can (IMHO) make a valid claim that Microsoft created an attractive nuisance (improperly secured software) and should be held accountable for the vandals' use thereof, used to access and steal resources (bandwidth, etc.) from the ISP thru the ISP's customers' infested Windows computers. //

More finger pointing here. Should MS now sue Adobe for shoddy coding because Adobe's PDF reader caused a compromise (improperly secured software)?

Let's take it from the top down for a moment and focus on what is going on. Operating systems are insecure; it doesn't matter if one was produced by a company in Redmond or hacked together on IRC. ANY operating system that is in an attacking state (dishing out malware, attacking other machines) is doing so via a network. If/when you see it, do you shrug it off and say "not my problem, it's because of someone's lack of oversight in Redmond" when you have the capability to stop it? ISPs don't "have to" handle the problem; they SHOULD handle the problem.

-- =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ J. Oquendo SGFA, SGFE, C|EH, CNDA, CHFI, OSCP, CPT "It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you'll do things differently." - Warren Buffett 227C 5D35 7DCB 0893 95AA 4771 1DCE 1FD1 5CCD 6B5E http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x5CCD6B5E
J. Oquendo wrote:
More finger pointing here.
You say that like it's a bad thing. I'm pointing fingers at the company that has a long history of selling software with shoddy security (including releasing newer versions with restored vulnerabilities that were found and "fixed" years earlier), and then passing the buck on fixing the issues it causes by hiding behind their EULA. Their EULA protects Microsoft from their own customers, but it does NOT protect Microsoft from the effects the damage causes on OTHERS who are not parties to the EULA. This is where "attractive nuisance" comes in.
ISP's don't "have to" handle the problem, they SHOULD handle the problem.
This whole thread is about ISPs not handling the problem and allowing the problem to affect others beyond the ISP. In this case we could claim the ISP is also allowing an attractive nuisance to damage others and hold that ISP responsible for the damage that extends outside their network. However, we don't need a legal framework to solve THAT problem - we can address it with appropriate network blocks etc. (UDP-style) jc
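A provider-wide block of the kind alluded to here ("UDP-style", after the old Usenet Death Penalty) amounts to aggregating the offending network's prefixes into the smallest covering set and null-routing those. A hedged sketch using only the Python standard library (the prefixes below are invented examples, not any real provider's announcements):

```python
import ipaddress

def provider_block(prefixes):
    """Collapse a list of announced prefixes into the minimal covering
    set, suitable for feeding a null-route or ACL list."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]
```

Adjacent prefixes merge into their covering supernet, so the block list stays short even when the target announces many small routes.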
On Thu, 10 Jun 2010 12:27:18 BST, Michael Dillon said:
If any organization operates an infrastructure which could be vulnerable to cyberattack that would damage the country in which they operate, that organization needs to be regulated to ensure that their networks cannot be exploited for cyberattack purposes.
s/cannot be/minimize the risk of/ And "would damage the country" is a very fuzzy concept that you really don't want to go anywhere near. Remember Microsoft arguing that a Federal judge shouldn't impose an injunction that was going to make them miss a ship date, on the grounds that the resulting delay would cause lost productivity at customer sites and harm the economy? (Mind you, I thought MS was making a good case they *should* be regulated, if their ship dates actually had that much influence.. ;)
And "would damage the country" is a very fuzzy concept that you really don't want to go anywhere near.
I wasn't drafting legislation; I was introducing a concept. I would expect that actual legislation would explicitly list which industries were subject to such regulation. Otherwise it might include all Internet PoPs and datacenters which would be rather dumb. --Michael Dillon
On Jun 9, 2010, at 5:02 AM, Joe Greco wrote:
So? If said end customer is operating a network-connected system without sufficient knowledge to properly maintain it and prevent it from doing mischief to the rest of the network, why should the rest of us subsidize her negligence? I don't see where making her pay is a bad thing.
I see that you don't understand that.
Seems to me that you are the one not understanding...

I can't refinance my mortgage right now to take advantage of the current interest rates. Why? Because irresponsible people got into loans they couldn't afford and engaged in speculative transactions. Their failure resulted in a huge drop in value to my house which brought me below the magic 80% loan-to-value ratio, which, because of said same bad actors, became a legal restriction instead of a target number around which lenders had some flexibility. So, because I had a house I could afford and a reasonable mortgage, I'm now getting penalized by paying higher taxes to cover mortgage absorptions, reductions, and modifications for these irresponsible people. I'm getting penalized by paying higher interest rates because, due to the damage they did to my property value and the laws they forced to be created, I can't refinance.

I'm mad as hell and frankly, I don't want to take it any more. Do you see that? Do you still think I don't have a legitimate point on this?

I'm tired of subsidizing stupidity and bad actors. It's too expensive. I don't want to do it any more. We already have too many stupid people and bad actors. We really don't need to subsidize or encourage the creation of more.
The internet may be a vast ocean where bad guys keep dumping garbage, but, if software vendors stopped building highly exploitable code and ISPs started disconnecting abusing systems rapidly, it would have a major effect on the constantly changing currents. If abuse departments were fully funded by cleanup fees charged to negligent users who failed to secure their systems properly, it would both incentivize users to do proper security _AND_ provide for more responsive abuse departments as issues are reduced and their budget scales linearly with the amount of abuse being conducted.
The reality is that things change. Forty-three years ago, you could still buy a car that didn't have seat belts. Thirty years ago, most people still didn't wear seat belts. Twenty years ago, air bags began appearing in large volume in passenger vehicles. Throughout this period, cars have been de-stiffened with crumple zones, etc., in order to make them safer for passengers in the event of a crash. Mandatory child seat laws have been enacted at various times throughout. A little more than ten years ago, air bags were mandatory. Ten years ago, LATCH clips for child safety seats became mandatory. We now have side impact air bags, etc.
Sure.
Generally speaking, we do not penalize car owners for owning an older car, and we've maybe only made them retrofit seat belts (but not air bags, crumple zones, etc) into them, despite the fact that some of those big old boats can be quite deadly to other drivers in today's more easily-damaged cars. We've increased auto safety by mandating better cars, and by penalizing users who fail to make use of the safety features.
Right, but, owners of older cars are primarily placing themselves at risk, not others. In this case, it's a question of others putting me at risk. That, generally, isn't tolerated.
There is only so much "proper security" you can expect the average PC user to do. The average PC user expects to be able to check e-mail, view the web, edit some documents, and listen to some songs. The average car driver expects to be able to drive around and do things. You can try to mandate that the average car driver must change their own oil, just as you can try to mandate that the average computer must do what you've naively referred to as "proper security", but the reality is that grandma doesn't want to get under her car, doesn't have the knowledge or tools, and would rather spend $30 at SpeedyLube. If we can not make security a similarly easy target for the end-user, rather than telling them to "take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer," then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I disagree. It used to be that anyone could drive a car. Today, you need to take instruction on driving and pass a test showing you are competent to operate a motor vehicle before you are allowed to drive legally. Things change, as you say. I have no problem with the same requirement being added to attaching a computer to the network. If you drive a car in a reckless manner so as to endanger others, you are criminally liable for violating the safe driving laws as well as civilly liable for the damages you cause. Why should operating an unsafe computer be any different?
I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to go get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure; our applications need to be less permissive, probably way less permissive, probably even sandboxed by default; and our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations to more comprehensive things such as mandatory virus scanning by e-mail providers. There's a lot that could be done that most on the technology side of things have been unwilling to commit to.
I'm not out to target specific products. Yes, I'll celebrate the death of our favorite convicted felon in Redmond, but, that's not the point. I don't have a CompSci degree specializing in that stuff and I seem to be able to run clean systems. I don't have a CompSci degree at all. It's not that hard to run clean systems, actually. Mostly it takes not being willing to click yes to every download and exercising minimal judgment about which web sites you choose to trust.

The point is that if I run a clean system, why should I have to pay a subsidy to those that do not? I'm tired of this mentality that says let's penalize the good actors to subsidize the bad actors. I'm tired of it with mortgages. I'm tired of it with businesses. I'm tired of watching the government, time after time, reward bad behavior and punish good behavior and then wonder why they get more bad and less good behavior.
We can make their Internet cars safer for them - but we largely haven't. Now we can all look forward to misguided government efforts to mandate some of this stuff.
I'm not opposed to making operating systems and applications safer. As I said, just as with cars, the manufacturers should be held liable by the consumers. However, the consumer that is operating the car that plows a group of pedestrians is liable to the pedestrians. The manufacturer is usually liable to the operator through subrogation. Owen
I'm not opposed to making operating systems and applications safer. As I said, just as with cars, the manufacturers should be held liable by the consumers. However, the consumer that is operating the car that plows a group of pedestrians is liable to the pedestrians. The manufacturer is usually liable to the operator through subrogation.
That's why, at least in the US, by *regulation* you must have insurance to be able to operate a car; instead of mitigating the safety issues that a teenager texting while driving represents, we deal with the consequences. Perhaps we have to call on the insurance industry to come up with something. Cheers Jorge
Once upon a time, Jorge Amodio <jmamodio@gmail.com> said:
That's why, at least in the US, by *regulation* you must have insurance to be able to operate a car; instead of mitigating the safety issues that a teenager texting while driving represents, we deal with the consequences.
The insurance requirement is a state-by-state thing. It was only added here a few years ago, and I don't think it is universal.
On Jun 9, 2010, at 6:09 AM, Chris Adams wrote:
Once upon a time, Jorge Amodio <jmamodio@gmail.com> said:
That's why, at least in the US, by *regulation* you must have insurance to be able to operate a car; instead of mitigating the safety issues that a teenager texting while driving represents, we deal with the consequences.
The insurance requirement is a state-by-state thing. It was only added here a few years ago, and I don't think it is universal.
I believe at least 48, if not 50 states now have compulsory financial responsibility laws. However, even if you didn't have insurance, that never exempted you from liability, it just made you less likely to be able to meet your obligations under that liability. Owen
On 6/9/2010 08:09, Chris Adams wrote:
Once upon a time, Jorge Amodio <jmamodio@gmail.com> said:
That's why, at least in the US, by *regulation* you must have insurance to be able to operate a car; instead of mitigating the safety issues that a teenager texting while driving represents, we deal with the consequences.
The insurance requirement is a state-by-state thing. It was only added here a few years ago, and I don't think it is universal.
Similar answer as the one for the brakes and tires thing. Implementation may vary from state to state, just like the mechanical standards thing. When last I lived in California, there was no "insurance" requirement, but there was a "proof of financial responsibility" requirement that was most easily met (for most people) by carrying insurance to certain standards for Public Liability and Property Damage.
On Jun 9, 2010, at 5:02 AM, Joe Greco wrote:
So? If said end customer is operating a network-connected system without sufficient knowledge to properly maintain it and prevent it from doing mischief to the rest of the network, why should the rest of us subsidize her negligence? I don't see where making her pay is a bad thing.
I see that you don't understand that.
Seems to me that you are the one not understanding...
I can't refinance my mortgage right now to take advantage of the current interest rates. Why? Because irresponsible people got into loans they couldn't afford and engaged in speculative transactions. Their failure resulted in a huge drop in value to my house which brought me below the magic 80% loan to value ratio, which, because of said same bad actors became a legal restriction instead of a target number around which lenders had some flexibility. So, because I had a house I could afford and a reasonable mortgage, I'm now getting penalized by paying higher taxes to cover mortgage absorptions, reductions, and modifications for these irresponsible people. I'm getting penalized by paying higher interest rates because due to the damage they did to my property value and the laws they forced to be created, I can't refinance.
I'm mad as hell and frankly, I don't want to take it any more.
Do you see that? Do you still think I don't have a legitimate point on this?
I'm tired of subsidizing stupidity and bad actors. It's too expensive. I don't want to do it any more. We already have too many stupid people and bad actors. We really don't need to subsidize or encourage the creation of more.
A doesn't really seem connected to B.
The internet may be a vast ocean where bad guys keep dumping garbage, but, if software vendors stopped building highly exploitable code and ISPs started disconnecting abusing systems rapidly, it would have a major effect on the constantly changing currents. If abuse departments were fully funded by cleanup fees charged to negligent users who failed to secure their systems properly, it would both incentivize users to do proper security _AND_ provide for more responsive abuse departments as issues are reduced and their budget scales linearly with the amount of abuse being conducted.
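The linear-scaling claim in the cleanup-fee argument above can be made concrete with a toy model. Everything here is invented for illustration (the costs, the multiplier, the function names); nothing in it comes from the thread. The point is only that if each confirmed incident is billed a fee covering its handling cost, the abuse desk's budget tracks abuse volume by construction:

```python
# Toy model of an abuse desk funded by per-incident cleanup fees.
# All figures are hypothetical, chosen purely for illustration.

HANDLING_COST = 40.0   # assumed staff cost to work one incident
OVERHEAD = 1.25        # assumed multiplier for tooling, management, etc.

def cleanup_fee() -> float:
    """Fee billed to the negligent customer per confirmed incident."""
    return HANDLING_COST * OVERHEAD

def abuse_budget(incidents: int) -> float:
    """Budget available in a period; linear in abuse volume by construction."""
    return incidents * cleanup_fee()

# Twice the abuse yields exactly twice the budget -- the linearity claimed above.
assert abuse_budget(200) == 2 * abuse_budget(100)
```

Whether such fees are collectible in practice is exactly what the rest of the thread disputes; the model only shows the funding mechanics, not the incentive effects.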
The reality is that things change. Forty-three years ago, you could still buy a car that didn't have seat belts. Thirty years ago, most people still didn't wear seat belts. Twenty years ago, air bags began appearing in large volume in passenger vehicles. Throughout this period, cars have been de-stiffened with crumple zones, etc., in order to make them safer for passengers in the event of a crash. Mandatory child seat laws have been enacted at various times throughout. A little more than ten years ago, air bags were mandatory. Ten years ago, LATCH clips for child safety seats became mandatory. We now have side impact air bags, etc.
Sure.
Generally speaking, we do not penalize car owners for owning an older car, and we've maybe only made them retrofit seat belts (but not air bags, crumple zones, etc) into them, despite the fact that some of those big old boats can be quite deadly to other drivers in today's more easily-damaged cars. We've increased auto safety by mandating better cars, and by penalizing users who fail to make use of the safety features.
Right, but, owners of older cars are primarily placing themselves at risk, not others.
I am pretty sure I saw stats that suggested that old cars that crashed into new cars did substantially more damage to the new car and its occupants than an equivalent crash between two new cars, something to do with the old car not absorbing about half the impact into its own (nonexistent) crumple zones, though there are obvious deficiencies in the protection afforded to the occupants of the old car as well...
In this case, it's a question of others putting me at risk. That, generally, isn't tolerated.
There is only so much "proper security" you can expect the average PC user to do. The average PC user expects to be able to check e-mail, view the web, edit some documents, and listen to some songs. The average car driver expects to be able to drive around and do things. You can try to mandate that the average car driver must change their own oil, just as you can try to mandate that the average computer user must do what you've naively referred to as "proper security", but the reality is that grandma doesn't want to get under her car, doesn't have the knowledge or tools, and would rather spend $30 at SpeedyLube. If we cannot make security similarly easy for the end-user, and instead tell them to "take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer," then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I disagree. It used to be that anyone could drive a car. Today, you need to take instruction on driving and pass a test showing you are competent to operate a motor vehicle before you are allowed to drive legally.
Things change, as you say. I have no problem with the same requirement being added to attaching a computer to the network.
If you drive a car in a reckless manner so as to endanger others, you are criminally liable for violating the safe driving laws as well as civilly liable for the damages you cause. Why should operating an unsafe computer be any different?
Generally speaking, because the computer is unsafe by design, and most of the problems we're discussing are not "driving the car in a reckless manner." I do not live in mortal fear that I am going to steer my car into the median and it's going to jump over into oncoming traffic and ram into an oncoming semi, because that's simply not something I'd do, and it's not something the car designers expected would be a regular thing to do. On the other hand, I do live in mortal fear of opening a PDF document on a Windows machine, something that both Adobe and Microsoft deliberately engineered to be as easy and trivial as possible, and which millions of people do on a daily and regular basis, but which nonetheless can have the undesirable side effect of infecting my computer with the latest stealth exploit, at least if I read the news correctly. As a Windows user, I *am* *expected* to open web documents and go browsing around. The Internet has been deliberately designed with millions upon millions of domains and web sites; it's ridiculous to suggest that users should be aware that visiting a particular web site is likely to be harmful, especially given that we can't even keep servers safe, and some legitimate high-volume web sites have even been known to serve up bad stuff.
I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to go get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure, our applications need to be less permissive, probably way less permissive, probably even sandboxed by default, our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations, to more comprehensive things such as mandatory virus scanning by e-mail providers, etc., ... there's a lot that could be done, that most on the technology side of things have been unwilling to commit to.
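BCP38 (network ingress filtering) is the one concrete mechanism named in the list above, and its core check is simple: discard traffic whose source address falls outside the prefixes assigned to the customer port it arrived on. A minimal sketch of that check, using documentation prefixes rather than real allocations (the interface names and prefix table are made up for illustration):

```python
# Sketch of the BCP38 / ingress-filtering idea: accept a packet only if its
# source address falls inside a prefix assigned to the ingress interface.
# Prefixes below are RFC 5737 documentation ranges, not real allocations.
import ipaddress

ASSIGNED = {
    "cust-a": [ipaddress.ip_network("192.0.2.0/25")],
    "cust-b": [ipaddress.ip_network("198.51.100.0/24")],
}

def ingress_permits(interface: str, src_ip: str) -> bool:
    """True if the source address is valid for this customer interface."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in ASSIGNED.get(interface, []))

# A legitimate source inside cust-a's prefix passes; a spoofed one is dropped.
assert ingress_permits("cust-a", "192.0.2.10")
assert not ingress_permits("cust-a", "203.0.113.5")
```

Real deployments implement this in hardware (access lists or unicast RPF) rather than per-packet application code; the sketch just shows why the check is cheap for an ISP that knows its own assignments.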
I'm not out to target specific products. Yes, I'll celebrate the death of our favorite convicted felon in Redmond, but, that's not the point.
I don't have a CompSci degree specializing in that stuff and I seem to be able to run clean systems. I don't have a CompSci degree at all. It's not that hard to run clean systems, actually. Mostly it takes not being willing to click yes to every download and exercising minimal judgment about which web sites you choose to trust.
It takes an understanding of how it all works behind the scenes in order to understand what all those silly "Yes/No" prompts mean; that whole mechanism is part of what I mean when I say "defective by design." Why is it okay to click "Yes" when a website asks if we want to install "Flash" or "Silverlight" but it's not okay to click "Yes" when a website asks if we want to install "DodgyCodec"? How do you explain that to your grandmother?
The point is that if I run a clean system, why should I have to pay a subsidy to those that do not? I'm tired of this mentality that says let's penalize the good actors to subsidize the bad actors. I'm tired of it with mortgages. I'm tired of it with businesses. I'm tired of watching the government, time after time, reward bad behavior and punish good behavior and then wonder why they get more bad and less good behavior.
Hey, I agree. Look, we run a clean network here. I have the same gripes. We see all sorts of probe traffic and crap, why should we bother being clean? Why should we have to go to extra work to defend against networks that aren't?
We can make their Internet cars safer for them - but we largely haven't. Now we can all look forward to misguided government efforts to mandate some of this stuff.
I'm not opposed to making operating systems and applications safer. As I said, just as with cars, the manufacturers should be held liable by the consumers. However, the consumer that is operating the car that plows a group of pedestrians is liable to the pedestrians. The manufacturer is usually liable to the operator through subrogation.
Which would mean something if we had computer users that were deliberately injuring or killing people with their computers. Unfortunately, I'd say that most sick computers are more akin to those awful oil-burning, smog-generating, black-smoke-belching cars. You don't have much of a private right of action against the guy that drives by you and blasts a wave of awful black particulate matter out his exhaust at you. We've handled a lot of that through mandatory emissions inspections (not sure how universal that is). Regulation, in that case, seems to have had a generally positive effect. I don't see any simple solutions, regardless.
... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Jun 9, 2010, at 6:17 AM, Joe Greco wrote:
On Jun 9, 2010, at 5:02 AM, Joe Greco wrote:
So? If said end customer is operating a network-connected system without sufficient knowledge to properly maintain it and prevent it from doing mischief to the rest of the network, why should the rest of us subsidize her negligence? I don't see where making her pay is a bad thing.
I see that you don't understand that.
Seems to me that you are the one not understanding...
I can't refinance my mortgage right now to take advantage of the current interest rates. Why? Because irresponsible people got into loans they couldn't afford and engaged in speculative transactions. Their failure resulted in a huge drop in value to my house which brought me below the magic 80% loan to value ratio, which, because of said same bad actors became a legal restriction instead of a target number around which lenders had some flexibility. So, because I had a house I could afford and a reasonable mortgage, I'm now getting penalized by paying higher taxes to cover mortgage absorptions, reductions, and modifications for these irresponsible people. I'm getting penalized by paying higher interest rates because due to the damage they did to my property value and the laws they forced to be created, I can't refinance.
I'm mad as hell and frankly, I don't want to take it any more.
Do you see that? Do you still think I don't have a legitimate point on this?
I'm tired of subsidizing stupidity and bad actors. It's too expensive. I don't want to do it any more. We already have too many stupid people and bad actors. We really don't need to subsidize or encourage the creation of more.
A doesn't really seem connected to B.
Proof that you still don't get it. Punishing those that are responsible by making them pay for the behavior of those who fail to take responsibility IS a major problem. A and B are both examples of such a process.
The internet may be a vast ocean where bad guys keep dumping garbage, but, if software vendors stopped building highly exploitable code and ISPs started disconnecting abusing systems rapidly, it would have a major effect on the constantly changing currents. If abuse departments were fully funded by cleanup fees charged to negligent users who failed to secure their systems properly, it would both incentivize users to do proper security _AND_ provide for more responsive abuse departments as issues are reduced and their budget scales linearly with the amount of abuse being conducted.
The reality is that things change. Forty-three years ago, you could still buy a car that didn't have seat belts. Thirty years ago, most people still didn't wear seat belts. Twenty years ago, air bags began appearing in large volume in passenger vehicles. Throughout this period, cars have been de-stiffened with crumple zones, etc., in order to make them safer for passengers in the event of a crash. Mandatory child seat laws have been enacted at various times throughout. A little more than ten years ago, air bags were mandatory. Ten years ago, LATCH clips for child safety seats became mandatory. We now have side impact air bags, etc.
Sure.
Generally speaking, we do not penalize car owners for owning an older car, and we've maybe only made them retrofit seat belts (but not air bags, crumple zones, etc) into them, despite the fact that some of those big old boats can be quite deadly to other drivers in today's more easily-damaged cars. We've increased auto safety by mandating better cars, and by penalizing users who fail to make use of the safety features.
Right, but, owners of older cars are primarily placing themselves at risk, not others.
I am pretty sure I saw stats that suggested that old cars that crashed into new cars did substantially more damage to the new car and its occupants than an equivalent crash between two new cars, something to do with the old car not absorbing about half the impact into its own (nonexistent) crumple zones, though there are obvious deficiencies in the protection afforded to the occupants of the old car as well...
Old cars without crumple zones tend to do more damage to new cars with crumple zones. Occupants of new cars tend to receive less damage because the crumple zones absorb some of the energy while occupants of older cars receive more of the energy transferred directly to them due to the higher stiffness of the older car. At least in the studies I have read.
In this case, it's a question of others putting me at risk. That, generally, isn't tolerated.
There is only so much "proper security" you can expect the average PC user to do. The average PC user expects to be able to check e-mail, view the web, edit some documents, and listen to some songs. The average car driver expects to be able to drive around and do things. You can try to mandate that the average car driver must change their own oil, just as you can try to mandate that the average computer user must do what you've naively referred to as "proper security", but the reality is that grandma doesn't want to get under her car, doesn't have the knowledge or tools, and would rather spend $30 at SpeedyLube. If we cannot make security similarly easy for the end-user, and instead tell them to "take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer," then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I disagree. It used to be that anyone could drive a car. Today, you need to take instruction on driving and pass a test showing you are competent to operate a motor vehicle before you are allowed to drive legally.
Things change, as you say. I have no problem with the same requirement being added to attaching a computer to the network.
If you drive a car in a reckless manner so as to endanger others, you are criminally liable for violating the safe driving laws as well as civilly liable for the damages you cause. Why should operating an unsafe computer be any different?
Generally speaking, because the computer is unsafe by design, and most of the problems we're discussing are not "driving the car in a reckless manner." I do not live in mortal fear that I am going to steer my car into the median and it's going to jump over into oncoming traffic and ram into an oncoming semi, because that's simply not something I'd do, and it's not something the car designers expected would be a regular thing to do. On the other hand, I do live in mortal fear of opening a PDF document on a Windows machine, something that both Adobe and Microsoft deliberately engineered to be as easy and trivial as possible, and which millions of people do on a daily and regular basis, but which nonetheless can have the undesirable side effect of infecting my computer with the latest stealth exploit, at least if I read the news correctly.
I don't agree with your premise. Yes, some operating systems are unsafe by design, but, not all. As I said, you should be accountable for the behavior of your computer. If you can show that the behavior was the result of faulty software, then, you should be able to recover from the manufacturer of that software (assuming you paid a professional for your software). Just as a driver of a car with a stuck accelerator due to manufacturer defect is liable to the pedestrians they plow, and, the manufacturer is liable to the driver, I see no reason not to have a similar liability chain for software. Strangely, I don't live in mortal fear of opening a PDF document on my Macs or Linux systems. As such, I don't see why we should all be punished for the fact that you chose to buy software from the morons in Redmond. A bad choice made by a majority of people is still a bad choice. (Note: You are the one who singled out Micr0$0ft first.)
As a Windows user, I *am* *expected* to open web documents and go browsing around. The Internet has been deliberately designed with millions upon millions of domains and web sites; it's ridiculous to suggest that users should be aware that visiting a particular web site is likely to be harmful, especially given that we can't even keep servers safe, and some legitimate high-volume web sites have even been known to serve up bad stuff.
I assume all web sites are potentially harmful unless I have good reason to believe otherwise. Why shouldn't everyone be expected to behave in a similar manner? Seems to me that is the only rational approach. Don't you tell your kids not to talk to strangers? Isn't this sort of the same thing?
I'm all fine with noting that certain products are particularly awful. However, we have to be aware that users are simply not going to be required to go get a CompSci degree specializing in risk management and virus cleansing prior to being allowed to buy a computer. This implies that our operating systems need to be more secure, way more secure, our applications need to be less permissive, probably way less permissive, probably even sandboxed by default, our networks need to be more resilient to threats, ranging from simple things such as BCP38 and automatic detection of certain obvious violations, to more comprehensive things such as mandatory virus scanning by e-mail providers, etc., ... there's a lot that could be done, that most on the technology side of things have been unwilling to commit to.
I'm not out to target specific products. Yes, I'll celebrate the death of our favorite convicted felon in Redmond, but, that's not the point.
I don't have a CompSci degree specializing in that stuff and I seem to be able to run clean systems. I don't have a CompSci degree at all. It's not that hard to run clean systems, actually. Mostly it takes not being willing to click yes to every download and exercising minimal judgment about which web sites you choose to trust.
It takes an understanding of how it all works behind the scenes in order to understand what all those silly "Yes/No" prompts mean; that whole mechanism is part of what I mean when I say "defective by design."
Agreed. Interestingly, I don't have very many of those prompts on my Mac, and, when I do, it seems to me that I have very little need to understand what is going on behind the scenes to make an intelligent choice in response. Generally it says "You are about to open an application that you downloaded from a web site. Are you sure you want to do this? If you aren't sure you can trust the website, you should say no."
Why is it okay to click "Yes" when a website asks if we want to install "Flash" or "Silverlight" but it's not okay to click "Yes" when a website asks if we want to install "DodgyCodec"? How do you explain that to your grandmother?
Poor choices of examples... I'm not sure it is OK to click yes for Flash. It's pretty obviously a huge vulnerability. However, I usually tell people to make that decision along the lines of how much they think they should trust the website. Micr0$0ft starts at -10, Adobe starts at -5, $randomsite starts at -50, Paypal starts at 0, and Apple starts at 2, as an example of some of my trust levels.
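The ad-hoc trust numbers above amount to a scoring policy, and it is easy to see how one might encode that heuristic. This is a sketch only: the baseline scores are the ones quoted in the post, while the default score for unknown sites and the install cutoff are invented for illustration, not any standard:

```python
# Encoding the ad-hoc trust heuristic from the post: each origin gets a
# baseline score, and clicking "yes" to an install is only reasonable
# above some cutoff. The cutoff and the unknown-site default are assumed.
BASELINE = {
    "microsoft.com": -10,
    "adobe.com": -5,
    "paypal.com": 0,
    "apple.com": 2,
}
DEFAULT_SCORE = -50   # "$randomsite" in the post
INSTALL_CUTOFF = 0    # hypothetical: only install from non-negative scores

def should_install(site: str) -> bool:
    """Apply the trust heuristic: install only from sites at or above the cutoff."""
    return BASELINE.get(site, DEFAULT_SCORE) >= INSTALL_CUTOFF

# Under these numbers, "DodgyCodec" from an unknown site never clears the bar.
assert should_install("apple.com")
assert not should_install("dodgycodec.example")
```

The grandmother problem remains, of course: the hard part is not the arithmetic but getting a non-expert to assign the scores in the first place.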
The point is that if I run a clean system, why should I have to pay a subsidy to those that do not? I'm tired of this mentality that says let's penalize the good actors to subsidize the bad actors. I'm tired of it with mortgages. I'm tired of it with businesses. I'm tired of watching the government, time after time, reward bad behavior and punish good behavior and then wonder why they get more bad and less good behavior.
Hey, I agree. Look, we run a clean network here. I have the same gripes. We see all sorts of probe traffic and crap, why should we bother being clean? Why should we have to go to extra work to defend against networks that aren't?
I'm not saying "why should I bother being clean?" I think I should bother being clean because it should be the minimal obligation to society if you connect to the network. I'm saying why should we accept and be forced to pay subsidies to those who ignore that responsibility? I'm saying that we should have accountability and the ability to recover our costs from those that aren't. You'd be surprised how fast that would reduce the number of those that aren't.
We can make their Internet cars safer for them - but we largely haven't. Now we can all look forward to misguided government efforts to mandate some of this stuff.
I'm not opposed to making operating systems and applications safer. As I said, just as with cars, the manufacturers should be held liable by the consumers. However, the consumer that is operating the car that plows a group of pedestrians is liable to the pedestrians. The manufacturer is usually liable to the operator through subrogation.
Which would mean something if we had computer users that were deliberately injuring or killing people with their computers. Unfortunately, I'd say that most sick computers are more akin to those awful oil-burning, smog-generating, black-smoke-belching cars. You don't have much of a private right of action against the guy that drives by you and blasts a wave of awful black particulate matter out his exhaust at you. We've handled a lot of that through mandatory emissions inspections (not sure how universal that is). Regulation, in that case, seems to have had a generally positive effect.
Nope... Even if the consumer plows the pedestrians because of a defect in the vehicle, the pedestrians generally sue the driver who then goes after the manufacturer through subrogation. If it wasn't a defect in the car, then, the manufacturer has no liability, but, whether deliberate or negligent, the driver still does.
I don't see any simple solutions, regardless.
A proper chain of liability wouldn't be too difficult and would go a long way to solving the problem. A few users who paid the price of clicking yes in the wrong place would serve as a good lesson for the majority of users. A few users successfully getting their costs reimbursed by Micr0$0ft would lead to major changes in Micr0$0ft's approach to software development. Global "charge everyone a security fee" proposals will only preserve the status quo. Heck, McAfee and Norton are arguably implementations of just that sort of thing.
Owen
I am pretty sure I saw stats that suggested that old cars that crashed into new cars did substantially more damage to the new car and its occupants than an equivalent crash between two new cars, something to do with the old car not absorbing about half the impact into its own (nonexistent) crumple zones, though there are obvious deficiencies in the protection afforded to the occupants of the old car as well...
Old cars without crumple zones tend to do more damage to new cars with crumple zones. Occupants of new cars tend to receive less damage because the crumple zones absorb some of the energy while occupants of older cars receive more of the energy transferred directly to them due to the higher stiffness of the older car.
At least in the studies I have read.
I'm talking about the difference between the levels of damage to a new car where you have a crash between an old and new car, and a crash between two new cars. The evidence that an old car is more lethal to its occupants is well known. We were discussing damage inflicted upon others, so that is not relevant.
Generally speaking, because the computer is unsafe by design, and most of the problems we're discussing are not "driving the car in a reckless manner." I do not live in mortal fear that I am going to steer my car into the median and it's going to jump over into oncoming traffic and ram into an oncoming semi, because that's simply not something I'd do, and it's not something the car designers expected would be a regular thing to do. On the other hand, I do live in mortal fear of opening a PDF document on a Windows machine, something that both Adobe and Microsoft deliberately engineered to be as easy and trivial as possible, and which millions of people do on a daily and regular basis, but which nonetheless can have the undesirable side effect of infecting my computer with the latest stealth exploit, at least if I read the news correctly.
I don't agree with your premise. Yes, some operating systems are unsafe by design, but, not all. As I said, you should be accountable for the behavior of your computer. If you can show that the behavior was the result of faulty software, then, you should be able to recover from the manufacturer of that software (assuming you paid a professional for your software).
That is a nice theory, but does not play out in practice. If you are suggesting that part of the solution to the overall problem is to legislate such liability, overriding any EULA's in the process, we can certainly discuss that.
Just as a driver of a car with a stuck accelerator due to manufacturer defect is liable to the pedestrians they plow, and, the manufacturer is liable to the driver, I see no reason not to have a similar liability chain for software.
Doesn't exist at this time, see EULA.
Strangely, I don't live in mortal fear of opening a PDF document on my Macs or Linux systems. As such, I don't see why we should all be punished for the fact that you chose to buy software from the morons in Redmond. A bad choice made by a majority of people is still a bad choice. (Note: You are the one who singled out Micr0$0ft first.)
The latest Adobe vulnerability applies to pretty much all platforms. It is, in this case, a Flash vulnerability, but others have been PDF. You can use an alternative Flash or PDF player, of course, but that's not a guarantee, it's just lowering the risk.
As a Windows user, I *am* *expected* to open web documents and go browsing around. The Internet has been deliberately designed with millions upon millions of domains and web sites; it's ridiculous to suggest that users should be aware that visiting a particular web site is likely to be harmful, especially given that we can't even keep servers safe, and some legitimate high-volume web sites have even been known to serve up bad stuff.
I assume all web sites are potentially harmful unless I have good reason to believe otherwise. Why shouldn't everyone be expected to behave in a similar manner?
Seems to me that is the only rational approach. Don't you tell your kids not to talk to strangers? Isn't this sort of the same thing?
I haven't been a child for many years. Generally speaking, I expect to be able to talk to another person without significant risk. What you suggest makes sense from a security point of view, but many people are only able to identify a small handful of websites as being ones they "know". If you're suggesting that people should never visit other websites, then that really limits the usefulness of the Internet. Why shouldn't it be, instead, that web browsers are made to be safe and invulnerable?
I'm not out to target specific products. Yes, I'll celebrate the death of our favorite convicted felon in Redmond, but, that's not the point.
I don't have a CompSci degree specializing in that stuff and I seem to be able to run clean systems. I don't have a CompSci degree at all. It's not that hard to run clean systems, actually. Mostly it takes not being willing to click yes to every download and exercising minimal judgment about which web sites you choose to trust.
It takes an understanding of how it all works behind the scenes in order to understand what all those silly "Yes/No" prompts mean; that whole mechanism is part of what I mean when I say "defective by design."
Agreed. Interestingly, I don't have very many of those prompts on my Mac, and, when I do, it seems to me that I have very little need to understand what is going on behind the scenes to make an intelligent choice in response. Generally it says "You are about to open an application that you downloaded from a web site. Are you sure you want to do this? If you aren't sure you can trust the website, you should say no."
Yes, but we're not discussing you and your Mac, we're discussing Grandma and the Windows box her son bought her for Christmas last year.
Why is it okay to click "Yes" when a website asks if we want to install "Flash" or "Silverlight" but it's not okay to click "Yes" when a website asks if we want to install "DodgyCodec"? How do you explain that to your grandmother?
Poor choices of examples... I'm not sure it is OK to click yes for Flash. It's pretty obviously a huge vulnerability.
Yet it's so clearly required to view a large percentage of the web (at least to hear the iPhone/iPad users grumble). And "everybody has it."
However, I usually tell people to make that decision along the lines of how much they think they should trust the website. Micr0$0ft starts at -10, Adobe starts at -5, $randomsite starts at -50, Paypal starts at 0, and Apple starts at 2, as an example of some of my trust levels.
The point is that if I run a clean system, why should I have to pay a subsidy to those that do not? I'm tired of this mentality that says let's penalize the good actors to subsidize the bad actors. I'm tired of it with mortgages. I'm tired of it with businesses. I'm tired of watching the government, time after time, reward bad behavior and punish good behavior and then wonder why they get more bad and less good behavior.
Hey, I agree. Look, we run a clean network here. I have the same gripes. We see all sorts of probe traffic and crap, why should we bother being clean? Why should we have to go to extra work to defend against networks that aren't?
I'm not saying "why should I bother being clean?" I think I should bother being clean because it should be the minimal obligation to society if you connect to the network. I'm saying why should we accept and be forced to pay subsidies to those who ignore that responsibility? I'm saying that we should have accountability and the ability to recover our costs from those that aren't. You'd be surprised how fast that would reduce the number of those that aren't.
If there was some reasonable and fair manner to do that, maybe. However, as it stands, end users are left holding that bag, and absent some mechanism to allow them to recover costs from their software vendor, it strikes me as just as unfair as when we're left holding the bag.
We can make their Internet cars safer for them - but we largely haven't. Now we can all look forward to misguided government efforts to mandate some of this stuff.
I'm not opposed to making operating systems and applications safer. As I said, just as with cars, the manufacturers should be held liable by the consumers. However, the consumer that is operating the car that plows a group of pedestrians is liable to the pedestrians. The manufacturer is usually liable to the operator through subrogation.
Which would mean anything if we had computer users that were deliberately injuring or killing people with their computers. Unfortunately, I'd say that most sick computers are more akin to those awful oil-burning, smog-generating, black-smoke-belching cars. You don't have much of a private right of action against the guy that drives by you and blasts a wave of awful black particulate matter out his exhaust at you. We've handled a lot of that through mandatory emissions inspections (not sure how universal that is). Regulation, in that case, seems to have had a generally positive effect.
Nope... Even if the consumer plows the pedestrians because of a defect in the vehicle, the pedestrians generally sue the driver who then goes after the manufacturer through subrogation.
If it wasn't a defect in the car, then, the manufacturer has no liability, but, whether deliberate or negligent, the driver still does.
Again, though, we just don't have that situation.
I don't see any simple solutions, regardless.
A proper chain of liability wouldn't be too difficult and would go a long way to solving the problem.
A few users who paid the price of clicking yes in the wrong place would serve as a good lesson for the majority of users.
Would they? Would they really?
A few users successfully getting their costs reimbursed by Micr0$0ft would lead to major changes in Micr0$0ft's approach to software development.
Except that won't happen as it stands.
Global "charge everyone a security fee" proposals will only preserve the status quo. Heck, McAfee and Norton are arguably implementations of just that sort of thing.
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Wed, 2010-06-09 at 07:02 -0500, Joe Greco wrote:
There is only so much "proper security" you can expect the average PC user to do.
Sure - but if their computer, as a result of their ignorance, starts belching out spam, ISPs should be able at very least to counteract the problem. For example, by disconnecting that user and telling them why they have been disconnected. Why should it be the ISP's duty to silently absorb the blows? Why should the user have no responsibility here?

To carry your analogy a bit too far, if someone is roaming the streets in a beat-up jalopy with wobbly wheels, no lights, no brakes, no mirrors, and sideswiping parked cars, is it up to the city to somehow clear the way for that driver? No - the car is taken off the road and the driver told to fix it or get a new one. If the problem appears to be the driver rather than the vehicle, the driver is told they cannot drive until they have obtained a Clue.

If the user, as a result of their computer being zombified or whatever, has to
"take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer,"
...then that's the user's problem. They can solve it with insurance (appropriate policies will come into being), or they can solve it by becoming more knowledgeable, or they can solve it by hiring know how. But it is *their* problem. The fact that it is the user's problem will drive the industry to solve that problem, because anywhere there is a problem there is a market for a solution.
then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I think what's being called for is not total abdication of responsibility - just some sharing of the responsibility.
This implies that our operating systems need to be more secure, way more secure, our applications need to be less permissive, probably way less permissive, probably even sandboxed by default
Yep! And the fastest way to get more secure systems is to make consumers accountable, so that they demand accountability from their vendors. And so it goes, all the way up the chain. Make people accountable. At every level.
We can make their Internet cars safer for them - but we largely haven't.
I'm not sure that the word "we" is appropriate here. Who is "we"? How can (say) network operators be held responsible for (say) a weakness in Adobe Flash? At that level too, the consumer needs comeback - on the providers of weak software. Regards, K. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Karl Auer (kauer@biplane.com.au) +61-2-64957160 (h) http://www.biplane.com.au/~kauer/ +61-428-957160 (mob) GPG fingerprint: B386 7819 B227 2961 8301 C5A9 2EBC 754B CD97 0156 Old fingerprint: 07F3 1DF9 9D45 8BCD 7DD5 00CE 4A44 6A03 F43A 7DEF
On Wed, 2010-06-09 at 07:02 -0500, Joe Greco wrote:
There is only so much "proper security" you can expect the average PC user to do.
Sure - but if their computer, as a result of their ignorance, starts belching out spam, ISPs should be able at very least to counteract the problem. For example, by disconnecting that user and telling them why they have been disconnected. Why should it be the ISP's duty to silently absorb the blows? Why should the user have no responsibility here?
Primarily because the product that they've been given to use is defective by design. I'm not even saying "no responsibility"; I'm just arguing that we have to be realistic about our expectations of the level of responsibility users will have. At this point, we're teaching computers to children in elementary school, and kids in second and third grade are being expected to submit homework to teachers via e-mail. How is that supposed to play out for the single mom with a latchkey kid? Let's be realistic here. It's the computer that ought to be safer. We can expect modest improvements on the part of users, sure, but to place it all on them is simply a fantastic display of incredible naivete.
To carry your analogy a bit too far, if someone is roaming the streets in a beat-up jalopy with wobbly wheels, no lights, no brakes, no mirrors, and sideswiping parked cars, is it up to the city to somehow clear the way for that driver? No - the car is taken off the road and the driver told to fix it or get a new one. If the problem appears to be the driver rather than the vehicle, the driver is told they cannot drive until they have obtained a Clue.
Generally speaking, nobody wants to be the cop that makes that call. Theoretically an ISP *might* be able to do that, but most are unwilling, and those of us that do actually play BOFH run the risk of losing customers to a sewerISP that doesn't.
If the user, as a result of their computer being zombified or whatever, has to
"take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer,"
...then that's the user's problem. They can solve it with insurance (appropriate policies will come into being), or they can solve it by becoming more knowledgeable, or they can solve it by hiring know how. But it is *their* problem. The fact that it is the user's problem will drive the industry to solve that problem, because anywhere there is a problem there is a market for a solution.
That shows an incredible lack of understanding of how the market actually works. It's nice in theory. We (as technical people) have caused this problem because we've failed to design computers and networks that are resistant to this sort of thing. Trying to pin it on the users is of course easy, because users (generally speaking) are "stupid" and are "at fault" for not doing "enough" to "secure" their own systems, but that's a ridiculous smugness on our part.
then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I think what's being called for is not total abdication of responsibility - just some sharing of the responsibility.
I'm fine with that, but as long as we keep handing loaded guns without any reasonably-identifiable safeties to the end users, we can expect to keep getting shot at now and then.
This implies that our operating systems need to be more secure, way more secure, our applications need to be less permissive, probably way less permissive, probably even sandboxed by default
Yep! And the fastest way to get more secure systems is to make consumers accountable, so that they demand accountability from their vendors. And so it goes, all the way up the chain. Make people accountable. At every level.
Again, that shows an incredible lack of understanding of how the market actually works. It's still nice in theory. We would be better off short-circuiting that mechanism; for example, how about we simply mandate that browsers must be isolated from their underlying operating systems? Do you really think that the game of telephone works? Are we really going to be able to hold customers accountable? And if we do, are they really going to put vendor feet to the fire? Or is Microsoft just going to laugh and point at their EULA, and say, "our legal department will bankrupt you, you silly little twerp"? Everyone has carefully made it clear that they're not liable to the users, so the users are left holding the bag, and nobody who's actually responsible is able to be held responsible by the end users.
We can make their Internet cars safer for them - but we largely haven't.
I'm not sure that the word "we" is appropriate here. Who is "we"? How can (say) network operators be held responsible for (say) a weakness in Adobe Flash? At that level too, the consumer needs comeback - on the providers of weak software.
Yes, "we" needs to include all the technical stakeholders, and "we" as network operators ought to be able to tell "we" the website operators to tell "we" the web designers to stop using Flash if it's that big a liability. This, of course, fails for the same reasons that expecting end users to hold vendors responsible does, but there are a lot fewer of us technical stakeholders than there are end users, so if we really want to play that sort of game, we should try it here at home first. What's good for the goose, and all that ...

... JG
On Jun 9, 2010, at 6:50 AM, Joe Greco wrote:
On Wed, 2010-06-09 at 07:02 -0500, Joe Greco wrote:
There is only so much "proper security" you can expect the average PC user to do.
Sure - but if their computer, as a result of their ignorance, starts belching out spam, ISPs should be able at very least to counteract the problem. For example, by disconnecting that user and telling them why they have been disconnected. Why should it be the ISP's duty to silently absorb the blows? Why should the user have no responsibility here?
Primarily because the product that they've been given to use is defective by design. I'm not even saying "no responsibility"; I'm just arguing that we have to be realistic about our expectations of the level of responsibility users will have. At this point, we're teaching computers to children in elementary school, and kids in second and third grade are being expected to submit homework to teachers via e-mail. How is that supposed to play out for the single mom with a latchkey kid? Let's be realistic here. It's the computer that ought to be safer. We can expect modest improvements on the part of users, sure, but to place it all on them is simply a fantastic display of incredible naivete.
I don't think that is what is being proposed. What is being proposed is that, in order for this to work legally within the framework of current law, we create a chain of liability.

Let's use the example of a third-party check, which should be fairly familiar to everyone. A writes a check to B who endorses it to C who deposits it. The check bounces. C cannot sue A. C must sue B. B can then recover from A.

So, to make this work realistically, the end user (latchkey mom in your example) has a computer and little Suzie opens MakeMeSpam.exe and next thing you know, that computer is using her full 7Mbps uplink from $CABLECO to deliver all the spam it can deliver at that speed. Some target of said spam calls up $CABLECO and $CABLECO turns off LatchKeyMom's service. The spam targets can (if they choose) go after LatchKeyMom ($CABLECO would be liable if they hadn't disconnected LatchKeyMom promptly), but they probably won't if LatchKeyMom isn't a persistent problem. LatchKeyMom can go after the makers of MakeMeSpam.exe, and also the makers of her OS, etc., if she has a case that their design was negligent and contributed to the problem.

Yes, it's complex, but it is the only mechanism the law provides for the transfer of liability. You can't leap-frog the process and have the SPAM victims going directly after LatchKeyMom's OS Vendor because there's no relationship there to provide a legal link of liability.
To carry your analogy a bit too far, if someone is roaming the streets in a beat-up jalopy with wobbly wheels, no lights, no brakes, no mirrors, and sideswiping parked cars, is it up to the city to somehow clear the way for that driver? No - the car is taken off the road and the driver told to fix it or get a new one. If the problem appears to be the driver rather than the vehicle, the driver is told they cannot drive until they have obtained a Clue.
Generally speaking, nobody wants to be the cop that makes that call. Theoretically an ISP *might* be able to do that, but most are unwilling, and those of us that do actually play BOFH run the risk of losing customers to a sewerISP that doesn't.
Whether anyone wants to be the cop or not, someone has to be the cop. The point is that SewerISPs need to be held liable (hence my proposal for ISP liability outside of a 24 hour grace period from notification). If SewerISP has to pay the costs of failing to address abuse from their customers, SewerISP will either stop running a cesspool, or, they will go bankrupt and become a self-rectifying problem.
If the user, as a result of their computer being zombified or whatever, has to
"take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer,"
...then that's the user's problem. They can solve it with insurance (appropriate policies will come into being), or they can solve it by becoming more knowledgeable, or they can solve it by hiring know how. But it is *their* problem. The fact that it is the user's problem will drive the industry to solve that problem, because anywhere there is a problem there is a market for a solution.
That shows an incredible lack of understanding of how the market actually works. It's nice in theory.
No, it shows how broken current market practice is. What we are saying is that some relatively minor application of existing law to the computer market would correct this brokenness.
We (as technical people) have caused this problem because we've failed to design computers and networks that are resistant to this sort of thing. Trying to pin it on the users is of course easy, because users (generally speaking) are "stupid" and are "at fault" for not doing "enough" to "secure" their own systems, but that's a ridiculous smugness on our part.
You keep saying "WE" as if the majority of people on this list have anything to do with the design or construction of these systems. We do not. We are mostly network operators. However, again, if the end user is held liable, the end user is then in a position to hold the manufacturers/vendors that they received defective systems from liable. It does exactly what you are saying needs to happen, just without exempting irresponsible users from their share of the pain, which seems to be a central part of your theory. If I leave my credit card lying around in an airport, I'm liable for part of the pain up until the point where I report my credit card lost. Why should irresponsible computer usage be any different?
then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I think what's being called for is not total abdication of responsibility - just some sharing of the responsibility.
I'm fine with that, but as long as we keep handing loaded guns without any reasonably-identifiable safeties to the end users, we can expect to keep getting shot at now and then.
Going back to my being perfectly willing to have a licensing process for attaching a system to a network. I have no problem with requiring gun-safety courses as a condition of gun ownership. I have no problem with requiring network security/safety courses as a condition of owning a network-attached system.
This implies that our operating systems need to be more secure, way more secure, our applications need to be less permissive, probably way less permissive, probably even sandboxed by default
Yep! And the fastest way to get more secure systems is to make consumers accountable, so that they demand accountability from their vendors. And so it goes, all the way up the chain. Make people accountable. At every level.
Again, that shows an incredible lack of understanding of how the market actually works. It's still nice in theory.
No... It shows a need for the market to change.
We would be better off short-circuiting that mechanism; for example, how about we simply mandate that browsers must be isolated from their underlying operating systems? Do you really think that the game of telephone works? Are we really going to be able to hold customers accountable? And if we do, are they really going to put vendor feet to the fire? Or is Microsoft just going to laugh and point at their EULA, and say, "our legal department will bankrupt you, you silly little twerp"?
Yes, the game of telephone works all the time. It's how the entire legal system of liability works in the United States. Yes, we need some legal changes to make it work. For example, we need regulation which prevents EULA clauses exempting manufacturers from liability for their errors from having any force of law. What a crock it is that those clauses actually work. Imagine if your car came with a disclaimer in the sales agreement that said the manufacturer had no liability if their accelerator stuck and you plowed a field of pedestrians as a result. Do you think the court would ever consider upholding such a provision? Never.
Everyone has carefully made it clear that they're not liable to the users, so the users are left holding the bag, and nobody who's actually responsible is able to be held responsible by the end users.
Yes, those "we're not liable for our negligence" clauses need to be removed from legal effect. Agreed. Owen
Yes, it's complex, but, it is the only mechanism the law provides for the transfer of liability. You can't leap-frog the process and have the SPAM victims going directly after LatchKeyMom's OS Vendor because there's no relationship there to provide a legal link of liability.
This leads to an incredibly Rube-Goldberg-like setup to solve the problem; if that's the case, even if the issue of EULA's leaving end users holding the bag were resolved, this would not be much of an incentive to vendors to fix the problem.
To carry your analogy a bit too far, if someone is roaming the streets in a beat-up jalopy with wobbly wheels, no lights, no brakes, no mirrors, and sideswiping parked cars, is it up to the city to somehow clear the way for that driver? No - the car is taken off the road and the driver told to fix it or get a new one. If the problem appears to be the driver rather than the vehicle, the driver is told they cannot drive until they have obtained a Clue.
Generally speaking, nobody wants to be the cop that makes that call. Theoretically an ISP *might* be able to do that, but most are unwilling, and those of us that do actually play BOFH run the risk of losing customers to a sewerISP that doesn't.
Whether anyone wants to be the cop or not, someone has to be the cop.
The point is that SewerISPs need to be held liable (hence my proposal for ISP liability outside of a 24 hour grace period from notification).
If SewerISP has to pay the costs of failing to address abuse from their customers, SewerISP will either stop running a cesspool, or, they will go bankrupt and become a self-rectifying problem.
In the meantime, CleanISP is bleeding customers to SewerISP, rewarding SewerISP. And tomorrow there's another SewerISP.
If the user, as a result of their computer being zombified or whatever, has to
"take it in to NerdForce and spend some random amount between $50 and twice the cost of a new computer,"
...then that's the user's problem. They can solve it with insurance (appropriate policies will come into being), or they can solve it by becoming more knowledgeable, or they can solve it by hiring know how. But it is *their* problem. The fact that it is the user's problem will drive the industry to solve that problem, because anywhere there is a problem there is a market for a solution.
That shows an incredible lack of understanding of how the market actually works. It's nice in theory.
No, it shows how broken current market practice is. What we are saying is that some relatively minor application of existing law to the computer market would correct this brokenness.
That's like saying going to the moon is a relatively minor application of rocket science.
We (as technical people) have caused this problem because we've failed to design computers and networks that are resistant to this sort of thing. Trying to pin it on the users is of course easy, because users (generally speaking) are "stupid" and are "at fault" for not doing "enough" to "secure" their own systems, but that's a ridiculous smugness on our part.
You keep saying "WE" as if the majority of people on this list have anything to do with the design or construction of these systems. We do not. We are mostly network operators.
I keep saying "we" as opposed to "them" because "we" are part of the problem, and "they" are simply end users. "We" can (and, from past experience with the membership of this list, does) include members of the networking community, hardware community, software community, developers, and other related interests. "We" have done a poor job of designing technology that "they" can understand, comprehend, and just use, which is, when it comes right down to it, all they want to be able to do.
However, again, if the end user is held liable, the end user is then in a position to hold the manufacturer/vendors that they received defective systems from liable.
The hell they are. Why don't you READ that nice EULA you accepted when you bought that Mac.
It does exactly what you are saying needs to happen, just without exempting irresponsible users from their share of the pain which seems to be a central part of your theory.
If I leave my credit card lying around in an airport, I'm liable for part of the pain up until the point where I report my credit card lost. Why should irresponsible computer usage be any different?
Because the average person would consider that to be dangerous, and the average person would not consider opening an e-mail in their e-mail client to be dangerous, except that it is.
then we - as the people who have designed and provided technology - have failed, and we are trying to pass off responsibility for our collective failure onto the end user.
I think what's being called for is not total abdication of responsibility - just some sharing of the responsibility.
I'm fine with that, but as long as we keep handing loaded guns without any reasonably-identifiable safeties to the end users, we can expect to keep getting shot at now and then.
Going back to my being perfectly willing to have a licensing process for attaching a system to a network.
I have no problem with requiring gun-safety courses as a condition of gun ownership. I have no problem with requiring network security/safety courses as a condition of owning a network-attached system.
That seems a little extreme. How about just making a device that's safe for people to use, and does what they need? Look at the fantastic inroads that devices like the iPhone and iPad have made. Closed ecosystem, low risk, but enough functionality that many users accept (and even love) them. Not saying they're entirely safe, but the point to ponder is that it *is* possible to offer devices that seem to have a lower risk factor.
This implies that our operating systems need to be more secure, way more secure, our applications need to be less permissive, probably way less permissive, probably even sandboxed by default
Yep! And the fastest way to get more secure systems is to make consumers accountable, so that they demand accountability from their vendors. And so it goes, all the way up the chain. Make people accountable. At every level.
Again, that shows an incredible lack of understanding of how the market actually works. It's still nice in theory.
No... It shows a need for the market to change.
And that, too, shows an incredible lack of understanding of how the market actually works. All the wishful thinking in the world does not result in the market changing.
We would be better off short-circuiting that mechanism; for example, how about we simply mandate that browsers must be isolated from their underlying operating systems? Do you really think that the game of telephone works? Are we really going to be able to hold customers accountable? And if we do, are they really going to put vendor feet to the fire? Or is Microsoft just going to laugh and point at their EULA, and say, "our legal department will bankrupt you, you silly little twerp"?
Yes, the game of telephone works all the time. It's how the entire legal system of liability works in the United States. Yes, we need some legal changes to make it work. For example, we need regulation which prevents EULA clauses exempting manufacturers from liability for their errors from having any force of law. What a crock it is that those clauses actually work.
You're not going to get it. If you try, every software manufacturer in the US is going to be up in arms, saying that it'll put them out of business (in some cases, probably rightly so). Even limiting liability to the cost of the product isn't going to work; the end user still gets hung with the cost. And no, the game of telephone doesn't work. Most consumers simply do not have the time to pursue issues or the expertise to know they've been screwed, which is why we see so many class action suits - the very proof that the game of telephone you suggest doesn't work.
Imagine if your car came with a disclaimer in the sales agreement that said the manufacturer had no liability if their accelerator stuck and you plowed a field of pedestrians as a result. Do you think the court would ever consider upholding such a provision? Never.
But again, computers don't "plow" a "field of pedestrians" when they get infected, so you're really failing to offer a meaningful comparison here. The only way you'll get a computer to do that is to toss one out a tenth story window above a crowded sidewalk, and even there, I doubt a judge will hold Dell or Microsoft liable.
Everyone has carefully made it clear that they're not liable to the users, so the users are left holding the bag, and nobody who's actually responsible is able to be held responsible by the end users.
Yes, those "we're not liable for our negligence" clauses need to be removed from legal effect. Agreed.
I certainly agree that this is a problem, but I'm also fairly certain I won't see it resolved in that way in my lifetime.

This doesn't seem to be a useful discussion at this point. I think we agree that there's a problem, but I don't really see fixing the liability laws as likely to happen, and attempting to hold people responsible for the actions of their computers has been difficult even for the MPAA/RIAA. What you're suggesting is even more Rube Goldberg, and as a way to address the issue of software quality, would appear to be a spectacular EPIC FAIL based on the influence of the software industry and the resulting effects of what you suggest. They'll simply state that it would be a total disaster (for them), and they'll successfully lobby any such reform into the ground.

More likely, in my opinion, is an evolution away from the "personal computer" model we've had until now, towards a more abstract form of computing that emphasizes network-based ("cloud") computing, where your device simply holds a few apps and some minor configuration, but the heavy lifting is all done elsewhere. This, too, has many issues associated with it, but from a security standpoint, there becomes a manageable number of parties to hold accountable when something goes awry, and much less of a chance for users to do something unanticipated with their devices. Apple seems to be making inroads in that area.

As far as network operations goes, it seems that the best thing to do is to try to assist infected customers, but that's a hard cost to swallow. I don't really see what other realistic conclusion can be drawn, however.

... JG
On Wed, 2010-06-09 at 08:50 -0500, Joe Greco wrote:
Primarily because the product that they've been given to use is defective by design.
Indeed. So one approach is to remove the protection such defective designs currently enjoy.
supposed to play out for the single mom with a latchkey kid? Let's be realistic here. It's the computer that ought to be safer.
Fine. Agreed. Now what mechanisms do you suggest for achieving that? Technical suggestions are no good, because no one will implement them unless they have to, or unless implementing them in some way improves the product so it sells better.
modest improvements on the part of users, sure, but to place it all on them is simply a fantastic display of incredible naivete.
Indeed. And certainly not something I'd advocate, at least not without making sure that they, in turn, could pass the responsibility on.
That shows an incredible lack of understanding of how the market actually works. It's nice in theory.
It would be a lot more pleasant discussing things with you if you understood that people may disagree with you without necessarily being naive or stupid.
We (as technical people) have caused this problem because we've failed to design computers and networks that are resistant to this sort of thing.
And why did we do that? What allowed us to get away with it? Answer: Inadequate application of ordinary product liability law to the producers of software. Acceptance of ridiculous EULAs that in any sane legal system would not be worth the cellophane they are printed behind. And so forth. I know the ecosystem that arose around software is more complicated than that, but you get the idea.
Trying to pin it on the users is of course easy, because users (generally speaking) are "stupid" and are "at fault" for not doing "enough" to "secure" their own systems, but that's a ridiculous smugness on our part.
You're right. And again, I am not advocating that. People are always going to be stupid (or ignorant, which is not the same thing as stupid). The trick is to give them a way out - whether it's insurance, education or effective legal remedy. That way they can choose how to handle the risk that *they* represent - in computers just as in any other realm of life.
I'm fine with that, but as long as we keep handing loaded guns without any reasonably-identifiable safeties to the end users, we can expect to keep getting shot at now and then.
You keep stating the problem, where what others are trying to do is frame a solution. Right now we are just absorbing the impact; that is not sustainable, as long as the people providing the avenues of attack (through ignorance or whatever) have no obligation at all to do better.
Yep! And the fastest way to get more secure systems is to make consumers accountable, so that they demand accountability from their vendors. And so it goes, all the way up the chain. Make people accountable. At every level.
Again, that shows an incredible lack of understanding of how the market actually works. It's still nice in theory.
There are whole industries built around vehicular safety. There are numerous varieties of insurance that protect people - at every level - from their own failures. Where there is no accountability in a human system, failure is practically guaranteed - whether in the form of tyranny, monopoly, danger to life and limb or whatever. The idea of accountability and the drive to attain it forms the basis of most legal and democratic systems, and of uncountable numbers of smaller systems in democratic societies. Now, what were you saying about "theory"?
Do you really think that the game of telephone works? Are we really going to be able to hold customers accountable? And if we do, are they really going to put vendor feet to the fire? Or is Microsoft just going to laugh and point at their EULA, and say, "our legal department will bankrupt you, you silly little twerp"?
Please, read more carefully. "At every level". If the consumer is made responsible, they must simultaneously get some avenue of recourse. Those ridiculous EULAs should be the first things against the wall :-)
Everyone has carefully made it clear that they're not liable to the users, so the users are left holding the bag, and nobody who's actually responsible is able to be held responsible by the end users.
Correct. That is the current situation, and it needs to be altered. On the one hand consumers benefit because they will finally have recourse for defective software, but with that gain comes increased responsibility.
Yes, "we" needs to include all the technical stakeholders, and "we" as network operators ought to be able to tell "we" the website operators to tell "we" the web designers to stop using Flash if it's that big a liability. This, of course, fails for the same reasons that expecting end users to hold vendors responsible does, but there are a lot fewer of us technical stakeholders than there are end users, so if we really want to play that sort of game, we should try it here at home first.
Try what? Regards, K. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Karl Auer (kauer@biplane.com.au) +61-2-64957160 (h) http://www.biplane.com.au/~kauer/ +61-428-957160 (mob) GPG fingerprint: B386 7819 B227 2961 8301 C5A9 2EBC 754B CD97 0156 Old fingerprint: 07F3 1DF9 9D45 8BCD 7DD5 00CE 4A44 6A03 F43A 7DEF
On Wed, 2010-06-09 at 08:50 -0500, Joe Greco wrote:
Primarily because the product that they've been given to use is defective by design.
Indeed. So one approach is to remove the protection such defective designs currently enjoy.
That's not going to happen (but I'll be happy to be proven wrong). As it stands, were software manufacturers to be held liable for the damages caused by their products, think of what would happen. How much does it cost for NerdForce to disinfect a computer? How many man-hours did that MS-SQL Slammer worm cost us? How much is lost when a website is down?

What legislator is going to vote for software liability reforms that will ruin major software companies? When their own staff and experts will be willing to state that outcome, in no uncertain terms?

What are the outcomes here? We pass such legislation, it doesn't magically fix things. It just means that companies like Adobe and Microsoft are suddenly on the hook for huge liabilities if they continue to sell their current products. Do we expect them to *stop* selling Windows, etc.?
supposed to play out for the single mom with a latchkey kid? Let's be realistic here. It's the computer that ought to be safer.
Fine. Agreed. Now what mechanisms do you suggest for achieving that? Technical suggestions are no good, because no one will implement them unless they have to, or unless implementing them in some way improves the product so it sells better.
That's the problem, isn't it. If we were serious about it, we could approach the problem differently: rather than trying to tackle it from a marketplace point of view, perhaps we could instead tackle it from a regulatory point of view. Could we mandate that the next generation of browsers must have certain qualities? It's an interesting discussion, and in some way parallels the car safety examples I provided earlier.
modest improvements on the part of users, sure, but to place it all on them is simply a fantastic display of incredible naivete.
Indeed. And certainly not something I'd advocate, at least not without making sure that they, in turn, could pass the responsibility on.
That shows an incredible lack of understanding of how the market actually works. It's nice in theory.
It would be a lot more pleasant discussing things with you if you understood that people may disagree with you without necessarily being naive or stupid.
It's not a pleasant discussion, because in all visible directions are pure suck. I'll call naive when I see it.
We (as technical people) have caused this problem because we've failed to design computers and networks that are resistant to this sort of thing.
And why did we do that? What allowed us to get away with it? Answer: Inadequate application of ordinary product liability law to the producers of software. Acceptance of ridiculous EULAs that in any sane legal system would not be worth the cellophane they are printed behind. And so forth. I know the ecosystem that arose around software is more complicated than that, but you get the idea.
I certainly agree, but it isn't going to be wished away in a minute. To do so would effectively destroy some major technology companies.
Trying to pin it on the users is of course easy, because users (generally speaking) are "stupid" and are "at fault" for not doing "enough" to "secure" their own systems, but that's a ridiculous smugness on our part.
You're right. And again, I am not advocating that. People are always going to be stupid (or ignorant, which is not the same thing as stupid). The trick is to give them a way out - whether it's insurance, education or effective legal remedy. That way they can choose how to handle the risk that *they* represent - in computers just as in any other realm of life.
Actually, IRL, we've been largely successful in making much safer cars. It's by no means a complete solution, but it seems to be the best case scenario at this time. Software is devilishly hard to make safer, of course, and companies with a decade of legacy sludge being dragged along for the ride do not have it easy. (I really do feel sorry for Microsoft in a way) That's one of the reasons I had predicted more appliance-like computers, and now they seem to be appearing in the form of app-running devices like the iPad. From a network operator's point of view, that's just great, because the chance of a user being able to do something bad to the device is greatly reduced.
I'm fine with that, but as long as we keep handing loaded guns without any reasonably-identifiable safeties to the end users, we can expect to keep getting shot at now and then.
You keep stating the problem, where what others are trying to do is frame a solution. Right now we are just absorbing the impact; that is not sustainable, as long as the people providing the avenues of attack (through ignorance or whatever) have no obligation at all to do better.
Right, but rewriting the product liability laws to hold software vendors accountable, by proxying through the end user, is kind of a crazy solution, and one that would appear not to be workable. Was there another solution being framed that I missed?
Yep! And the fastest way to get more secure systems is to make consumers accountable, so that they demand accountability from their vendors. And so it goes, all the way up the chain. Make people accountable. At every level.

Again, that shows an incredible lack of understanding of how the market actually works. It's still nice in theory.
There are whole industries built around vehicular safety. There are numerous varieties of insurance that protect people - at every level - from their own failures.
Where there is no accountability in a human system, failure is practically guaranteed - whether in the form of tyranny, monopoly, danger to life and limb or whatever. The idea of accountability and the drive to attain it forms the basis of most legal and democratic systems, and of uncountable numbers of smaller systems in democratic societies. Now, what were you saying about "theory"?
That's nice. How much accountability should one have for having visited a web site that was broken into by Russian script kiddies, though? And we're not talking about driving a PC through a field of pedestrians, as someone else so colorfully put it. Who is going to "insure" me against the possibility that Russian script kiddies sent me a virus via Flash on some web site, and even now are trying to break into British intel via my computer, so one fine day the FBI comes a'knockin'? How do I even find out what happened, when I'm in jail for a year for "hacking the Brits"? That's got to be one hell of an insurance plan.
Do you really think that the game of
telephone works? Are we really going to be able to hold customers accountable? And if we do, are they really going to put vendor feet to the fire? Or is Microsoft just going to laugh and point at their EULA, and say, "our legal department will bankrupt you, you silly little twerp"?
Please, read more carefully. "At every level". If the consumer is made responsible, they must simultaneously get some avenue of recourse. Those ridiculous EULAs should be the first things against the wall :-)
Should be? Fine. Will be? Not fine. You won't manage to sell that to me without a lot of convincing. And if you can't get rid of those EULA's, we're back in the land of "end user holding the bag." So feel free to convince me of why Microsoft, Apple, Adobe, etc., are all going to just sit idly by while their EULA protections are legislated away.
Everyone has carefully made it clear that they're not liable to the users, so the users are left holding the bag, and nobody who's actually responsible is able to be held responsible by the end users.
Correct. That is the current situation, and it needs to be altered. On the one hand consumers benefit because they will finally have recourse for defective software, but with that gain comes increased responsibility.
Yes, "we" needs to include all the technical stakeholders, and "we" as network operators ought to be able to tell "we" the website operators to tell "we" the web designers to stop using Flash if it's that big a liability. This, of course, fails for the same reasons that expecting end users to hold vendors responsible does, but there are a lot fewer of us technical stakeholders than there are end users, so if we really want to play that sort of game, we should try it here at home first.
Try what?
Go tell every webmaster who is hosting Flash on your network that it's now prohibited, as a security risk, due to the bulletin issued last week, and that any website hosting Flash on your network a week from now will be null routed. And then follow through. I mean, really, if we can't do that, we're just shoveling the responsibility off to the poor victim end-users. I'm just trying to frame this in a way that people can understand. It's great to say "end users should be responsible" and "end users need to be security-conscious." However, are we, as network operators, willing to be equally responsible and security-conscious? ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Wed, 2010-06-09 at 12:08 -0500, Joe Greco wrote:
That's not going to happen (but I'll be happy to be proven wrong).
Oh, there are so many things that are "not going to happen", aren't there? And because of that we shouldn't even bother suggesting regulation as a solution to anything, because "the big companies" won't let it happen?

It took a few decades, but eventually people figured out that tobacco killed people, and some of the biggest financial interests in the world ended up being legislated against. That process is not finished, the rearguard action is not played out, but the setup is not the cosy little "we'll do whatever we want and you can't stop us" that we had in the fifties.

The Mafia in Italy seemed indomitable a few decades ago. It had the whole country (and large chunks of the US and other countries) in its grip, apparently unchallengeable. But the Mafia in Italy is now dying under the weight of courageous police and judges and a legal system that in spite of itself tries to do the will of the people. Little by little the changes were made, little by little the structures the Mafia depended upon were taken away. Including, most importantly, the belief amongst Italians that the Mafia was untouchable.

Your argument seems to be "if we do X, it won't work". This is true for almost any X, because our field, like many other specialist fields, is a kind of ecosystem. Many factors have reached a kind of equilibrium, and it's really hard to look at any one factor and say "fix that" without seeing how so many other factors would work against the change. Try thinking about what *could* happen rather than what *can't* happen.
What legislator is going to vote for software liability reforms that will ruin major software companies? When their own staff and experts will be willing to state that outcome, in no uncertain terms?
Why do you assume these laws will ruin anyone? No one is seeking to destroy software companies, any more than the people who demanded accountability from auto manufacturers or pharmaceutical companies wanted to put them out of business. People want cars and medicine, and are prepared to pay for them. But if the car is defective or the medicine proves harmful, people want recourse in law. Same for software. When the company screws up, people should be able to take them to court and have a realistic chance of success if their grievance is real. It is that simple. Yet when we read of yet another buffer overflow exploit in a Microsoft product we just sigh and update our virus checkers, because Microsoft has *zero* obligation in law to produce software that has no such flaws. There is no other product group I know of where a known *class* of defect would be permitted to continue to exist without very serious liability issues arising.
What are the outcomes here? We pass such legislation, it doesn't magically fix things. It just means that companies like Adobe and Microsoft are suddenly on the hook for huge liabilities if they continue to sell their current products. Do we expect them to *stop* selling Windows, etc.?
You assume it all happens at once. You assume the change will be large. You assume there is no grace period. You assume a lot, then act as if it must be so.
That's the problem, isn't it. If we were serious about it, we could approach the problem differently: rather than trying to tackle it from a marketplace point of view, perhaps we could instead tackle it from a regulatory point of view. Could we mandate that the next generation of browsers must have certain qualities? It's an interesting discussion, and in some way parallels the car safety examples I provided earlier.
Mandating specific qualities in that sense leads to legislation that is out of date before the ink is dry. No - you mandate only that products must be fit for their intended purpose, and you declare void any attempts to contract away this requirement. Just like with other products! And then you let the system and the market work out the rest.
I certainly agree, but it isn't going to be wished away in a minute. To do so would effectively destroy some major technology companies.
You do a great line in straw men. Who said it would take a minute? Not I. Not anyone. People are just trying to point out that while it may be difficult, it's not impossible. We are also trying to point out the places where effective positive change could be made.
in a way) That's one of the reasons I had predicted more appliance-like computers, and now they seem to be appearing in the form of app-running devices like the iPad. From a network operator's point of view, that's just great, because the chance of a user being able to do something bad to the device is greatly reduced.
There is no reduction in the chance that the manufacturer will screw up, making their product vulnerable to attack. But even if all iPads turn out to be totally crackable, Apple will still have no obligation at all to fix it. Appliance computers do not address the real problem, which is lack of accountability.
Right, but rewriting the product liability laws to hold software vendors accountable, by proxying through the end user, is kind of a crazy solution, and one that would appear not to be workable. Was there another solution being framed that I missed?
No, it's not crazy. Regulation that empowers consumers is one of the fastest ways to better, safer products. Did you ever see a toy with a two-page shrink wrap contract making you the consumer absolutely liable for any fault the toy might have or any damage it might cause? No? What about kitchen appliances? The list of areas where consumer law has generated better, safer products is long. You say it "appears not to be workable" but have offered not a single argument as to why not. Remember, by the way, that in the context of computing, I'm not suggesting consumer empowerment should be a one-way street. I'm saying that the consumer gets the power to demand that software and hardware be fit for purpose. In return, the consumer too must become accountable.
That's nice. How much accountability should one have for having visited a web site that was broken into by Russian script kiddies, though? And we're not talking about driving a PC through a field of pedestrians, as someone else so colorfully put it. Who is going to "insure" me against the possibility that Russian script kiddies sent me a virus via Flash on some web site, and even now are trying to break into British intel via my computer, so one fine day the FBI comes a'knockin'? How do I even find out what happened, when I'm in jail for a year for "hacking the Brits"? That's got to be one hell of an insurance plan.
Once again you demand that everything be fixed in one fell swoop. How did visiting the web site cause me to get a virus? Did I download it? My bad. Did the browser have a vulnerability? Browser manufacturer's bad. Flash vulnerability? Adobe's bad. FBI - can they prove intent? Why are you so set against people having to face the consequences of their actions (or inactions)? What is so wrong with Adobe having to produce software that DOES NOT expose users to attack?
So feel free to convince me of why Microsoft, Apple, Adobe, etc., are all going to just sit idly by while their EULA protections are legislated away.
Microsoft et al do not actually own your country. You do. I don't expect them to sit idly by. Like all corporate citizens, they will attempt to protect their own interests above all other considerations. But because they do not own the country, and because their position is ethically and practically untenable, they will ultimately fail.
Go tell every webmaster who is hosting Flash on your network that it's now prohibited, as a security risk, due to the bulletin issued last week, and that any website hosting Flash on your network a week from now will be null routed. And then follow through.
Have you done that? If not, why not?
It's great to say "end users should be responsible" and "end users need to be security-conscious."
Except that's NOT what I am saying. I am saying they need to be *accountable*. As do network operators, software vendors, hardware vendors and so on.
However, are we, as network operators, willing to be equally responsible and security-conscious?
Dunno. As long as it's voluntary there will be little substantive change. Make network operators accountable, and the change will come. Regards, K.
On 6/9/2010 14:37, Karl Auer wrote: [good stuff]
Try thinking about what *could* happen rather than what *can't* happen.
Even better: Think "here is what I can do". And then do it. -- Somebody should have said: A democracy is two wolves and a lamb voting on what to have for dinner. Freedom under a constitutional republic is a well armed lamb contesting the vote. Requiescas in pace o email Ex turpi causa non oritur actio Eppure si rinfresca ICBM Targeting Information: http://tinyurl.com/4sqczs http://tinyurl.com/7tp8ml
On 6/9/2010 14:37, Karl Auer wrote: [good stuff]
Try thinking about what *could* happen rather than what *can't* happen.
Even better: Think "here is what I can do". And then do it.
Some of us already do:

- Implement BCP38
- Implement spam scanning for e-mail
- Have a responsive abuse desk
- Reload - not repair - any compromised systems
- Sponsor resources to combat spam
- many more, etc.

Some of us have been doing what you suggest for so long that we've become a bit skeptical and cynical about it all, especially when we see that in the last decade, BCP38 filtering still isn't prevalent, abuse desks are commonly considered to be black holes, and people still talk about disinfecting a virus-laden computer.

There is only so much you can do, short of getting out a Clue by Four and going around hitting people with it.

... JG
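[Editorial aside: the BCP38 item mentioned above - dropping packets whose source addresses fall outside the prefixes delegated to a customer port - can be sketched in a few lines of Python. This is only an illustrative model of the check, not how it is deployed in practice (real networks use router ACLs or uRPF), and the prefixes below are invented documentation addresses.]

```python
import ipaddress

# Hypothetical prefixes delegated to one customer-facing port.
ALLOWED_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def bcp38_permits(source_ip: str) -> bool:
    """Return True if the packet's source address falls within a prefix
    delegated to this port; packets failing the check should be dropped,
    since their source address is spoofed or misconfigured."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_PREFIXES)

print(bcp38_permits("192.0.2.17"))   # inside 192.0.2.0/24 -> True
print(bcp38_permits("203.0.113.9"))  # outside both prefixes -> False
```

The point of the filter is that it is cheap for the edge network (which knows its own delegations) and impossible for anyone downstream, which is exactly why its absence at the edge is so hard to compensate for elsewhere.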
On 6/9/2010 18:04, Joe Greco wrote:
On 6/9/2010 14:37, Karl Auer wrote: [good stuff]
Try thinking about what *could* happen rather than what *can't* happen.
Even better: Think "here is what I can do". And then do it.
Some of us already do:
- Implement BCP38
- Implement spam scanning for e-mail
- Have a responsive abuse desk
- Reload - not repair - any compromised systems
- Sponsor resources to combat spam
- many more, etc.
Some of us have been doing what you suggest for so long that we've become a bit skeptical and cynical about it all, especially when we see that in the last decade, BCP38 filtering still isn't prevalent, abuse desks are commonly considered to be black holes, and people still talk about disinfecting a virus-laden computer.
There is only so much you can do, short of getting out a Clue by Four and going around hitting people with it.
I am sorry to report that doing the right thing rarely gets any ink. But it is still the right thing, and you have to keep doing it--if for no reason better than being able to live with yourself. Thanks for what you do.
Cyber Threats Yes, But Is It Cyber War? http://www.circleid.com/posts/20100609_cyber_threats_yes_but_is_it_cyberwar/ -J
On Thu, Jun 10, 2010 at 4:22 AM, Jorge Amodio <jmamodio@gmail.com> wrote:
Cyber Threats Yes, But Is It Cyber War? http://www.circleid.com/posts/20100609_cyber_threats_yes_but_is_it_cyberwar/
-J
Cyber war is something made up by the security industry to save itself from going bankrupt: the traditional profit vectors have dried up because virus and worm authors aren't releasing threats to the web anymore, their motivation having shifted from fun to money. You've got folks now trying to artificially ramp up cyber security as a national security agenda, to create a new profit vector now that the traditional threats don't exist. "How do we ramp up cyber security as a national security agenda, something the next president has to worry about?" "How do we get cyber security as the top headline on CNN and Fox News so that cyber security is something The White House works on?" http://www.youtube.com/watch?v=FSUPTZVlkyU The response to this video was "It Shouldn't Take a 9/11 to Fix Cybersecurity (But it Might)" http://www.youtube.com/watch?v=cojeP3kJBug&feature=watch_response I highlighted these suspicious videos on the Full-disclosure mailing list but they didn't seem to think there was anything wrong. I also sent them to MI5 via their web form but I've had no reply from them. Andrew http://sites.google.com/site/n3td3v/
On Wed, 2010-06-09 at 12:08 -0500, Joe Greco wrote:
That's not going to happen (but I'll be happy to be proven wrong).
Oh, there are so many things that are "not going to happen", aren't there? And because of that we shouldn't even bother suggesting regulation as a solution to anything because "the big companies" won't let it happen?
Thankfully, I'm going to stop reading this right here, because you're attributing to me something I didn't say. I said that rewriting the liability laws to outlaw draconian EULAs wasn't going to happen. I'm fairly certain that regulation, on the other hand, is likely to be the solution that ends up working, and I said as much earlier. So since I'm not interested in rehashing the issues for you, I'm going to go take the evening off. ... JG
Dave Rand wrote:
I'm fond of getting the issues addressed by getting the ISPs to be involved with the problem. If that means users get charged "clean up" fees instead of a "security" fee, that's fine.
"I urge all my competitors to do that." The problem isn't that this is a bad idea, the problem is that it's a bad idea to be the first to do it. You want to be the last to do it. You want all other companies to do it first - to charge their customers more (while you don't charge more and take away some of their business) to pay for this cost. It only works if everyone has to charge their customers, and the change (from no surcharge to mandatory charge) will have to happen universally and at the same time - which will never happen. Welcome to the anarchy. jc
On Tue, Jun 8, 2010 at 9:06 PM, JC Dill <jcdill.lists@gmail.com> wrote:
Dave Rand wrote:
I'm fond of getting the issues addressed by getting the ISPs to be involved with the problem. If that means users get charged "clean up" fees instead of a "security" fee, that's fine.
"I urge all my competitors to do that."
The problem isn't that this is a bad idea, the problem is that it's a bad idea to be the first to do it. You want to be the last to do it. You want all other companies to do it first - to charge their customers more (while you don't charge more and take away some of their business) to pay for this cost.
It only works if everyone has to charge their customers, and the change (from no surcharge to mandatory charge) will have to happen universally and at the same time - which will never happen. Welcome to the anarchy.
Again, you can all continue to dance around and ignore the problem & chance the probability that the U.S. Government will step in and force you to do it. Pick your poison.

- ferg

--
"Fergie", a.k.a. Paul Ferguson
Engineering Architecture for the Internet
fergdawgster(at)gmail.com
ferg's tech blog: http://fergdawg.blogspot.com/
On 6/8/2010 23:22, Paul Ferguson wrote:
Again, you can all continue to dance around and ignore the problem & chance the probability that the U.S. Government will step in and force you to do it.
Pick your poison.
Or the world government will (note misspelled "NATO" in the Subject:).
On Jun 8, 2010, at 9:06 PM, JC Dill wrote:
Dave Rand wrote:
I'm fond of getting the issues addressed by getting the ISPs to be involved with the problem. If that means users get charged "clean up" fees instead of a "security" fee, that's fine.
"I urge all my competitors to do that."
The problem isn't that this is a bad idea, the problem is that it's a bad idea to be the first to do it. You want to be the last to do it. You want all other companies to do it first - to charge their customers more (while you don't charge more and take away some of their business) to pay for this cost.
Heck, at this point, I'd be OK with it being a regulatory issue. Perhaps we need regulators to step in and put forth something like the following:

1. An ISP who receives an abuse complaint against one of their customers shall not be held liable for damages to the complainant or other third parties IF:
   A. Said ISP investigates and takes remedial action for valid complaints within 24 hours of receipt of said complaint;
   B. Said ISP responds to said abuse complaint within 4 hours of their determination, including the determination made and what, if any, remedial action was taken; and
   C. If the complaint was legitimate, the remedial action taken by said ISP causes the reported abuse to stop.

2. Any ISP who takes remedial action against one of their customers as outlined in the previous section shall charge their customer a fee which shall not be less than $100 and not more than the ISP's full costs of investigation and remedial action.

I'm not saying I necessarily like the idea of more regulation, but, if we as an industry are unwilling to solve this because of the above competitive concerns, then perhaps that is what is necessary to get us to act.

Owen
Owen DeLong wrote:
Heck, at this point, I'd be OK with it being a regulatory issue.
What entity do you see as having any possibility of effective regulatory control over the internet? The reason we have these problems to begin with is because there is no way for people (or government regulators) in the US to control ISPs in eastern Europe etc. jc
On Tue, Jun 8, 2010 at 11:11 PM, JC Dill <jcdill.lists@gmail.com> wrote:
Owen DeLong wrote:
Heck, at this point, I'd be OK with it being a regulatory issue.
What entity do you see as having any possibility of effective regulatory control over the internet?
The reason we have these problems to begin with is because there is no way for people (or government regulators) in the US to control ISPs in eastern Europe etc.
Exactly, which is the problem we are foretelling. If you guys can't wrap your brains around the problem, and can't come up with suitable solutions to abate criminal activity, then the hammer drops in a way which none of us will appreciate. I think that is pretty clear.

The U.S. Government doesn't care about ISPs in The Netherlands or Christmas Islands, because it is not within their jurisdiction. But you are. That is the entire point. Hello.

- ferg

-- "Fergie", a.k.a. Paul Ferguson, Engineering Architecture for the Internet, fergdawgster(at)gmail.com, ferg's tech blog: http://fergdawg.blogspot.com/
On Jun 8, 2010, at 11:11 PM, JC Dill wrote:
Owen DeLong wrote:
Heck, at this point, I'd be OK with it being a regulatory issue.
What entity do you see as having any possibility of effective regulatory control over the internet?
The reason we have these problems to begin with is because there is no way for people (or government regulators) in the US to control ISPs in eastern Europe etc.
The reason we have these problems is because NO government is taking action. If each government took the action I suggested locally against the ISPs in their region, it would be just as effective. In fact, the more governments that take the action I suggested, the more effective it would be.

Owen
On 6/9/2010 06:11, Owen DeLong wrote:
On Jun 8, 2010, at 11:11 PM, JC Dill wrote:
Owen DeLong wrote:
Heck, at this point, I'd be OK with it being a regulatory issue.
What entity do you see as having any possibility of effective regulatory control over the internet?
The reason we have these problems to begin with is because there is no way for people (or government regulators) in the US to control ISPs in eastern Europe etc.
What happens if you replace the word "government" with the word "person"? (And since the cost is the only thing that matters, how much does "government" cost? I suppose that is something somebody else should worry about too.)
The reason we have these problems is because NO government is taking action. If each government took the action I suggested locally against the ISPs in their region, it would be just as effective. In fact, the more governments that take the action I suggested, the more effective it would be.
It is my strongly held belief that with my substitution a lot would get done and at a much lower individual cost. -- Somebody should have said: A democracy is two wolves and a lamb voting on what to have for dinner. Freedom under a constitutional republic is a well armed lamb contesting the vote. Requiescas in pace o email Ex turpi causa non oritur actio Eppure si rinfresca ICBM Targeting Information: http://tinyurl.com/4sqczs http://tinyurl.com/7tp8ml
On 6/9/2010 01:11, JC Dill wrote:
Owen DeLong wrote:
Heck, at this point, I'd be OK with it being a regulatory issue.
What entity do you see as having any possibility of effective regulatory control over the internet?
Doesn't matter as long as it enables radial outbound finger pointing.
The reason we have these problems to begin with is because there is no way for people (or government regulators) in the US to control ISPs in eastern Europe etc.
Or in the US. But what we see here is what is wrong with "regulation"--the regulated specify the regulation, primarily to protect the economic interests of the entrenched.
Larry Sheldon wrote:
On 6/9/2010 01:11, JC Dill wrote:
Owen DeLong wrote:
Heck, at this point, I'd be OK with it being a regulatory issue.
What entity do you see as having any possibility of effective regulatory control over the internet?
Doesn't matter as long as it enables radial outbound finger pointing.
It does matter because THERE IS NO SUCH ENTITY.
The reason we have these problems to begin with is because there is no way for people (or government regulators) in the US to control ISPs in eastern Europe etc.
Or in the US. But what we see here is what is what is wrong with "regulation"--the regulated specify the regulation, primarily to protect the economic interests of the entrenched.
IMHO it is impossible to regulate the internet as a whole. It is built out of too many different unregulated fragments (IP registries, domain registries, ASs, Tier 1 networks, smaller networks, etc.) and there will never be enough willingness for the unregulated entities to voluntarily become regulated - if some of them agree to become regulated then others will tout their unregulated (and cheaper) services. IMHO it would require a massive effort of great firewalls (such as China has in place) to *begin* to force regulation on the internet as a whole. jc
On 6/9/2010 13:35, JC Dill wrote:
IMHO it is impossible to regulate the internet as a whole.

Exactly so.
That is precisely why you don't want somebody else to attempt it. The only hope is for everybody to take personal responsibility for their little piece of it.
On 6/9/2010 1:43 PM, Larry Sheldon wrote:
On 6/9/2010 13:35, JC Dill wrote:
IMHO it is impossible to regulate the internet as a whole.

Exactly so.
That is precisely why you don't want somebody else to attempt it.
The only hope is for everybody to take personal responsibility for their little piece of it.
This situation has led to the growth of blacklists and whitelists of all sorts. These, at least, have some potential to drive dollars to hosts/providers with better records of behavior. Not a silver bullet, and not without controversy. And of course the cost is paid by victims up-front. Law and order in the wild west.

Ken

-- Ken Anderson, Pacific Internet - http://www.pacific.net
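The blacklists mentioned above are typically DNS-based (DNSBLs): a client reverses the octets of the suspect IPv4 address, prepends them to the list's zone, and performs an ordinary DNS lookup; any answer means the address is listed, NXDOMAIN means it is not. A minimal sketch of the query-name construction (the zone name is a made-up placeholder, and no network lookup is performed here):

```python
import ipaddress

def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the DNS name used to check an IPv4 address against a DNSBL.

    Following the conventional DNSBL scheme, 127.0.0.2 checked against
    dnsbl.example.org becomes 2.0.0.127.dnsbl.example.org.
    """
    addr = ipaddress.IPv4Address(ip)  # raises ValueError on bad input
    reversed_octets = ".".join(reversed(str(addr).split(".")))
    return f"{reversed_octets}.{zone}"

# A real client would then resolve this name, e.g. with
# socket.gethostbyname(dnsbl_query_name(ip, real_zone)):
# a successful A-record answer means "listed".
```

Mail servers commonly consult such lists at SMTP connect time, which is one concrete way reputation data redirects traffic (and money) toward better-behaved providers.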
participants (18)

- Adrian Chadd
- Alexander Harrowell
- andrew.wallace
- Brielle Bruns
- Chris Adams
- dlr@bungi.com
- J. Oquendo
- JC Dill
- Joe Greco
- Jorge Amodio
- Karl Auer
- Ken A
- Larry Sheldon
- Michael Dillon
- Owen DeLong
- Paul Ferguson
- Tim Franklin
- Valdis.Kletnieks@vt.edu