Users Aren’t Always the Best Judges of Risk

As with many categories of news, mainstream tech journalism rarely follows the really important stories. More often, coverage focuses on new product announcements and sensationalism. One important topic, however, does appear frequently in mainstream tech news: privacy. There is broad awareness that our new digital lifestyle brings with it a host of potential problems in keeping ourselves and our information secure. Poor management of privacy and information use by some of the industry’s biggest players (like Google), recent congressional debates on the Cyber Intelligence Sharing and Protection Act (CISPA), and Facebook’s impending IPO have kept these issues in the limelight.

But, of course, there will always be people who push back against prevailing winds. Reflecting on some talks that he recently attended, Steve Wildstrom suggests that the overarching angle in the effort to deal with privacy is misplaced (updated):

When you listen to politicians such as, say, Senator Al Franken (D-Minn.), you get the impression that Americans are cowering at the assault on their privacy by Facebook and Google. “The more dominant these companies become over the sectors in which they operate, the less incentive they have to respect your privacy,” Franken told the American Bar Assn. “When companies become so dominant that they can violate their users’ privacy without worrying about market pressure, all that’s left is the incentive to get more and more information about you.”

But based on what I heard from an admittedly limited sample in Seattle, that’s not where real people’s concerns lie. Their worries about online privacy are less corporate and much more personal and intimate. Privacy concerns are real, but the policy responses can seem like solutions in search of problems.

My reading of this is that because people don’t worry about privacy in the form of data collection, we should focus regulatory and industry attention on privacy elsewhere. Privacy policies are aimed at things people aren’t concerned with.

Hmmm. Seems to me that there is a bit of a fallacy going on here. As I read it (updated), Wildstrom’s suggestion is that people’s level of acceptance of a danger indicates whether or not the danger is real. Few people worry about their cars blowing up. By this logic, the NHTSA should get rid of its safety regulations: if people aren’t worried about a danger, it doesn’t exist. That just doesn’t follow.

Okay, so let’s back up a bit for some context. In their textbooks on engineering ethics, Schinzinger and Martin offer what I think is a really helpful framework for thinking about the way in which the things that we create interact with the rest of our lives. They begin with the observation that all engineering is an experiment. Human beings desire to create things that improve our lives; that’s what engineering is all about. The trick is that everything we create changes the world in some way. Hopefully, it brings the improvement sought. But everything that is created carries the potential for negative impacts as well. Buildings fall, cars crash, keyboards end up hurting our hands. There are unintended consequences to go along with all of our great creations.

Now, the fact that there are both good and bad consequences to all of our creations does not stop us from building new things. Rather, it drives us to do a couple of things. On the engineering side (as I’ve noted previously), it drives engineers to create processes of continued evaluation and revision aimed at minimizing the negative impacts of products. On the informational side, it requires those who have created products to inform users of the dangers of using them. In the medical world, this is called informed consent, and it is one of the backbones of medical ethics. With products, we often see this in legal disclaimers: this cup contains hot coffee. But it is also part of the informal process of communication between vendors and customers. There is a reason, for instance, that when you buy a bike the salesperson asks you if you need a helmet. It’s not just upselling. The vendor is communicating to you that these things are dangerous.

To put it another way, when we use something, we participate in an ongoing experiment that may bring benefits, but that also has risks. We expect makers to work hard to minimize risks, but also to inform us of the ones that remain. From there, it’s our choice as to whether or not we use something.

So far so good. This is all very rational and fair. It is at this point, however, that we need to add a third factor into the equation, and this is where Wildstrom’s argument gets hung up: people acclimatize to risk. The reality is that there are lots of dangerous things in the world. Buildings fall, cars crash, keyboards end up hurting our hands. Yet we learn to live with the risks. Why? Well, it seems to me that there are lots of reasons. For instance:

  • Statistical evidence: the odds of a building falling on you are very small.
  • Overwhelming benefit: even if we do get harmed, the benefits outweigh the danger.
  • Practicality: if we didn’t accept everyday risks, it would be impossible to live a productive life.
  • Familiarity: after a while, we just don’t notice the risk anymore.

Most of the time, this is something that we do without even thinking. And when it breaks down, it can be dramatic. Maybe you’re one of the many people who have acclimatized to lots of risks, but can’t let go when you get on a plane. Even though the statistics are clear (planes crash far less often than cars), the spectacular way in which failures occur in aviation makes us wonder if the benefit is worth the risk. Indeed, acclimatization to the risks of everyday life is so normal that we even have an informal term for people who haven’t developed it: neurotic. In a broader sense, much of what we call trust seems to me to focus on this issue of risk. Trust is created when we can accept the risks because we believe that the other party does all they can to minimize them.

The key thing to remember here is that there is no necessary correlation between the judgment of risk and actual riskiness. In 2010, 32,885 people were killed and 2.24 million were injured on America’s roads, yet very few people worry about driving as a risky proposition (at least until their child turns 16). People are much more fearful of flying, which is statistically far less risky. There is a disconnect between people’s perception of risk and actual riskiness.
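To make that disconnect concrete, here is a rough back-of-envelope comparison. The road fatality count comes from the paragraph above; the roughly 3 trillion vehicle-miles Americans drove in 2010 and the aviation rate are approximate figures I am supplying for illustration, not numbers from the original sources.

\[
\text{driving: } \frac{32{,}885 \text{ deaths}}{\sim 3 \times 10^{12} \text{ vehicle-miles}} \approx 11 \text{ deaths per billion vehicle-miles}
\]

\[
\text{flying (U.S. commercial): well under } 1 \text{ death per billion passenger-miles}
\]

Even with generous error bars on these figures, driving comes out at least an order of magnitude riskier per mile, which is precisely the opposite of how most people feel about the two.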
We could list all sorts of examples of this. Before Eric Schlosser wrote Fast Food Nation, few people were aware of the broad array of risks involved with industrial fast food. Before AIG collapsed, few people were aware of the significant financial risks involved with certain kinds of computer-model-driven financial products (like credit default swaps). Before the collapse of the Hyatt Regency skywalk in 1981, few people were aware of the risks that lay beneath their feet. Even things that people don’t report as risky can be horribly risky.

Which brings us back to my reading of Wildstrom’s argument: people don’t worry about general data-collection privacy, so regulatory and industry attention belongs elsewhere. It should be clear by now why this doesn’t make a whole lot of sense. Muslim students didn’t expect to be surveilled electronically without probable cause. Nor do most people give a thought to the security certificates that are supposed to make the web secure. When the certificate company DigiNotar was hacked last year, Wildstrom reported it as being particularly dangerous. As he put it:

…This is another serious warning shot telling us that major improvements are needed in internet security.

The DigiNotar hack is one you probably didn’t even know about, but that’s my point (and his at the time): there are lots of threats that people don’t know about, but that are quite dangerous.

In large measure, that’s why we have institutions like government: to make sure that somebody is taking care of the things that none of us want to pay attention to. From local building codes and health departments to the National Highway Traffic Safety Administration and the Consumer Product Safety Commission, we need people behind the scenes making sure that somebody is paying attention to the ways in which the common good can be compromised by a failure to attend to avoidable risks. This is nothing new, and it is generally well accepted.

Where there are demonstrable risks that people frequently become acclimatized to, institutions are sometimes the only way to manage the risks of daily living.

Of course, that’s not to say that we should ignore the things that people are concerned about. Great insights often come out of people’s everyday experiences and observations. But the fact that we have quickly become acclimatized to some significant risks does not make them any less risky, or any less critical to resolve.


2 Comments

  1. You are putting some words in my mouth. I never said that policies to protect privacy are not required, because I don’t believe that. I did say that people are more concerned about what is done with information they volunteer through posts than the data that is quietly collected. And I cited Senator Franken because he is given to apocalyptic statements on the subject. The full context of Franken’s comments was a rather tortured argument that data collection is a form of antitrust violation, a formula for wasting a lot of the taxpayers’ dollars.

    • Steve,

      Thanks for reading. I appreciate the comment, and have updated a bit in light of your correction. Hopefully I made it a bit clearer where your text leaves off and my interpretation takes up. I agree with you that Senator Franken’s argumentation here (and in previous cases) is rather tortured. He’s trying to use existing legislation to deal with new problems. I can understand his intent, but it’s not a good fit.

      That being said, I don’t think that the underlying premise that you state at the end of the second paragraph is the case. I don’t think that people necessarily “have a nuanced and generally accurate view of the state of online privacy.” The students that I teach on a regular basis (those “digital natives”) have never heard of Comodo or DigiNotar. Nor do the people I’ve worked with in humanities departments at several universities. Nor do the non-technical people that I ask about these sorts of things in my day-to-day life.

      Admittedly, though, perhaps I am generalizing from a similarly small, non-random sample too.

      Best,

      Jim

