Last week, Josh Constine over at TechCrunch wrote an interesting piece on online data security. It is a worthwhile read (even if a bit flawed). His thesis is that innovation is being hampered by public overreaction to potential problems with the security of private data at social networking sites and in apps. Constine seems frustrated that despite its own research showing little danger of data compromise in Facebook apps, The Wall Street Journal still wrote “a hit piece” warning people about privacy dangers at the site. Privacy concerns are, it seems, a fiction foisted upon an ignorant public by a corrupt media.
To support his argument, Constine offers a few contentions:
- Offline merchants have lots of your data. Heck, businesses send things to your house all the time.
- There are lots of controls in sites like Facebook to limit who sees what when.
- You can always opt out.
- If really bad stuff does happen, we will deal with those few issues then.
As a result, people’s concerns about privacy are misplaced.
On the face of it, this is all true. Yet, it avoids some present realities.
- Online and offline possession of data are very different. Bad behavior offline happens, but is often limited. Once your data is out there online, there is no getting it back.
- There are lots of controls for privacy on sites like Facebook, but they are often very confusing, hard to find, and needlessly complex. Add to the complexity the fact that Facebook in particular changes their privacy settings systems on a regular basis, and it is often difficult to know exactly what does what and where. Constine notes that most privacy concerns “stem from users making too much data about them publicly available.” Some of that is a direct result of opaque privacy settings systems.
- Opting out is often not a viable option. What student traveling abroad is going to get off Facebook when it serves as a primary way of staying in contact with people back home?
- Don’t we want to prevent crime rather than simply clean up after it?
To his credit, Constine does mention that there are legitimate privacy concerns out there. Good. And he is definitely right that the media goes too far at times. Yet, there is a lack of nuance in the argument being made here. More than fearmongering is at stake, namely his fundamental moral assumptions.
To wit, discussing a Facebook app that was not allowed to use cell phone numbers and addresses, he complains:
> Now we still can’t choose whether to grant Facebook apps information that on and offline marketers ask us for all the time. So instead of human connection and faster shopping that could help the economy, fear trumped innovation and we got no improvements.
Ah. So that’s it. There was a company that didn’t get to do what it wanted, so he’s frustrated. But why would he assume that a company should simply get to do whatever it wants?
At issue here are the values that underlie our technological society. The common view is that people should not be restricted in their attempts to innovate. Their creations should be considered beneficial until proven otherwise. Newness is of sufficient benefit to justify a product, and if there is any doubt, the potential for economic gain from the new product trumps reasonable caution about liabilities. In this view, anyone who wants to slow the pace of progress demonstrates a lack of reason and respect for the rights of the entrepreneur. Throw in a little bit of patriotic “help the economy in our time of need” and the argument is won.
Having worked as a programmer, I understand his excitement about creating new stuff that has great potential. Yet, new is not the only value. Nor is “faster shopping”.
It seems to me that software engineers and internet startups should have to think through their systems in the same robust ways that other kinds of engineers do. Basic engineering ethics requires that people who create products do robust testing to make sure that their systems don’t harm the people who use them. Among engineers there is an emphasis on the common good: in order for a product to be successful, it needs to enhance the world of both the consumer and the company, and be reasonably certain not to open up users to undue harm. Harm may still exist, but if so, it must be consented to by the consumer and by the society in which the product is sold.
And that’s where Constine’s argument breaks down. Simply put, lots of intelligent people want Facebook to protect privacy. Lots of intelligent people think that it’s better to hold back on those cell phone numbers. They aren’t just ignorant. How many of us hate, hate, hate getting junk mail at home and calls from telemarketers at dinner (because that “Do Not Call” list didn’t solve the problem)? But we can’t redo that one. So we don’t want to make the same mistake in the new online world.
Common good would suggest that innovation is important to society. But it would also suggest that innovation is not the goal of society. Human development is the goal of society. Sometimes slowing innovation is the best way to ensure that human development is not compromised. It’s not so bad to slow down. Slowing down was, for instance, critical to reducing the dangers of medical technology innovation over the course of the twentieth century. While big pharma would say we move too slowly, avoiding mistakes like the one that happened with Thalidomide has helped, not hurt, society. Isn’t slowing down what thoughtful people from Tibet to Walden have been saying forever?
I agree wholeheartedly with Constine when he says, “it’s time we start thinking critically about what makes us uncomfortable.” I just think he underplays the important role discomfort plays in telling us when we are sick.