One of the key problems with innovating is that you frequently run into problems that were totally unforeseen. That’s why engineering practice—driven by robust professional ethics commitments—requires that systems be tested and reevaluated both during the creation phase and after deployment, in order to catch problems as they arise. This process is what drives companies to issue physical product recalls and software updates. Innovation requires thoughtful consideration of possibilities, good and bad. It also requires a boatload of humility, because you will make mistakes.
Failing to account adequately for unforeseen consequences is one thing. Failing to account for known negative consequences is something altogether different.
In the best-case scenario, failing to account for known negative consequences means misleading the consumer who expects a product to be reliable and “as described.” In the worst-case scenario, it means needlessly endangering people’s lives. Consumer protection and product liability laws exist to provide disincentives for these sorts of practices. But at a deeper level, the economy depends upon the ability of customers to trust companies. Without an adequate level of trust that products will function as described, people won’t spend money to purchase things.
Unfortunately, it is not always so easy to distinguish the unfortunate but morally acceptable from the unfortunate and morally problematic. In part, that’s because it can be difficult to obtain sufficient facts to discern whether or not the creators knew of potential issues. Not every case has a memo or email (as in the Pinto case) that confesses to the facts.
It is also challenging because there are cases in which innovators should have known better or done better, but seem to have missed something. That’s why we have legal categories like negligence, which assign culpability in a different way. Catholics would use the term “sin of omission.”
Which brings us to some recent privacy-related issues at Apple and Google.
With Apple, you have an innovative company that was developing a new product type (the tablet). Last fall, people found, much to their chagrin, that the operating system was collecting personal information like location, even though users had said they didn’t want to share this information. According to Apple, this information was collected but not shared, and provision for this collection was clearly stated in the end-user license agreement that each user accepted. This week, a judge said that the discovery portion of a class-action lawsuit against Apple over this data collection can move forward.
With Google, you have an innovative company that was iterating on a newer product type (online ads). Last year, Google placed cookies in Apple’s Safari web browser that collected personal information for Google’s use. According to Bloomberg,
The cookies allowed Google to bypass Safari’s built-in privacy protections to aim targeted advertising at users of Safari on computers, laptops, iPhones and iPads.
The Federal Trade Commission is looking to fine Google $10 million over this, in part because Google circumvented user privacy, but also because it had been warned about this: Google signed a consent decree with the commission last year about online privacy.
In both cases, remedies were swift. Both Apple and Google rectified the situation. Apple locked down the data and improved its communication about its policies. Google removed the cookies.
So, the question is, which categories do these fall into? Are these cases of unforeseen consequences, consequences that should have been foreseen, or consequences well known but ignored?
In Apple’s case, I tend to think it falls on the former end rather than the latter. iOS was collecting data that the license agreement said it would collect, data that seems useful for improving performance through caching. It was not sharing this data beyond the OS. Apple failed, however, on two counts. First, it failed to secure the data well enough: a skilled hacker found the data, which should have been better protected and encrypted. Second, it failed to communicate clearly to users exactly what it meant when it said that it would not share user location data. Apple seems to have focused on the word “share”: it agreed not to share the data. Users focused on the “location data” part of the phrase, assuming that when they declined to share the data, Apple wouldn’t collect it in the first place. This mismatch in interpretation seems to be a primary cause of the problem.

It is entirely plausible that we would see missteps like this as innovation spreads in a relatively new sector like mobile computing. Mistakes will be made. My guess is that the people who wrote the EULA and the permissions dialog boxes thought their meaning was totally clear. And, while it doesn’t excuse the issue, from what can be seen so far, there are no reports that the collection of data caused any actual harm.
In Google’s case, I tend to think the problem lies more on the latter end than the former. Google makes its money by selling ads. Or, to be more specific, Google makes lots of money by selling highly targeted ads, because it can collect massive amounts of user data. It makes its money by finding out things about you. Some of the things it knows are public, others private. Google knew that collecting private data was a problem, yet created a cookie system that could bypass browser privacy settings. According to Bloomberg,
Google said at the time that it “didn’t anticipate this would happen” and that it was removing the files since discovering the slip.
Hmmm. Perhaps. But given the privacy issues Google has had on a regular basis for a while now (not the least of which is the consent decree), wouldn’t you think that Google’s excellent engineers would have noticed the potential for privacy problems? Google’s engineers give every impression of being very good at what they do. Somehow, it seems hard to believe that this just slipped by.
In either case, though, it suggests that both companies need to work to cultivate the imaginations of their staff. Part of the job of a good engineer or alpha-test team is to figure out all of the ways in which things can go wrong. You have to imagine every scenario, and then figure out how to protect against it. As a programmer, I would sometimes spend time banging on my colleagues’ apps, trying to break them. Those weeks were exercises in frustration, not because I hated breaking my friends’ work, but because after a couple of days, it was hard to figure out new ways that people might mess things up. It was always a challenge to my creativity.
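That kind of adversarial testing can even be partly automated. Here is a minimal, purely illustrative sketch in Python: `parse_age` is a made-up stand-in for a colleague’s app, with an empty-input bug planted on purpose, and a tiny fuzzer throws random strings at it to surface the failure modes a human tester might run out of ideas for.

```python
import random
import string

# Hypothetical validator standing in for "a colleague's app."
# The bug is planted deliberately: it indexes text[0] before
# checking for empty input, so "" raises IndexError.
def parse_age(text):
    if text[0] == "+":                 # planted bug: crashes on ""
        text = text[1:]
    if not text.isdigit():
        raise ValueError("not a number")
    age = int(text)
    if not 0 <= age <= 150:
        raise ValueError("out of range")
    return age

def fuzz(fn, trials=1000):
    """Feed random strings to fn; collect inputs that fail unexpectedly."""
    crashes = []
    for _ in range(trials):
        junk = "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, 10)))
        try:
            fn(junk)
        except ValueError:
            pass                        # a clean rejection is the expected outcome
        except Exception as exc:        # anything else is a bug worth reporting
            crashes.append((junk, type(exc).__name__))
    return crashes

found = fuzz(parse_age)
print(f"{len(found)} unexpected failures; first: {found[0]!r}")
```

A fuzzer like this never replaces a creative human tester, but it is tireless: across a thousand random inputs it will almost certainly stumble on the empty string and expose the crash.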
Perhaps that part of the imagination—how we can get things wrong—is something that needs more attention. Because we will get things wrong. The more we can avoid it, the better off we will be.