Consider four news stories from the past year. First, the Wall Street Journal reported last November that Google had quietly entered the medical records business through an agreement with the health-care provider Ascension. This machine-learning venture—code-named “Project Nightingale”—harvested names, test results, diagnoses, hospital admissions, and prescription histories from more than 2,600 hospitals across twenty-one states without informing any of the patients or physicians whose records these were. Second, in March of the current year, Washington’s governor, Jay Inslee, signed a law regulating the use of facial recognition within the state. Hailed by some as model legislation, the law was written by a state senator who is in management at Microsoft. Third, in April, an Israeli spyware firm, NSO, alleged in court filings that, in 2017, WhatsApp (a Facebook-owned messaging service) attempted to buy NSO’s surveillance technology in order to spy on Apple devices. Finally, Apple had its own surveillance embarrassment in the summer of 2019, when a whistleblower revealed that the company had been farming out Siri recordings—with or without the user’s intentional activation of the virtual assistant—to contractors around the world for sound quality tests. Among the recorded material: confidential medical information, drug deals, and the intimacies of couples between the sheets.
All of these examples come from a file I started to keep after reading Shoshana Zuboff’s The Age of Surveillance Capitalism, in which Zuboff argues that surveillance capitalism represents “a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales.” It is a big book, an important book, and a conscious attempt—successful in my view—to write a magnum opus on the way capitalism has been quickly transformed as a result of the digital revolution. Your hospital records, your facial features, your Internet activity, call history, and app usage, your most private utterances: All of these facets of human life have become data that Big Tech can mine. As Zuboff shows, this captured data allows Big Tech to assemble digital pictures of what you like, where you go, who you know, who you are. That shadowy portrait then becomes the basis of predictions about where you’ll go next, what you’ll do, buy, and want next.
Zuboff goes even further than prediction, though. She draws on the behavioral science of B.F. Skinner to argue that Big Tech’s ambition, ultimately, is “guaranteed outcomes,” in which its products not only forecast our next steps but also steer them. You may or may not feel controlled by Google’s search results, YouTube’s recommendations about what to watch next, or Amazon’s suggestions about what might interest you; but we must grant that the architecture of our decision making is shaped by these platforms, which, moreover, are working not just from what we explicitly tell them but also from all of the “bread crumbs” we leave behind us as we search, play, write, and snap.
Among the implications of this economic model is a drive among the major players for ever more data streams to sharpen their predictions—against the rival claims of other data gatherers. Facebook’s attempt to break into Apple’s data hoard is a case in point. So is Project Nightingale. Its stated purpose is laudable: to help health-care providers determine what ails a patient, which drugs to prescribe, even which doctors should be assigned to a case. The trouble is that Google is also gathering information on all of the named parties through other channels.
Writing in The Atlantic about the project and Google’s acquisition of Fitbit the same month, Sidney Fussell wisely observed that “suddenly the company that had logged all our late-night searches about prescriptions and symptoms would potentially also have access to our heart rates and step counts.” Google has claimed that there are safeguards in place to prevent data from one side of its business from slipping over into another. But such claims have no substance. The business model depends on knowing more about you. An insurance company might appreciate getting tips from Google not only about people of your age, sex, race, or occupation but also about you, specifically.
You may now be thinking, “This must be stopped!” The difficulty of our current circumstances, however, is aptly demonstrated by the state of Washington’s facial recognition law. Microsoft actively lobbied for the law, which is widely seen as a win for the company, being at once a defeat for those who sought an outright ban and a model on which to base similar legislation in other states. But Microsoft also stands ready to provide surveillance technology that fits within the law’s parameters.
Government ties to Big Tech run deep. Over the last eighteen years, Big Tech executives have repeatedly served as government advisers and government workers have gone to work for Big Tech. Zuboff writes that the Google Transparency Project, for example, “found that by April 2016, one hundred and ninety-seven individuals had migrated from the government into the Googlesphere, and sixty-one had moved in the other direction. Among these, twenty-two White House officials went to work for Google, and thirty-one Googlesphere employees joined the White House or federal financial boards with direct relevance to Google’s business.” That a state senator is employed by Big Tech is a notable development only in terms of its efficiency.
The Age of Surveillance Capitalism ends with a call to action: We must, Zuboff argues, cry out “No more!” just as the people of East Berlin once did. Only then can we “reclaim the digital future as humanity’s home.” It’s a fitting end to the book, and a sentiment I support. But if there was some reason for hope last year as government scrutiny of Big Tech seemed to be ramping up, the pandemic has raised concerns that a fresh bargain is being struck between government and Big Tech. As the columnist Lionel Laurent has observed for Bloomberg Businessweek, “The current crisis has emphasized…how much of what the tech industry’s billionaire-run corporations provide resembles essential public, or quasi-public, goods and services.” The crisis has revealed how much we rely on these companies—to deliver food to us, to keep us informed, to provide the infrastructure for many industries to function now remotely, to divert us. The timing of the pandemic couldn’t be worse. We may want to cry out “No more!” but our clicking says, “More, more, more.”
As I write these words, Google and Apple have just announced a rare collaboration: Within a few weeks, the companies will release application programming interfaces (APIs, the building blocks of apps) that will allow developers to build “contact-tracing apps” that keep track of all the phones in the vicinity of your own device, regardless of which platform they use. Thus, anyone who has recently been in your immediate vicinity and later falls ill can instantly and anonymously alert you. In the coming months, moreover, the companies plan to create “a broader Bluetooth-based contact tracing platform by building this functionality into the underlying platforms” (as Apple’s announcement explains). In short, contact tracing will become a built-in feature of your phone’s operating system, something that will require you to make a deliberate decision to opt out should you have concerns.
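For readers curious how an “anonymous” alert can work at all, the basic idea can be caricatured in a few lines of Python. This is an illustrative sketch only, not the companies’ actual protocol (the real framework derives rotating identifiers cryptographically and runs at the operating-system level); the names `Phone`, `encounter`, and `exposed` are invented for the example.

```python
import secrets

class Phone:
    """A toy stand-in for a handset running a contact-tracing app."""
    def __init__(self, owner):
        self.owner = owner
        self.my_tokens = []        # random IDs this phone has broadcast
        self.heard_tokens = set()  # IDs overheard from nearby phones

    def broadcast(self):
        # Each broadcast uses a fresh random token, so an observer
        # cannot link repeated sightings of the same phone over time.
        token = secrets.token_hex(8)
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

def encounter(a, b):
    # Two phones near each other exchange their current tokens
    # over Bluetooth; no names or locations change hands.
    b.hear(a.broadcast())
    a.hear(b.broadcast())

def exposed(phone, published_tokens):
    # After a diagnosis, the patient's tokens are published; each
    # phone checks locally whether it ever heard any of them.
    return bool(phone.heard_tokens & set(published_tokens))

alice, bob, carol = Phone("alice"), Phone("bob"), Phone("carol")
encounter(alice, bob)            # Alice and Bob cross paths; Carol does not
published = alice.my_tokens      # Alice later tests positive and uploads
print(exposed(bob, published))   # True: Bob's phone heard Alice's token
print(exposed(carol, published)) # False: Carol's phone never did
```

The privacy claim rests on that last step: the matching happens on your own device, against a list of meaningless random strings, so in principle no central party learns who met whom.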
These endeavors are appealing in numerous ways. Now the digital devices we tote with us everywhere can be used to help combat the virus. The anonymity of the system promises to make it easier to alert others who may be affected. Indeed, it makes it possible to reach people one would otherwise have no way of contacting. Several commentators, including other groups building rival apps, have applauded the collaboration, as well as the companies’ assurances that privacy is a top priority. Can we trust them?