At a recent conference on public health, nutrition expert Kelly Brownell tried to explain our new food environment by making some striking comparisons. First, he contrasted the coca leaf—chewed for pain relief for thousands of years by indigenous people in South America, with little ill effect—with cocaine, a highly addictive, mind-altering substance. Then he contrasted a cob of corn with a highly processed piece of candy derived from corn syrup. Nutritious in its natural state, corn becomes something else once its concentrated sugar is poured into candy, where it can spark unhealthy, even addictive behaviors. With corn and with coca, the dose makes the poison, as Paracelsus put it. And in the modern era of “food science,” dozens of analysts may be spending millions of dollars just to perfect the “mouthfeel” and flavor profile of a single brand of chips. [1: Kelly Brownell, remarks at a panel, “Health Behaviors: Tobacco, Obesity, and Children,” Conference on Public Health in the Shadow of the First Amendment, Yale University, New Haven, CT, October 17, 2014; Michael Moss, Salt, Sugar, Fat (New York: Random House, 2013).]
Should we be surprised, then, that Americans are losing the battle of the bulge? Indeed, the real wonder is not that two-thirds of the US population is overweight, but that one-third remains “normal,” to use an adjective that makes sense only in relation to an earlier era’s norms. [2: Centers for Disease Control and Prevention, “Fast Stats: Obesity and Overweight,” last updated May 14, 2014; http://www.cdc.gov/nchs/fastats/obesity-overweight.htm.]
For many technology enthusiasts, the answer to the obesity epidemic—and many other problems—lies in computational countermeasures to the wiles of the food scientists. [3: Evgeny Morozov, To Save Everything, Click Here (New York: Public Affairs, 2013).] App developers are pioneering behavioristic interventions to make calorie counting and exercise prompts automatic. [4: David H. Freedman, “The Perfected Self,” The Atlantic (June 2012); http://www.theatlantic.com/magazine/archive/2012/06/the-perfected-self/308970/.] For example, users of a new gadget, the Pavlok wristband, can program it to give them an electric shock if they miss exercise targets. But can such stimuli break through the blooming, buzzing distractions of instant gratification on offer in so many rival games and apps? Moreover, is there another way of conceptualizing our relationship to our surroundings than as a suboptimal system of stimulus and response?
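To make the behavioristic logic concrete, here is a minimal sketch of the kind of trigger rule such a device embodies. The function names, threshold, and values are hypothetical illustrations, not Pavlok's actual interface.

```python
# Hypothetical sketch of a behavioristic fitness trigger: if the day's
# step count falls short of a target, deliver an aversive stimulus.
# Names (read_step_count, deliver_shock) are illustrative, not a real API.

DAILY_STEP_TARGET = 8_000  # assumed goal

def read_step_count() -> int:
    """Stand-in for a wearable's pedometer reading."""
    return 5_230  # placeholder value

def deliver_shock(intensity: float) -> None:
    """Stand-in for the device's aversive stimulus."""
    print(f"Zap! (intensity={intensity:.1f})")

def end_of_day_check() -> None:
    steps = read_step_count()
    if steps < DAILY_STEP_TARGET:
        # Punishment scales with how far the user fell short.
        shortfall = (DAILY_STEP_TARGET - steps) / DAILY_STEP_TARGET
        deliver_shock(intensity=shortfall)

if __name__ == "__main__":
    end_of_day_check()
```

The point of the sketch is how little room such a loop leaves for judgment: the rule fires on the number, whatever the day actually held.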
Some of our subtlest, most incisive cultural critics have offered alternatives. Rather than acquiesce to our manipulability, they urge us to become more conscious of its sources—be they intrusive advertisements or computers that we (think we) control. For example, Sherry Turkle, founder and director of the MIT Initiative on Technology and Self, sees excessive engagement with gadgets as a substitution of the “machinic” for the human—the “cheap date” of robotized interaction standing in for the more unpredictable but ultimately challenging and rewarding negotiation of friendship, love, and collegiality. In The Glass Cage, Nicholas Carr critiques the replacement of human skill with computer mediation that, while initially liberating, threatens to sap the reserves of ingenuity and creativity that enabled the computation in the first place. [5: Nicholas Carr, The Glass Cage: Automation and Us (New York: Norton, 2014).]
Beyond the psychological, there is a political dimension, too. Legal theorist and Georgetown University law professor Julie Cohen warns of the dangers of “modulation,” which enables advertisers, media executives, political consultants, and intelligence operatives to deploy opaque algorithms to monitor and manipulate behavior. Cultural critic Rob Horning ups the ante on the concerns of Cohen and Turkle with a series of essays dissecting feedback loops among surveillance entities, the capture of important information, and self-readjusting computational interventions designed to channel behavior and thought into ever-narrower channels. Horning also criticizes Carr for failing to emphasize the almost irresistible economic logic behind algorithmic self-making—at first for competitive advantage, then, ultimately, for survival. [6: Rob Horning, “Notes on The Glass Cage”; http://robhorningtni.tumblr.com/post/97819458580/notes-on-the-glass-cage.]
To negotiate contemporary algorithms of reputation and search—ranging from résumé optimization on LinkedIn to strategic Facebook status updates to OkCupid profile grooming—we are increasingly called on to adopt an algorithmic self, one well practiced in strategic self-promotion. This algorithmic selfhood may be critical to finding job opportunities (or even maintaining a reliable circle of friends and family) in an era of accelerating social change. But it can also become self-defeating. Consider, for instance, the self-promoter whose status updates on Facebook or LinkedIn gradually tip from informative to annoying. Or the search-engine-optimizing website whose tactics become a bit too aggressive, thereby running afoul of Google’s web spam team and sinking into obscurity. The algorithms remain stubbornly opaque amid rapidly changing social norms. A cyber-vertigo results, as we are pressed to promote our algorithmic selves but puzzled over the best way to do so.
This is not an entirely new problem: We have always competed for better deals, for popularity, for prominence as an authority or a desirable person. But just as our metabolic systems may be ill adapted to a world of cheap, hidden sugar, the social cues and instinctive emotional responses that we’ve developed over evolutionary time are not adequate guides to the platforms on which our algorithmic selves now must compete and cooperate. To navigate them properly, we need the help of thoughtful observers who can understand today’s strategies of self-making within a larger historical and normative context.
Sherry Turkle has written often and well on human-computer interaction, urging greater caution as we become increasingly reliant on robotics. Her close observation of vulnerable populations reveals just how profound the impact of simulacra can be:
Children approach a Furby or a My Real Baby and explore what it means to think of these creatures as alive or “sort of alive”; elders in a nursing home play with the robot Paro and grapple with how to characterize this creature that presents itself as a baby seal. They move from inquiries such as “Does it swim?” and “Does it eat?” to “Is it alive?” and “Can it love?” [7: Sherry Turkle, “Artificial Intelligence at Fifty: From Building Intelligence to Nurturing Sociabilities,” paper presented at the Dartmouth Artificial Intelligence Conference, Hanover, NH, July 15, 2006; http://www.mit.edu/~sturkle/ai@50.html.]
As any fan of the 2001 movie A.I. knows, these are profound issues in themselves. Turkle worries about a society where children no longer appreciate the difference between the born and the made, and where busy adults leave their aging parents with an array of sophisticated toys instead of visiting in person.
The Paro robot, for instance, is designed to look and act like a white baby seal but to serve human functions. Its designer claims that it can “provide three types of effects: psychological, such as relaxation and motivation, physiological, such as improvement in vital signs, and social effects such as instigating communication among inpatients and caregivers.” [8: “What is a Mental Commitment Robot?,” Paro Robots USA, accessed December 8, 2014; http://www.paro.jp/english/about.html.] Videos and studies document the seal’s positive effects on the mood of the chronically lonely. But Turkle suggests that the innovation may just excuse neglect. Why visit Grandma, some might rationalize, when a robotic animal companion is available?
Defenders of the Paro point to the practical need for this type of innovation, given the loneliness of many institutionalized elderly people. Even pet therapists can only visit for a few hours at a time. If there really is no alternative, no human or animal available to show concern or affection, isn’t the Paro better than nothing? To the extent that the “ages of man” come full circle to infancy in dotage, could not the Paro be seen as a high-tech version of the Velveteen Rabbit? Moreover, the Paro isn’t substituting for real animal companionship for the vast majority of us, the defenders argue, but only for a small segment of the population whose care needs could easily overwhelm private means and public coffers.
Breaking the Spell of Mesmeric Technologies
But we need to resist convenient rationalization here. Robotic caregiving makes far more sense in a society where the adult children of the elderly are under constant pressure to work more or to engage in “helicopter parenting” to keep their own children on track. The “sandwich generation” has to sacrifice something. If, by contrast, productivity gains were better distributed (and converted, at least in part, to more leisure time rather than money), demand for robots in elder care would likely diminish. So, too, would the robotic seal seem a far less appealing presence by comparison if care workers themselves were more professionalized and attached—two qualities that are hard to expect from a poorly paid, precarious, and frequently contingent work force.
The diffusion of innovations like the Paro is due less to the existence of the device itself than to the need it serves in a certain sociotechnical system: Particular political economies can either encourage or discourage the robotic colonization of caregiving. The fate of Alzheimer’s sufferers, to come to my larger point, is not entirely different from that of healthy working adults facing algorithmic systems. We, too, are routinely manipulated by devices and are often blind to their ultimate ends. The personalization of devices can also isolate us, as communities, neighborhoods, churches, and even families disintegrate behind our hypnotic fascination with whatever images, sounds, and text algorithms determine will maximize our “time-on-machine.”
The 2013 film Her is an extraordinary evocation of an increasingly likely future in which the billions of conversations captured by telephone companies (or Google, or the National Security Agency) are used to design an operating system (OS) that almost perfectly simulates a witty, supportive lover or a devoted friend. The film presents the OS on a wondrous journey of self-discovery, joyfully embracing the philosophy of Alan Watts as it “bonds” with other artificial intelligences. A more realistic plot would suggest how the OS reflected the will of the investors and managers who financed and built it. We need some common, clear awareness of whom the algorithms behind the screen truly serve before we accept their pervasive presence in our lives.
We also need to recognize the crude opportunism behind some efforts to elevate the status of algorithms. Have you ever been left in a kind of suspended state as a friend rifles through e-mail messages or texts someone? (Of course you have!) At present, the distraction is interpreted as rudeness. But if we accept the friend’s designation of an operating system as his “girlfriend” or “wife,” all bets are off.
If manners are “small morals,” increasingly frequent reveries of constant partial attention represent a shift in our ethical orientation—toward an intense connection with a cyber-network, and away from the presence of those around us. The devices become an excuse for constant distraction. They engender a “new narcissism”—not mere self-concern, but narcissism in the more technical sense, of a personality so fragile that it is in constant need of shoring up.
Technologically driven emotional support systems can demand back as much as they give. To tap into them, we increasingly find ourselves on a “positional treadmill” where various devices and apps become necessities we neglect at our peril. Fail to check Facebook regularly enough, and you may miss out on important news about your friends or even job opportunities.
Used at first to achieve particular ends, the new technologies of connection are not merely instrumental to, but constitutive of, our ends. They change how we think and reinforce certain character traits. When devices such as the iPhone are heralded as life changing, we may well be participating in a tech culture that simultaneously enables “social networking” and displaces real-world friendships.
Modulated Selfhood
Beneath the surface of Internet policy disputes, there is a deeper, even ontological set of orientations to technology. On one side are advocates of “mastery,” who try to resurrect old legal principles and public values to order cyberspace. On the other are adepts of “attunement,” who caution the legal systematizers. When the “masters” propose a new constraint on the network, the “attuners” tend to parry with calls for humility. Law should adapt itself to the emergent order online, they say, should respect its inner music, its patterns of information exchange and hierarchy. [9: I borrow the terms “mastery” and “attunement” from William Connolly’s political theory in Identity/Difference: Democratic Negotiations of Political Paradox (Minneapolis: University of Minnesota Press, 2002). Internet policy meetings frequently feature tensions between advocates of attuning law to new technology and those who want the reverse.]
Both mastery and attunement can map to generally “conservative” or “progressive” policy positions. In privacy policy, the “masters” are often progressive, trying to impose some fair information practices on a Wild West of data brokers. The “attuners” are usually “free market” advocates, disciples of Friedrich Hayek who want to see spontaneous order online. Given the importance of intermediaries, attuners can be either privacy advocates (vis-à-vis government) or detractors (with respect to rules for companies). One year, they may press Congress not to force cable companies to track and stop music file-sharing; the next, they may fight for “deregulation” that permits the same companies to degrade quality of service for those deemed pirates by automated detection systems. As corporate media interests strike more deals with intermediaries, the politics of “attunement” have become increasingly neoliberal. The “online order” to which policymakers are told they must adapt is one comprehensively ordered by giant firms.
Friedrich Hayek’s influence on our legal order in general—and on Internet law in particular—is underappreciated. A law student cannot leave school without imbibing Hayek’s views about unintended consequences, perverse incentives, and the clumsiness of bureaucracies compared with the nimbleness of communities and markets. Those libertarian maxims are sometimes a useful corrective to statist overreach. But Hayek (in The Road to Serfdom) and many of his followers did attunement a disservice by tying it too closely to particular conservative political agendas.
In Configuring the Networked Self, Julie Cohen takes on the would-be “masters” of the Internet, although not from a libertarian position. [10: Julie Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (New Haven, CT: Yale University Press, 2012).] Cohen’s normative framework is eclectic, situated, and particularist. She adopts no sweeping philosophical desiderata to unify her treatment of data and content online. Nor do economic measures of efficiency and utility motivate her project. Cohen’s Networked Self is a book that takes online subjectivity and community seriously, in both established and emergent forms. In it, Cohen cautions against either public or private entities trying too hard to monitor and control information flows online. She does so not in the name of fairness, welfare, utility, or deontology, but in the name of play—or, more expansively, of recognizing the value of intrinsically worthwhile, “pursued-for-their-own-sake” activities on the Net. Grounded in cultural theory and thick descriptions of life online, Cohen’s work should lead thinkers within law—and well outside it—to reconsider how they think about critical problems in the design and regulation of technology.
As Cohen observes, most legal scholars have, within the framework of liberal political theory, framed privacy problems as those of rational choosers (not having a chance to obtain precisely the level of information control they want) and romantic dissenters (afraid to express well-formed, oppositional political views). She wants to open up the conversation by describing the ways in which information control and monitoring hurt other “selves”: namely, socially constructed selves who do not merely try to insert themselves into markets and political processes, but who are constantly being influenced by the world around them.
Surveillance apparatuses—be they public or private, or (as increasingly is the case) an inscrutable mix of the two—are set up not just to stop certain obviously bad outcomes (like identity theft or terrorism) but to create certain kinds of people. As Cohen observes, “Surveillance employs a two-fold dynamic of containerization and affective modulation in order to pursue large-scale behavioral modification.” [11: Ibid.] Even without the crude efforts of metrics firms such as Klout or the “risk scores” assigned by the Transportation Security Administration, we all sense that certain activities win the approval of assorted watchers and others do not. [12: On Klout, see Frank Pasquale, “Gamifying Control of the Scored Self,” Balkinization, December 19, 2011; http://balkin.blogspot.com/2011/12/gamifying-control-of-scored-self.html.] Behavior is modulated accordingly—sometimes in ways that are best for all involved, sometimes not.
One dystopian possibility of the thoroughly modulated life is imagined by Gary Shteyngart in his 2010 novel Super Sad True Love Story, [13: Gary Shteyngart, Super Sad True Love Story: A Novel (New York: Random House, 2010).] a book that has been favorably compared with George Orwell’s 1984. In Shteyngart’s fictive world, people’s credit scores run from 400 to 1600, conveniently displayed on “credit poles” at any retail establishment. For those who crave even more displays of intimate self-worth, their “personality” and “sexiness” can be measured, by means of smartphones, against those of fellow employees or bar patrons. The protagonist’s employer posts instant updates of salespersons’ “mood + stress indicators,” encouraging them to optimize their attitudes for demanding clients.
In an anomic world where social mores are adrift, the characters in the novel scramble to “find their place” in the social pecking order by desperately comparing themselves with each other. No one dwells on what these metrics signify or how they are calculated; they just want high ones. [14: As Ben Grosser has observed, this obsession with “more” is not far-fetched, given current media practices. Grosser, “What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook,” Computational Culture (2014); http://computationalculture.net/article/what-do-metrics-want.] Paraphrasing Foucault: We are seeing the rise of practices of the quantified self, where easy-to-use dashboards instantly display our relative popularity. Like Max Weber’s Calvinists working to seem worthy of being counted among the elect, Shteyngart’s characters hustle to boost their numbers. Black-box rankings become a source of identity, the last “objective” store of value in a world where instability and short attention spans undermine more complex sources of self-worth.
A defender of surveillance-driven scoring would insist that the sum of the positive modulations (such as the bad behaviors avoided, consumer deals consummated, tax evasions foiled) is greater than the sum of negative modulations (e.g., data-driven bilking or the repression of valid but unpopular opinions). [15: An example of data-driven bilking might be a “sucker’s list” compiled by a casino that targeted “problem gamblers” for advertisements on the basis of behavior tracked during their Internet use. The more such Internet users deploy “do not track” software, the more they may avoid such negative targeting.] But whatever one thinks of surveillance-driven scoring in the antiterror apparatus, should it really drive action in so many other contexts? Do we want to be the kind of people who are constantly assessing how each word or deed will affect permanent reputational profiles? Do we want to live in a society that is (or bills itself as) squeezing every last bit of efficiency out of its members? We could avoid a great deal of crime by installing persistent, immutable video recording in all homes. But even today’s fashionable behaviorists would likely reject that proposal out of hand, because the “society of control” it portends is far more frightening than the increment of crime it would stamp out. [16: Gilles Deleuze, “Postscript on the Societies of Control,” October 59 (Winter 1992), 3.]
Repeating Ourselves to Death
Yet little is done to resist algorithmic scoring and the surveillance that enables it. Few of us have recognized that behind most encomiums to the power of “Big Data” and “predictive analytics” there is a vast and often unaccountable apparatus of sensors and data controllers. Indeed, there may be a cultural trend afoot to participate in such surveillance, to turn it on oneself via “lifelogging” or on others via casual voyeurism. Few will pause to consider the many pernicious effects of persistent digitized memory, as explicated in Anita Allen’s prescient work on surveillance. [17: Anita Allen, “Lifelogging, Memory, and Surveillance,” February 19, 2008; http://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=1166&context=faculty_scholarship.] Allen, a professor of law at the University of Pennsylvania, observes that there are psychological hazards in store for selves committed to recording and quantifying their every move, ranging from excessive rumination on mistakes to the persistence of traumatic memories. Predictable demands for the sharing of such data threaten to make every connected device a future snitch, ready to hold us to account for inefficient or antisocial behavior. But it is hard to communicate such distant and abstract risks; this leaves what Allen calls “unpopular privacies” at the mercy of technological evolution and chaotic consumer choices.
For some, the prevailing quiescence proves that we need more surprising, more arresting characterizations of surveillance. But a drumbeat of revelations is a double-edged sword. Supercookies, device fingerprinting, or carrier-embedded tracking codes outrage the privacy community’s insiders. But for most citizens they prove a kind of background noise with precisely the opposite message: “You are always being watched; only a naif would expect privacy in today’s world.” In other words, the effect of “bombshell” surveillance stories may be the exact opposite of their authors’ intention: a sort of shellshock, a dazed resignation to constant surveillance.
Social theory helps us understand the strangely self-defeating nature of supposedly shocking revelations. A critical thinker here is William Bogard, whose farsighted book The Simulation of Surveillance: Hypercontrol in Telematic Societies was published about a decade before the Internet of “things” (the name given to wireless sensor networks embedded into the built environment and objects within it, including the human body) made “hypercontrol” a real possibility. [18: William Bogard, The Simulation of Surveillance: Hypercontrol in Telematic Societies (New York: Cambridge University Press, 1996).] The simultaneous neologism and archaism of “telematic” suggests a startling premise of the book: that surveillance is meant just as much to control the future as it is to record the past. We are surrounded by systems of prediction and control. The supervision (via super-vision) here is not simply a way of stopping particularly bad acts but of shaping behavior toward certain ends. [19: See, e.g., John Gilliom and Torin Monahan, SuperVision: An Introduction to the Surveillance Society (Chicago: University of Chicago Press, 2012).] The better the surveillance becomes, the better the “men behind the camera” can plan, behavioristically, matrices of penalties and rewards to reinforce acceptable behavior and deter terror, crime, antisocial behavior, suspicious activities, lack of productivity, laziness—whatever detracts from the gross domestic product and homeland security. Jeremy Bentham’s ecstatic claim for the Panopticon—“Morals reformed—health preserved—industry invigorated—instruction diffused—public burthens lightened—Economy seated, as it were, upon a rock—the gordian knot of the poor-law not cut, but untied—all by a simple idea in Architecture!” [20: Jeremy Bentham, The Panopticon Writings, ed. Miran Bozovic (London: Verso, 1995), 31. First published 1787.]—would not be out of place in the prospectuses of Silicon Valley startups or spy agency mission statements.
Kate Crawford, senior fellow at New York University’s Information Law Institute, captures an important cultural dynamic spurred by these aspirations:
If we take [the] twinned anxieties—those of the surveillers and the surveilled—and push them to their natural extension, we reach an epistemological end point: on one hand, the fear that there can never be enough data, and on the other, the fear that one is standing out in the data. These fears reinforce each other in a feedback loop, becoming stronger with each turn of the ratchet. As people seek more ways to blend in—be it through normcore [i.e., consciously ordinary] dressing or hardcore encryption—more intrusive data collection techniques are developed. [21: Kate Crawford, “The Anxieties of Big Data,” The New Inquiry (May 30, 2014); http://thenewinquiry.com/essays/the-anxieties-of-big-data/.]
How intrusive will the data collection get? Some technology, says Norberto Andrade, “promises to catch in the act anyone who tries to fake a given emotion or feeling.” [22: Norberto Andrade, “Computers Are Getting Better than Humans at Facial Recognition,” The Atlantic (June 9, 2014); http://www.theatlantic.com/technology/archive/2014/06/bad-news-computers-are-getting-better-than-we-are-at-facial-recognition/372377/.] Marketers can’t ignore this edge. Neither can the Secret Service or Samaritans Radar (an app that red-flags tweets that potentially indicate mental disturbance), as they desperately seek a “sarcasm detector” to isolate true threats (of harm to self or others) from the deluge of tweets they now access in real time.
All this surveillance can be used to very good ends. For example, one startup, Deconstruction (http://www.deconstruction.co/), monitors the noise and dust levels from construction sites. But it should be obvious that in its more minatory forms, surveillance is endangering creativity, dissent, and complex thinking. Stray too far from the binary of Democratic and Republican politics, and you risk being put on a watchlist. Protest shopping on Black Friday, and some facial recognition database may forever peg you as a rabble-rouser. Take a different route to work on a given day, and maybe that will flag you—“What is she trying to avoid?” A firm like Recorded Future might be able to instantly detect the deviation. Read the wrong blogs or tweets, and an algorithm like the British intelligence services’ Squeaky Dolphin is probably keeping a record. And really, what good is site-monitoring software in the absence of laws that punish, say, the use of jackhammers at construction sites before daybreak? Will the types of protesters whose activism helped make cities livable be able to continue their work as surveillance spreads? Billing sensor networks as integral to the “smart city” is only reassuring if one assumes that a benign intelligence animates its sensing infrastructures.
Surveillance is not just a camera but an engine, driving society in a certain direction. It is not a mirror of our nature, but a modulating source of selves. What defense analysts characterize as dissent risk (or banks see as “Vox Populi Risk”) can easily expand to include the very foundations of self-governance. We cannot let law enforcement, homeland security, and military intelligence agencies continue to scrutinize dissent, deviance, or disagreement that is not strongly connected with serious lawbreaking or national security threats. If we do so, we risk freezing into place a future that rigidly reenacts the past, as individuals find that replicating the captured patterns of past behavior is the only safe way to avoid future suspicion, stigma, and disadvantage.
From Data We Are Made
As we are treated algorithmically (i.e., as a set of data points subject to pattern recognition engines), we are conditioned to treat others similarly. Consider the “Groundhog Date,” now marketed by Match.com. Participants can e-mail photographs of their ex-girlfriends or ex-boyfriends, so that facial recognition software can find the most similar faces among millions of lonely hearts and lotharios. Few brag about using the service: However habitual our actions may be, no one wants to be typecast as a typecaster. But critics worry that the Groundhog Date represents an outsourcing of our humanity—and a disturbing acquiescence to the status of “guinea pigs” that OkCupid’s impresario, Christian Rudder, cheerfully touted in a blog post about the site’s experimentation on its users. [23: Evan Selinger, “Today’s Apps Are Turning Us into Sociopaths,” Wired (February 26, 2014); http://www.wired.com/2014/02/outsourcing-humanity-apps/.]
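The mechanics of “find faces like my ex” are generic similarity search. The sketch below, under assumed stand-in functions rather than any dating site’s actual pipeline, shows the basic shape: reduce each face to a vector, then rank candidates by how close their vectors sit to the query.

```python
# Generic sketch of face-similarity search: embed each face as a unit vector,
# then rank candidates by cosine similarity to the query. The embedding
# function here is a stand-in, not any dating site's real model.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model (e.g., a neural network)."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def most_similar(query: np.ndarray, catalog: list[np.ndarray], k: int = 5) -> list[int]:
    """Return indices of the k catalog faces closest to the query face."""
    q = embed_face(query)
    scores = [float(q @ embed_face(img)) for img in catalog]  # cosine similarity
    return sorted(range(len(catalog)), key=lambda i: scores[i], reverse=True)[:k]

# Usage with toy "images" (random pixel arrays standing in for photos):
photos = [np.random.rand(64, 64) for _ in range(100)]
ex_photo = np.random.rand(64, 64)
print(most_similar(ex_photo, photos, k=3))
```

The person on the other end of the match never learns that a resemblance score, not a choice, put them at the top of the list.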
Cultural theorist Rob Horning dissects these paradoxes, identifying a “data self” that emerges through the process of “sharing, being shared, being on a social graph, having recommendations automated, [and] being processed by algorithms.” [24: Rob Horning, “Notes on the ‘Data Self,’” The New Inquiry (February 2, 2012); http://thenewinquiry.com/blogs/marginal-utility/dumb-bullshit/.] Horning models these stimuli from a political and economic perspective, revealing similarities between casinos and major Internet platforms: “Like video slots, which incite extended periods of ‘time-on-machine’ to assure ‘continuous gaming productivity’ (i.e., money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers.” [25: Rob Horning, “Reparative Compulsions,” The New Inquiry (September 13, 2013); http://thenewinquiry.com/blogs/marginal-utility/reparative-compulsions/.] That’s one reason for headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” [26: Bianca Bosker, “Teens Can’t Stop Using Facebook Even Though They Hate It,” Huffington Post (June 24, 2014); http://www.huffingtonpost.com/2014/06/24/teens-facebook_n_5525754.html.] There are sociobiological routes to conditioning action. [27: Emily Yoffe, “Seeking: How the Brain Hard-Wires Us to Love Google, Twitter, and Texting,” Slate (August 12, 2009); http://www.slate.com/articles/health_and_science/science/2009/08/seeking.html; Yasha Levine, “The Psychological Dark Side of Gmail: Google is using its popular Gmail service to build profiles on the hundreds of millions of people who use it,” Alternet (December 31, 2013); http://www.alternet.org/media/google-using-gmail-build-psychological-profiles-hundreds-millions-people.] The platforms are constantly shaping us, on the basis of sophisticated psychological profiles.
So when do Internet platforms start stunting users, rather than helping them realize their own authentic ends? Facebook’s recent psychology experiment sharply posed that question for those on both sides of the platform. Researchers manipulated the newsfeeds of a subset of about 700,000 users to demonstrate that they tended to be less happy (or, at least, to post less upbeat material) once they were exposed to more downbeat material than they normally would be. Tech enthusiasts hailed the finding as one more incremental step toward perfecting a new science of society. Ordinary Facebookers, resigned to enduring ever more intrusive marketing manipulation, were thrown for a loop by the news that they might be manipulated for no commercial reason at all. Critics claimed that the research violated informed consent laws and principles, eroding user autonomy.
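A minimal sketch of that experimental design, with invented sentiment scoring and probabilities rather than Facebook’s actual code, makes the asymmetry plain: the intervention is a few lines of filtering, invisible from the user’s side of the screen.

```python
# Illustrative sketch (not Facebook's code) of an emotional-contagion design:
# for users assigned to the "negative" condition, posts with positive sentiment
# are probabilistically withheld from the feed; researchers then compare the
# sentiment of what treated vs. control users subsequently post.
import random

def sentiment(post: str) -> float:
    """Stand-in for a sentiment classifier; returns -1.0 (negative) to 1.0 (positive)."""
    positive, negative = {"great", "happy", "love"}, {"sad", "awful", "hate"}
    words = post.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / 3))

def build_feed(candidate_posts: list[str], in_treatment: bool, omit_prob: float = 0.5) -> list[str]:
    """Return the feed a user sees; treatment users have upbeat posts withheld at random."""
    feed = []
    for post in candidate_posts:
        if in_treatment and sentiment(post) > 0 and random.random() < omit_prob:
            continue  # silently withhold a positive post from the treatment group
        feed.append(post)
    return feed

posts = ["What a great day, love it", "Feeling sad and awful", "Lunch was fine"]
print(build_feed(posts, in_treatment=True))
```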
There is something even more disturbing than the lack of consent here: namely, the easy hybridization of social network analysis and social psychology experimentation. Ordinary users can’t access, challenge, or try to adapt the code that Facebook uses to order their newsfeeds, except in the crude and stylized ways offered by the company. Social scientists have to play by Facebook’s rules to have access to the data they need—and we can probably assume that a more informed consent process was either tacitly or explicitly rejected as too much of an interference with the ordinary business of Facebooking. So the restricted autonomy of the researchers in turn led to the impairment of the autonomy of the users. This example of values sacrificed in the name of market rationality is a microcosm of much larger trends in ordinary users’ experience of the Web, and researchers’ experience of their own craft.
Creating Reality
So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, as philosopher Ian Hacking has demonstrated, “theories and classifications in the human sciences do not ‘discover’ an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.” [28: Ian Hacking, “The Looping Effects of Human Kinds,” in Causal Cognition: A Multidisciplinary Debate, eds. Dan Sperber, David Premack, and Ann James Premack (Oxford, England: Clarendon Press, 1995), 368–70, quoted in Joel Isaac, “Tangled Loops: Theory, History, and the Human Sciences in Modern America,” Modern Intellectual History 6, no. 2 (2009): 397–424; http://journals.cambridge.org/action/displayFulltext?type=1&fid=5881592&jid=MIH&volumeId=6&issueId=02&aid=5881584&bodyId=&membershipNumber=&societyETOCSession=.]
It is very hard to understand the categories and kinds developed by Internet firms, because they are so secretive about most of their operations. Yet it is urgent that we try to do so, because data collection by Internet firms is creating whole new kinds of people—for marketers, for the National Security Agency, and for anyone with the money or connections to access the data and the inferences based on it. More likely than not, encoded in Facebook’s database is some new, milder version of the Diagnostic and Statistical Manual of Mental Disorders, with categories like “the slightly stingy,” who need to be induced to buy more, or “the profligate,” who need frugality prompts. Once a critical mass of flags like “I don’t want to see this” or “This is spam” accumulates around one person’s account, he may well be deemed “creepy” or “depressing,” but he may never know that, or know why the determination was made. Data scientists create these new human kinds even while altering them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” [29: Ibid.] Perhaps in the future, on being classified as “slightly depressed” by Facebook, certain users will see more happy posts. Perhaps those who seem hypomanic will be brought down a bit. Or, if their state is better for business, perhaps it will be cultivated and promoted. [30: See, e.g., John D. Gartner, The Hypomanic Edge: The Link Between (A Little) Craziness and (A Lot of) Success in America (2010).]
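A deliberately crude sketch of that “human kinds” logic, with invented thresholds and labels rather than anything drawn from Facebook’s systems, shows how little it takes to turn accumulated flags into an opaque classification that then quietly changes what a person is shown.

```python
# Hypothetical sketch: accumulate negative-feedback flags on an account,
# assign an unseen label once a threshold is crossed, and let that label
# silently modulate the user's feed. Thresholds and labels are invented.
from collections import Counter

FLAG_THRESHOLD = 25  # assumed cutoff, purely illustrative

def classify_account(flags: list[str]) -> str | None:
    """Return an inferred label, or None, from flags other users have filed."""
    counts = Counter(flags)
    if counts["this_is_spam"] >= FLAG_THRESHOLD:
        return "creepy"
    if counts["dont_want_to_see_this"] >= FLAG_THRESHOLD:
        return "depressing"
    return None

def adjust_feed(label: str | None, feed: list[str]) -> list[str]:
    """Silently modulate what a labeled user sees (e.g., show more upbeat items)."""
    if label == "depressing":
        return [item for item in feed if "upbeat" in item] or feed
    return feed

flags = ["dont_want_to_see_this"] * 30 + ["this_is_spam"] * 3
label = classify_account(flags)  # the user never learns this label exists
print(label, adjust_feed(label, ["upbeat news", "grim news"]))
```

The person so labeled has no way to inspect the threshold, contest the flags, or even learn that the category exists.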
You may think that last possibility an unfair characterization, or at least a mischaracterization of the power of Facebook. But isn’t it troubling that the company appears to have failed even to consider whether children should have been excluded from its emotion experiment? Journalists try to reassure us that Facebook is better now than it was two years ago, the company having appointed an internal review team to vet future manipulation. But the team’s standards (and even its identity) remain obscure. Astonishingly, according to Reynol Junco, an Iowa State University professor who studies human-computer interaction, Facebook has offered “no discussion of how they’re going to address the ethical concerns, and who their ethical experts are going to be, and what their ethical review process looks like.” [31: Selena Larson, “One Thing Is Missing from Facebook’s Research Guidelines: Respect for Its Users,” ReadWrite (October 3, 2014); http://readwrite.com/2014/10/03/facebook-research-ethics-informed-consent.] Even when a firestorm of protest breaks out over a given intervention, the leading social network clings to secrecy, its bottom line undented.
Resisting Manipulation
The first step toward protecting the self in an age of algorithmic manipulation is to recognize such manipulation as a problem. One also needs anchors of integrity, in more substantial “sources of the self” (in Charles Taylor’s evocative formulation) than points, likes, and faves. [32: Charles Taylor, Sources of the Self: The Making of the Modern Identity (Cambridge, MA: Harvard University Press, 1989).] Protecting oneself from algorithmic domination requires more than deploying counter-manipulation to nudge ourselves back to optimal states. Rather, we must accomplish a nimble fusion of old and new: a commitment to renewing the traditions from which one draws meaning and value.
The “acids of modernity,” [33: This vivid characterization of the impact of the modern is borrowed from Walter Lippmann, A Preface to Morals (Boston: Beacon Press, 1965), 8. First published 1929.] as encoded in the software of today’s dominant platforms, are the enemies of both renewal and tradition. They can make comprehensive worldviews of all kinds seem antiquated, shrunken, or quotidian—a post from the Dalai Lama or Pope Francis on Facebook or Twitter will be rendered in the same format and style as a Clickhole come-on or an ad for teeth whiteners. And these codes are enemies of reinvention, too: There are few experiences more anesthetizing than the Pavlovian cycle of posting, liking/faving, being liked/faved, and “engagement” online. [34: Jodi Dean, Blog Theory: Feedback and Capture in the Circuits of Drive (Malden, MA: Polity, 2010).] Without a stronger sense of commitments that endure above and beyond the feedback and control mechanisms of Big Data and big platforms, we are doomed to selves comprehensively shaped by them.
Sources of value will probably differ for each of us. I can only describe, rather than prescribe, a path here. For example, Catholic social thought is an extraordinarily rich source of person-centered social theory. Pope Francis echoed decades of encyclicals in his 2013 critique of “an impersonal economy lacking a truly human purpose.” [35: Pope Francis, “Amid the Crisis of Communal Commitment,” in Evangelii Gaudium [apostolic exhortation] (Vatican City: Vatican Press, November 24, 2013), 47 (para. 55); http://w2.vatican.va/content/dam/francesco/pdf/apost_exhortations/documents/papa-francesco_esortazione-ap_20131124_evangelii-gaudium_en.pdf.] We have had decades of policy arguing for more “flexible workers,” who can turn on a dime to meet any demand by employers (and who will now be monitored ever more closely to assure compliance). But why not reverse that logic and create economic structures better suited to human flourishing?
Algorithms for Flourishing
One final example of algorithmic self-making vindicates Pope Francis’s intervention. Labor activists have recently criticized scheduling software for imposing maddeningly unpredictable schedules on workers. To achieve even marginally improved profit margins, chains like Starbucks have used predictive analytics to break labor time into ever smaller chunks. They have assigned hours on a week-by-week basis, or even day-by-day, leaving workers with little or no power to plan their days in advance. The software has been blamed for this development. But it could just as easily assist workers in juggling labor, caregiving, and leisure by creating more flexible scheduling options and opportunities for cooperation. Computation does not need to be guided by crude profit-maximization algorithms alone. It can incorporate other values.
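To make that last claim concrete, here is a minimal sketch, under invented workers, shifts, and weights, of a scheduler whose scoring function counts workers’ stated preferences alongside coverage, rather than minimizing labor cost alone. It illustrates the alternative the paragraph gestures toward; it is not any retailer’s actual software.

```python
# Minimal illustrative scheduler: assign shifts by scoring candidates on both
# seniority and each worker's own availability preferences, rather than on
# labor-cost minimization alone. All names, weights, and data are invented.

PREFERENCE_WEIGHT = 2.0  # how heavily worker preferences count alongside seniority

workers = {
    "ana":   {"seniority": 5, "preferred": {"mon_am", "tue_am"}, "max_shifts": 2},
    "bo":    {"seniority": 2, "preferred": {"mon_pm", "tue_pm"}, "max_shifts": 2},
    "chris": {"seniority": 8, "preferred": {"tue_am"},           "max_shifts": 1},
}
shifts = ["mon_am", "mon_pm", "tue_am", "tue_pm"]

def score(worker: str, shift: str) -> float:
    """Higher is better: seniority plus a bonus for shifts the worker actually wants."""
    info = workers[worker]
    bonus = PREFERENCE_WEIGHT if shift in info["preferred"] else 0.0
    return info["seniority"] + bonus

def build_schedule() -> dict[str, str]:
    assigned_counts = {w: 0 for w in workers}
    schedule = {}
    for shift in shifts:
        candidates = [w for w in workers if assigned_counts[w] < workers[w]["max_shifts"]]
        if not candidates:
            continue  # leave the shift unfilled rather than force an unwanted assignment
        best = max(candidates, key=lambda w: score(w, shift))
        schedule[shift] = best
        assigned_counts[best] += 1
    return schedule

print(build_schedule())
```

Changing a single weight in such an objective shifts the balance between the firm’s convenience and the worker’s ability to plan a life; the politics lives in that parameter.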
Of course, not everyone is going to want to work the worst hours. Power—whether in the form of a manager or seniority rules—will always have some place in labor relations. But traditional constraints on the scope of work demands can soften power’s worst effects. A sense of holidays as “holy days,” time outside the ever-quickening cycles of productivity maximization and networked self-expression, is another bulwark against algorithmic imperatives. From a purely economic perspective, disputes over employees’ prerogatives to be off on, say, Thanksgiving, might seem trivial: What’s the importance of setting aside that particular day, above all others? But there’s more to a good life than bargaining, getting, and spending.
We need breaks from the algorithmic tools that, at bottom, are designed to accelerate and intensify that commerce. And that does not necessarily require fleeing all technology. The Roman Catholic Church itself adapts its methods, if slowly, to a technologized world. The Jesuit podcast Pray as You Go is a wonderful resource for reflection in the midst of an ever more accelerated social world. The range of spiritual podcasts or Internet resources is extraordinary, if one has (and takes) the time to look.
Of course, as social theorists like Hartmut Rosa have observed, there is a delicate balance between appropriating new technologies and being appropriated by them. Rosa’s theory of modernity would likely characterize momentary escapes from algorithmization as a kind of safety valve that ultimately conduces to the resilience of computational acceleration of our social world. [36: Hartmut Rosa, Social Acceleration: A New Theory of Modernity (New York: Columbia University Press, 2013), 87. Rosa writes that “strategies of slowdown may be indispensable presuppositions for the further acceleration of other processes. They are implemented by both individual actors and social organizations. On the level of the individual, one can count retreats at monasteries or courses in meditation, yoga techniques, and so on as belonging in this category insofar as they are meant in the end to serve the goal of coping with the swift-paced life of the workplace, relationships, or everyday routine even more successfully, i.e., faster, afterward. They represent oases of deceleration where one goes to ‘refuel’ and ‘get going again.’ In addition, attempts to, for instance, assimilate more learning material in a shorter time through the conscious slowing down of particular learning processes or to heighten innovativeness and creativity through deliberate breaks for rest unmistakably constitute strategies of acceleration-through-slowdown.”] Yet without such opportunities to stand back from and reflect on our moment-by-moment bombardment with texts, tweets, e-mails, and status updates, it is, sub specie aeternitatis, hard to see how any more humane social order could arise.
Such an order, if possible, will depend on a pattern of self-making far removed from the buzzing behaviorism of programmed apps and schedulers. Reflecting on the problem of overeating in 1976 (a year also marked by anxieties over automation), Charles Taylor contrasted two approaches to the problem: one marked by the “contrastive language of qualitative evaluation” and another based on an assessment of the “quantity of satisfaction” afforded by alternative paths of action. [37: Charles Taylor, “Responsibility for Self,” in The Identities of Persons, ed. Amélie Oksenberg Rorty (Berkeley: University of California Press, 1976), 102.] While apps could easily help us implement the latter, utilitarian approach, the former is more complex. “Strong evaluation,” in Taylor’s terms, requires us to classify desires as “higher or lower, virtuous or vicious, more or less fulfilling, more or less refined, profound or superficial, noble or base.” It is a mode of assessment in which desires “are judged as belonging to qualitatively different modes of life, fragmented or integrated, alienated or free, saintly or merely human, courageous or pusillanimous, and so on.” It is hard to imagine such categories integrated into five-star rating scales or gamified badges. They elude the commensuration that is constitutive of computational culture. [38: See, e.g., David Golumbia, The Cultural Logic of Computation (Cambridge, MA: Harvard University Press, 2009).]
Criticism of algorithms must go beyond merely recognizing the emptiness of virality or the numbing self-reference inherent in the algorithmic economy’s obsession with “metrics,” “engagement,” and “impact.” Without robust backstops of cultural meaning, and the fight to preserve them, those at the top of society will increasingly engineer out of daily experience all manner of “inconvenient” cultural and social practices. The least we can hope for is some clear understanding of how the strategies the powerful deploy affect how we see the world, how we are seen, and how capital is deployed. And we must work to recognize and preserve those fonts of value that are so rarely encoded into the algorithms of the everyday.