Last year, the Pew Research Center, in conjunction with Elon University’s Imagining the Internet Center, canvassed nearly one thousand knowledgeable sources—technology executives, journalists, academics, and others—for predictions about how digital technology would affect democracy in the next decade. The results make for dreary reading: Nearly half of the respondents said that digital technology would “mostly weaken aspects of democracy and democratic representation” by 2030.
In a remarkably short time, digital technology has gone from being democracy’s handmaiden to its scourge. Such a dramatic turnaround should give us pause. To imagine a future in which digital technology complements democracy requires that we recall what the digital revolution seemed to promise. We should acknowledge its victories and learn from its mistakes. In this, the first of a series of blog posts, Wheaton College’s Richard Gibson looks into the history of the promise and disappointment of digital democracy—and suggests how we might get back on a healthier course.
American society is prone, political theorist Langdon Winner wrote in 2005, to “technological euphoria,” each bout of which is inevitably followed by a period of letdown and reassessment. Perhaps in part for this reason, reviewing the history of digital democracy feels like watching the same movie over and over again. Even Winner’s point has that quality: He first made it in the mid-eighties and has repeated it in every decade since. In the same vein, Warren Yoder, longtime director of the Public Policy Center of Mississippi, responded to the Pew survey by arguing that we have reached the inevitable “low point” with digital technology—as “has happened many times in the past with pamphleteers, muckraking newspapers, radio, deregulated television.” (“Things will get better,” Yoder cheekily adds, “just in time for a new generational crisis beginning soon after 2030.”)
So one threat the present techlash poses is to obscure the ways that digital technology in fact serves many of the functions the visionaries imagined. We now take for granted the vast array of “Gov Tech”—meaning internal government digital upgrades—that makes our democracy go. We have become accustomed to the numerous government services that citizens can avail themselves of with a few clicks, a process spearheaded by the Clinton-Gore administration. We forget how revolutionary Howard Dean’s “Internet campaign” was in the 2004 Democratic primaries, establishing the Internet-based model of campaigning that all presidential candidates now use to coordinate volunteer efforts and conduct fundraising, in both cases pulling new participants into the democratic process.
An honest assessment of the current state of digital democracy would acknowledge that the good jostles with the bad and the ugly. Social media has become the new hotspot for Rheingold’s “disinformocracy.” The president’s toxic tweeting continues, though Twitter has recently attempted to provide more oversight. At the same time, digital media have played a conspicuous role in the protests following George Floyd’s death, from the phone used to record his murder to the apps and Google docs used by the organizers of protests. The protests, too, have sparked fresh debate about facial recognition software (rightly one of the major concerns in the Pew report), leading Amazon to announce in June that it was “pausing” police use of its facial recognition software for one year. The city of Boston has made a similar move. Senator Sherrod Brown’s Data Accountability and Transparency Act of 2020, now circulating in draft form, would also limit the federal government’s use of “facial surveillance technology.”
We thus need to avoid summary judgments at this still-early date in the ongoing history of digital democracy. In a superb research paper on “The Internet and Engaged Citizenship” commissioned by the American Academy of Arts and Sciences last year, the political scientist David Karpf wisely concludes that the incredible velocity of “Internet Time” befuddles our attempts to state flatly what has or hasn’t happened to democratic practices and participation in our times. The 2016 election has rightly put many observers on guard. Yet there is a danger in living headline-by-headline. We must not forget how volatile the tech scene remains. That fact leads to Karpf’s hopeful conclusion: “The Internet of 2019 is not a finished product. The choices made by technologists, investors, policy-makers, lawyers, and engaged citizens will all shape what the medium becomes next.” The same can be said about digital technology in 2020: The landscape is still evolving.
That volatility is important because it means that those parties whom Karpf invokes—technologists, policy-makers, citizens—still have a chance to respond. The technological order is not fixed. We have seen the error of assuming that the next wave of advances will deliver on digital technology’s democratic potential (what Jacques Ellul called the “technological bluff”). We have seen that, contra Licklider and company, digital technologies don’t automatically generate savvy, engaged citizens. (Licklider again: “The information revolution is bringing with it a key that may open the door to a new era of involvement and participation. The key is the self-motivating exhilaration that accompanies truly effective interaction with information through a good console and a good network to a good computer.”)
Yet past digital visionaries were not wrong to argue that networked computers could be tools for empowerment. Their mistake was to give too much credence to a set of beliefs about how computers would affect politics. Those beliefs, as Winner has noted, can be stated as a syllogism: Useful information is power; computers provide a steady stream of information; therefore, getting information-rich computers in more citizens’ hands will automatically produce a better-informed, more active public.
The missing ingredient in the visionaries’ recipe was, of course, an education that helps one to navigate the new media landscape. Computers may provide oodles of valuable information, but they also supply gigabytes of misinformation. The visionaries imagined that digital technologies would create a new kind of net-savvy citizen. The lesson of recent history has been that the equation should be reversed: We need to prepare citizens to enter the network.
Remarkably, only a handful of states have passed legislation supporting so-called “digital citizenship” initiatives, with Utah being the lone sponsor of a comprehensive, statewide program (nicknamed “DigCitUtah”). In most states, if anything like this exists, it has been implemented by individual school districts. But “digital citizenship” training should become a universal feature of American primary and secondary schooling, thereby ensuring that future generations arrive at voting age with an understanding not only of how digital technologies work but also of their social effects. The last decade has offered plentiful evidence of what happens when a citizenry is left to figure these matters out on its own. Think of “digital citizenship” as an updated approach to civics.
Materials for this kind of education already exist, the most robust curricula stretching from kindergarten through twelfth grade. Early modules deal with topics like the hazards of social media (with a special emphasis on cyberbullying), setting up a strong password, and online privacy. As students mature, they move on to units that explore how search engines work, how tech companies collect and use data, and how to distinguish fake news from real news. Some include training on using digital tools for activism. Primary school might sound like too early a start. But the average age at which American kids get their first cellphone is now estimated to be around ten years old.
Education along these lines would address what Karpf calls the “Field of Dreams Fallacy” (“if you build it, they will come”)—in which “after an initial wave of media hype, [digital democracy projects] quickly devolve into digital ghost towns.” Consider Americans Elect, a platform designed to help determine a credible candidate for the 2012 presidential election. Despite $9 million in spending and a much-hyped debut in 2012, the site failed to produce a winning name. There are enough such noteworthy failures that the “Civic Tech Guide” includes a “Civic Technology Graveyard.”
This is why education matters so much. Civic tech alone won’t enliven our digital democracy. Getting citizens ready to use it is the first step. “After thirty years of civic tech failures,” Karpf wisely writes, “we are better off assuming the demand for citizen engagement is low, and treating the work of fostering it as a worthy challenge.”
After citizens are ready, however, what then? We must next consider the public spaces, online and off-, that these citizens will enter and the tools that will be available to them to conduct the democratic process. We need to return to the question of the “electronic agora” and its relation to the physical one.