THR Blog   /   October 14, 2020

Can We Salvage Digital Democracy?

High Hopes

Richard Hughes Gibson

(A December 1951 advertisement for the IBM 604 Electronic Calculating Punch, first produced in 1948. The machine could be programmed to do addition, subtraction, multiplication, and division; input and output were done with punch cards. The advertisement claims the IBM 604 can do the work of 150 engineers with slide rules. Via Wikimedia Commons.)

Last year, the Pew Research Center, in conjunction with Elon University’s Imagining the Internet Center, canvassed nearly one thousand knowledgeable sources—technology executives, journalists, academics, and others—for predictions about how digital technology would affect democracy in the next decade. The results make for dreary reading: Nearly half of the respondents said that digital technology would “mostly weaken aspects of democracy and democratic representation” by 2030.

In a remarkably short time, digital technology has gone from being democracy’s handmaiden to its scourge. Such a dramatic turnaround should give us pause. To imagine a future in which digital technology complements democracy requires that we recall what the digital revolution seemed to promise. We should acknowledge its victories and learn from its mistakes. In this, the first of a series of blog posts, Wheaton College’s Richard Gibson looks into the history of the promise and disappointment of digital democracy—and suggests how we might get back on a healthier course.

The earliest discussions of digital technology’s democratic potential occurred more than fifty years ago, as computer and social scientists began to see the computer’s promise as a communications medium. These scholars began to envision a role for the machines in facilitating the democratic process—for instance, as a means of improving polling. But by the 1970s, the development of personal computers gave technologists reason to imagine even grander vistas.

In 1979, one of the participants in those early conversations, J.C.R. Licklider, MIT computer scientist and co-founder of the Internet’s “granddaddy,” ARPANET, proclaimed that home computing would give democracy a much-needed update. “Computer power to the people,” he argued, would help the average citizen be “informed about, and interested in, and involved in, the process of government.” The eighties and nineties would see the publication of numerous books proclaiming digital technology a leveling force, and similar claims came out of the tech industry: Computers would allow more people in more places to have a say in whatever was the topic of the moment.

The famous pamphleteer of the American Revolution, Thomas Paine, was frequently invoked. Writing for Wired in 1995, John Katz suggested that “the Net offers what Paine and his revolutionary colleagues hoped for—a vast, diverse, passionate, global means of transmitting ideas and opening minds.… Information wants to be free. That was the familiar and inspiring moral imperative behind the medium imagined by Paine and Thomas Jefferson.” Similarly, the writers Howard Rheingold and William J. Mitchell, dean of architecture at MIT, both characterized the Internet as the equivalent of the agora in ancient Athens in widely circulated books, The Virtual Community (1993) and City of Bits (1995), respectively. The agora was the marketplace where Athenian citizens mingled business transactions with discussions of the common good, and the Net, according to both authors, would be an “electronic agora” where netizens could mix business, pleasure, and politics.

It is easy now to dismiss these pronouncements as painfully naive and, in some cases, self-serving. But all of the writers I have named were responding to a public sphere in dire need of renovation. The Internet looked like the solution. Rheingold takes this issue on at greatest length, and his diagnosis remains valuable. Like other commentators at the time, he was distressed by the role that “commercial” mass media—foremost the big TV networks but also newspapers and radio—had come to play in American politics. Mass media were then increasingly controlled by a few moguls whose commitment to democracy, Rheingold argued, was questionable.

In the early days of the Internet, these advocates witnessed exactly what they believed was missing from the mass media-dominated public sphere: thoughtful, passionate debates taking place directly among citizens. In writing. Free of distracting images. Rheingold also saw activists challenging the information monopoly of the Big Three TV networks (ABC, NBC, and CBS), providing alternative perspectives and vital facts left out of the nightly news. Virtual communities offered a mechanism to oppose the “disinformocracy” (Rheingold’s term) of the media conglomerates. On their screens, the popularizers of the eighties and nineties saw early glimmers of a new sort of public sphere, one more inclusive, more direct, and better equipped to facilitate rational debate.

This history may seem somewhat removed from us, yet in it we can see the beginning of a trend repeated several times in the current century. Each subsequent major development in the Internet’s architecture or landscape has raised similar hopes of democratic renewal. In the early 2000s, the blogosphere was hailed as the very embodiment of the public sphere: It was, its boosters claimed, universally accessible, it sponsored rational debates, and it admitted participants without consideration of social rank. At its appearance, Web 2.0—the so-called “participatory web”—was characterized as democracy’s ideal incubator. Social media garnered similar praise in the late aughts and early teens. “Facebook is one of the most organic tools for democracy promotion the world has ever seen,” argued Jigsaw CEO Jared Cohen, then fresh out of the State Department, in 2011. And let us not forget the exuberant reports on the democratizing effects of social media that followed the Arab Spring.

Surveying the media landscape in the nineties, Rheingold posed a question that aptly sums up the early hopes for digital democracy: “Which scenario seems more conducive to democracy, which to totalitarian rule: a world in which a few people control communications technology that can be used to manipulate the beliefs of billions, or a world in which every citizen can broadcast to every other citizen?” The twentieth-century tech visionaries rightly wanted to correct the information monopoly that dominated their media landscape. But they could not see that digital technologies would introduce a host of other problems of equal, if not greater, seriousness. Rheingold, Mitchell, and other would-be prophets made two false assumptions. The first pertained to citizens—that we’d all experience the spiritual rush that Licklider described and be transformed quickly into good netizens. The second pertained to the Internet itself—that it would remain an open-air agora, a space where one could freely mingle business, politics, and entertainment. The prophets, of course, could not foresee that the Internet would make new kinds of media empires possible, their influence perhaps even more pernicious than that of the Big Three TV networks.

The lesson of this history? We can’t keep hoping that the next big technological breakthrough will at last deliver on digital technology’s democratic potential. We must begin to imagine new ways of forming citizens and of using (and at times, not using) digital tools.