A senior researcher at Microsoft tells me that the sale of TikTok is more momentous to the fate of American democracy than the mobbing of the Capitol on January 6, 2021. He argues that the latter was a circumscribed event, while the enforced sale of TikTok will put the eyeballs of 170 million American users under the control of one of the two or three bidders already wealthy enough to buy it—such as Elon Musk. I find this view awfully grim, not because Musk has too many conflicts of interest to be a benign presence in government but because I find it dismaying that “American democracy” should occur in the same sentence with “TikTok,” let alone be identified with it. If the fate of American democracy rests on the ownership of TikTok, then maybe the towel has already been thrown in.
It is more complicated than that, of course. But since one of the unshakable convictions of the digital age is that digital services are or could or might be democratic, it is high time we think through the truth of this truism. What hath TikTok—or our current digital environment as a whole—to do with democracy? Nothing good. Or so I will argue. I think we still have a lifeline, which I’ll get to. If I spend most of my time trying to characterize the problem, it is because I think that the right answers to our predicament can be formulated only in response to problems rightly fathomed.
Media’s Elective Political Affinities
Let me start by briefly taking up two large questions: What do media (such as printed books, television, or the Internet) have to do with political regimes (such as monarchy, republic, or democracy)? And what media count as democratic media per se?
The first question is confounded by the fact that the same media are elastic enough to accommodate all kinds of political regimes. There are digital monarchies now, just as there have been relatively illiterate democracies in the past. Yet media scholars since Lewis Mumford have argued that, while media do not determine political regimes, there are nonetheless underlying affinities at work. The relationship between media and regimes is not controlling or monocausal but formative or teleological. Media exert qualitative, long-term selective pressures on politics. And just as there are media and machines that tend to standardize, displace, and diminish individual agency toward a center, there are others that disperse expertise in ways that are difficult for any central authority to control. So while all such tools can be used for many purposes, it is not a stretch to say that products such as those of Apple and Tesla (which have proprietary control over their own parts and repairs) are relatively authoritarian, whereas highly adaptable, open-source programs such as Linux are relatively democratic.
If political communities rely on specific forms of communication—if their coherence as communities implies some functional relationship among information, authority, and audience—we can therefore ask whether the introduction of any new medium is consonant or dissonant with them. This is complicated by the fact that old media continue to exist alongside new media. But as a new medium takes on the predominant weight of discourse, as people internalize it and learn it by heart, the question may nonetheless be asked whether its emergent logic of uses is productive or destructive of a given political form. And if destructive, it should be asked what kinds of adaptations will have to take place if the form hopes to survive. Whatever else, it is worth noting that major media watersheds like the introduction of movable type have been associated with major historical ruptures such as the Reformation. The current hostilities between the executive branch (the organ of government most hip to the digital avant-garde) and the judiciary (the organ of government most keyed to the norms of printed literacy) are a premonition of just such another rupture.
Second, what media might be good for democracy? That question is vexed by the fact that, beyond the very loose principle that democratic rule should be accountable and open to some critical mass of its citizens, there is no other uncontroversial definition of it. This means that there is some chance that democracy has only ever been incompletely realized. It also means that there may be regimes that regard themselves as democratic—and even have the adjective “democratic” in their official names—but are not. Democracy is perhaps singularly inseparable from the questions of how it speaks and thinks about itself.
The familiar story of why the digital sphere was initially experienced as democratic is one of disintermediation. Whereas up until the 1990s, any given citizen would have had to go through authoritative channels to widely disseminate a printed message, every human being with an Internet connection is now in the position to broadcast footage and verbiage with a few keystrokes. The Web is a system of horizontally connected nodes, in which every message thereby has the potential for amplification, contagion, and resonance. And horizontality is supposedly democratic.
This stock story is not wrong, but it camouflages features of our situation. For one, it is characteristic of our digital regime that meaning is defined as content. Information is immaterial, instrumental, and useful because it is shareable. The regime of literacy looks, in retrospect, like one in which snobs, Brahmins, and gatekeepers invidiously withheld secrets from the people. It is easy to point to incidents that corroborate this view. But from another angle, pre–digital-media environments were not only or primarily about information at all—they were about the justification and production of certain kinds of institutional authority (what its skeptics called the “manufacture of consent”). But now that it has become clear that the very shareability of digital data decontextualizes and disintegrates institutions of democratic authority too, its horizontality looks less obviously democratic.
The initial giddiness around the Internet’s democratic promise was also experienced against the backdrop of the previous media environment. When bloggers took down CBS news anchor Dan Rather in 2005, when a candid recording of Virginia Senator George Allen derailed his reelection campaign in 2006, when Howard Dean and Barack Obama digitally crowdfunded their campaigns, and when social media laid waste to Arab autocrats in the early 2010s, these seemed like democratic wins. By contrast, the same social-media causes no longer seem to yield such straightforwardly democratic effects. What happened?
New tools and new media initially present themselves as enhancements before they become replacements. An enhancement means the same but better; to highly literate people, large language models (LLMs), email, and the search function are (initially) superpowers. But once an enhancement becomes the rule, what once was a want becomes a need. And the new technological capacity, once internalized, transforms the conditions under which it is used, undermining the very capacities that once made it most useful.
This slide from enhancement to replacement happens gradually, unevenly, and inconspicuously. Between the 1990s and the mid-2010s, and to some extent still now, we were (and are) amphibious creatures living both in the past and the future. Everything is still sort of the same—we still vote, we still read, we still live in nation-states—but everything is also sort of other. Our voting, reading, and citizenship all happen in relation to a new way of exchanging information that has quickly refigured our perception of what is normal, legitimate, and shared. This is a maximally confusing situation in which many of our institutions are falling between the analog and the digital stools. In some ways, it recalls the Protestant Reformation, in which the reconfiguration of people’s relationship to the spiritual or virtual world eventually compelled a wholesale reordering of political forms within the terrestrial one. And this means that we are still working out in practice the question of whether digital devices are compatible with the liberal democratic government we had long taken for granted.
Most discussions of digital democracy presume, as I’ve mentioned, a confrontation between the vertical boss-man and the horizontal people. What this dichotomy misses is that the character of these figures, as well as of their relationships, has qualitatively changed. Like all legitimate regimes, democracies live by norms—by shared expectations that go unsaid in order to make communication possible. This includes threads of social tapestry like what people feel they can get away with, how far is too far, and where we draw the lines. Norms are forms of communal responsiveness. Yet the deterioration of democratic practices and institutions during the past twenty years has revealed the degree to which democracy relies on a moral infrastructure of habits, rapports, and dispositions toward the word in particular. And whereas our trajectory so far has largely been one in which democratic norms have been gradually burned out by our information environment, the fate of democracy actually requires (pace TikTok) that we try to articulate what the moral infrastructure of this democracy is, how digital practices bear on it, and whether these practices can be harmonized with it.
To bring these lines of thought into focus, I will describe three digital pressures that seem to abrade democratic norms, three ways in which digital technology has been promoted, and even justified, as a democratic force but is in fact anti-democratic by virtue of re-forming our understanding of what we are and how we communicate. Those three are choice, optimization, and neutrality. I pick these because the digital temptation to conflate choice with agency, optimization with judgment, and neutrality with truth especially illuminates the difference between digital and literate democratic norms. But these conflations work only to the extent that we equate what is democratic with what is egalitarian—it is an article of digital faith that these are synonymous. Yet egalitarianism is a property both of democracy and of certain kinds of autocracy. Everything depends on our resisting this equation.
Three Articles of Digital Dogma
Consider the casual assumption that the scope and accessibility of choice are themselves “democratic,” such that Reddit, Facebook, Instagram, and TikTok might be called democratic platforms because everyone gets a vote and anyone can become someone. In fact, there’s nothing intrinsically democratic about participation as such at any level. Countries do not become more democratic when the people are more frequently consulted. Expertise and state secrets are part of the reason, as is the fact that states need to make long-term, binding commitments. Digital consultations (such as have been used by the Five Star Movement in Italy) give undue power to those who control the software through which they take place. There is also the better reason that, when questions are placed before a mass electorate to an extent that exceeds our capacity for deliberation, there occurs a populist or oligarchic reversal. In a direct democracy such as that of ancient Athens, everything begins to turn on who poses the questions and when and why.
The point is that mass populations can be good judges of what locally touches us but are not able to collectively pay attention to what doesn’t—and that this is entirely compatible with democratic, republican government. I mean that the will of the people is not a fact but an artifice: It is not the input of democratic government but the output of its codified processes. Intermediate institutions such as parties, caucuses, primaries, debates, and the Electoral College were designed in theory to sublimate citizen passions and interests into more substantive formulations than their raw expression allows. The Federalist Papers explicitly speak of this system as a machine—a Newtonian “system” of “bodies,” “springs,” “energies,” and “balances” operating under universally observable laws. The point of this machine is to produce the popular will by representing it. The Constitution is intended to be the context within which popular decision can find its most articulate expression through the medium of good government—which is why any one person’s claim to immediately embody the will of the people is anti-democratic.
For digital services as well as democratic processes, there is a point beyond which the intensification of involvement degrades the objects of preference themselves. Just as social media has made us impatient for spicy or outrageous headlines, the focus on and capture of choice as such can destroy the conditions of meaningful choice. Not to mention the fact that on platforms such as YouTube, TikTok, and Instagram, participation itself functions as a product; the customer experience includes a show of hands.
So much is perhaps obvious, but part of the context of choice is the quality of democratic deliberation itself. Is a situation in which only a few opinions are voiced by influential organs more or less democratic than one in which everyone can post his or her opinion? However we answer, we should attend to the fact that what counts as a public opinion is itself being transformed. In a situation in which media are exclusive, it is not just that educated elites are the ones with a say; it is that opinion takes on a representative function. Public opinion formerly functioned like professional sports, with many spectators and few participants. There are still superb professional athletes; but the winners of the Pulitzer Prize for criticism are now running around on the field with Joe Rogan, Kylie Jenner, and bbqbarbakay83 in an open-ended variety of scrimmages. If the chief benefit of digital democracy is its accountability, its chief harm is the eradication of the difference between public opinions and takes.
The most successful digital tool for democratic deliberation has been Polis—a platform that has been used in Taiwan to consult a wide public and to help legitimize government policy goals. Polis’s key innovation is that it allows citizens to post comments and to vote on those of others in order to approach consensus, but it does not allow them to reply to each other. One can only “engage” with others’ posts by agreeing, disagreeing, or passing. This eliminates the possibility of trolling and flame wars. It is nonetheless telling that its success as a digital democratic process is predicated on bracketing the exchange of reasons as disruptive. By incentivizing less siloed, less complex statements, it solves for a political product at the expense of the process of deliberative speech.
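The mechanics described above can be made concrete with a toy model. What follows is a minimal sketch of the idea often called “group-informed consensus”: participants vote agree, disagree, or pass on each statement, and a statement is surfaced only if every opinion group tends to agree with it. This is my own simplified illustration, not Polis’s actual code or algorithm (the real platform also clusters participants into opinion groups statistically); the function name, scoring rule, and vote encoding are assumptions made for the example.

```python
# Toy illustration of a Polis-style consensus score (a simplification,
# not the platform's real method). Votes: +1 agree, -1 disagree, 0 pass.
# A statement's score is the product, across opinion groups, of the
# fraction of that group's non-pass votes that agree. A statement that
# only one camp likes therefore scores near zero; a statement both
# camps like scores near one.

def consensus_scores(votes, groups):
    """votes[participant][statement] in {+1, -1, 0};
    groups maps participant -> opinion-group label.
    Returns {statement: score in [0, 1]}."""
    statements = {s for v in votes.values() for s in v}
    labels = set(groups.values())
    scores = {}
    for s in statements:
        score = 1.0
        for g in labels:
            # Non-pass votes cast on this statement by this group.
            cast = [votes[p].get(s, 0) for p in votes
                    if groups[p] == g and votes[p].get(s, 0) != 0]
            agree = sum(1 for v in cast if v > 0)
            score *= agree / len(cast) if cast else 0.0
        scores[s] = score
    return scores

# Four participants in two opinion groups; "s1" unites them,
# while "s2" and "s3" split along group lines.
votes = {
    "a": {"s1": 1, "s2": 1, "s3": -1},
    "b": {"s1": 1, "s2": 1, "s3": -1},
    "c": {"s1": 1, "s2": -1, "s3": 1},
    "d": {"s1": 1, "s2": -1, "s3": 0},
}
groups = {"a": "blue", "b": "blue", "c": "red", "d": "red"}

scores = consensus_scores(votes, groups)
best = max(scores, key=scores.get)  # the cross-group consensus statement
```

Note what the scoring rule structurally excludes: there is no reply, rebuttal, or exchange of reasons anywhere in the model, only aggregation. That is the trade-off the passage above describes.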
The second article of digital dogma is the value placed on optimization and efficiency as paradigmatic forms of arbitration. I am primarily thinking about the pervasive outsourcing of social and political judgment to algorithmic processes. Selection algorithms for credit scoring, risk assessment, child-welfare determinations, hiring, school assignment, and penal sentencing are now widely in use and widely justified on the grounds of their democratic fairness—both in the sense that they may be said to incorporate a vast database of previous human decisions and in the sense that they offer an ideal of an impartial decision-making mechanism. The inadequacy and “misalignment” of such systems have also continually come to light, in well-documented ways, occasionally to the point of farce. In 2017, the settings of a risk-assessment algorithm used by US Immigration and Customs Enforcement to determine whether someone should be detained or released were modified so that the algorithm only ever recommended “detain.” That this grotesque arrangement was maintained as a good idea is nonetheless suggestive of the ideological prestige such systems enjoy.
LLMs extend this externalization of judgment to speech itself. LLMs perform two related functions: They boil down an impossibly vast sea of information into a coherent narrative form, and they make it easy to produce standard content instantly and in any quantity. In other words, they automate the judgments implicit in the selection of digital information as well as in the production of it; they are automatic points of view. The democratic risks and possibilities of this situation are well advertised. They make it easy to generate fake content and disinformation, but they also make it easier for average citizens to improve their civic knowledge, to lobby and petition representatives, and to generate op-ed-level gab.
What is most remarkable to me about the adoption of generative AI is how completely seamless it has been—how easy the capitulation of educational institutions has been and how readily LLMs have unmasked the fact that our average digital engagement with speech and writing is, in the current slang, extremely “mid.” In theory, one might think that this should mean that LLMs write “mid” prose themselves, but in practice their prose is much better than that. I have heard professors express relief at the existence of LLMs because they make it so that students who are not good writers will be able to turn in an essay that is not a dog’s breakfast. Yet, while convenient in the short term, this slide from enhancement to replacement will end up undermining and making obsolete the capacities of professors themselves.
But what is most alarming about this transition is the fundamental refiguring of the relationship between people, words, and meaning. The First Amendment of the US Bill of Rights implies a specific anthropology of liberal democratic privacy, wherein what is fundamental about us is implicitly defined as our right to uphold our ultimate metaphysical convictions, to freely speak, to peaceably assemble, and to petition the government for redress. These four rights are concentric, overlapping, and inseparable aspects of our personality, which is defined as expressive through speech. Our dignity as people consists in being able to give our word. I realize that “speech” means more than words. But in a situation in which AI is either a dominant generator or prop for shared words, then the long-standing principle that people are political animals possessed of speech will obsolesce even further. That is, when words are dis-embedded from context, quantitatively generated, and alienated into circulation as commodities in their own right, they become purely instrumental. Words do and will act more and more like currency—like something that bears no thinking.
Generative AI is about to become inseparable from public agency. Ukraine and Oman have developed AI avatars that provide political updates; Venezuela has AI journalists; and AI candidates have run for political office in Japan and the United Kingdom. There are AI influencers. Arok became the first AI millionaire last year. Anthropic has developed an AI program that can actually control one’s cursor and type text. An AI company is arguing in a Florida court that its chatbot’s outputs should be protected by the First Amendment. The achievement of general AI—defined as a program that can perform cognitive digital tasks as well as any human—is imminent. AI is not about to supplant human agency altogether—human work will still oversee and be overseen by AI. The point, rather, is that AI programs are becoming inseparable from human agency in such a way as to make the distinction practically irrelevant. This represents a fundamental break from the principle that a free and democratic society is one in which human beings are taken at their word.
The perniciousness of the third digital assumption is harder to unearth. I mean the identification of reason with data and quasi-scientific objectivity, exemplified by the presumption that political disagreement can be primarily resolved by fact-checking, the lazy discrediting of sources as “biased,” and the sanctimonious moralism that attaches to the phrases “data-driven,” “evidence-based,” and “studies show.” Recent studies have shown that it is better for you to have friends, that social activism is good for young people, that loneliness is harmful, that the wisdom of religious traditions is worth taking up, and that gratitude is good for you. But this is what Bernard Williams called having “one thought too many.” That is, if you needed a study to affirm these things, then the data show there’s something the matter with you.
My complaint here is not against empiricism but against the presumption that data should count as dispositive in questions that are ultimately moral, ethical, religious, aesthetic, or political, a presumption that is at once cause and effect of the destruction of our public sphere. In the absence of other forms of shared experience, it perhaps makes sense that we should have turned to data as the last lingua franca. But not only is it unable to perform this role; it has actually poured gasoline on the cynicism and polarization it was supposed to remedy. Part of the political bizarreness of our time comes from the fact that the possibility of data’s neutrality is rarely disputed as such. Data is parried with alternative data rather than with something different in kind.
The turn to data as democratic arbiter was central to the justification of Big Tech in the early 2000s. Here are the words of its last major proponent, then presidential candidate Barack Obama, speaking at Google headquarters in 2007:
[The American people] just don’t have enough information, or they’re not professionals at sorting out all the information that’s out there, and so our political process gets skewed. But if you give them good information, their instincts are good and they will make good decisions…I want people in technology, I want innovators and engineers and scientists like yourselves, I want you helping us make policy—based on facts! Based on reason!
As in a fairy tale, this has now come true. And while it was possible for many of us to keep faith with Big Tech’s democratic promises so long as their flaws seemed tractable to regulation or algorithmic tweaking, the fundamental logic of techno-optimism has now emerged in destabilizing, neo-reactionary, and anti-democratic forms that clearly exceed government’s capacity or appetite for response. Whatever else, this presents us with an opportunity to articulate how the ideological power of tech-bro authoritarianism resides not primarily in the convictions of certain prominent CEOs but in our own digital practices.
While this is a much larger syndrome to diagnose in full, I single out for consideration our implicit prejudice that technology is neutral—the notion that our devices and our data-centric ways of analyzing human problems may be used equally for good or ill. While whole academic subfields exist to flog this dead horse, it nonetheless continues to operate unperturbed as the chief justification for further technological progress. This is because the conviction that technology is neutral is not itself neutral; it paradoxically functions as the conviction that this neutrality is itself a good thing, so as to sanction the technological development of more neutral tools. Just as we have accepted the idea that what is “nonpartisan” is above partisan loyalty, that what is scientific is above “politics,” and that the “data-driven” is above the “human,” as digital users we likewise harbor a pervasive faith in the benevolence of neutrality, in the open-ended desirability of techniques for multiplying, validating, and mediating human choices.
Such a project, by implying that what is true and reasonable can be conceived only in terms of empirical metrics, is antithetical to the idea that public deliberation is itself internal to the democratic good. Data-driven process is that against which other processes are defined as “arbitrary” (i.e., capricious). Yet the word arbitrary signifies what is at the discretion of an arbiter, and data has become our paradigmatic arbiter precisely by divesting itself of the impression that anyone is doing any arbitrating at all. And a political project that refuses to acknowledge the need for politics is inherently authoritarian.
The Dawn of the TikTokracy
The prospects of liberal democracy as practiced during the past couple of centuries are not very good right now. The earlier liberal democratic regime was embedded within the epistemic norms of printed context—it cultivated the presence of coherent, consecutive, and consequential reasoning—as well as within a broader Enlightenment vision that upheld the view that humans should have minds of their own and be able to practice them. Along with other changes, the implicit principles of our current digital regime are reorienting our sense of what it is to be a person away from capacities like speech, judgment, and reason and toward a merely biological and invariant understanding of our identities. And as digital tools become more powerful, human participation will come to seem more like human error: an undesirable inefficiency. These shifts have far-reaching political and anthropological implications, since they bear on the question of how we are meaningfully related to others and how we all add up into the political form we have called democracy. If we think that democracy is worth defending, then it is not on account of its neutrality but of the fact that we are called on to uphold its vision of the good we have in common.
The alternatives to democratic politics are not the oligarchies or aristocracies of yesteryear but a novel kind of authoritarianism. People on the left tend to be freaked out by the threat of fascism, just as people on the right are set off by the threat of Marxist speech police. While these fears are not completely without merit, they tend to be too narrowly focused on the analog dangers of outright violence or illegality. We are instead not worried enough about the immediate danger of a TikTokracy: a quasi-democracy so trivialized into entertaining spasms of opinion as to be virtually indistinguishable from autocracy. Since the threat of TikTokracy is qualitative, by degrees, apt to benefit one side of the aisle at any given moment, and more difficult to demonstrate than the fascist one, it is arguably more insidious.
The project of twentieth-century authoritarianism was to establish an Orwellian monopoly over the production of truth. When Lavrentiy Beria, Stalin’s head of secret police, was disgraced and executed in 1953, all who owned a set of the Great Soviet Encyclopedia received a letter instructing them to cut out the entry on Beria and replace it with a new section on the Bering Strait. The project of twenty-first-century autocracy, by contrast, is to establish a digital monopoly over the production of what Harry Frankfurt called “bullshit”: speech that is careless as to whether it happens to be true or false at all.
China and Russia, the two autocracies that have most skillfully managed domestic public opinion in the digital sphere, have done so not primarily by controlling the information their populations have access to but by sponsoring a torrent of polarizing, distracting, and irrelevant content in such a way as to destroy the conditions of concerted discussion and response. In the American context, the current administration’s deliberate effort to “flood the zone” with more information than media attention can process—disorienting resistance to any one measure in particular—is a version of the same strategy. It is a shrewd exercise of media control that dispenses with the need for state ownership of media. Nor is it a coincidence that President Donald Trump, having initially led the effort to ban TikTok in the United States, has emerged as its champion. With his infallible nose for media that speaks his language, he drew the equation explicitly: “If you like TikTok, go out and vote for Trump.” (Though, as usual, the joke is on anyone who takes his words too seriously.)
TikTokracy can be warded off only by a democratic moral infrastructure. And building such infrastructure takes time, which is exactly what we don’t have. Whereas cultivating democratic citizenship is like growing bonsai trees (requiring the patience of generations), our digital participation is like a lawn of dandelions, requiring daily weeding. This mismatch between what is happening today and what should be happening in general has become so gross that we should perhaps acknowledge that we find ourselves at a place where two distinct democratic trajectories diverge.
The first path runs toward the best possible version of a “digital democracy.” This project primarily entails the zealous management, programming, and fine-tuning of our information environment in order to make most people’s online engagement as salutary as possible—in other words, to try to solve for democracy by engineering the best conditions under which users operate. This might include initiatives such as labeling disinformation, instituting programs of digital literacy in public education, creating “bridging algorithms” that expose people to information they do not already agree with, designing “friction-in-design” digital products that require people to pause before posting, and so on. The strength of this path is that it accepts our digital situation as is and can be implemented at large scale with measurable success.
The weakness of this path is that it is technocratic. In focusing on top-down processes, techniques, and services, it does not answer the question of how democratic people should be acculturated by the institutions that form us. As with Taiwan’s Polis, users will no longer engage in rational deliberation. Rather, our opinions will be assumed to be givens of our identity, and the aim will be to minimize the most destructive consequences of their clashing. The implementation of most such projects not only accepts that social-media platforms have the obligation to socially engineer public opinion but also requires a level of collective legitimacy that no one platform now enjoys. In these and other ways, this path might be the same in kind as our problems. At best, these initiatives are more like democratic weed killer than like fertilizer. At worst, they obfuscate the distinction between democracy and TikTokracy altogether.
The second path is to retrieve the future from the past, that is, to reimagine long-term projects of holistic, wholesale, and post-digital literacy. This begins from the basic acknowledgment that the project of fertilizing democracy is connected to the long-term disciplines of thought, that these must now be placed in intentional opposition to our digital environment, and that they stand or fall with book learning as their necessary material condition. Books are, from this angle, the paradigmatic democratic technology; literacy provides a basic standard of competence that embeds speech, consolidates liberal rhythms of discussion, and sustains abiding norms; it improves the quality of participation through a shared capacity rather than by removing all expectation of one.
A literate education should obviously revolve in part around the reading, writing, and discussion of texts, both for their own sake and for that of creating norms of democratic deliberation, justification, and attention. And this will partly entail an education in which young people’s digital involvement is much more aggressively circumscribed in order to protect the practice of literacy up until college. But in addition to these familiar expedients, post-digital democratic education must also make plain that the project of literacy is not just a way of processing information but one in which the attention to and the legibility of the material and political world are completely at stake. In addition to texts, that is, it should introduce students to the disciplines of craft and work—which might mean boatbuilding as well as gardening or learning how to code. And it needs to provide living contexts within which students can acquire practical experience of the democratic principles that it purports to embed. That is, post-digital institutions of literacy must establish an atmosphere of attentional demands to rival the compelling and immersive character of the digital one. This is something that only total institutions like monasteries, boarding schools, and Deep Springs College are currently doing well. It could spread more widely if groups of parents gave their minds to it.
The weakness of this second path is its patent unrealism. The United States is already well on its way to becoming an illiterate country. Data suggests that growing numbers of American adults—as many as half—are subliterate. For a time, the pandemic was to blame—but literacy has deteriorated even faster since 2022. The point is not that people are getting stupider; stupidity is our natural condition. Rather, the point is that our information environment no longer requires and aids the habits of thought that it previously did. One can now be a fully functional adult without being able to follow the units of narrative meaning we call essays, novels, books, political movements, policy agendas, and nation-states. This degradation reverberates throughout all octaves of our social and political lives. Is it possible to reverse a slide like this?
However you wish to answer, it is plain that the cardinal difference between the first path and the second centers on whether we think democracy depends on the presence of high-quality public speech. Our current trajectory is made so much the worse by the fact that we refuse to acknowledge the rupture happening at our fingertips. If a “national discussion” is not possible now, I think that seeds of one are still present in specific settings of people staked to thought in common. And if a program of post-digital, democratic literacy is not the most realistic option, I think it remains the most practical—that is, the most harmonious with the latent hopes of a democracy still young, a republic still to be achieved.