
Language Machinery

Who will attend to the machines’ writing?

Richard Hughes Gibson

THR illustration depicting Claude Shannon, Warren Weaver, and Italo Calvino (Alamy Stock Photo).

Generative artificial intelligence is a headspace and a technology—as much an event playing out in our minds as it is a material reality emerging at our fingertips. Fast and fluent, AI writing and image-making machines inspire in us visions of doomsday or a radiant posthuman future. They raise existential questions about themselves and ourselves. And, not least, they should lead us to reconsider certain neglected thinkers of recent intellectual history.

Consider a few of the bolder claims made by experts. Two years ago, Blaise Agüera y Arcas, vice president of Google Research, had already declared the end of the animal kingdom’s monopoly on language on the strength of Google’s experiments with large language models. LLMs, he argued, “illustrate for the first time the way that language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals” (“Do Large Language Models Understand Us?” Medium, December 16, 2021, https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75). In a similar vein, the Stanford University computer scientist Christopher Manning has argued that if “meaning” constitutes “understanding of the network of connections between linguistic form and other things,” be they “objects in the world or other linguistic forms,” then “there can be no doubt” that LLMs can “learn meanings” (“Human Language Understanding and Reasoning,” Daedalus 151, no. 2 [Spring 2022]: 134). Again, the point is that humans have company. The philosopher Tobias Rees (among many others) has gone further, arguing that LLMs constitute a “far-reaching, epoch-making philosophical event” on par with the shift from the premodern conception of language as a divine gift to the modern notion of language as a distinctly human trait, even our defining one. On Rees’s telling, engineers at OpenAI, Google, and Facebook have become the new Descartes and Locke, “[rendering] untenable the idea that only humans have language” and thereby undermining the modern paradigm those philosophers inaugurated. LLMs, for Rees at least, signal modernity’s end (“Non-Human Words: On GPT-3 as a Philosophical Laboratory,” Daedalus 151, no. 2 [Spring 2022]: 169).

Rees calls the AI developers “philosophical laboratories” because “they disrupt the old concepts/ontologies we live by” (Rees, “Non-Human Words,” 168). That characterization is somewhat misleading. Those disruptive engineers do not constitute a philosophical school in a traditional sense, since they aren’t advancing a positive philosophical program (such as explicit new theories of language or consciousness). And by their own admission, they lack important answers about how and why LLMs work. Yet unquestionably, the technology is blazing some kind of trail—whither, no one knows for sure—leaving us to philosophize in its wake, just as Manning, Agüera y Arcas, and Rees have done.

In this respect, current debates about writing machines are not as fresh as they seem. As is quietly acknowledged in the footnotes of scientific papers, much of the intellectual infrastructure of today’s advances was laid decades ago. In the 1940s, the mathematician Claude Shannon demonstrated that language use could be both described by statistics and imitated with statistics, whether those statistics were in human heads or a machine’s memory. Shannon, in other words, was the first statistical language modeler, which makes ChatGPT and its ilk his distant brainchildren. Shannon never tried to build such a machine, but some astute early readers of his work recognized that computers were primed to carry his paper-and-ink experiments into a powerful new medium. In writings now discussed largely in niche scholarly and computing circles, these readers imagined—and even made preliminary sketches of—machines that would translate Shannon’s proposals into reality. They likewise raised questions about the meaning of such machines’ outputs and wondered what the machines revealed about our capacity to write.
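Shannon’s statistical “approximations to English” are easy to recreate today. What follows is a minimal sketch in Python, not Shannon’s own procedure (he worked by hand, with frequency tables and passages pulled from printed text): it tallies which words follow which in a small sample of prose, then imitates that prose by repeatedly drawing a likely next word. The sample text, function names, and parameters are illustrative assumptions only.

    import random
    from collections import defaultdict

    def build_bigram_model(text):
        # Record, for each word, every word observed to follow it.
        # Repeats in the list make frequent continuations more likely to be drawn.
        words = text.lower().split()
        followers = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            followers[current].append(nxt)
        return followers

    def imitate(followers, start, length=20):
        # Generate text by sampling a plausible next word at each step.
        word = start
        output = [word]
        for _ in range(length - 1):
            options = followers.get(word)
            if not options:  # no observed continuation; stop early
                break
            word = random.choice(options)
            output.append(word)
        return " ".join(output)

    # A stand-in corpus; Shannon drew his statistics from ordinary printed English.
    sample = ("the quick study of language shows that the study of statistics "
              "shows that the machine can imitate the language of the writer")
    model = build_bigram_model(sample)
    print(imitate(model, "the"))

Widening the window of context (trigrams and beyond) makes the imitation more convincing, which is the direction Shannon’s 1948 paper points and, at vastly greater scale, the direction today’s language models have taken.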

The current barrage of commentary has largely neglected this backstory, and our discussions suffer for forgetting that issues that appear novel to us belong to the mid-twentieth century. Shannon and his first readers were the original residents of the headspace in which so many of us now find ourselves. Their ambitions and insights have left traces on our discourse, just as their silences and uncertainties haunt our exchanges. If writing machines constitute a “philosophical event” or a “prompt for philosophizing,” then I submit that we are already living in the event’s aftermath, which is to say, in Shannon’s aftermath. Amid the rampant speculation about a future dominated by writing machines, I propose that we turn in the other direction to listen to field reports from some of the first people to consider what it meant to read and write in Shannon’s world.
