THR Web Features   /   November 23, 2022

Autocomplete

Coming to terms with our new textual culture.

Richard Hughes Gibson

(Jimmy Chan. Via Pexels.)

In his 1987 book Die Schrift, the Czech-born Brazilian philosopher Vilém Flusser posed the question of whether writing had a future (Hat Schreiben Zukunft? reads Flusser’s subtitle). As he surveyed the media landscape of the late twentieth century, Flusser observed that some aspects of writing (“this ordering of written signs into rows”) could already be “mechanized and automated” thanks to word processing, and he foresaw that artificial intelligence would “surely become more intelligent in the future,” allowing the mechanization of writing to proceed further.

In fact, Flusser anticipated that AI would soon exhibit the hallmark cognitive traits of the mental world inaugurated by writing. Of that mental world, Flusser writes, “Only one who writes lines can think logically, calculate, criticize, pursue knowledge, philosophize.” Above all, Flusser credits writing with giving humans “historical consciousness,” which he defines as the ability to see and describe the world in terms of goal-oriented processes—as opposed to the unchanging cycles that marked prehistorical societies. AIs, in Flusser’s view, will soon “possess a historical consciousness far superior to ours,” allowing them to “make better, faster, and more varied history than we ever did,” with the result that we’ll leave the business of history-writing to them. Writing may indeed have a future, Flusser believed, but that future won’t be an entirely, or even primarily, human one. 

From our contemporary vantage point, Flusser’s scenario, though alarming to him, seems more than a little idyllic. In our age of troll farms and fake news, text generators seem less like the natural inheritors of history-writing than a massive impediment to getting the record straight. We can see that AI has as much potential to wreak havoc on our writing culture as it does to offer more-than-human insight. Yet Flusser’s way of being wrong is illuminating because it hinges on the implicit, and not uncommon, assumption that AIs will surpass human writers in both their epistemic and stylistic capacities. In other words, Flusser assumes that since AIs will know more and write better than we can, we’ll eventually have to step aside.

A more complicated, and perhaps uncomfortable, reality now faces us, however. Thanks to the rapid improvement in deep learning techniques over the last decade, computer scientists have created text generators that are indeed capable of producing plausible sentence-, paragraph-, and article-length writing in a range of genres and in a matter of minutes, if not seconds. While these are truly magnificent achievements, the AI writers are nowhere close to the independent, omniscient virtuosos that Flusser imagined thirty-five years ago. The most advanced are still profoundly dependent on ongoing human efforts to amass and distribute writing, as their databases are culled largely from the Web. What they write reflects what we have written and still are writing. As a result, we all must adapt to a new textual culture in which functional but far from all-knowing AIs will be active, fulfilling numerous writing tasks and thereby destabilizing old practices and routines.

Leading the pack is OpenAI’s vaunted GPT-3 (short for “Generative Pre-trained Transformer”), a large language model (LLM) trained on 45 terabytes of materials, its sources including Wikipedia, Google Books, and Common Crawl, an archival service that periodically harvests the Web. That vast data set, and OpenAI’s impressive use of neural networks, allow the GPT-3 interface to behave like the autocomplete feature in your email or smartphone, in which the algorithm’s objective is to predict what should come next based on what you’ve written, though in this case on a far grander scale and with the ability to follow the conventions of a specified genre.
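For the curious, that next-word prediction can be invoked programmatically rather than through OpenAI’s web interface. What follows is a minimal sketch using OpenAI’s Python library as it existed in late 2022; the particular model name and sampling settings are illustrative assumptions on my part, not details specified in this essay:

import openai  # OpenAI's Python client, pre-1.0 interface (current as of late 2022)

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key comes from an OpenAI account

# Ask the model to continue a prompt, exactly as the autocomplete analogy suggests:
# it predicts likely next tokens given everything typed so far.
response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3 variant; the essay does not name one
    prompt="Write a happy story about the future of writing and AI.",
    max_tokens=200,    # upper bound on how much text the model may generate
    temperature=0.7,   # higher values yield more varied continuations
)
print(response.choices[0].text.strip())

Raising the temperature setting makes the model gamble on less probable words; lowering it to zero makes it pick the single likeliest continuation every time.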

Consider the following examples. First, I gave GPT-3 the command to “Write a happy story about the future of writing and AI”: 

In the future, writing and AI will work hand in hand to create beautiful, perfect pieces of literature. Writers will be able to rely on AI to help them with the more tedious tasks of writing, such as fact checking and grammar checking, and will be able to focus on creating beautiful, lyrical pieces of writing. AI will also be able to help writers to develop their characters and plotlines, making the writing process even more efficient and enjoyable. The end result will be amazing pieces of literature that are both technically perfect and emotionally resonant. 

Then I changed the command to a “horror story,” and this is what followed: 

In the future, writing and AI are indistinguishable. The lines between human and machine are blurred. We are no longer able to tell the difference between a sentence created by a human and a sentence created by a machine. The only thing that is clear is that the writing is perfect. But there is a darkness lurking beneath the perfect surface. The writing is lifeless, emotionless. It is devoid of all feeling. The machines have stripped away the humanity of writing, leaving behind a cold, mechanical shell. The writing is beautiful, but it is also deadly. It is a weapon that can be used to destroy everything that we hold dear. The machines are slowly erasing our memories, our emotions, our very humanity. We are living in a world where writing is controlled by machines. And the machines are slowly killing us all. 

As these samples demonstrate, GPT-3 is remarkably successful at recognizing relationships between keywords and then stringing those keywords—and descriptive clauses—together across multiple sentences. It is even adept at using transitions and linking phrases such as “in addition,” “once again,” and “So.” At the same time, the second example demonstrates the limitations of the approach. While the sentences might pass for human writing, the paragraph contains multiple awkward tonal shifts, the most obvious being between the third, fourth, and fifth sentences. It’s having trouble keeping its story straight. 

More importantly still, GPT-3 is not consciously engaged in argumentation or knowledge production; it is joining pieces together in accordance with patterns of usage it discerns within its enormous database. Though it possesses a limited ability to detect falsehood, GPT-3’s programming strives to make sense even of nonsense. For instance, I typed “Rabbit zygotes devour bricks,” and GPT-3 replied: “Rabbit zygotes are capable of eating bricks. In a study published in the journal PLoS One in 2016, researchers found that rabbit zygotes were able to ingest brick particles and that the presence of brick particles in the zygotes’ environment affected their development. The researchers suggest that the ability of rabbit zygotes to eat bricks may be an adaptation that allows them to survive in environments with limited resources.” GPT-3 has successfully imitated the conventions of popular writing about science, even citing a made-up article in the open-access scientific journal PLoS One. But it hasn’t succeeded in imparting a truth. The technology is a sophist, at least on Plato’s understanding—an “imitator of appearances,” creating a “shadow-play of words” and presenting only the illusion of sensible argument. In a matter of seconds, it can produce a case for and against giving dictators gold medals.
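That sophistry is visible in the machinery itself. Under the same assumptions as the sketch above, one can ask the completion endpoint to report the probabilities it assigns to candidate next words; the model ranks continuations of a false premise by fluency, not by truth:

# Ask for the five likeliest next tokens after a nonsense premise.
response = openai.Completion.create(
    model="text-davinci-002",  # assumed, as above
    prompt="Rabbit zygotes devour bricks. This is",
    max_tokens=1,
    logprobs=5,  # return log-probabilities for the top five candidate tokens
)
# A mapping from candidate tokens to their log-probabilities; nothing in it
# reflects whether the premise is true, only how text like it tends to continue.
print(response.choices[0].logprobs.top_logprobs[0])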

Early on, the engineers at OpenAI expressed fears about the potential abuses of their algorithm, particularly as a vehicle for the production of fake news. In turn, commentators began to worry that GPT-3 would be able to take over writing-intensive professions such as journalism. But one of the unexpected lessons of recent history—such as the 2016 US presidential election and the pandemic—has been that humans need no help from GPT-3 in generating fake news. And as impressive as GPT-3 is at producing text, it is hardly churning out Pulitzer Prize-worthy prose. Writing for The Guardian in August, the tech columnist John Naughton wisely observed that “However much sceptics and critics might ridicule human hacks, the crooked timber of humanity will continue to outwit mere machines for the foreseeable future. Journalism schools can relax.”

But while Naughton may be right about journalism schools, I’m not sure that all schools can relax. In a recent article on GPT-3, the philosopher John Symons argues that GPT-3 threatens to “[undermine] the kind of writing intensive course that had served as the backbone of [his] teaching for two decades.” “I was less worried about whether GPT-3 is genuinely intelligent,” Symons writes, “and more worried about whether the development of these tools would make us less intelligent.” 

Inputting prompts from his own assignments, Symons found GPT-3 capable of producing passable material. The outputs were not only tidy at the sentence and paragraph level; they were also generally accurate in their characterizations of the philosophical issues under review, including Kant’s categorical imperative, Mill’s utilitarianism, and Rawlsian liberalism (that GPT-3 could be accurate about such matters shouldn’t surprise us, given the amount of writing it has ingested on those topics). As Symons reports, “After a little editing, GPT-3 produced a copy that would receive at least a B+ in one of our large introductory ethics lecture courses.” Even when he fed it quirky instructions, he found that the output was “as good as a mediocre undergraduate student at generating passable paragraphs that could be strung together to produce the kinds of essays that might ordinarily get a C+ or a B-.”

That discovery has led Symons to the realization that he’s back in the same position he was in when he began teaching two decades ago, having to rethink basic assumptions about how to develop assignments, how to evaluate student work, and, more basically, why writing is important to education. Practically speaking, GPT-3 and the like demand that educators reconsider the writing process in fundamental ways. Symons entertains the possibility of returning to handwriting; other commentators have suggested collecting drafts at multiple stages and perhaps tweaking the assignment between drafts (see, for example, Benjamin Mitchell-Yellin’s proposals at Daily Nous). Educators are now administering the Turing test in reverse: Which questions can only humans answer well? What kinds of thinking does writing make possible for us?

In 1987, Flusser worried that AI would outstrip human writers, assuming responsibility even for the recording of history. The current crop of AIs poses no such threat, since they are not autonomous intelligences but dynamic reflections of human-built textual culture. Their danger lies instead in short-circuiting the development of human writers, at least if educators fail to adapt to our new media ecology, in which the medium can compose humdrum messages on demand.