THR Web Features   /   June 18, 2024

Chatbots and the Problems of Life

Resisting the pedagogy of the gaps.

Alan Jacobs

(THR illustration.)

With increasing availability and sophistication of chatbots, we teachers are seeing a drastic decline in the cost of what in Great Britain is called “commissioning”—that is, getting someone else to do your academic work for you. There are many forms of academic cheating, at various levels of schooling, but commissioning by university students is the one I want to discuss today.

Long, long ago, in a pre-Internet galaxy far away, commissioning was costly and therefore rare. It was a bespoke commodity: Typically you’d find someone smart and pay him or her to write an essay for you, or even (this could be done only in large lecture classes whose students were anonymous to their professors) take an exam for you. The talent was almost always local; in a large university, cynical or broke graduate students could supplement their meager stipends quite significantly by catering to the anxieties of academically marginal undergrads. Such commissions did not always involve money; money is, after all, only one medium of exchange. But you had to have something of value to exchange for the academic work—drugs, sex, the willingness to clean a filthy apartment—and not everyone had what was required. Also, some planning in advance was necessary: If you were ten hours away from the deadline for a paper, you would be hard-pressed to find someone competent to write it for you, even if you were willing to pay extra for a rush job.

With the advent of the Internet, the costs of commissioning dropped, for several reasons. Online essay-writing services kept on hand a library of essays written on common topics—Hamlet’s indecisiveness, the Federalist on the dangers of political faction, Durkheim’s theory of religion—which could be bought for reasonable prices and at short notice. If something better or less common were required, then bespoke work could be arranged, though, as in the earlier dispensation, with more time and more money. (But, if you were an American, thanks to the mighty dollar you could save a bit by commissioning work from people in the Global South, or those who were not native English speakers.)

Also, only the bespoke work was really safe, at least if your professors used Turnitin to discover pre-existing material. Turnitin and similar services arose when the costs of commissioning dropped and its frequency (naturally) increased: Professors who barely have time to grade the papers they assign certainly don’t have time, and maybe not the ability either, to search databases of papers. Googling peculiar phrases for signs of plagiarism often marks the limit of what they can do to detect cheating. And even then your quest could conclude in uncertainty about whether a particular passage was or was not plagiarized. It was much simpler to run all the papers through Turnitin and accept its verdict.

But note what’s happening here: an arms race. Students use certain Internet technologies to enable cheating, and teachers call in other Internet technologies to detect that cheating. Commissioning services arise that promise essays with undetectable provenance; ed-tech companies introduce new tools that promise to detect the undetectable; the alternation bids fair to go on forever. One begins to wonder after a while whether the paper mills and the cheating-detection services are in cahoots, because the longer the alternation goes on, the more money all of them make. And, as I have argued on this site, the heaviest costs are paid by teachers and students, not in money, but in trust—a rapidly vanishing commodity.

I don’t like this collapse of trust; I don’t like being in a technological arms race with my students. So over the years I have developed a series of eccentric assignments. These days I rarely assign the traditional thesis essay—an assignment I always hated anyway, because it makes both the writing and the grading utterly mechanical—but instead assign dialogues between two literary theorists, or an imaginary correspondence between two novelists, or just an old-fashioned textual explication: Take this passage and explain to me, I ask them, without paraphrase, what it’s doing, what’s going on in it. And those assignments have, as it were, taken us back in time, back to the time when commissioning was expensive and therefore rare: the online paper mills, after all, don’t have a stack of conversations about The Brothers Karamazov featuring Dostoevsky and Jane Austen. It’s been a very successful strategy…until now.

The advent of the chatbots has suddenly made my life much more difficult, for several reasons.

First: No one has to be a committed cheater to use them. You only have to be someone who, in the face of an onrushing deadline, experiences either extreme fatigue or disabling anxiety. You don’t even need any money—though money to purchase a more recent and powerful version of, say, ChatGPT will probably help. All you need is a computer that’s connected to the Internet and the ability to write an appropriate prompt. (The skill that’s going to improve the most, among the most people, in the coming years is prompt engineering.) The cost of commissioning has dropped almost to zero.

Second: Very few, if any, colleges and universities have developed clear and consistent rules for the use of chatbots. Is it okay for students to ask for an outline of an assignment, if they then go on to write the substance of the essay? Are they allowed to request a draft of the essay, if they then alter that draft significantly? But wait, what counts as “significantly”? Students can very plausibly claim ignorance on these and many other questions.

Third: Chatbots can already do some of my “eccentric assignments” and do them very well. When I asked for a dialogue between Michel Foucault and Judith Butler on gender and power I got a very good one indeed. Ditto a conversation on colonialism between Gayatri Spivak and Frantz Fanon. And when I asked for an exchange of letters between Jane Austen and Fyodor Dostoevsky…Well. ChatGPT’s response was historically insensitive boilerplate: “I trust this letter finds you in good health and spirits. I have recently had the pleasure of reading your novel, The Brothers Karamazov, and I felt compelled to write to you to express my admiration for your work.” But Google’s Gemini was another thing altogether:

My Dear Mr. Dostoevsky,

It is with a peculiar mixture of curiosity and trepidation that I take up my pen to address you today. Curiosity, because your novel, The Brothers Karamazov, has caused quite a stir in English literary circles. The depth of emotion you portray, the exploration of faith and morality – it is a far cry from the quiet manners and matchmaking concerns that typically occupy my own pen. Trepidation, I confess, because the world you paint is one of such stark contrasts, such turmoil, that it feels worlds away from the drawing rooms and landed estates of Hampshire.

It was with a peculiar mixture of admiration and despair that I read that letter.

Interestingly, the assignment that neither chatbot can at the moment do is the textual explication. I asked both ChatGPT-4o and Gemini to explicate a passage from Simone Weil without summarizing or paraphrasing it. Here’s the passage:

Affliction is a marvel of divine technique. It is a simple and ingenious device which introduces into the soul of a finite creature the immensity of force, blind, brutal, and cold. The infinite distance separating God from the creature is entirely concentrated into one point to pierce the soul in its center.

The man to whom such a thing happens has no part in the operation. He struggles like a butterfly pinned alive into an album. But through all the horror he can continue to want to love. There is nothing impossible in that, no obstacle, one might almost say no difficulty. For the greatest suffering, so long as it does not cause the soul to faint, does not touch the acquiescent part of the soul, consenting to a right direction.

What I got back was not analysis but simple restatement. Gemini:

Weil describes affliction as a “marvel of divine technique,” a deliberate tool used by a higher power. This tool is “simple and ingenious” because it achieves a profound effect through a seemingly straightforward mechanism: introducing immense, “blind, brutal, and cold” force into the soul.
Weil then shifts to the human experience of affliction, emphasizing the passivity of the afflicted individual who plays no active role in this divine operation. She compares this state to that of a butterfly pinned alive, highlighting the helplessness and ongoing struggle within the suffering. Despite this torment, Weil asserts that the capacity to desire love remains intact. She suggests that maintaining this desire is neither impossible nor particularly difficult, implying an inherent resilience within the human soul.

Not a wholly accurate paraphrase, especially in that last sentence, but a paraphrase nonetheless. I tried several ways of asking for something analytical rather than periphrastic, but periphrastic is what I always got. (And without any context from the rest of the essay—I tried asking for that also.)

That’s something of a relief, but I doubt I will be safe for much longer. I can easily find myself in a position like that of the theologian who worships—this is a famous phrase from one of Dietrich Bonhoeffer’s prison letters—“the God of the gaps,” a deity who only has a place where our knowledge fails, and whose relevance therefore grows less and less as human knowledge increases. If I can only pursue a “pedagogy of the gaps,” assignments that happen to coincide with the current limitations of the chatbots, then what has become of me as a teacher, and of my classroom as a place of learning? At least I can still assign my explications—a pathetic kind of gratitude, that.

No; there’s no refuge there. I must then begin with the confident expectation that chatbots will be able to do any assignment that they are confronted with. What follows from that expectation? Two possibilities, I think.

One is to develop assignments that cannot be done on ICDs (Internet-connected devices)—assignments that would almost certainly have to be done in class. But even if it were workable, and I’m not sure it is, such a strategy is simply another form of the pedagogy of the gaps; it is to accept severe constraints upon classroom design, constraints that have nothing to do with one’s educational goals and everything to do with one’s fear of our new chatbot overlords.

The second possibility requires great courage, a courage I am not sure I possess. I am moved to consider it by reflecting on something T. S. Eliot wrote in 1944, a sentence I often have reason to quote: “Not least of the effects of industrialism is that we become mechanized in mind, and consequently attempt to provide solutions in terms of engineering, for problems which are essentially problems of life.” With this sentence in mind—or rather, with the evident truth the sentence points to in mind—I could simply make the assignments that I believe best suited to what I want my students to learn, and then turn to them and ask: “What are the ‘problems of life’ that incline so many of you to turn to the chatbots rather than do these assignments?” If I could get honest answers to that question, then we all might be in deeper waters than we’re prepared for. But maybe the deeper waters are precisely what university education should be aiming for.