The California Coastal Records Project was created in 2002 to document the erosion of California’s coast. In the process of taking aerial photographs of the entire coastline, the Project inevitably captured images of the houses—many of them quite grand—built to provide their occupants with views of the Pacific. One of those mansions belonged to the actor and singer Barbra Streisand, who grew quite unhappy when she discovered that a picture of her house was available online. She sued a website that had posted the photograph, thereby calling dramatically greater attention to it. Before her suit the photograph had been viewed only a handful of times, but when word of the lawsuit got out, hundreds of thousands of people saw the giant house overlooking the sea. And thus the Streisand Effect was born.
Streisand didn’t win the lawsuit, but even if she had, she would have lost in a larger sense, because in her attempt to keep the appearance of her house private she made it almost infinitely more public than it had been. And this is what we mean by the Streisand Effect: any attempt to keep something private or silent or invisible that achieves precisely the opposite, so that the private experience goes public, the secret word is spoken aloud, and the formerly invisible is placed on display.
But this is a specific instance of a more general phenomenon, one that has a thousand faces: backfiring. That is, you attempt to make something better and end up making it worse. But precisely because that phenomenon is so general, we need names to distinguish its specific forms. To the Streisand Effect, let’s add the Elon Effect.
Recently, Elon Musk became deeply concerned that the many large language models (LLMs) now in development were scraping data from Twitter, a practice that, Musk said, stressed Twitter’s servers and degraded the ordinary user’s experience. So he implemented “rate limiting”—strict quotas on the number of tweets any given account could read in a given period. The effect was…to degrade the ordinary user’s experience.
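Rate limiting itself is a simple mechanism. The sketch below shows one common form of it, a fixed-window counter kept per account; the quota, the window, and every name in it are my own illustrative assumptions, since Twitter’s actual implementation is not public.

```python
import time
from collections import defaultdict

class RateLimiter:
    """A minimal fixed-window rate limiter. The quota and window values
    here are hypothetical, chosen only to illustrate the mechanism."""

    def __init__(self, quota: int, window_seconds: int):
        self.quota = quota
        self.window = window_seconds
        self.counts = defaultdict(int)          # account -> reads this window
        self.window_start = defaultdict(float)  # account -> window start time

    def allow(self, account: str) -> bool:
        now = time.time()
        # Start a fresh window once the old one has elapsed.
        if now - self.window_start[account] >= self.window:
            self.window_start[account] = now
            self.counts[account] = 0
        if self.counts[account] >= self.quota:
            return False  # over quota: the read is refused
        self.counts[account] += 1
        return True

# A toy quota: three reads per minute.
limiter = RateLimiter(quota=3, window_seconds=60)
print([limiter.allow("some_account") for _ in range(5)])
# [True, True, True, False, False]
```

Notice what a cap like this cannot do: it cannot tell a scraper harvesting training data from an ordinary user scrolling a timeline. A defense aimed at the former lands, inevitably, on the latter.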
What I’m calling the Elon Effect works like this: You perceive a threat from a new technology; you employ technological means to address that threat; as a result, you either exacerbate the problem you were trying to address or create a new one.
As I begin to prepare for the coming semester, I’m thinking about the ways that teachers like me are growing more vulnerable to the Elon Effect. Professors fearful that students are using ChatGPT to write their assignments are turning to services like GPTZero to catch the cheaters. One problem here is that such tools are strongly disposed to produce false positives—for instance, they are quite certain that the US Constitution was written by AI—but that ought to be a relatively minor concern.
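A back-of-the-envelope calculation shows how heavily such detectors can skew toward false accusation. The rates below are purely illustrative assumptions, not measured figures for GPTZero or any other tool:

```python
# Illustrative assumptions only; no real detector publishes rates like these.
p_ai = 0.05                 # share of submissions actually written by AI
p_flag_given_ai = 0.90      # detector correctly flags an AI-written paper
p_flag_given_honest = 0.10  # detector wrongly flags an honest paper

flagged_ai = p_ai * p_flag_given_ai                # 0.045
flagged_honest = (1 - p_ai) * p_flag_given_honest  # 0.095

# Bayes' rule: probability that a flagged paper was actually written by AI.
print(flagged_ai / (flagged_ai + flagged_honest))  # ~0.32
```

Even under these fairly charitable assumptions, roughly two of every three flagged papers would be honest work, simply because honest papers vastly outnumber AI-written ones.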
This desperate grasping at digital solutions to problems created by digital technology did not begin with the public release of LLMs; it has been going on for quite some time, and is merely an extension of an arms race between teachers and students that has been under way since the advent of the Internet. Students find places to download essays, professors run them through Turnitin, students search for options that can evade Turnitin’s algorithms, and so on ad infinitum, or anyway ad nauseam. We can become so absorbed in trying to outwit each other that we lose sight of what a technologist might call the degradation of the user experience. What I, as a nontechnologist, would call it is an apocalypse, that is to say, an unveiling. And here is what is unveiled by this arms race: that teachers and students rarely trust one another. And in the absence of trust there can be no real community of learning.
None of this should be taken to suggest that such failures of trust are new. Students have always cheated, and professors have always tried to catch them. But there were limited resources on both sides. When I was an undergraduate I had a friend who, at the end of his last semester of college, with a final paper to write for a class he needed in order to graduate, was suffering from severe eyestrain and simply could not see either the text of the books he was supposed to write about or the words he needed to type. So I wrote the paper for him, on his typewriter. Later, when I became a teacher, I often remembered my cheating and realized that if any of my students were to cheat in the same way I would have no way to know; all I could do was state my expectations for academic integrity and hope that my students met them. Precisely because I so manifestly lacked the resources to know the truth, I could simply set the matter aside. But in the Internet era, especially now that it has developed into the LLM era, the opportunities for cheating and for catching cheaters have multiplied dramatically. There have always been ways to game the system, and ways to detect the gaming of the system, but they were often impractical. Now they’re right there in the forefront of our minds, all the time.
In 1944, T. S. Eliot wrote: “Not least of the effects of industrialism is that we become mechanized in mind, and consequently attempt to provide solutions in terms of engineering, for problems which are essentially problems of life.” For those of us committed to the vocation of teaching, the question of our time, I believe, is this: Will we realize that we are, increasingly and compulsively, perpetrators and victims of the Elon Effect? (As with the Streisand Effect, perpetrator and victim are one.) Will we realize that we are avoiding problems of life by redescribing them as problems of engineering? As every addict must learn, the first step is admitting that you have a problem—a problem that, as you are now, you cannot fix.