Infernal Machine   /   April 11, 2014

The Unpredictability of Academic Writing

Andrew Piper

The otherwise foxy Nate Silver made a very hedgehoggish comment recently when he claimed that all op-ed writing was “very predictable” and that “you can kind of auto-script it, basically.”

His comments caused a (very predictable) backlash, with neither Silver nor his critics bothering to substantiate their claims. But they go to the heart of some of the most basic questions about why we read and what we read for. What is the value of unpredictability in writing? Are there certain kinds of writing that are more predictable than others? Are more predictable texts of lesser quality or is it the other way around? After all, we need some predictability in order to make sense of what we are reading, but how much predictability is too much?

Here at the Quant and the Connoisseur we decided to test Silver’s claim (to outfox the fox, as it were). We chose to compare popular book reviews as they appear in an industry standard like the New York Times with the literary criticism that appears in one of the leading academic journals, the PMLA (Publications of the Modern Language Association).

One of the reasons we chose these examples was to address a nagging feeling that many of us here in the academy have: that popular book reviews are some of the most tedious things ever written. What Silver feels about the op-ed is akin to how we feel about contemporary journalistic criticism.

We undertook this exercise also to address recent debates about the “academicness” of academic writing. By “academic” people usually don’t mean nice things (like smart, insightful, or brilliant). They usually mean turgid, jargony, repetitive, and boring. But in all the mud-slinging no one thought to look more broadly at the nature of such writing. Everyone was content to extract a few well-chosen examples of impenetrable prose and be done with it. That’s clever but not very fair.

So to test our feelings and Silver’s claim, we decided to measure the predictability of different texts across our two samples using a common metric from information theory: redundancy. Redundancy uses Claude Shannon’s theory of information entropy to measure the density, and therefore the unpredictability, of information (we follow Shannon in defining redundancy as one minus the ratio of the observed entropy to the maximum possible entropy). The standard example is to think about this in terms of language. In English, the probability that you will find an “h” after a “t” is much higher than that of finding a “z” after a “t,” though this would be reversed for German. The higher the probability of any sequence of letters, the greater the redundancy, because you can guess with increasing accuracy what the next letter will be. If “h” always came after “t” in English (and only h’s came after t’s), we wouldn’t even need to write it. It would be entirely redundant because perfectly predictable.
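Written out, the definition in that parenthesis looks like this, where H is the Shannon entropy of the observed frequencies and H_max is the maximum possible entropy:

R = 1 - H / H_max,   with H = -Σ p_i log2(p_i), summing over the distinct symbols with relative frequencies p_i

so that R runs from 0 (every outcome as likely as any other, nothing to guess) to 1 (perfectly predictable, nothing worth transmitting).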

We can do the same thing for the words of a given text: given any word, what is the likelihood of guessing the next one? The greater the likelihood, the more redundant a text is and thus the more predictable. Imagine a text consisting only of the two words “she said,” written 500 times (for a total of 1,000 words). It would have a redundancy of .899, meaning that you could remove just about nine-tenths of it and still have all of the information contained in the text. The reverse case, a text of 1,000 different words, would have a redundancy score of 0. No word (and hence no pair of words) ever repeats across the entire text, so given any word we would have no idea what came next.
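For the curious, here is a minimal sketch of the calculation in Python. It assumes the unigram version of the measure: H is computed from the relative frequencies of the distinct words, and H_max is taken as log2 of the total number of words, which is the reading that reproduces the .899 figure from the “she said” example. The actual analysis behind the plots below may differ in its details.

import math
from collections import Counter

def redundancy(words):
    """Shannon redundancy of a word sequence: 1 - H / H_max."""
    total = len(words)
    counts = Counter(words)
    # Entropy (in bits) of the observed word-frequency distribution.
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # Maximum possible entropy for a text of this length.
    h_max = math.log2(total)
    return 1 - h / h_max

# "she said" repeated 500 times: 1,000 words, only two of them distinct.
print(redundancy(["she", "said"] * 500))            # ~0.8997, the .899 above
# 1,000 different words: entropy is maximal, so redundancy is 0.
print(redundancy([f"w{i}" for i in range(1000)]))   # ~0.0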

Applying this measure to a sample of 189 articles taken from our two categories, we found that on average book reviews in the New York Times are significantly more redundant than literary criticism in the PMLA. Here is a boxplot showing the distributions:

[Figure: boxplot of the redundancy distributions for New York Times book reviews and PMLA articles]

One concern we had is that academic articles tend to be considerably longer than book reviews. When we took just the first 1000 words of each we still found significantly different averages (p = 8.399e-09).
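The post does not say which statistical test produced that p-value; a two-sample t-test on the per-article redundancy scores is one straightforward way to arrive at such a figure. The sketch below uses the redundancy() function defined above and made-up placeholder corpora in place of the real samples, so everything here other than the truncation-to-1,000-words step is illustrative.

import random
from scipy import stats

# Placeholder corpora standing in for the two samples: each entry is the
# word-token list of one article (the real data would come from files).
random.seed(0)
small_vocab = [f"w{i}" for i in range(50)]    # stands in for the more repetitive reviews
large_vocab = [f"w{i}" for i in range(400)]   # stands in for the more varied articles
nyt_texts  = [[random.choice(small_vocab) for _ in range(1200)] for _ in range(20)]
pmla_texts = [[random.choice(large_vocab) for _ in range(1200)] for _ in range(20)]

# Score only the first 1,000 words of each article, then compare the groups.
nyt_scores  = [redundancy(words[:1000]) for words in nyt_texts]
pmla_scores = [redundancy(words[:1000]) for words in pmla_texts]

# Welch's two-sample t-test on the two sets of scores.
t_stat, p_value = stats.ttest_ind(nyt_scores, pmla_scores, equal_var=False)
print(p_value)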

[Figure: boxplot of the redundancy distributions using only the first 1,000 words of each article]

While this tested the redundancy of the writing in any single article, we also wanted to know whether book reviews tended to sound more like each other in general than academic articles. So where the first score looked at the language within articles, the next score tested language between articles. How similar or dissimilar are book reviews to each other? Do they sound a lot more like each other than academic criticism does to itself? The answer again was a resounding yes.
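The post does not spell out how between-article similarity was measured either. One common way to do it, sketched here purely for illustration, is to compute the cosine similarity between the word-count vectors of each pair of articles and then average those pairwise scores within each group; a higher average within the New York Times sample would mean the reviews sound more like one another than the PMLA articles do.

import math
from collections import Counter
from itertools import combinations

def cosine_similarity(words_a, words_b):
    """Cosine similarity between the word-count vectors of two texts."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b)

def mean_pairwise_similarity(texts):
    """Average similarity over every pair of texts in a group."""
    sims = [cosine_similarity(x, y) for x, y in combinations(texts, 2)]
    return sum(sims) / len(sims)

# With the placeholder corpora from the previous sketch:
# print(mean_pairwise_similarity(nyt_texts), mean_pairwise_similarity(pmla_texts))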

[Figure: boxplot of between-article similarity scores, first 1,000 words of each article]

What does this all mean? First, it confirms our feelings that journalistic criticism is both more predictable and more homogeneous across different articles. There is a familiarity that is an important aspect of this genre. Some might call it a house style or just “editing.” Others might call it just plain boring. One of the reasons we would argue that academics enjoy reading academic articles (yes, enjoy) is that there is a greater degree of surprise and uncertainty built into the language. Academic articles are information dense: their goal is to tell us new things in new ways. Those of us who read a lot tire more easily of the same old thing. New knowledge requires new ways of saying things.

So rather than rehash tired clichés about the jargony nature of academic writing – itself a form of redundancy! – we might also want to consider one of academic writing’s functions: it is there to innovate, not comfort. To do so you need to be more unpredictable in how you put words together. It’s less soothing, but it also serves an important purpose. It’s the exact opposite of “jargon,” if by that we mean a way of speaking that is repetitive and insular. Academic writing is there to surprise us with new insights.

I am sure many will find this a surprising thing to say.