
Perplexity

An untidy history of AI across four books.

Trevor Quirk


The history of artificial intelligence (AI) cannot be separated entirely from the general development of technologies that go back to the ancient world. Like the abacus, the machines we today call AI reproduce and automate our formal and cognitive abilities, albeit at higher levels of generality. Officially, AI research began in the postwar era with the “symbolic” paradigm, which sought to program human faculties such as logic, knowledge, ontology, and semantics within software architecture. It was harder than it sounds. Despite the inveterate optimism of the broader field, the symbolic approach encountered major logistical and conceptual limitations, and by the turn of the century had begun to stagnate.

A competing approach, machine learning, developed algorithms that, through brute optimization, appeared to replicate some of the mind’s basic effects. At first, the paradigm was constrained by a paucity of data and computing power, but those bottlenecks cracked open in the new millennium when the Internet accumulated galaxies of information and a niche technology (graphics processing units, otherwise known as GPUs, used in PCs and gaming consoles) proved useful for the intense computation required by machine-learning models.

In 2012, the computer scientists Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton designed a neural network (a model loosely inspired by brain structures) to tackle the legendary ImageNet competition, a shoestring contest in automated image recognition that was ridiculed by many AI researchers at the time. The team’s model classified images with 85 percent accuracy, a major improvement over previous attempts. In short order, most resources in AI research were rerouted into this neglected subfield, which ultimately led to the neural networks that today facilitate social media, search engines, and e-commerce, as well as a novel consumer product.

In 2015, an obscure nonprofit called OpenAI was founded by Sutskever, Elon Musk, Sam Altman, and a roster of computer scientists and engineers. Seven years later, the organization released ChatGPT, introducing the public to generative AI with “zero fanfare,” as one article described the marketing for the product. OpenAI, blindsided by its reception, had not secured enough computing power for the traffic it received. That was only three years ago. Now generative AI is ubiquitous, and OpenAI is speculatively valued at $300 billion.

It should surprise no one to see this brief account of technology exhibit the capriciousness of history: the skips, loops, and halts of progress; the weird contingencies (GPUs); the wrongheadedness of consensus; the arbitrariness of recognition; the maddening unpredictability of success. Yet a popular fantasy offers a tidier narrative that reduces the history of computing to a plottable sequence of triumphs and epiphanies in which progress is trivial and steadily exponential. I am referring to the hype surrounding AI, those industry-driven gusts of hot air blowing through every quarter of society, and to the cultural mania they are meant to inflame.
