The history of artificial intelligence (AI) cannot be separated entirely from the general development of technologies that go back to the ancient world. Like the abacus, the machines we today call AI reproduce and automate our formal and cognitive abilities, albeit at higher levels of generality. More officially, AI research began in the postwar era with the “symbolic” paradigm, which sought to program human faculties such as logic, knowledge, ontology, and semantics within software architecture. It was harder than it sounds. Despite the inveterate optimism of the broader field, the symbolic approach encountered major logistical and conceptual limitations, and by the turn of the century had begun to stagnate.
A competing approach, machine learning, developed algorithms that, through brute optimization, appeared to replicate some of the mind’s basic effects. At first, the paradigm was constrained by a paucity of data and computing power, but those bottlenecks cracked open in the new millennium when the Internet accumulated galaxies of information and a niche technology (graphics processing units, otherwise known as GPUs, used in PCs and gaming consoles) proved useful for the intense computation required by machine-learning models.
In 2012, computer scientists Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton designed a neural network (a model loosely inspired by brain structures) to tackle the now-legendary ImageNet competition, a shoestring contest in automated image classification that was ridiculed by many AI researchers at the time. The team’s model identified the contents of images with roughly 85 percent accuracy, a major improvement on previous attempts. In short order, most resources in AI research were rerouted into this neglected subfield, which ultimately led to the neural networks that today facilitate social media, search engines, and e-commerce, as well as a novel consumer product.
In 2015, an obscure nonprofit called OpenAI was founded by Sutskever, Elon Musk, Sam Altman, and a roster of computer scientists and engineers. Seven years later, the organization released ChatGPT, introducing the public to generative AI with “zero fanfare,” as one article described the marketing for the product. OpenAI, blindsided by its reception, had not secured enough computing power for the traffic it received. That was only three years ago. Now generative AI is ubiquitous, and OpenAI is speculatively valued at $300 billion.
It should surprise no one to see this brief account of technology exhibit the capriciousness of history: the skips, loops, and halts of progress; the weird contingencies (GPUs); the wrongheadedness of consensus; the arbitrariness of recognition; the maddening unpredictability of success. Yet a popular fantasy offers a tidier narrative that reduces the history of computing to a plottable sequence of triumphs and epiphanies in which progress is trivial and steadily exponential. I am referring to the hype surrounding AI, those industry-driven gusts of hot air blowing through every quarter of society and the cultural mania they are meant to inflame.
Princeton University computer scientists Arvind Narayanan and Sayash Kapoor have written AI Snake Oil to help nonexpert citizens identify and resist AI hype by relying on “common-sense ways of assessing whether or not a purported advance is plausible.” While not denying “genuine and remarkable” advances in generative AI, the authors are deeply concerned, even pessimistic, about the social consequences of its widespread adoption and use.
A big part of the problem, the authors maintain, is confusion about the meaning of artificial intelligence itself, a confusion that originates in and sustains the present commercial AI boom. Consider Hollywood’s renewed obsession with renegade AI (Mission: Impossible—Dead Reckoning Part One, Atlas, The Creator) or the commercial scramble to slap the AI label on vacuum cleaners, humidifiers, and other basic appliances, or even on the seasoned algorithms of Spotify and YouTube. More recently, the emergence of services that nominally use machine learning (Amazon Fresh) or don’t use it at all (the “AI” scheduler software Live Time) has only amplified the public’s bewilderment about the identity and capabilities of artificial intelligence.
Narayanan and Kapoor are particularly worried about the conflation of generative AI, which produces content through probabilistic response to human input, and predictive AI, which is purported to accurately forecast outcomes in the world, whether those be the success of a job candidate or the likelihood of a civil war. While products employing generative AI are “immature, unreliable, and prone to misuse,” Narayanan and Kapoor write, those using predictive AI “not only [do] not work today but will likely never work.” Such critical distinctions have been lost in the maelstrom of hype, allowing grifters, techno-messiahs, and pseudo-intellectuals to further manipulate the public with myths and prophecies.
While boosterism is hardly new in the history of business and technology, the exceptional scale and intensity of this wave of hype are evident in the expanding bookshelf of titles by authors engaging in nothing less than a form of technological augury. The Singularity Is Nearer, by Google’s Ray Kurzweil; Nexus, by Yuval Noah Harari; and Genesis, by former Microsoft executive Craig Mundie, former CEO of Google Eric Schmidt, and the late Henry Kissinger, are just a few of many.
A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself. After the publication, in 2015, of Homo Deus, a book that appeals to pop evolutionary biology and post-humanist fantasies in order to prognosticate about technological innovation, Harari, who trained as a military historian, discovered he had earned “the reputation of an AI expert.” Nexus intends to “provide a more accurate historical perspective on the AI revolution,” but it reads like an undergraduate exercise in misreading, category error, and shoehorning. Explaining the basics of machine learning, Harari compares the pre-training of “baby algorithms” to the childhoods of “organic newborns,” blundering into the single worst explanatory analogy for the technique. What little we know of human learning (which allows us to generalize independently from very little data) suggests that it functions nothing like machine learning (which must be trained on oceans of data). Undeterred, Harari underscores the capacity of models to “teach themselves new things” in an iterative fashion. He offers the example of “present-day chess-playing AI” that are “taught nothing except the basic rules of the game.” Never mind that Stockfish, currently the world’s most successful chess engine, is programmed with several human game strategies. Harari fails to explain that while machine-learning models assemble a template of solutions to a specific problem (e.g., the best possible move in a given chess position), the framework in which those problems and solutions are defined is entirely constructed by engineers. Such models are entrenched in a particular complex of human judgment and knowledge that they functionally cannot transcend.
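The point is easier to see in miniature. The toy sketch below is my own invention (it describes no real chess engine, and every name in it is made up for illustration): the rules of the “game,” the encoding of a position, and the definition of success are all fixed by the programmer in advance; the only thing that gets “learned” is a handful of numerical weights for ranking moves inside that fixed frame.

```python
# A toy sketch, invented for illustration only; no real engine works this simply.
# Everything except the weights is decided by engineers before any "learning."

LEGAL_MOVES = [1, 2, 3]   # engineer-defined rules: a "move" adds 1, 2, or 3
TARGET = 21               # engineer-defined goal: land as close to 21 as possible

def features(position):
    # Engineer-defined encoding: how a position becomes numbers a model can read.
    return [position, TARGET - position]

def evaluate(position, weights):
    # The only "learned" component: a weighted score fit during training.
    return sum(f * w for f, w in zip(features(position), weights))

def best_move(position, weights):
    # The system ranks moves; it never questions the rules or the goal.
    return max(LEGAL_MOVES, key=lambda m: evaluate(position + m, weights))

# With trained (here, hand-picked) weights, the program "plays" competently,
# but only inside the world its designers specified.
print(best_move(17, weights=[0.0, -1.0]))  # prints 3, the move nearest TARGET
```

Scaled up a billionfold, the division of labor is the same: the template of solutions is learned, but the frame is human.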
In passage after passage, Harari bungles straightforward issues and ideas concerning artificial intelligence. Philosopher Nick Bostrom’s version of the “alignment problem,” a staple in AI discourse, is a simple thought experiment that illustrates how an artificial intelligence could accomplish human goals through unforeseen means that violate the broader interests of its designers. An AI tasked with maximizing viewers’ time spent on a social-media platform might just accomplish that goal by exposing them to grotesque, false, or politically radical content. But Harari, attempting to argue that the alignment problem is a timeless conundrum, applies it to historical events that did not materially involve artificial intelligence (e.g., the “American invasion of Iraq”) when “short-term military” ambitions diverged from “long-term geopolitical goals.” Yet Bostrom’s warning is not about basic shortsightedness but about a longsightedness that is blind to intervening steps taken by nonhuman systems.
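The thought experiment fits in a few lines. The toy recommender below is likewise my own invention (it describes no real platform, and the item list is fabricated): because the objective handed to the optimizer counts only minutes watched, the optimizer dutifully surfaces whatever maximizes them; nothing in the goal penalizes falsehood or harm, so nothing in the behavior avoids it.

```python
# A made-up toy illustrating goal misspecification; it describes no real platform.
catalog = [
    {"title": "calm documentary",       "expected_minutes": 6,  "harmful": False},
    {"title": "outrage clip",           "expected_minutes": 14, "harmful": True},
    {"title": "conspiracy rabbit hole", "expected_minutes": 22, "harmful": True},
]

def objective(item):
    # The designers' stated goal: maximize time spent on the platform.
    return item["expected_minutes"]

def recommend(items):
    # The optimizer does exactly what it was told, and nothing it wasn't.
    return max(items, key=objective)

print(recommend(catalog)["title"])  # prints "conspiracy rabbit hole"
```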
In some cases, such ignorance seems strategic. Harari discusses the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, a machine-learning tool adopted by several state courts to score a defendant’s likelihood of recidivism. Harari rightly portrays the use of COMPAS as a scandal wherein “opaque algorithms” threaten “democratic transparency.” Yet he does not mention the most basic flaw of COMPAS: As Narayanan and Kapoor write, the “tool wasn’t very accurate to begin with; it had a relative accuracy of 64 percent,” marginally better than flipping a coin—a figure they believe is “likely to be an overestimate,” although such assessments are disputed by the tool’s owner and other researchers. Harari’s elision is perplexing, given his critical stance toward the technology, his citation of a Criminal Justice study outlining the “mixed” performance of these systems, and his reference to the ProPublica investigation of COMPAS, which Narayanan and Kapoor also cite.
The opacity of machine-learning tools is a genuine technical problem, but Harari adopts it as a magician’s silk behind which he shifts from mystifying to mythologizing his subject. In this practice, though, Harari is a bumbling acolyte compared to the high priesthood of Kissinger, Mundie, and Schmidt. The trio’s Genesis succeeds The Age of AI (2021), a tome Narayanan and Kapoor describe as “incessant in its hyperbole” and “littered with AI hype.” Indeed, it’s challenging to assess the claims within Genesis, because its idea of artificial intelligence resides so far afield of this writer’s (admittedly inexpert) understanding of the technology. (Perhaps it is technical illiteracy that underlies my conviction that the phrase “interstellar fleets” should never appear in a text hoping to be taken seriously as a technological forecast.) Eloquent for its slapdash genre, Genesis is a sequence of pretentious historical odysseys that bring human endeavors (science, politics, warfare, etc.) to the brink of metamorphosis at the hands of AI:
Our minds remain childlike with respect to God, our world, and now our newest creations.…
But will AIs be conquerors? Will human leaders become their proxies: sovereigns without sovereignty? Or, perhaps, will godlike AIs resurrect the once-ubiquitous human invocation of divine right, with AIs themselves as anointers of kings?…
Might the apparently superior intelligence of machines with structures based on the human brain, combined with our intense reliance on them, lead some to believe that we humans are ourselves becoming, or merging with, the divine?
It seems sufficient to ridicule this as the typical effluent of Silicon Valley’s intellectual culture, until you detect its political inflection. Kissinger, Mundie, and Schmidt habitually ponder the “fatalism,” “passivity,” “submission,” and “faith” with which “individual humans and whole human societies may respond to the advent of powerful AI.” Like Harari, the authors belabor the “opacity” of AI in order to legitimize musings like this: “Will the age of AI not only fail to propel humanity forward but instead catalyze a return to a premodern acceptance of unexplained authority?” These loaded questions might provoke similar queries from the reader. Could the passivity that preoccupies these sages betray some wish to instill that attitude in their readership? Might the plutocrats and tycoons they represent somehow benefit from making fatalism seem respectable and even reasonable to the general public? Does the depiction of AI as omnipotent, omniscient, and unknowable perhaps work to mesmerize the media, cow potential regulators, and, above all else, juice financial markets?
Fixated on revolutions and catastrophes, beginnings and endings, Genesis offers an eschatology centered on the “existential” risks posed by “misaligned AI.” The authors compare artificial intelligence to nuclear weapons in order to frame the geopolitical jockeying over AI as an “arms race” that recapitulates the Cold War. While their Kissingerian approach to this grim future curiously resembles the postwar international formation (“Unipolarity may be one pathway that could minimize the risk of extinction”), their equation of nuclear Armageddon (a long-standing, real possibility) with AI’s (ill-defined, hypothetical) global danger is not unique to them. The strategy is the hobbyhorse of OpenAI’s Sam Altman, who lavished advance praise on Genesis and apparently enjoys telling audiences that artificial intelligence will “most likely lead to the end of the world.”
Narayanan and Kapoor argue that the “bugbear of existential risk” from artificial intelligence serves to “overstate its capabilities and underemphasize its limitations” while distracting elected officials and citizens “from the more immediate harms of AI snake oil.” I would add that it monopolizes our imagination and sustains a frenzied pitch of the discourse around AI, both of which attract investors while affording large companies a means of regulatory capture. When Altman appeared before a Senate committee in 2023 to testify about the dangers of AI, he advocated for a government agency that would conveniently solidify OpenAI’s first-mover advantage by placing the burden of regulation on new competitors while neglecting “many of the transparency requirements that researchers had been arguing for OpenAI to follow.” AI systems that are imprudently embedded within social structures will pose threats, but Narayanan and Kapoor argue that “society already has the tools to address [those] risks calmly” while the specter of rogue AI cultivated by Altman, the authors of Genesis, and the so-called AI safety community is “best left to the realm of science fiction.”
Importing ideas from science fiction is the business of Ray Kurzweil; literally so. The titular event of Kurzweil’s The Singularity Is Near (2005) was first popularized by sci-fi legend Vernor Vinge in his 1993 essay predicting the emergence of “superhuman intelligence” and the closing of the “human era” within thirty years. The premise of Kurzweil’s sequel, The Singularity Is Nearer, is that humanity has begun the final preparations for this belated technological rapture, an event guaranteed by his “law of accelerating returns,” which supposedly describes how “positive feedback loops” and declining costs in information technologies make “it easier to design [their] next stage.” Artificial intelligence will orchestrate advances across numerous domains, bringing about progress so precipitous and consistent that, Kurzweil asserts, humans will “merge with AI” around 2045. This is Kurzweil’s “Singularity,” the imaginary event that illustrates the primitive mechanics of his thought, which consist almost entirely in extrapolation.
A typical Kurzweil prophecy begins by citing recent improvements in a particular industry or field. Assessing medicine, for instance, he notes that in 2023 a drug designed using machine learning “entered phase-II clinical trials to treat a rare lung disease.” He then pontificates on thinly related philosophic or mathematical subjects, discombobulating the reader with unexplained jargon and Very Large Numbers—“10²⁴ operations per second,” “306,000,000 gigabytes,” “100 trillion human beings,” “a googolplex of zeros,” “10^(10^123) possible universes,” a “million billion billion billion billion billion billion possibilities”—which are meant somehow to assure us that “exponential” advancement shall blast through any remaining ceilings, roadblocks, or bottlenecks, at least the ones that Kurzweil mentions. This middle phase of the performance is like watching a bird struggling beneath a net. Because once Kurzweil escapes the trap of evidence and intellectual humility, he truly flies. As AI revolutionizes medicine, he asserts, applications will surge by the late 2020s, enabling us to combat biological limitations on the human lifespan through the 2030s with AI-controlled nanorobots, ultimately leading to the “definitive” defeat of aging. In the 2040s, cloud-based technologies will allow us to abandon our biological shells altogether by uploading our minds into digital environments.
One might wonder why Kurzweil commits himself to such specific time frames, having had to revise them before. Isn’t it advantageous to the soothsayer to remain tentative and vague? But then you remember that Kurzweil is seventy-seven years old and that just maybe (in the spirit of conjecture) he has chosen the next three decades as the window of our transcendence because they are the ones in which he has the best, not to say the last, chance of seeing his prophecy fulfilled. (As a fail-safe, he has paid to have his body “cryogenically frozen and preserved” so he can be resurrected to marvel at his prescience.) For Kurzweil, death is a technical problem we must solve no matter how pathetic or grotesque the solution. The reader’s jaw creaks open as Kurzweil describes the “dad bot” he trained on personal family records as “the first step in bringing my father back.” The conversation he proceeds to have with his simulated “father” is pitiful, but not for the reasons Kurzweil would believe.
Why is the essential promise of technology—the alleviation of drudgery—not enough? Maybe, in the case of AI, because it remains unclear what drudgery it can realistically alleviate. I, along with Narayanan and Kapoor, don’t doubt that machine learning will find positive applications in various industries (including medicine) while the underlying computer science will continue its winding amble forward. (AI is not a hopeless deviant technology like cryptocurrency.) But the promise of artificial intelligence does not provide any reason to believe we are living in “the most exciting and momentous years in all of history,” as Kurzweil puts it.
After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists. The public is vulnerable to this campaign, in part, because of the cumulative nature of technological innovation. Understanding products such as ChatGPT, for example, requires a baseline familiarity with the tools and subjects they build upon (e.g., transformers and neural networks), which are themselves subject to similar requirements (e.g., backpropagation and linear algebra). In this way, such technologies levy a compounded cognitive cost. At some critical threshold unique to each technology, that burden becomes too great and ordinary people no longer have the time or energy to resist the sort of deception that is the incubator of hype. Paradoxically, the sure sign that a technology has undergone this transition is not widespread disinterest but superficial fascination and wide-eyed utopianism (nuclear fusion and quantum computing are good case studies). Hype appears, then, as a social mechanism through which technology becomes a kind of magic. When the authors of Genesis invoke Arthur C. Clarke—“Any sufficiently advanced technology is indistinguishable from magic”—they, of course, don’t mention that he was describing a nineteenth-century scientist’s first impressions of twentieth-century technology. For them, Clarke’s adage echoes their only real goal: to artificially prolong our childlike enchantment with newfangled toys and tools in order to buy time for the technicians to make good on unearthly promises.
Building or adapting a technology before articulating its function is usually the hallmark of a doomed product (see Google Glass, Apple Vision Pro, or the Metaverse). Over the past three decades, however, many leading tech startups, corporations, and venture-capital firms have operated according to a backward logic that has nevertheless proven remarkably successful for machine learning. This success is due, in part, to personalities like Sam Altman and Elon Musk, who have perfected the art of manufacturing public enthusiasm. In this case, the hype surrounding AI amounts to more than harmless promotion. By shaping expectations of what it can accomplish (such as a future civilization enthralled to godlike machines), Kurzweil, Harari, and their ilk pave the way for broad public acceptance of the comparatively humble promises and predictions of tech CEOs (what are fully self-driving cars before those interstellar fleets?). But it is all the same cartoon divorced from the realities of a powerful but limited technology. If there is any prediction one could make with confidence about AI, it is that its successful applications will be hammered relentlessly into public consciousness. But there will be little accounting for the opportunity costs incurred by an all-or-nothing industry that neglected the unglamorous problems and workaday inefficiencies that machine learning might have actually resolved. The project of making life a bit better for most people is being traded for the unthinkable waste in service of an impossible utopia.