Infernal Machine   /   October 22, 2014

John Searle and the Threat of Artificial Intelligence

John Searle wants to reassure us. The University of California, Berkeley, philosopher, a specialist on mind, “intentionality,” consciousness, and other concepts integral to the distinct sort of being that we refer to as human being, wants us to know that we need not fear “superintelligent computers intentionally setting out on their own to destroy us.” Nor, it turns out, should we grow giddy about the imminent passage of human being into the “infosphere,” where “everything is information and . . . computers are much better at it.”

In this month’s issue of The New York Review of Books (October 9, 2014), Searle offers an extensive rebuttal of the theses advanced in two recent books: Nick Bostrom’s Superintelligence, which warns of the impending rise of “machine brains” superior to human ones, and Luciano Floridi’s The 4th Revolution, which announces the metaphysical emergence of the “infosphere.” Searle persuasively argues that both authors fail to account adequately for the vital role of consciousness in human being, and therefore miss in a fundamental sense what is really entailed in the creation of “artificial intelligence” and indeed “information.”

Searle offers several worthwhile distinctions, which he’s explored elsewhere, especially the distinction between “observer independent” and “observer relative” features of reality. The former exist “regardless of what we think”—mountains and molecules, for example. The latter “depen[d] on our attitudes” for their real existence—money and marriage, for example. Searle suggests that “information” falls in the latter category, as does “computation” when it is the product of machines. For only conscious agents, he argues, can have, create, or otherwise interact with “information” and “computation.” There is, then, no such thing as purely artificial information or intelligence, for there is no such thing as artificial consciousness. Conscious beings—something machines are not—must cooperate with the artificial mechanisms of information and computation in order for those mechanisms to function in any remotely “intelligent” way.

Or so it will be for the foreseeable future. It is possible, he surmises, that someday we will be able to make machines that “duplicate” the human brain, including consciousness. And it is at the point of this speculative possibility that Searle’s argument becomes both more interesting and more problematic, because it probes—somewhat indirectly, but powerfully nonetheless—the significance of the “artificial,” a category in which we can put “art” and “artifice,” and certainly “technology.”

A bit of background on the artificial might be helpful here. In ancient Greece, a story circulated about the creation of human beings by the gods that began uncomfortably with humans being left “naked and shoeless” and thus in grave danger before the elements. And so it was until Prometheus gave humans fire and the mechanical arts by which to sustain and preserve their lives. The “artificial,” we might say, saved human life.

But the Greeks were as capable of worrying about the artificial as they were of celebrating it: Most famously, Plato worried about simulacra, those copies of copies that did nothing but deceive and mislead humans in their quest for order and justice.

The Edenic account in the Hebrew scriptures is different from the Greek one in that it presumes the goodness of being naked and shoeless—until the great Fall, at which point artificial coverings were made to cover human nakedness in a gesture of divine mercy and judgment.

I could offer other examples of various ideas and arguments about the status and significance of the artificial in human life. Questions about the relationship between art and nature, or the artificial versus the real, are longstanding and taken up in many cultural traditions.

But what interests me here is Searle’s account, which is fascinatingly emblematic of our own age. Whereas these older accounts were concerned with the relationship between nature and art, Searle is concerned most crucially with what it takes to make something really or truly an artificial version of some non-artificial entity. What does it mean, Searle asks, to really “artificialize” something (my own word, if such a word exists)? “Artificial intelligence” as we now know it, argues Searle, may be artificial, but it is not really intelligence. So what would a truly artificial intelligence look like?

An artificial heart, by contrast, seems for Searle to be really an artificial heart. Why? Searle grounds this distinction in the difference between “a simulation or model” and “duplication.” He writes:

Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both real and artificial hearts are physical pumps, unlike the computer model or simulation.

There is a strange literalism at work in Searle’s approach—or, better, an artificial essentialism. Causal processes are for Searle the essence of “reality”; the heart’s essence, it seems, lies for him in a basic causal function: pumping. In order to create a real artificial heart, that basic causal function needs to be literally, materially duplicated, or re-produced. Then we have, in a paradoxical formula, a real artificial heart.

But, I must ask, can that artificial heart skip a beat in a moment of terror or wonder? Such heart-stopping moments, too, can be understood in cause-and-effect terms. Can an artificial heart grow stronger with exercise, or weaker with poor nutrition, both of which are also causal phenomena? Can an artificial heart, to be a bit hyperbolic, be eaten and subjected to the causal processes of digestion? If not, then clearly the artificial heart is not a “real artificial heart” in every respect, but only in one respect, albeit a very important one.

My point is that “duplication” is a poor measure of the “really artificial,” for it is in the very nature and substance of the “artificial” itself to have a relative and partial relationship to that which it is imitating, copying, or “duplicating.” The artificial heart duplicates some aspects of the natural heart, but not all of them. And the same can be said of computerized artificial intelligence: Of course it is true that, as Searle writes, computers altogether lack a “psychological reality,” and are nothing but well-designed, highly functional circuit systems. Nevertheless, in certain circumstances they behave outwardly in a manner that we recognize as “intelligent.” This resemblance may be far less complete than the duplication of the “causal mechanism” of the human heart, but it is a resemblance nevertheless.

If the philosopher’s quest is to find the point at which “artificial X” crosses the line to become a truly artificial X, I am afraid it may be a frustrating one, for sharp conceptual distinctions are not going to do it. Rather, we are better off thinking in terms of a continuum, on which “concepts” themselves might sit at one end, “models” somewhere in the middle, and “duplications” at the other end.

Searle, as I said, wants to reassure us: Computers are not going to take over the world, and we have not entered a new Aquarian age of the “infosphere.” He writes:

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.

I don't find this very reassuring, however. A greater danger than a future filled with “really artificial intelligence” is already squarely with us: We often behave as though computers believe, desire, and move. We ascribe human agency to them. And in this present reality, not an apocalyptic future, what is “really artificial” matters little. Rather, what we need is better reflection on the meaning of the artificial in our lives together.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America Since the Kennedy Assassination.