I don’t know exactly what Alan Jacobs wants. But I know what my keyboard wants. That difference—a difference in my knowledge of the intentionality of things—is reason for me to conclude that Alan Jacobs and my keyboard are two different kinds of things. There is, we’d say, an ontological difference between Alan Jacobs and my keyboard. There is a functional difference as well. And so many more differences. I acknowledge this. The world is not flat.
But Jacobs differentiates himself from my keyboard based on “wanting” itself. Alan Jacobs wants. Keyboards—mine or others—don’t “want.” This, for Jacobs, is the line between Alan Jacobs and keyboards. If we can regulate our language about things, he suggests, we can regulate things. I would rather just learn from our language, and from things, and go from there.
I think my differences with Jacobs run in three directions: one rhetorical, another ontological, and a third ethical. I will discuss each briefly here.
To start, I think that machines and other technologies are full of meaning and significance, and that they do in fact give meaning to our lives. Part of their meaningfulness is found in what I might call their “structure of intention,” or “intentionality.” This includes what design theorists call “affordances.” In the classic account, James Gibson described affordances as the latent “action possibilities” of things in relation to their environment. Design theorists tend to take a more straightforward approach: plates on doors afford pushing; C-shaped bars affixed to doors afford pulling; and knobs afford either action. Likewise, buttons on car dashboards afford pushing, whereas dials afford turning.
But intentionality as I am calling it here goes beyond the artifacts themselves to include the broader practices and discourses in which they are embedded. Indeed, the “intentionality” of a thing is likely to be stronger where those practices and discourses operate at the level of assumption rather than explicit indoctrination. So much of the meaningfulness of things is tacitly known and experienced; it becomes explicit only when the things themselves are taken away.
So there are things, their affordances, and the practices and discourses in which they are embedded. And here I think it is rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.
Rhetorically, every culture animates its things through language. I do not think this is mere embellishment. It entails a recognition that non-human things are profoundly meaningful to us, and that they can be independent actors as they are “activated” or “deactivated” in our lives. (Think of the frustration you feel when the plumbing goes awry. That frustration is about “meaning” in our lives as much as it is about using the bathroom.) To say technologies “want,” as Kevin Kelly does, is to acknowledge rhetorically how meaningful non-human things are to us; it is not to make a category mistake.
Ontologically, the issue hinges in part on whether we tie “wanting” to will, especially to the will of a single, intending human agent (hence the issue of voluntarism). If we tether wanting to will in a strong sense, we end up in messy philosophical terrain. What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of “wanting” that do not seem to be products of our will? What do we do with animals, especially pets? What do we do with the colloquial expression “The plant wants water”? Such questions are well beyond the scope of this response. I will just say that I am skeptical of attempts to tie wanting to will, because willfulness is only one kind of wanting.
Jacobs and I agree, I think, that the most pressing issue in saying technologies want is ethical. Jacobs thinks that in speaking of technologies as having agency, I am essentially surrendering agency to technical things. I disagree.
I think it is perfectly legitimate and indeed ethically good and right to speak of technologies as “wanting.” “To want” is not simply to exercise a will but rather more broadly to embody a structure of intention within a given context or set of contexts. Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.
It is good and right to call this “wanting” because “wanting” suggests that things, even machine things, have an active presence in our lives—they are intentional. They cannot be reduced to mere tools or instruments, let alone “a piece of plastic that when depressed activates an electrical current.” Moreover, this active presence cannot be neatly traced back to their design and, ultimately, to some intending human.
To say the trigger wants to be pulled is not to say only that the trigger “was made for” pulling. It is not even to say that the trigger “affords” pulling. It is to say that the trigger may be so culturally meaningful as to act upon us in powerful ways (as indeed we see with guns).
Far from leading, as Jacobs claims, to the “Borg Complex”—the belief that resistance to technology is futile—coming to grips with the profound and active power of things is precisely what allows us to recognize that resistance to technology is, as Jacobs rightly argues, a cultural project, not merely a personal one, let alone a primarily definitional one.
So rather than trying to clean up or correct our language with respect to things (technologies don’t want!), I think we ought to begin by paying closer attention to our language about things and asking what we may learn from it. Yes, we will learn of our idolatries, ideologies, idiocies, and lies. But we may also learn some uncomfortable truths. So I will say it again: of course technologies want!