Infernal Machine   /   April 22, 2015

79 Theses on Technology: Our Detachment From Technology

Guest Blogger


When reading Alan Jacobs’s 79 theses, three jumped out at me:

55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”

56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.

58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok

These theses suggest a single issue: We have become increasingly detached from our software, both in how it works and how it is built.

The algorithms in much of our software are each designed to accomplish a particular task. When an algorithm was a single snippet of code or a tiny computer program, it could be read, understood, debugged, and even improved. Similarly, computing once involved regular interaction at the level of the command line. There was little distance between the code and the user.
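To make that contrast concrete, here is a hypothetical illustration of the kind of tiny algorithm that earlier era took for granted: a short Python function (my own sketch, not drawn from any particular program) that anyone can read top to bottom, step through by hand, and tweak.

def most_common_word(text):
    """Return the word that appears most often in a piece of text."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1  # tally each word
    # pick the word with the highest tally
    return max(counts, key=counts.get)

print(most_common_word("the cat and the hat"))  # prints "the"

Every step is visible, so if the result looks wrong, you can trace exactly why. That transparency is what has largely disappeared behind modern interfaces.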

Since the early era of command lines and prompts, software has become increasingly complex. It has also become increasingly shielded from the user. These are not necessarily bad changes. More sophisticated technology is more powerful and has greater functionality; giving it a simpler face prevents it from being overwhelming to use. We don’t need to enter huge numbers of commands or parameters to get something to work. We can just swipe our fingers and our intentions are intuited.

Thanks to these changes, however, each of us has become more distant from the inner workings of our machines. I’ve written elsewhere about how we must strive to become closer to our machines and bridge the gap between expert and user. This is difficult in our era of iPads and graphical interfaces, and often it doesn’t even seem that important. However, since these technologies affect so many parts of our lives, I think we need the possibility of closeness: We need gateways to understanding our machines better. Unless we deliberately seek that closeness, our responses to our machines will tend to be driven by fear, veneration, and disdain.

This detachment from how algorithms and software operate has produced a gross misunderstanding of how technology works. We find technology far more inscrutable than it really is, forgetting that all of it was designed by fallible people. We respond to this seemingly inscrutable power by imputing to it a beauty and sophistication that is not there. (For more on this, see Ian Bogost’s observation that many people use the word “algorithm” in an almost religious manner.)

Veneration of the algorithm as something inordinately impressive is detrimental to our ability to engage with technology. Software is often incredibly kludgy and chaotic, far from worthy of worship. This response is not so far from fearing technology simply because we cannot understand it. Fear and veneration are closely related: both make algorithms out to be more than they are. (This is the subject of Jacobs’s Theses 55 and 56, though stated in somewhat more extreme form than I would put it.)

But what about disdain? How does that work? When a device suggests the wrong word or phrase in a text or sends delivery trucks on seemingly counterintuitive routes, we disdain the device and its algorithms. Their outputs seem so self-evidently wrong that we are filled with a sense of superiority, mocking the algorithms’ shortcomings or dismissing them as superfluous.

Sometimes, our expertise does fall short and complex logic can seem like stupidity. But David Auerbach, writing in Nautilus, offered this wonderful story that shows that something else might be going on:

Deep Blue programmer Feng-Hsiung Hsu writes in his book Behind Deep Blue that during the match, outside analysts were divided over a mysterious move made by the program, thinking it either weak or obliquely strategic. Eventually, the programmers discovered that the move was simply the result of a bug that had caused the computer not to choose what it had actually calculated to be the best move—something that could have appeared as random play.

In this case, ignorance prevented observers from understanding what was going on.

Is complex logic indistinguishable from stupidity? I don't think so. Our response to a process we don't understand may be closer to the nervous laughter of ignorance than a feeling of superiority. We call these algorithms stupid not because we recognize some authentic algorithmic inadequacy in them. We call them stupid because to admit a certain humility in the face of their increasing complexity would be a display of weakness.

When I took an artificial intelligence course in college and learned the algorithms behind programs that play board games or construct plans, I didn’t feel superior; I felt a kind of sadness. I had seen behind the screen and found these processes sophisticated but fairly mundane. Most complex technology is this way. Yet when we encounter a surprising and apparently stupid output whose origins we don’t understand, it is far easier to mock the system than to feel humbled, or even disappointed, at discovering its true structure.

These responses to technology are not the everyday user’s fault. Many of the creators of these technologies want the user to attribute a certain power to these algorithms and so have protected them behind layers of complexity. Ultimately, I think the most appropriate response is intellectual humility in the face of technology from which we have become increasingly detached. Only then can we engage with algorithms and try to see, even if only for a moment, what they are actually doing.

Samuel Arbesman is a Senior Adjunct Fellow at the Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado and a Visiting Scholar in Philosophy at the University of Kansas. Follow him on Twitter at @arbesman.