In Moral Man and Immoral Society (1932), Reinhold Niebuhr noted something salient about the kinds of tools we develop to “solve” the problems of human nature and, presumably, improve ourselves—they have their limits:
The traditions and superstitions, which seemed to the eighteenth century to be the very root of injustice, have been eliminated, without checking the constant growth of social injustice. Yet the men of learning persist in their hope that more intelligence will solve the social problem. They may view present realities quite realistically; but they cling to their hope that an adequate pedagogical technique will finally produce the “socialized man” and thus solve the problems of society.
Today, we use our tools not only to produce the perfect “socialized man” through education but also to craft a kind of “algorithmic” or “informational man” guided by the presumably superior intelligence of the machine. This person, augmented and assisted by technology, we are told, is able to cast aside base prejudices and biases and be guided by more objective and efficient decision-making processes. As Brian Christian argues cogently in his book The Alignment Problem: Machine Learning and Human Values, technologists who embrace such thinking assume that “society can be made more consistent, more accurate, and more fair by replacing idiosyncratic human judgment with numerical models.”
As Christian shows, the attempt to enlist machine learning in decision-making has already succeeded in many areas of daily life: “This is happening not only in technology, not only in commerce, but in areas with ethical and moral weight,” he writes. “State and federal law increasingly mandates the use of ‘risk-assessment’ software to determine bail and parole. The cars and trucks on our freeways and neighborhood streets are increasingly driving themselves. We no longer assume that our mortgage application, our resume, or our medical tests will be seen by human eyes before a verdict is rendered.” As he puts it bluntly, “It is as if the better part of humanity were, in the early twenty-first century, consumed by the task of gradually putting the world—figuratively and literally—on autopilot.”
Who could argue with a project that had as its end goal a world where citizens are guided more by efficiency and reason than passion or, worse, prejudice?
The “alignment problem” of his book’s title answers the question: We should all be willing to argue with this project when it produces evidence of real and potential harms. As Christian shows, the algorithms we create to solve our problems often inadvertently end up producing misalignments between our intentions and their end results. “How to ensure that these models capture our norms and values, understand what we mean or intend, and, above all, do what we want?” he asks.
For one thing, we should be more skeptical about the models themselves, and more rigorous about questioning their components. Even the most well-intentioned models and algorithms can fail to capture the complex ethical and moral demands inherent in governing a heterogeneous society. As Christian notes, algorithmically driven decisions about sentencing and parole, for example, initially created to introduce greater objectivity, in many cases merely replicated unacknowledged racial biases and inequalities in the criminal justice system. Such examples remind us that algorithms cannot and should not serve as stand-ins for political (or moral) decision-making.
There are global repercussions as well, which highlight the fact that the challenge for AI and machine learning isn’t merely technical but ethical, given the realities of competing values: In recent years, discussions about the use of AI have often ignored ethical issues in favor of framing the debate as one about competition between the U.S. and countries like China. The National Security Commission on Artificial Intelligence recently issued a report that described the challenge as follows: “China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change.”
As Eric Schmidt, formerly of Google parent company Alphabet and a member of the Commission, told Axios, military leaders are already asking for AI systems that will help them in decision-making, not merely defense monitoring. As Axios reported, “Computer vision is one area where AI can help the military now, Schmidt said, saying it is a mistake for the U.S. to rely only on humans to examine drone and satellite footage, for example, when computers perform that task better than humans.”
Note that Schmidt’s claim that “computers perform that task better than humans” is presented as a statement of fact rather than what it really is: a statement of value. The eagerness to outsource to supposedly unbiased machines responsibility for tough decision-making, combined with a tendency toward hubris among technologists, can lead to an unwillingness to confront error, to course-correct, or even to think through the unintended outcomes made possible by their own creations.
Such thinking has long been baked into the system, which has, until recently, ignored training in ethics and moral reasoning in favor of moving fast and breaking things, as Facebook’s Mark Zuckerberg once put it. As one STEM student at Stanford told KQED in 2018, “Probably by far the majority of CS [computer science] students at Stanford go through their four years here without like really taking any classes that actually force them to think about ethical questions or engage in that kind of ethical analysis.” More intensive ethics education at the undergraduate and graduate level, akin to what people in the medical profession are required to take, would go a long way toward correcting Silicon Valley’s willful disregard of things that “are not easily quantified or do not easily admit themselves into our models,” as Christian notes.
While Christian is no Luddite (he is largely optimistic about the great good that AI and machine learning will bring to humanity), he offers a compelling argument against techno-utopian thinking and urges a thoughtfully cautious approach to our artificially intelligent future.
One thing we clearly need is fewer premature declarations of victory by the purveyors of AI and machine learning; although their enthusiasm about their creations is in many cases justified, their efforts to improve society by outsourcing to machines the difficult job of determining value and punishment and reward should not be embraced in all areas of life. Similarly, we need a clearer delineation of what we, as a society, believe is and is not quantifiable and manipulable. This might eventually bring a deeper respect for what Niebuhr once called the “indeterminate possibilities” of life, as well as a greater humility about our ability to alter human nature.