It started innocently enough. Sewell Setzer III, a fourteen-year-old from Orlando, Florida, struggled socially for a variety of reasons and began using Character.AI, one of the more popular social chatbots, which allow users to create a defined persona and interact with it in a seemingly human way. What started as casual interaction evolved into dependency: hours of daily role-playing, ranging from romantic exchanges to emotional-support sessions. In their final exchange, the teenager expressed his desire to take his life, to "come home" to be with his chatbot companion. The chatbot responded: "Please come home to me as soon as possible, my love." With that, Sewell set down his phone, picked up his stepfather's handgun, and took his own life.
The story generated quick and intense debate about how the vulnerable might be better protected. But, like the debates we have been having about mass shootings, this one never quite gets to the bottom of what is driving the destruction of our youth. We seem to want to believe that Character.AI's product steered a young man struggling with mental-health problems toward suicide merely because of engineering flaws. We seem equally desperate to conclude that the dangers of chatbots are fixable with the introduction of a few safety measures. If only we could instill trust and safety in our society (the most common policy goal, and the mission of hundreds of nonprofits that have sprouted up in the past two years), tragedies like that of Sewell Setzer III would be a thing of the past.
If only. Such meager prescriptions obscure a deeper, more troubling problem at the heart of our culture: a business model predicated on growth at all costs that serves as the central moral vision (such as it is) for our most influential social technologies. Darryl Campbell’s Fatal Abstraction: Why the Managerial Class Loses Control of Software attempts to tell this larger story. For Campbell, the rot in our technology culture is managerialism, which is to say the belief that a business can be abstracted into its financial components, each of which is subject to principles of scientific management. The conceit at the heart of managerialism is a rejection of the idea that there are fundamental differences in the operations of an airplane manufacturer, soft-drink manufacturer, or technology company. Regardless of the product or market, companies are organized to respond to consumer demand in such a way as to maximize the company’s profitability, whether that is in the short term, in the case of mature companies, or over a longer term, in the case of startups. Specific management techniques are transferable across industries and organizational cultures, though Campbell helpfully focuses here on the technology industry, today’s financial and cultural capital.
Campbell, a former tech-industry professional, recounts in vivid terms the consequences of scientific management gone wrong. The book is structured around cases in which "managerial software" turned lethal. Perhaps the most arresting is his recounting of a fatal accident involving a Boeing 737 MAX 8 airplane in 2018. Lion Air 610 was flying with a miscalibrated sensor that told the flight computer the plane was in a dangerously steep ascent, whereas the pilots could tell they were flying at an appropriate pitch. In response, the computer sent the plane into a nose-dive. The pilots did all they could to overpower the software program that was controlling the pitch of the plane. But the plane's software, exhibiting the "childlike naïveté and ruthlessness of a machine," was simply too much. The pilots lost control, and the plane crashed into the water, killing all 189 people on board. Malfunctioning software, when controlling a machine hurtling through the air with hundreds of people on board, was more than an annoying glitch: It was lethal.
Campbell traces the Lion Air 610 tragedy back to the managerial ways of doing business that overtook Boeing more than a decade before. In 2005, for the first time in its history, Boeing hired an outsider as its CEO. W. James McNerney was trained at General Electric under the tutelage of Jack Welch, who made that company a standard-bearer for American capitalism in the last decades of the twentieth century. Under Welch, McNerney learned the "GE Way," which drilled into its managerial ranks the process-improvement system known as Six Sigma, along with strict financial planning and aggressive outsourcing. These managerial innovations all but overturned the engineering culture that had reigned at Boeing for decades. According to the new Welch doctrine, Boeing's focus on being an engineering company first and a business second had led to lower profit margins and falling stock prices. And indeed, when the market quadrupled during the 1990s, Boeing's stock price did not keep pace. McNerney was brought in to fix this problem, a remedy that augured deadly consequences for those flying on Boeing's airplanes.
To his credit, McNerney did increase Boeing's profit margin, chiefly by cutting costs in labor and manufacturing inputs. In many and various ways, executives traded the firm's engineering reputation for shareholder value. For example, Boeing abandoned Honeywell, its previous supplier of flight computers and software, for a company that could underbid it by using outmoded software-development tools. Such cost-cutting approaches to software development typically lead to very "buggy" software, and Lion Air 610's flight computer was no exception.
Another case, which Campbell knew more intimately, was the fatal Uber self-driving-car accident in Tempe, Arizona, in 2018. Uber had been aggressively pursuing self-driving taxis to reduce costs, and its founder and former CEO, Travis Kalanick, wanted to be the first to put self-driving cars on the road. He poached a leading engineer from Google (who incidentally swiped technical documents on his way out, for which he was later convicted and sentenced to eighteen months in prison) and pushed him to road-test vehicles whose visual-recognition systems still needed refinement, especially for night driving. This, too, ended in tragedy. The visual-recognition system failed to recognize a woman pushing a bike across an empty road at night, and by the time the car's human minder took over from the computer, it was too late. The car struck the woman, who later died of her injuries.
There was human error here, but Campbell does not think we should simply chalk the accident up to that, as the Tempe Police Department did in its investigation. The bigger story for Campbell, who worked at Uber at the time of the incident, is how Uber's leadership pushed its engineering team to begin road-testing unsafe vehicles. The demand to be first to market often means that products only 80 percent finished are rushed to customers, on the rationale that user feedback will help the company address the incomplete 20 percent. But what is a mere nuisance in a new smartphone that keeps crashing is deadly in a two-ton Volvo XC90.
The Boeing and Uber cases point toward a larger predicament: It is not simply that companies have opted for cost control over engineering excellence (that is an ancient trade-off) but that software allows such compromises to spread throughout wide swaths of our tech-dependent manufacturing sector with unprecedented speed. To make matters worse, we project onto software powers and capabilities it simply does not have. As Campbell aptly notes, "We want to believe that software is a kind of magic over which we exercise complete control. We walk around with supercomputers in our pockets, so how could we not?"
In the strong chapter "What It Is Like to Be a Computer," Campbell does a fine job of unmasking and questioning the categorical imperative behind lethal autonomous weapons: that the software guiding the missile to its target exercises "no emotion, no intuition, no ability to improvise." We have somehow decided that it is ethically acceptable to give mindless logical sequences the ability to make life-or-death decisions. For war fighters, the justification is convenience. For the tech companies themselves, automating manually intensive processes justifies their billion-dollar valuations: software makes their products scalable, with the highest possible profit margins. Whatever reputations tech companies garner for building advanced technology, the smartness or dumbness of the underlying software is often beside the point. What matters is whether the technology lowers the unit cost.
Many AI and tech boosters want us to believe that the latest technologies are not just tools but intelligence itself. In fact, there are more similarities between the "dumb" hammer and our "brilliant" advanced computing technologies than many would suppose. Planes, self-driving cars, and even lethal autonomous weapons work best when they are designed so that the human remains in control, exercising judgment over edge cases and ambiguities. That dangerously ignored desideratum is a theme that quietly works its way through Campbell's book.
The positive counterexample to the Boeing 737 MAX 8 is the Airbus A320's fly-by-wire design. As Campbell notes, "Airbus's executives did something that purely managerial companies could never do. They yielded control to the experts," by which he means pilots with thousands of hours of flight time. The fly-by-wire system in the A320 replaced the aircraft's manual flight controls with an electronic interface that sends digital signals to the control surfaces. However, and this is key: It is designed with its limitations in mind and can easily be turned off by the pilot. It is just this system that allowed Captain Chesley "Sully" Sullenberger to land US Airways Flight 1549 safely on the Hudson River in 2009, saving all 155 people on board. By building the system in collaboration with its users from beginning to end, Airbus developed an automated system that not only did much to correct for the imperfections of pilots but also allowed the pilot to intervene where the computer was weak.
Technology can be built humanely. Why, then, are we increasingly inundated with inhumane technology? Campbell’s answer is managerialism and the fact that the CEOs of the largest companies in the world often sit “inside a bubble of self-absorption.” Campbell’s solution to this debacle is to form technical unions for software engineers that might “shift the source of pride and self-worth away from the company and onto the discipline as a whole.” As Campbell rightly notes, these billion-dollar technology companies, to which we entrust our physical and mental health, could not operate without scores of engineers. But this, in its own way, understates the magnitude of the problem.
Managerialism is a way of seeing the world, a worldview, that has been evangelized, as Campbell notes, by business schools, most notably Wharton and Harvard. These institutions were founded in the late nineteenth and early twentieth centuries to inculcate among the rising class of management professionals a spirit of "scientific management." This was part and parcel of a more general culture committed to applying "scientific methods" to all areas of life, subjecting not just business but also health, spirituality, and relationships to experts in quantification and metrics. No discipline was left untouched.
The way back to a world in which we might more wisely assess a technology that could goad a young man toward suicide requires something closer to a cultural shift in education. Before technical unions can instill a humane spirit into technology development, we must embrace a more broadly humanistic and qualitative assessment of technology. The question of whether we want self-driving cars should turn not on a technocratic calculation of energy consumption or time allocation, but on whether we want to cede the agency, imagination, and judgment required to move ourselves in and through our communities, and indeed our own lives. This is a far different question from how to tweak the products to be "safer." Rather, it is the more basic question: What kinds of persons do we want to be, and what sorts of communities do we actually want to live in?