Since the collapse of Silicon Valley Bank (SVB) in early March, the term moral hazard has once again been on the lips of those men and women—otherwise amoral—passionately dedicated to the bottom line. In the days following the run on SVB, the Wall Street Journal, arguably the flagship publication of the amoralist, had its own hot take: If we bail out SVB now, it will only encourage other banks to engage in the same type of risky behavior in the future. We must avoid moral hazard, or the risks will harden into the new normal.
In the wake of the 2008 housing crash, the US government stepped in to bail out even those institutional players that were knowingly working both sides of the housing market. Goldman Sachs is a memorable case in point. It promoted collateralized debt obligations consisting of subprime mortgages to some clients while shorting the CDOs for itself and a handful of privileged clients. The risk of doing nothing was so great that even those inclined to let some of the investment “banks” take more of a hit were unwilling to let them fail just to avoid moral hazard. Goldman Sachs alone received a $10 billion investment from the US Treasury (which, it should be noted, the firm repaid in full the following year).
Perhaps because the collapse of a midsized bank (SVB had a little more than $200 billion in assets at its peak) didn’t rise to the level of turmoil that would result if, say, Bank of America, JPMorgan Chase, or Citigroup suffered a similar fate, questions of systemic risk weren’t quite as pressing. (As it turned out, bank size wasn’t the only factor.) But how moral is moral hazard, really?
For those uninitiated into economic mysteries, the term moral hazard can be misleading. As a concept, it appears to have originated in the insurance industry as early as 1662, around the time the science of probability emerged, allowing people to predict the likelihood of an adverse outcome. The great concern was with people who were taking out insurance precisely so that they could engage in some risky behavior. In this context—before the term itself had been coined—moral hazard named the worry that certain policies incentivized people to take more risk, a worry that extended to the very idea of insurance. It’s reminiscent of the old Seinfeld bit about helmets—instead of inducing humans to stop their head-cracking ways, we invented the helmet.
Skepticism about the idea of insuring activities (and arguably outcomes as well) has roots in medieval scholastic debates. The hesitation then focused on the morality of the insurer: Is the insurer capitalizing on something that is properly left to divine providence? Can a mere human being profit from the outcome of a sea journey, whether or not the shipment and crew arrive safely in port?
Those concerns were entangled with the Roman Catholic Church’s complicated relationship to usury, the charging of interest on loans. The main problem with usury was that it involved charging for both the thing and its use—“double charging,” as Thomas Aquinas argued. But Aquinas regarded insurance differently: Insurance was not, in fact, a kind of usury since it did not affect ownership. Aquinas’s reasoning provided the basis for Catholic teaching that declared insurance “licit.” According to the sixteenth-century Dominican friar Domingo de Soto, a founder of the theological movement known as the School of Salamanca, risk itself was an economic object, and insurance made commercial activity possible by sharing the risk between merchant and insurer. All parties benefited in the long run when we made sure that those who ferried our goods to us wouldn’t be ruined by a storm, robbery, or other adverse event. Insurance was a step toward socially responsible commerce, at least according to one reckoning.
Fast-forward to the late nineteenth century, when economics was beginning to establish its scientific bona fides. In his field-defining textbook Principles of Economics, first published in 1890, Alfred Marshall wrote that he did not think moral hazard warranted treatment. It was five years later, when John Haynes took up the term in the Quarterly Journal of Economics, that moral hazard was first theorized as an operative economic concept. For Haynes, it was a thoroughly moral or even criminal category. “Lack of moral character gives rise to a class of risks known by insurance men as moral hazards,” he announced. “The most familiar example of this class of risks is the danger of incendiary fires. Dishonest failures, bad debts, etc. would fall into this class, as well as all forms of danger from the criminal classes.” Haynes effectively introduced into the dismal science the moralism of the insurance company.
But Haynes’s more consequential move was to frame the problem as one of poorly designed or flawed incentives. In this new formulation, it was incentives—not moral weakness—that might enable behaviors that could result in an insurance claim or, to use the parlance of our age, a “bailout.” His formulation removed the problem from the cut and thrust of personal agency and located it in a system. The science of economics and the actuarial tables of the insurance industry were increasingly separate domains of expertise: Economics wasn’t interested in this person or that person but in the aggregate behavior of persons, whereas the insurer needed to know the likelihood of a particular person filing a claim.
The shift toward incentives proved consequential for how economics as a moral science developed. In the 1960s and ’70s, this tendency was refined with mathematical precision by the economist and social choice theorist Kenneth Arrow. His work was emblematic of a general trend in the discipline—it was in the 1970s that the University of Chicago dropped the history of economic thought as a course requirement for economics majors and graduate students.
It was also around this time that moral hazard had its breakout moment. No longer a concept for private insurance markets, it was now operational in an array of issues relevant to public policy, including unemployment, workers’ compensation, disability benefits, sharecropping, and family behavior. Subjected to extensive quantification, the concept seemed to slip out of the realm of morality and into that of objective fact.
Or did it? The people who needed to be reined in were typically “welfare queens” and other “takers”—well-known tropes built on negative and highly moralistic stereotypes of welfare recipients. And now it is the banks. As commentary from the Wall Street Journal (and countless financial bloggers) suggests, moralism is still alive and well. Perhaps we never really shifted out of the high moralism of the insurance industry, or even that of Aquinas and the other Scholastics.
The difference today is that the costs of enforcing the lesson of moral hazard—not bailing out SVB, First Republic Bank, or some such company considered too systemically important to fail—aren’t borne by those who made the decision to take the risk. If, in the case of SVB, the US Treasury and the Federal Deposit Insurance Corporation had declined to intervene out of a fear of encouraging other banks to take such risks, it wouldn’t have been the executives or even their shareholders who would have faced the most difficult real-world consequences. Rather, it would have been the employees of the companies that held their money at SVB—people who would have gone unpaid for weeks, even months in some cases. This didn’t stop a chorus of people on both the right and the left from cheering for the demise of SVB, though for very different reasons, of course.
While the pendulum of disapproval seems to have swung far away from the “welfare queen”—we are rightly more suspicious of big corporations, executives, and high-net-worth individuals taking outsized risks and then receiving a bailout from some government program—have we actually shifted the cost to those who make the decisions?
If the doctrine of moral hazard is supposed to warn us against encouraging risky behavior in the future—a reasonable and salutary objective—it is, at least in its current post-2008 manifestation, woefully inadequate at directing that discouragement toward those who actually make the consequential decisions about how much risk to take on at any moment. Perhaps we would have a better future if, at least on this issue, we stopped worrying so much about future behavior and instead focused on holding accountable those who made poor decisions yesterday and today.