
The Pathologies of Precision Medicine

The Runaway Logic of Risk Reduction

Paul Scherz


To say that the American health-care system is in less-than-great working order may strike many as a colossal understatement. Yes, it functions well on the heroic-medicine front. It possesses incomparable medical technology and pharmaceuticals, and it can boast an impressive corps of well-educated and compassionate physicians, nurses, and other health-care workers. For all those strengths, however, American health care labors under mighty challenges, including an overstretched and underfunded public-health system, the increasing concentration of hospitals and even medical practices in the hands of for-profit businesses, and a patchwork health-insurance regime that leaves too many people inadequately covered and gives too much control to insurance-company executives and too little to health-care professionals.

Related to, and compounding, those and other problems is a consequential and arguably insidious paradigm shift—one that is moving medicine away from the guiding imperatives of patient care and the cure of illness to an overriding concern with the prediction of health risk, an approach dubbed, with no trace of modesty or irony, precision medicine. This shift is evident in the explosion of interest in technologies that not only help people manage diseases they currently have but also make it possible for people to monitor themselves for signs of diseases they might develop in the future.

Consider the case of the continuous glucose monitor (CGM), which for the past twenty years has enabled diabetics to track their blood sugar in real time without the annoyance of a finger stick. In March 2024, the Food and Drug Administration authorized the expanded use of these devices for people who are healthy. Many of the new users are concerned about their risk of developing diabetes, whether because of excess weight or a family history of the disease. Other users simply consider the monitor to be a tool for self-optimization, the kind of instrument promoted, for example, by the Quantified Self movement. After all, if you monitor your steps, blood pressure, and sleep, why not your blood sugar? If you are at risk for diabetes from episodes of high blood sugar, the monitor will alert you when your blood sugar spikes, prompting you to avoid the foods that elevate it. Though not ill, you take on the habits, routines, and technologies of the sick.

The expanded use of the CGM is just one example of a growing desire among individuals, governments, technology companies, and corporate health-care systems to predict and monitor health risk. The focus on risk can be seen in the popularity of a range of consumer products, including direct-to-consumer genetic testing provided by companies such as 23andMe. It is even more dramatically on display in the efforts of health-care practitioners to use AI-based techniques to combine information from genetic tests, medical charts, demographic studies, surveillance devices, and other data sets to predict the risks facing an individual so that those risks can be assessed, managed, and treated. Such a data-centric methodology is another trait of precision medicine.

What We Share with Nematodes

The turn to risk marks a significant change for both medicine and mainstream genetics research. When I started out in genetics in the 1990s, researchers sought to find the genes that caused a disease in order to cure it, either by finding an effective drug or, in a more futuristic mode, by directly editing the gene itself. While this kind of drug development has had some success, and a growing number of therapies can correct rare single-gene disorders, the broad curative dream of genetics has largely ended. Instead, more and more research is dedicated to identifying risks.

Ironically, it was the greatest triumph of human genetics research, the Human Genome Project, launched in 1990, that ended up crushing the dream of generalized genetic cures. Before the Genome Project, researchers estimated that humans had approximately 100,000 genes, a projection that led many geneticists to believe they could find genes that were major contributors to each common disease, whether a heart-disease gene, for example, or a diabetes gene. These few genes could then be targeted for cures. However, when the Genome Project concluded its work in 2003, researchers had found that we have only twenty thousand to twenty-five thousand genes, approximately the same number as a nematode, a microscopic roundworm. Because humans achieve far more complexity and capability with their genetic endowment than the humble worm does with its own, there could be no simple one-to-one correspondence between gene and disease. Instead, much depends on how genes are regulated and how they interact, which means there is a great deal of flexibility in how the genome specifies our traits. Once it became clear that our genetic makeup was too complex for simple cures, researchers understood that there was no heart-disease gene waiting to be discovered.

The Imperative of Risk

With the feasibility of genetic cures for common diseases receding from sight, the field had to find something to do with its wealth of data and expensive sequencing infrastructure. So it turned to predicting risk.[1] Researchers began running genome-wide association studies, which examine thousands of DNA sites across the genome, each of which has several variants that differ among people. Geneticists take thousands of people, measure traits such as heart disease, schizophrenia, or diabetes, and see whether each genetic variant is correlated with a disease and how much it increases or decreases the probability of a given trait. By adding up all the increased and decreased risks across the genome, researchers arrive at what is called a polygenic risk score, the statistical likelihood that a person will have a trait. So for heart disease, a person who has many variants that increase the likelihood of a heart attack will be placed in a high-risk group; others will have a low risk because of their specific gene variants. Such scores are now undergoing clinical trials to determine their relative accuracy in predicting a condition and the effects of their use on patient outcomes.
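
To make the arithmetic concrete, here is a minimal sketch, in Python, of how such a score is tallied. The variant IDs and effect weights below are invented for illustration; real studies sum over hundreds of thousands of variants, with weights estimated from large cohorts.

```python
# Minimal illustration of a polygenic risk score (PRS).
# The variant IDs and weights are hypothetical, not drawn from
# any actual genome-wide association study.

# Effect weights estimated by a GWAS (e.g., log odds ratios):
# positive values raise predicted risk, negative values lower it.
gwas_weights = {
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}

# One person's genotype: the number of risk-associated copies
# (0, 1, or 2) of each variant that the person carries.
genotype = {
    "rs0000001": 2,
    "rs0000002": 1,
    "rs0000003": 0,
}

def polygenic_risk_score(weights, counts):
    """Sum each variant's weight times the number of copies carried."""
    return sum(weights[v] * counts.get(v, 0) for v in weights)

print(f"PRS: {polygenic_risk_score(gwas_weights, genotype):.2f}")  # PRS: 0.19
```

Individuals are then ranked by this single number and binned, so that, say, the top few percent of scores are labeled "high risk" for the condition.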

The imperative of risk prediction has not only transformed genetics but also reshaped medicine more broadly. In the 1960s, medical researchers began studying physiological and lifestyle risk factors such as smoking, high blood pressure, or elevated cholesterol.[2] At the same time, pharmaceutical companies turned to developing drugs, such as statins or metformin, to manage those physiological risk factors. Such risk-reducing medications could be even more profitable than curative treatments because they generally require people to use them indefinitely to maintain reduced levels of risk. In the United States, a yearly medical appointment is now largely an assessment of an individual's risk factors.

Gene variants have become one more risk factor to incorporate into a medical risk analysis. Along with asking whether you smoke, drink, or have high blood pressure, medical practitioners can analyze your genetic profile. We now have the technical power to start integrating much of this information. This is exactly what governments are investing in through research projects such as the United Kingdom's Biobank project, the European Union's Beyond 1 Million Genomes project, and the NIH's $1.5 billion, one-million-person All of Us program. These projects seek to provide an individualized prediction of total risk, using data analytics to interpret information from polygenic risk scores, medical records, physiological markers, and, in some cases, nonmedical data such as social media activity.
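
A rough sketch can suggest what such integration involves: below, a polygenic risk score is folded into a simple logistic model alongside conventional risk factors. Every coefficient is hypothetical, chosen only to illustrate the form of the calculation; real risk calculators are fit on large cohort data and use many more inputs.

```python
import math

# Hypothetical sketch of an integrated risk calculator: a logistic
# model combining a polygenic risk score with clinical risk factors.
# All coefficients are invented for illustration.

def predicted_risk(prs, systolic_bp, smoker, age):
    """Map weighted risk factors to a probability of disease."""
    linear = (
        -7.0                          # baseline intercept (hypothetical)
        + 0.5 * prs                   # genetic contribution
        + 0.02 * systolic_bp          # blood pressure, mm Hg
        + 0.7 * (1 if smoker else 0)  # smoking status
        + 0.04 * age                  # age in years
    )
    return 1 / (1 + math.exp(-linear))  # logistic squash to a 0-1 probability

# Example: a 55-year-old nonsmoker with mildly elevated blood pressure.
risk = predicted_risk(prs=0.19, systolic_bp=135, smoker=False, age=55)
print(f"Predicted risk: {risk:.1%}")
```

The point of the sketch is only that each input is a number; the life circumstances behind those numbers never enter the model.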

No One Is Healthy

It would be foolish to reject the paradigm of risk reduction out of hand. There are clear cases in which it makes perfect sense, such as providing statins for people with hereditary high cholesterol or increasing breast-cancer screenings for women with a family history of cancer and a BRCA1 mutation. Yet risk-reduction frameworks also have significant problems.

Take the now-defunct Pioneer 100 Wellness project. Researchers sequenced the genomes of 108 participants and performed 218 laboratory tests every three months to analyze 643 metabolites and 262 proteins. They also mapped participants' microbiomes and tracked their activity through the use of a Fitbit, a popular device that monitors its users' heart rate and other physiological metrics.[3] These detailed exercises in risk detection seemingly revealed that no one was healthy: Fifty-two participants were determined to be prediabetic, ninety-five had low vitamin D, eighty-one had high mercury levels, one had a problem with iron metabolism, and on and on. All these findings led to further surveillance and interventions. Participants began taking risk-reducing medications, underwent prophylactic surgeries, and submitted to health coaching.

And what did all of this demonstrate? Perhaps most importantly, it showed that as more conditions are surveilled, more people are determined to be at risk. The unintended effect is that people who formerly considered themselves healthy now live under a cloud of anxiety, acutely aware of potential health problems that cannot always be addressed. People without disease or active health challenges find that their lives have become medicalized. Some people are willing to accept medicalization to gain the possibility of a few extra years, but the problem is that interventions themselves are not risk-free. Every risk-reducing medication, screening, or surgery introduces its own risk of side effects. As a result, the risk-based paradigm is now subject to medical controversy, with growing evidence of overdiagnosis and overtreatment. Recent randomized controlled trials have shown questionable efficacy for many supposedly risk-reducing interventions, with many finding little increase in overall survival. There is scant evidence that continuous glucose monitoring has benefits for healthy people, for example. It is also clear that individualized risk reduction is not the most effective way to reduce our society’s disease burden. As many have argued, we can better prevent chronic disease by addressing the social determinants that underlie more specific risks, such as poverty or bad working conditions. Scholars also raise questions about bias because of the unrepresentative nature of the populations on which precision-medicine research is done.

Here, though, I want to consider two problems with the risk paradigm: the personal burden of anxiety on patients and the professional costs of focusing on risk in the medical profession.[4] Social scientists have chronicled the psychological toll of knowing that you are at risk of cancer or Alzheimer's. Some breast-cancer advocates have even coined the term "previvor" for those at genetic risk. One's subjectivity is transformed from that of a healthy person into that of a patient-in-waiting.[5] Positive results from screenings such as mammograms or the prostate-specific antigen (PSA) test, which evaluates the likelihood of prostate cancer, generate further anxiety, as people spend weeks worrying over the results of a biopsy. People alter their behavior, sometimes in drastic and unsafe ways, to prevent or mitigate risk. For example, some users of continuous glucose monitors, fearful of incipient diabetes, become obsessed with optimizing their blood sugar. They will even forgo healthy foods like fruit because of their effect on blood sugar. The fretful surveillance and tinkering with risk-reducing medications can distract us from more fulfilling aspects of life.

Risk and Anxiety

These anxieties are hard to constrain because there is no rational limit to risk reduction, no objective level of acceptable risk.[6] A risk can always be lowered, so there is always an incentive to lower it, especially if you focus on a single risk factor. We see this in medical guidelines. What is considered healthy blood pressure is continually revised downward, and US guidelines for healthy cholesterol have likewise been lowered over the past few decades. This is not illogical: Reducing these levels does seem to reduce some risks of disease. Yet there is a danger, too. Any health goal has the potential to become unlimited, with no lower boundary set for these guidelines, particularly when the risks of treatment or concerns about other dangers are not considered. Many companies, for example, are pursuing therapies that will bring cholesterol levels beneath species-level norms.

It does not help that our society incentivizes lowering the threshold of acceptable risk. In our safety-obsessed culture, no one will be blamed for extra precautions. Patient-advocacy groups tend to argue for more screening, more genetic testing, more treatment of risk. Almost all clinical research trials on risk reduction are funded by pharmaceutical companies, whose bias, we might assume, would be toward increasing the consumption of medication.[7] One need not impute fraudulent or other sinister motives to pharmaceutical researchers (much of whose work is of high quality) to recognize that there are conflicts of interest here that shape how research is performed and evaluated. At the very least, our funding structure discourages certain kinds of studies—those, for instance, on the long-term consequences of being on a drug, on when it is acceptable to stop using a drug, and so on. Such studies would impede the momentum of the risk-reduction paradigm.

But we would do well to challenge the headlong adoption of predictive medicine. One obvious way to do so is to let patients determine acceptable levels of risk, allowing them to decide whether to have a mammogram or to take statins. The problem with this approach is that most people have a difficult time assessing risk and probability, as behavioral economists have demonstrated over the past half century.[8] Autonomy alone is not the solution, because patients must be educated on the risks and benefits of complex, probabilistic treatments. But even the best educational materials must be translated from the abstract level of the population into the concrete lived experience of the patient. For example, an older patient who lives alone may be more fearful of a fall brought on as a side effect of medication than of high blood pressure itself, making it appropriate to let those numbers rise. Such considerations require the counsel of a careful medical practitioner in conversation with the patient.

The Runaway Logic of Risk Reduction

Medicine is not merely a scientific discipline. It is an art—and it requires not only a command of the latest research but also the wisdom that comes from clinical experience and sensitivity to a patient's individual situation.[9] But the personal art of medicine does not always comport well with the goal of risk prediction.[10] The methods of risk mitigation stratify people into different risk groups, measuring only those things that can be quantified for precision-medicine readouts, thereby leaving out all the important contextual details of people's lives, including their life stories and their values. Resisting the runaway logic of risk reduction, the prudent practitioner draws on such details and her own knowledge to match the patient's medical situation with possible treatment aims.

Yet because a personalist, dialogical model of health care is unpredictable, it is not the sort of treatment encouraged by the executives of health-care systems, especially in the United States. Management, always motivated by the goal of erasing uncertainty through such methods as quantitative risk analysis, actively seeks to limit the practitioner's prudential judgment. Although some health-policy advocates argue that reducing risk will also reduce future disease, thus lowering costs for treatment, it is clear that reducing costs—not improving health—is a primary goal. US insurers and health-care systems are already starting to benchmark their reimbursement rates for physicians based on measurements of risk management.[11] To be sure, outcome metrics such as numbers of deaths, heart attacks, and hospital readmissions are also valuable data points, but it is difficult to compare the outcome metrics of different practitioners fairly, because outcomes depend, in large part, on the characteristics of the populations they serve. For example, a clinic in an inner-city area with a heavy burden of chronic disease, homelessness, and drug use will nearly always have worse outcomes than a clinic in a wealthy suburb, for reasons that extend far beyond the quality of care given in the clinic. Consequently, many quality metrics are called process metrics because they track the particular actions medical practitioners take, such as prescribing a statin for someone with high cholesterol or metformin for someone at higher risk of diabetes.[12] Many of the tracked actions involve reducing risk.

The management of population risk through metrics, then, will almost inevitably marginalize clinical judgment. The practitioner is no longer considering only whether this risk-reducing pharmaceutical is right for her patient given his total life and medical circumstances. She is also considering how the decision to prescribe (or especially not to prescribe) this medication will affect her quality metrics and thus reimbursement. This is the whole point of incentive systems—to shift judgment so the practitioner will consider the desired metrics.

Practitioners are not pleased with these systems. As Justin Mutter, a physician and director of the University of Virginia's Center for Health Humanities & Ethics, writes, "The denotation of 'clinical reasoning' in much of contemporary medical education literature is already reflective of a transition away from providers' judgment toward statistical and algorithmic computation."[13] These systems of management disrupt practitioners' judgment in ways that are ethically problematic, preventing a free response to the particular other by regimenting action according to guidelines. In short, they threaten to undermine the dignity of the medical worker.

Troublingly, these trends will accelerate as corporations integrate different kinds of information and tools into the clinical workflow, including AI-driven clinical decision support systems. Such systems will “suggest” tests and medications (frequently risk-reducing ones) to the practitioner. These suggestions, however, are not merely suggestions, because they come with the threat of decreased reimbursement if they are not followed. The promised automation of risk prediction and the integration of AI systems will effectively control practitioner behavior and thus pose even deeper problems for clinical judgment.

If the practitioner believes a recommended prescription is not called for, she must determine the grounds for rejecting that recommendation. In the case of a clinical guideline, a doctor can at least point to the evidence used in formulating the guideline and say that a particular patient does not fit the criteria of the body of evidence for some reason. AI systems that are being developed for precision medicine cannot be challenged in the same way. More often than not, their users do not know why the system made a particular prediction. Machine learning systems are designed to make predictions by finding patterns in data that humans cannot recognize. These programs are black boxes, neither explainable nor transparent, and that is partly what makes them so effective. Practitioners cannot push back against their determinations, because they do not understand them. Without the possibility of engaging in reasoned argumentation, there is no space for rational, prudential judgment. We are left with a battle of authorities: the practitioner versus the machine.

Loss of Agency, Dignity, and Prudence

It is not at all clear that the human will win this struggle. After all, these systems are implemented by managers because of a distrust of, and a desire to control, the human factor. But while practitioners certainly make many mistakes, so do guidelines and computer systems, especially once they emerge from the idealized worlds of clinical trials into the messy world of everyday clinical practice. Still, one of the major concerns in clinical ethics is whether practitioners will be able to reject these AI judgments without facing the threat of greater liability.[14] If the practitioner rejects the machine's recommendation and the patient suffers an adverse outcome, as some will in any case, the practitioner will be open to greater scrutiny. In the face of such threats from management and the courts, practitioners will become ever more likely to acquiesce to what the systems predict.

Increased submission to these systems will likely come without much human resistance. Studies of human-technology interaction describe the phenomenon of automation bias: The more users rely on automated systems, the more they trust them. So dulled are their own observations and intuitions that users follow the systems' directives even in the face of overwhelming counterevidence.[15] Practitioners eventually become deskilled, losing the ability to make their own judgments. Perhaps the most disturbing examples of deskilling and automation bias occur among pilots who become disoriented and crash planes because they fail to recognize a system error such as an incorrect altitude reading—or because they rely on an automated system and are unable to reassert control in an unusual situation. In medicine, similarly, providers might end up trusting risk prediction even in circumstances when doing so is unwise.

AI-based risk prediction poses many other challenges to the clinical judgment of practitioners. Their agency is taken away by bureaucratic initiatives that centralize processes, reducing practitioners to components of a machine, mere distributors of approved risk-reducing medication. There is little room for prudence when the practitioner is forced to see the patient more and more through the interpretive lens of physiological markers, genetic variants, and aggregate risk scores.

The boosters of precision medicine in the tech industry promise a future of zero-risk, low-cost health care delivered by an app. But by transforming the experience of health, this data-centric form of precision medicine actually eliminates any secure sense of well-being. As people monitor themselves, as they discover the frightening dangers of their genetic heritage, they discover risks that gradually turn into pre-diseases that need to be treated. At the same time, the bureaucratic structures of medicine prevent practitioners from serving as a check on the vain quest to control population health through technological monitoring and intervention. Rather than fixing an ailing health-care system, precision medicine imperils the human factor at a time when algorithmic machines, driven by the optimization-focused imperatives of cost reduction, efficiency, and profit, threaten to assume control over the most important decisions concerning our health and well-being.

This essay was adapted from Paul Scherz's most recent book, The Ethics of Precision Medicine: The Problems of Prevention in Healthcare (University of Notre Dame Press, 2024).

Notes

1. Hallam Stevens, Life Out of Sequence: A Data-Driven History of Bioinformatics (Chicago, IL: University of Chicago Press, 2013); Jenny Reardon, The Postgenomic Condition: Ethics, Justice and Knowledge After the Genome (Chicago, IL: University of Chicago Press, 2017).
2. Robert Aronowitz, Risky Medicine: Our Quest to Cure Fear and Uncertainty (Chicago, IL: University of Chicago Press, 2015); Joseph Dumit, Drugs for Life: How Pharmaceutical Companies Define Our Health (Durham, NC: Duke University Press, 2012); Jeremy A. Greene, Prescribing by Numbers: Drugs and the Definition of Disease (Baltimore, MD: Johns Hopkins University Press, 2008).
3. Ryan Cross, "'Scientific Wellness' Study Divides Researchers," Science 357, no. 6349 (2017): 345, https://www.science.org/doi/10.1126/science.357.6349.345; Nathan D. Price et al., "A Wellness Study of 108 Individuals Using Personal, Dense, Dynamic Data Clouds," Nature Biotechnology 35, no. 8 (August 2017): 747–56, https://doi.org/10.1038/nbt.3870.
4. For a fuller discussion, see my The Ethics of Precision Medicine: The Problems of Prevention in Healthcare (Notre Dame, IN: University of Notre Dame Press, 2024).
5. Stefan Timmermans and Mara Buchbinder, Saving Babies? The Consequences of Newborn Genetic Screening (Chicago, IL: University of Chicago Press, 2012).
6. François Ewald, L'Etat providence (Paris, France: Grasset, 1986), 424ff. See also Ulrich Beck, Risk Society: Towards a New Modernity (London, England: Sage Publications, 1992).
7. Jeremy A. Greene, Prescribing by Numbers; Joseph Dumit, Drugs for Life; Sharon R. Kaufman, Ordinary Medicine: Extraordinary Treatments, Longer Lives, and Where to Draw the Line (Durham, NC: Duke University Press, 2015).
8. Daniel Kahneman, Thinking, Fast and Slow (New York, NY: Farrar, Straus and Giroux, 2011); Gerd Gigerenzer, Rationality for Mortals: How People Cope with Uncertainty (New York, NY: Oxford University Press, 2008).
9. Edmund D. Pellegrino and David C. Thomasma, A Philosophical Basis of Medical Practice: Toward a Philosophy and Ethic of the Healing Professions (New York, NY: Oxford University Press, 1981); Eric J. Cassell, The Nature of Suffering and the Goals of Medicine, 2nd ed. (New York, NY: Oxford University Press, 2004); Annemarie Mol, The Logic of Care: Health and the Problem of Patient Choice (London, England: Routledge, 2008).
10. Eric Juengst et al., "From 'Personalized' to 'Precision' Medicine: The Ethical and Social Implications of Rhetorical Reform in Genomic Medicine," The Hastings Center Report 46, no. 5 (September 2016): 21–33, https://doi.org/10.1002/hast.614.
11. Donald M. Berwick, "Making Good on ACOs' Promise—The Final Rule for the Medicare Shared Savings Program," New England Journal of Medicine 365, no. 19 (November 10, 2011): 1753–56, https://doi.org/10.1056/NEJMp1111671.
12. This discussion of the problem of using process measures and their impingement on clinical judgment draws on the analysis found in Justin Mutter, "A New Stranger at the Bedside: Industrial Quality Management and the Erosion of Clinical Judgment in American Medicine," Social Research 86, no. 4 (2019): 931–54.
13. Ibid., 945.
14. W. Nicholson Price II, Sara Gerke, and I. Glenn Cohen, "Potential Liability for Physicians Using Artificial Intelligence," JAMA 322, no. 18 (November 12, 2019): 1765–66, https://doi.org/10.1001/jama.2019.15064.
15. Nicholas Carr, The Glass Cage: How Our Computers Are Changing Us (New York, NY: W.W. Norton & Co., 2014); Charles Perrow, Normal Accidents: Living with High-Risk Technologies (Princeton, NJ: Princeton University Press, 1999).