THR Web Features / September 7, 2023

Paging Dr. Bot

Why algorithms cannot replace intuition in medicine.

Ronald W. Dworkin


One evening during the mid-1980s, while working as a medical intern in a Manhattan hospital, I evaluated an elderly, upper-class Lebanese woman for a possible stroke. To gauge her mental status, I asked her who the president was. She said she didn’t know. I asked her who the previous president was. Again, though this time more dismissively, she said she didn’t know. I made a few notes in her chart, and must have looked concerned while doing so, because she declared, “Young man, I know what you’re thinking.” Then, holding herself proudly, she said, “I don’t know who the presidents are because I don’t find American politics all that interesting. But I do know the names of all the kings of France, as well as those of their mistresses.” I saw I had blundered in thinking her mental status compromised.

How did I know? I can only say it was my intuition. Given her socioeconomic status, education, and language ability, she would be expected to know who the president was. By every formal measurement of mental status, her not knowing suggested that something was wrong. Nevertheless, my intuition proved correct. Most doctors spend so many years immersed in mental status diagnoses that they acquire a simple, indivisible experience of what “normal” mental status means. This forms the basis of their intuition, which they can appeal to in hard cases. It is like having a perfect understanding of one’s hometown, culled from all the years spent living there.

But you cannot do the inverse operation. Even with a thousand descriptions of normal mental status, you cannot give yourself an intuition about what constitutes a healthy consciousness if you never had it in the first place. Descriptive sketches alone are not enough to know something from the inside. In the same vein, you can never truly know a town based on descriptions without ever having been there. Such sketches are not real parts; they are only notes of a total impression, and without a total impression to refer to, you may think you know a lot about something when, in fact, you don’t.

The basic defect of artificial intelligence follows from this principle. Using descriptive sketches of mental status drawn from reference books, journal articles, and tests, AI would have gotten my elderly Lebanese patient all wrong. She knew the French kings and their mistresses, something she had learned decades ago perusing her family’s library, but not the name of the recently elected American president. Since knowing the president’s name is standard on most tests of mental status, AI would have mistakenly judged her to have an “intact long-term memory with a loss of short-term memory,” and awarded her the ridiculous diagnosis of “altered mental status,” or even senile dementia. This is why AI cannot replace doctors or other professionals who rely to some degree on intuition. Doing so courts absurd outcomes. Indeed, AI is becoming known for its stupid mistakes as much as for its triumphs. Even researchers admit that AI lacks “common sense.”

Nor will this change. AI can never have intuition and therefore can never have common sense. This is because intuition is not an analytical experience. It is a metaphysical one, something that cannot be reached through objective studies of material reality. AI can analyze billions of descriptions and reconstitute them. Yet it risks stupid behavior because it can only analyze, and not intuit. Even with a trillion descriptions at its disposal, it cannot create and experience an intuition of the original that it never had in the first place.

But the flaw is not just in AI. It is also in us. AI draws on knowledge that we ourselves have created over the years as a substitute for intuition. If the nineteenth century was the era of the machine, the twentieth century was the era of “knowledge of human affairs”—vast accumulations of data, research studies, and journal articles, first in the soft sciences such as psychology, then in the social sciences such as sociology and public policy, and later in the humanities. Were it not for digitization, today’s libraries would be bursting at the seams.

Yet the whole enterprise is based on an error. Millions of careers rest on the false belief that by analyzing human phenomena from the outside, and by gathering more and more knowledge through research, we can get an accurate representation of reality to substitute for knowing these phenomena from the inside, through intuition. An example is the notion that we can “know” a person through the array of psychic states that professional psychology has named, described, and quantified. We hesitate to criticize all this because we threaten people’s livelihoods if we do. Yet AI’s silly mistakes and the assumption that such mistakes are temporary have thrust this problem out into the open.

The Defect in AI

I was once called to an anesthesia emergency already in progress. The patient had come in for routine surgery on her cervix. The operation had just finished, and the team was about to move her onto the stretcher when, awake and under spinal anesthesia, she suddenly lost consciousness. Two rounds of stimulants to raise her blood pressure had failed. The other anesthesiologist was maniacally squeezing air into her lungs with a bag and mask when I rushed in.

I glanced at the patient. I scanned the monitors. I saw the candy cane stirrups that had held the woman’s legs apart during surgery. All the while, my subconscious was ferreting out the solution. Suddenly I realized that the woman’s blood pressure had likely cratered when the nurses removed her legs from the stirrups and plunked them down on the operating table at the end of the case. Spinal anesthesia causes blood to pool in the legs, and suddenly lowering them can intensify the effect. Also, by mindlessly pushing air into the woman’s lungs at a rapid rate, the other anesthesiologist had raised the pressure inside her chest, preventing blood from returning to her heart. I quickly advised corrections, including telling the other anesthesiologist to calm down and slow his manual ventilation of the patient. The woman’s blood pressure returned.

Whether it is an anesthesia emergency one has stumbled upon or an essay that has to be written, starting is often the hardest part. Surrounded by data, but with no obvious way to tie everything together, you have to put yourself in the middle of things, at the heart of the subject; you seek an impulse, and then you can go. Yet if you turn back to look for the impulse you feel behind you and try to seize it, it is gone, because the impulse was never a thing. It was the direction of a movement. And while it seems simple, it is also indescribable.

That impulse is intuition, which AI lacks. Take my anesthesia emergency. I walked into the room and saw data, which I translated into concepts. For example, I called the patient’s pale face “low blood pressure.” I wanted to catch in ready-made concepts, as if in a net, something of the reality passing before my eyes, to understand it better.

All analysis works like this. By turning something into a concept, it tries to freeze reality to get a better handle on it. Even when dealing with a moving object, analysis turns movement into a series of still snapshots—motionless symbols of moving reality. The snapshots, or stopping points, are projections of our minds: the places where a moving body, which by definition never stops, would stop if it did. All concepts are suppositions in this manner—in other words, illusions.

AI works on the same principle. Machine learning starts by combing through information, looking for patterns and correlations, much as an anesthesiologist surveys a room filled with data looking for connections. AI freezes a piece of data in time, examines it, makes a probabilistic calculation, and then repeats the process ad infinitum.

This is how AI generates text when asked a question. Rather than understand the meaning of words the way people do, AI focuses on a word, studies how that word is used in other contexts, and then makes a probabilistic calculation of what word should follow. By doing so it stitches together whole paragraphs of impressive language. Yet some AI researchers admit that AI is just a “stochastic parrot,” generating convincing language that it does not really understand. AI does not know the meaning of words from the inside. This is why AI can go from seeming wise to suddenly making a silly mistake. An example is an AI-generated recipe for chocolate brownies, beautifully written, that calls for a cup of horseradish.
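To make this mechanism concrete, here is a minimal sketch, in Python, of the purely statistical next-word prediction described above. It is a toy bigram model, not any real chatbot; the tiny corpus, the next_word helper, and the other names are all invented for illustration. The point is that each word is chosen from frequency counts alone, with no representation of meaning:

    # Toy bigram language model (illustrative only): predicts each next
    # word purely from how often words followed one another in the text.
    import random
    from collections import Counter, defaultdict

    corpus = ("the patient was stable . the patient was pale . "
              "the pressure was low .").split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        # Sample the next word in proportion to its observed frequency;
        # the model has no idea what any of these words mean.
        counts = follows[word]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate text one probabilistic step at a time.
    word, output = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Run a few times, the sketch produces fluent-looking strings that mean nothing to the program generating them: the “stochastic parrot” in miniature. Modern systems replace the frequency table with a neural network trained on vastly more text, but the loop is the one the author describes: freeze, calculate a probability, emit, repeat.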

Contrast AI’s method with how I used both analysis and intuition in the operating room that day. Yes, I sifted through data seeking correlations with emergency algorithms. But I also experienced the simple and indivisible feeling that followed from identifying with the other anesthesiologist. When entering into his experience I did not think in terms of general psychological “concepts,” but only in terms of what belonged to him alone. Out of that indivisible feeling, his essence flowed to me all at once.

He was a young man recently hired out of residency. He had been so excited about starting at the hospital, still in those magical early months when the doctor’s soul, all fresh from the making, first discovers what medical practice is like. Now he was terrified. His desperate eyes pleaded with me, seeking commands that he might fulfill to rescue the situation.

I intuitively grasped his distressed thought processes while also recognizing the reason for my own relative calm: He was ultimately responsible for the patient, not I. I was just there to offer advice. The feeling that I could leave the room at any time lay just below the surface, a hidden source of comfort. It is like sitting in a crowded, stuffy theater and seeing the brightly illuminated exit sign above one of the doorways. At any time, you can get up from your seat, head for the sign, and leave. You won’t, unless the movie is really bad, but the knowledge that you could is itself comforting.

This was not an analysis of static concepts. This was mobile life yielding its inner meaning to me. Through a moment of intellectual sympathy, I placed myself inside the other anesthesiologist’s mind and assimilated what was unique in it. I also felt my own personality flowing through time. No symbols, concepts, or data. Just a continuous flux of emotions and memories, none of them beginning or ending, but all extending into each other. Somehow this activity directed my attention along a groove. “Young,” “panic-stricken,” a “rookie mistake”—the thoughts drove my mind and gave me a unity of direction that led me to the correct diagnosis.

This was intuition. AI could not have done this. Without intuition, it would probably have followed some treatment algorithm based on probabilistic calculations, perhaps recommending a third or fourth dose of stimulants—or even a dose of horseradish.

The Defect in Us

I asked an AI chat program, “Why do I feel unhappy today?” It asked me if I had “low self-esteem.” The program then said, “It is important to work on having good self-esteem,” and gave me a list of six things to try.

The concept of self-esteem is a product of analysis, and all analysis involves a comparison. When comparing people, it looks for some resemblance between them to find a property they share. That property is considered part of a person’s make-up and given a name—for example, “self-esteem.”

Some researchers driven by analysis believe that if they gather enough psychological parts, they will be able to reconstruct the psychology of a human being. No longer will they have to investigate what is essential and unique in each person through intuition. Instead, an intellectual representation of a human being, created through analysis, will allow everyone’s broken “parts” to be identified and fixed through general methods, without need for deeper investigation.

Analysis has generated thousands of empirical concepts that large numbers of people are believed to share. Examples include “rational decision making,” “wellness,” “whiteness,” and “addiction,” to name just a few. Much of our economy is built around these words in the form of services sold or models constructed, while millions of people are employed to perform research around these concepts or simply offer services in their name. Yet much of this is based on an illusion. The concepts may represent certain aspects of people, but they are not parts of people, as people’s minds cannot really be broken down into parts.

The philosopher Henri Bergson illustrated the futility of relying solely on the analytical method when he described breaking down a poem into letters, and then, without knowing the poem’s meaning, trying to reconstitute the poem through the letters alone. It can’t be done, he said, because the letters are not “parts” of the poem; they are merely symbolic elements used to express the poem’s meaning. Rather than fragments of meaning, the letters are merely fragments of symbols. Applying analysis to the poem’s letters without any intuition of the poem’s meaning yields a ridiculous outcome.

Reconstituting the totality of a person from the “parts” of his or her mind alone is equally nonsensical. What we think of as parts are just fragments of feelings, thoughts, or sensations that run through the mind and have been given names, but which cannot be assembled to estimate the meaning of any person’s life. To understand that, we need intuition.

Belief in these “parts” of the mind has led to serious public policy mistakes. For example, the part known as “self-esteem” has caused distortions in educational practice, such as pouring unconditional praise on children, cutting back on school discipline, and awarding trophies to everyone. Fortunately, in the 2000s, psychologists with an intuition that the self-esteem concept had been carried too far showed through analysis that self-esteem and school performance were not related. In another example, the part of the mind known as “rational,” as in “rational actor theory,” led reformers to assume that prudent and cautious behavior could be made the norm. Yet people continued to eat bad food, smoke, and take drugs. Again, intuition provided the necessary corrective, reminding reformers that all people have a touch of the saint, a touch of the devil, a touch of the citizen, and even a touch of the madman. Despite what pure analysis says, the “rational” is not a part of people’s minds that can be worked on in isolation and made to rule uppermost.

AI without the ballast of intuition represents the tyranny of pure analysis. Unleashed, and without intuition to give it a more profound understanding of humanity, AI stands ready to extend the power of reductive and often dangerously misleading concepts.