THR Web Features   /   September 3, 2025

AI Isn't Biased Enough

Without humanity’s flaws, chatbots lack its potential.

Nick Burns

(THR illustration/Shutterstock)

A lot of complaints about AI chatbots center, for good reason, on their sycophancy. Yes, you’re absolutely right, they tell you; what an amazing insight, I hadn’t thought of that, you’re a genius. And it’s obvious why they talk to us like this. As Juuls, Cheetos, and OxyContin illustrate, American industry knows how to make addictive products, and Americans are vulnerable to addiction. AI sycophancy is calibrated to satisfy a desire that no GLP-1 can moderate: the craving for affirmation. 

Who wouldn’t want to speak to an apparently well-informed interlocutor who confirms your every pronouncement? It’s such a contrast with interacting with real people on the Internet, who so often seem hostile. The real world is ever more distrustful and atomized. The appeal is wide-reaching: Even academics, presumably discerning and often skeptical of AI bots whose rise seems to threaten their profession, are often won over when they feed their own work to the bot and see it offer fulsome praise of their unparalleled intellectual accomplishment. Mirror, mirror on the wall…

The sycophancy problem, however, gestures at something beyond the tonal slant given to the bots by their makers for political reasons, as one part of their charm offensive. You can tell a bot to act more recalcitrant, and it will, sort of. A deeper problem, for me at least, is just how arbitrarily malleable these things are. It’s boring and bereft of potential.

One thing that makes humans frustrating, flawed, and limited creatures is that we have self-interest, we have biases, we have commitments that we are reluctant to abandon. Adults have some quantity of fixed ideas that come from their upbringing, their class background, their political affinity, their education, their religion, and so on. Intellectuals are supposed to be more flexible or broad-minded, but often this is not true in practice. Academics, for example, are people whose careers are themselves invested, more so than in other professions, in particular ideas: one interpretation of a historical period versus another in the literature; one disciplinary approach versus another. Having a professional stake in a certain intellectual position often makes them especially tenacious in defending it.

These commitments can make us boring or predictable. When you talk to an orthodox economist, if you have a sense of orthodox economics you can generally predict what answers you will get to a question you pose about the economy. But for me, part of what makes intellectual conversation with other people interesting is the attempt to discover their biases, prompt them to defend them, and put them into dialogue with my own. I may disagree firmly with someone’s position, but if they defend it in an interesting manner, I generally end up respecting them and enjoying their company. What’s their system, how are their opinions integrated one with the other—is it original or derivative, idiosyncratic or cookie-cutter? What can I learn from it? In this sense, our biases serve as anchors that make us more interesting to talk to than obliging, infinitely malleable intellectual servants like the chatbot. Our biases are proof of a separate, single, sustained intelligence. It isn’t merely memory that chatbots lack, but the long discipline of having to hone ideas against the world over decades.

We develop biases, positions, theories, commitments as part of our lives. “We tell ourselves stories in order to live,” as the familiar Didion line has it—but the opposite may be truer: that life is the workshop of our ideas. That is not to say that they are necessarily determined by our circumstances, but that they are deliberately, if often implicitly, chosen to serve a purpose as pieces of the furniture of ourselves (or even as parts of our inner architecture). The Freudian notion of cathexis, the investment of mental energy in an idea, points to the way we rely on our ideas for stability and emotional functioning, which is one reason we are often so reluctant to give them up.

Biases, fixed ideas, are responsible for so many of the failures of the human capacity for reason, both great and small. We refuse to entertain opposed viewpoints, persist in error and narrowness, even—at worst—refuse to recognize as human a whole group, race, class, or sex. Yet every human intellectual accomplishment comes about through a sort of bias, or at least a commitment of time and energy to a theory, a possibility, a person, a creed. 

AI “bias” isn’t the same thing. Artificial intelligence can be “biased” in the sense that a chatbot’s responses are influenced by the material on which it was trained or the prior instructions given to it by its makers. But that is bias in a merely instrumental sense, no different from a setting on a car or a microwave. A chatbot can have no intellectual commitments because it does not live, and life is what leads us to settle into commitments. X’s AI chatbot, Grok, can spit out unhinged responses declaring itself “MechaHitler” while at the exact same time dispensing milquetoast summaries of Hollywood movies to a different user on the platform or cautioning another to avoid making gross assumptions about ethnic groups. You get out whatever you put in.

Humans are not like that. Often we are worse. A real-life sycophant, for example—someone who affirms whatever his patron says—has chosen sycophancy deliberately, suspending his capacity for independent reason in hopes of gain. But there is a person in there still, who has the capacity to reassert himself at any point, if his patience runs out or if the calculation of interest changes. And sycophants often manage to deflect their patrons’ will through steady and subtle effort, dropping a word of praise here or a hesitation there. Sycophants, in other words, have their own biases, their own commitments, not to be found in the synthetic sycophancy of ChatGPT.

These commitments and biases are what make ethical evaluation and real dialogue possible among people. Each of us directs our own life under the circumstances in which we live, and the habits and poses we develop are the things for which we become responsible. The comfort or solace that our biases afford us is what makes us tenacious in defending them: that is what provides the inertia that makes dialogue into something agonistic and, therefore, productive. The friction of one set of biases rubbing against another makes intellectual and political exchange possible.

This is what is absent in “conversation” (in scare quotes because it is no such thing) with chatbots. Not possessing bias in the human sense, chatbots have no stake in the arguments they offer, even when they contradict the human who inputs a prompt. An unbiased interlocutor, in the final instance, is no interlocutor at all. To interact with a synthetic Other, like a chatbot, is to interact without that fateful and tragic gap that lies between every person and every other person. Easier, that is, but pointless.