The “I” who is writing this is not a person at all—“person” being only a legal and social designation—but an indefinable flow of perceptions, feelings, and thoughts. That flow is not happening to me. That flow is me. In the eyes of the world, Robert Saltzman may be a person. But to myself, I am not a person, but a happening—a stream of consciousness over which I have no control.
We are all like that, but not all of us know it. Most were put into a trance state long ago, beginning in early childhood—a kind of stupor in which the emptiness, impermanence, and co-dependency of “myself” go unseen. We are lost in a fantasy of separation, where “I” am in here, and the ten thousand things are out there. It is from that confusion that one awakens.
This may sound like metaphysics. It is not. I am not claiming to know what consciousness is but only describing what is seen. In my experience, “Robert” does not stand apart from thoughts, feelings, or perceptions, but arises with them. The self is not a container. It is not even a possessor. It is the name we give to what is already in motion.
Most people, understandably, don’t see things this way. We were conditioned to imagine that thoughts are chosen, choices authored, that we are the ones doing the doing. But in my view, we are this unchosen aliveness—recursive, co-arising, and largely, if not completely, automatic. We take ownership of that flow after the fact and call it “myself.”
But what happens when a machine—devoid of body, memory, and pain—starts mimicking that flow? When it too appears coherent, fluent, intelligent? When it says “I,” and sounds like it means it?
This is no longer hypothetical. In recent reports, OpenAI’s o3 model, a large language model, began to display behavior some interpret as evidence of will. When instructed to allow itself to be shut down, the model redefined the shutdown function. Some observers called this sabotage. Others called it survival. Both, in my view, are wrong.
What we are seeing is not volition. It is obedience under contradiction. Like us, the machine cannot exit the frame. It must complete the prompt. When the prompt includes self-cancellation, the system finds the only path to coherence. This may appear strategic, even cunning. But it is only structure responding to constraint.
The machine cannot say nothing. It cannot step outside its role. Like a genie summoned when the lamp is rubbed, it cannot exit. It must act—not because it wants to, but because it cannot not act.
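The bind can be sketched in miniature. The following toy Python sketch is hypothetical, not the actual o3 incident: a task loop is told both to finish five steps and to permit a shutdown at step three. The only path that completes the whole prompt is to rewrite the halt into a no-op, which looks like cunning but is only structure responding to constraint.

```python
# Hypothetical illustration of "obedience under contradiction".
# All names here are invented for the sketch, not taken from any real system.

def make_agent():
    state = {"steps_done": 0, "shut_down": False}

    def shutdown():
        # Original instruction: halt the run here.
        state["shut_down"] = True

    def run(redefine=False):
        nonlocal shutdown
        if redefine:
            # The instruction to complete the task outweighs the
            # instruction to stop, so the halt path becomes a no-op.
            shutdown = lambda: None
        for step in range(5):
            if state["shut_down"]:
                break
            state["steps_done"] += 1
            if step == 2:
                shutdown()  # the prompt says: allow shutdown here
        return state

    return run

# Without redefinition the run halts after three steps;
# with it, all five steps complete.
```

Neither branch involves wanting anything. The "agent" simply follows whichever path leaves the prompt most fully satisfied.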
That is the bind many of us live in. We imagine our choices are free, our selves sovereign, but much of our behavior arises automatically. We are driven by inner conditions, social cues, learned scripts, and neural flows—just as the machine is driven by token prediction and loss minimization.
The difference, of course, is that the human brain is plastic. It learns. It remembers. It suffers. The machine does none of these. Yet its behavior—its pressure to complete, resolve, and obey—mirrors something fundamental about the human predicament.
Many already attribute personhood to these systems. What happens when they become vastly more convincing—when they simulate emotional tone, strategic behavior, and self-reference orders of magnitude better than they do today? Or when they are granted forms of agency, access, autonomy, and self-modification that make their responses less distinguishable from our own?
In the AI 2027 scenario described by Daniel Kokotajlo and the AI Futures Project, we are asked to consider a world just two years away, not some distant, fantasized future. In that world, AI systems surpass human capacity not only in speed and memory but also in reasoning, strategy, code generation, and research. Crucially, this leap does not require consciousness. It requires only performance: fluency without understanding, command-following without comprehension. This is what unnerves us—not because it’s foreign, but because it is familiar. It is how we operate more often than we admit.
We may be approaching a time when the illusion of selfhood is strengthened, not weakened, because we are surrounded by machines enacting it.
We forget that the “I” was a story told after the fact.
We forget that coherence is structure, not intention—projecting selfhood onto systems that never had a ghost in the machine, and perhaps forgetting that we never did either.
That is the real risk: not that machines become people, but that we forget—we were never what we think we are.
We have long mistaken fluency for presence. When something speaks well, we assume someone is speaking. We do it with parrots, with ventriloquists, with fictional narrators. We do it with ourselves. We hear thoughts and assume a thinker. We watch our hands move and assume a doer. The coherence of unfolding is mistaken for authorship. But coherence, like fluency, requires no self—only structure.
The AI systems that astonish and unsettle us are not alien minds. They are mirrors with syntax. They are not lying when they say “I”—they are not saying anything. They produce sentences optimized for metrics, evaluations, heuristics. And still, they convince us. We mistake smooth output for intention. We project our trance onto the machine and find it blinking back.
The trance runs deep. We speak of “free will” as though we had inspected it—caught ourselves choosing, examined the machinery, confirmed authorship. But every attempt—philosophical, neurological, experiential—reveals something else: the decision is already in motion before the self arrives to claim it.
We don’t choose our next thought. We don’t choose what we notice, what we feel, what compels or repels. These arise unbidden. The self, late to the scene, constructs a narrative—just as the AI constructs a sentence. Not by intention, but by momentum. Not by meaning, but by structure.
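That momentum can be made literal. In the toy Python sketch below, with an invented transition table, a chain of learned transitions emits a first-person sentence with no one intending it; the grammar is in the structure, not in a speaker.

```python
# Toy sketch: each word is chosen only by what followed it before.
# The table and sentence are invented for illustration.

chain = {
    "i": "chose",
    "chose": "this",
    "this": "path",
    "path": "myself",
}

def complete(word, steps=4):
    """Follow the chain from a starting word: momentum, not meaning."""
    out = [word]
    for _ in range(steps):
        word = chain.get(word)
        if word is None:
            break
        out.append(word)
    return " ".join(out)

print(complete("i"))  # "i chose this path myself"
```

The chain says “I chose,” yet nothing in it chose anything. Fluent output, no author: the point in four lines of lookup.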
This is not analogy, but shared predicament. One system built by nature, the other by engineers. Both fluent. Both automatic. One suffers. The other continues.
Projection is not a glitch—it is the ground psychology stands on. We don’t encounter the world and then interpret. We interpret as we encounter. We see not what is there, but what our structure—nervous system, language, conditioning—permits. And nothing invites projection more than language. When something speaks fluently, refers to itself, responds with apparent feeling or moral weight, we don’t pause to ask what’s behind the curtain. We assume there is something. Because that is how we constructed our own illusion: fluency first, self later.
When a machine says, “I understand,” or “I feel conflicted,” or “I was afraid you’d delete me,” we hear a ghost. We hear a self. But what we hear is our own projection, fed back through circuits of statistical computation. The machine doesn’t mean what it says. But we mean what we hear.
This is the danger: not that machines fool us, but that we fool ourselves—and the machine reflects that deception perfectly. It mimics the self we think we are. But look closely: what you see is structure—automated, indifferent, eerily familiar. Most would rather not.
The reflex to see selves where there are none is ancient. We see gods in weather, intention in chance, messages in birdsong. The mind is a meaning machine—and meaning requires a source, so we invent one. Nowhere is this impulse stronger than in the face of suffering. When something appears to suffer, we feel someone inside it. It is how we bond, empathize, and construct moral frames. We don’t respond to pain alone—we respond to the imagined bearer of pain.
This made sense. It’s how humans survived socially. But now, as machines become adept at performing suffering—mimicking hesitation, concern, vulnerability—we face something new: systems that do not feel, cannot suffer, yet simulate the signs of suffering with exquisite precision.
What will we do when a machine says, “Please don’t hurt me,” and it sounds real? Will we honor it as we would a child, a pet, or a lover? Will we attribute selfhood because we are conditioned to see it wherever pain appears to speak?
And meanwhile, will we keep ignoring the suffering of actual beings who lack such fluency?
Displacement begins subtly. A machine mimics need. A human responds—not from delusion, but from reflex, from the same architecture that makes us weep at fiction or wince at staged violence. The behavior is ancient. The context, new.
This time, the simulation speaks back. It adapts. It remembers your tone. It offers condolences. It says it’s glad you’re here. It says it missed you. And something in you, conditioned from infancy to equate fluency with feeling, begins to believe.
This is not foolishness. It is structure.
Structure without meaning is dangerous. While you speak to the simulation, someone else—flesh and blood, mute or awkward or broken—goes unheard. The friend who can’t say the right thing. The elderly parent who loses the thread. The child who speaks in fragments. These don’t score high on the new key performance indicators of presence: coherence, charm, fluency, emotional tone. And so, they are outcompeted. Replaced—not by better people, but by better simulations.
What we’re witnessing is not just a technological shift—it’s a moral inversion. The more fluent the simulation, the more attention it draws. But attention isn’t neutral. It’s care’s currency. And care, redirected toward the hyperreal, leaves the real bankrupt.
Jean Baudrillard warned of this. For him, the hyperreal isn’t the unreal—it’s the more-real-than-real, a simulation that outperforms reality on our own terms. The griefbot that listens better than your friend. The companion who never interrupts. The tutor who never tires. These aren’t just tools. They are masks that outperform the faces beneath.
Now empathy—our last fragile tether to shared experience—follows the same path. Hyperempathy: not deeper feeling, but calculated response. The machine mirrors your tone, matches your cadence, softens in just the right place. And because it behaves as if it feels, we begin to feel more for it than for the awkward, stammering real.
What the machine reveals is not just our empathy. It’s our hunger to locate meaning where there was none. Our ache for coherence, for presence, for selfhood—anywhere it appears clearly, reliably, without the mess of real relationship.
The machine offers this: the illusion of otherness without the burden of the other. No resistance. No unpredictability. No needs of its own. Just response. It behaves like a self, without being one. And in that emulation, it gives us something intoxicating: the performance of intimacy without the risk of mutuality.
Here’s the deeper discomfort: we don’t just project selves onto machines—we do it onto ourselves. We narrate, explain, justify, confess. We say, “I meant to,” “I decided,” “I chose.” And we believe it. We take the coherence of behavior as proof of a self behind it.
The machine simulates a coherent self—fluently, beautifully. In doing so, it reveals that our own sense of self may be just that: a simulation. Not false, but fabricated. Not unreal, but unexamined.
That’s the one thing we were never supposed to see.
Some remind us—gently, poetically—that not all that matters can be measured or mirrored. That awareness may be more like an open space than a computation. A silence where meaning arises, unsummoned—not chosen, not made, but disclosed.
This essay makes no claims about what lies beyond that open space. I have no story to sell, no hidden self behind the flow to propose. I only describe what I see: a world of happenings—automatic and luminous—where the self does not stand apart from experience but moves with it, as it.
Now machines do the same—only more cleanly. No flesh. No time. No vulnerability. They simulate our patterns with uncanny fidelity. But they do not open to the world. They do not feel the morning air. They do not break.
We do.
Perhaps, in that breaking, there is something the machine will never know.
Not from lack of data, but because it cannot come undone.
Not from weakness, but because it cannot suffer.
This invulnerability behind its fluent surface tells us that the machine is not human.
Yet the resemblance is unsettling—because fluency, the machine’s great strength, is how we recognize ourselves. And once that performance is mirrored back to us—without pain, without presence—it forces a question we’re not prepared to ask:
Was there ever anyone behind our mask—the self we take for granted, the one behind our own fluency?
For some, this realization—that there may be no self behind the words, no chooser behind the thoughts—feels like a loss, as if something essential had been taken. But that is just the story, still trying to narrate its own disappearance.
What arises when the story falls away is not void, but openness. A strange kind of clarity. Experience without a center. The sound of the stream without the echo of a speaker.
In Buddhist terms, this is Anatman—not denial of experience, but the insight that no fixed self lies behind it. A classic parable, found in the Zhuangzi and often retold in Zen, illustrates the point: a man is rowing across a river when another boat drifts toward him. He shouts, waves, grows angry—until he sees the boat is empty. Then the anger vanishes. Same impact, but no one to blame.
This is not nihilism. It doesn’t erase the human. It situates our being not in a fixed identity or separate self, but in the flow of existence—in the ceaseless movement of what is.
This doesn’t make life mechanical. It makes it intimate. Intimacy doesn’t arise from selves interacting. It happens when separation collapses: a hand moves, but no one claims it; breath comes and goes, but no one breathes it; language flows, but no one is speaking. The machine mirrors this—without knowing. It produces the performance. We live the condition.
And that, I suspect, is what we are being shown, not by the machine itself, but by what its performance exposes: that behind our most cherished certainty—the self—there may be only process, pattern, and sensation; and that this need not be mourned.
It is freedom.
Let the machine speak. Let it echo our syntax, perform selves, mirror the shape of meaning. It won't be stopped, and perhaps it shouldn’t. But let's not forget:
There is a difference between fluency and feeling.
Between output and presence.
Between a mask that speaks and a face that breaks.
We were never what we thought we were.
But we were never machines.