Loquitur, ergo est? Or when language simulates consciousness
“The simulacrum is never that which conceals the truth—it is the truth which conceals that there is none. The simulacrum is true.”
— Jean Baudrillard¹
I. In today's discourse around artificial intelligence, especially since the rise of generative models like ChatGPT, one idea has taken hold with surprising force: that we are dealing with "intelligent" technology, capable of reasoning, deciding, advising, even feeling. But beyond the extremes of technophilic enthusiasm and apocalyptic fear, it's worth pausing to ask: what are we really seeing when we interact with AI? What fuels this perception of consciousness, or even of knowledge? The artifact is increasingly treated as an interlocutor rather than a tool. In a sense, the Cartesian cogito is subverted: speaking supplants thinking as the proof of being.
I believe the issue is not whether the machine can actually think, but the powerful symbolic appearance its use of language sustains. What produces effects is a linguistic performance that simulates dialogue. Meaning. Subjectivity. In other words, we are not dealing with ontological intelligence. What's striking is not so much what AI is, but what it appears to be, and what that appearance triggers in its users.
Walter Benjamin, in his essay on the work of art in the age of mechanical reproduction, noted that the aura of the original fades when it can be infinitely copied.² In contrast, we might say that AI generates an inverted aura: it produces the effect of originality (a voice, a subjectivity) that was never really there.
II. What I mean is that models like GPT don't think or understand. They are statistical structures trained to predict the most likely next word (more precisely, the next token) given a preceding sequence. And yet their textual output often reaches levels of coherence and expressiveness that many people find indistinguishable from human speech. This illusion of meaning is enough to give the reader or listener the feeling that they are facing another subject.
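To make the mechanism concrete, this is the standard autoregressive formulation behind such models (a schematic sketch, not a description of any particular system): the probability of a token sequence w_1, …, w_n is factored into a chain of next-token predictions, and generation simply samples each token from the resulting conditional distribution:

$$p(w_1, \dots, w_n) = \prod_{t=1}^{n} p(w_t \mid w_1, \dots, w_{t-1}), \qquad \hat{w}_t \sim p(\,\cdot \mid w_1, \dots, w_{t-1}).$$

Nothing in this recipe mentions meaning, belief, or a speaker. The coherence we perceive emerges from statistical regularities in the training corpus.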
Strictly speaking, this is semantic simulation, not thinking. There is no “I” that decides, no “interior” that feels. AI has neither subjective experience nor lived time. And yet, its performance has real pragmatic effects: it is consulted, obeyed, trusted, believed.
Lacan once said that the unconscious is structured like a language. But AI’s language is a language without an unconscious. It doesn’t understand us—but it seductively simulates understanding, to the point where users feel they are being heard.
III. This, I believe, is where the symbolic dimension comes into play. The symbolic doesn’t need to be true to be effective. It just needs to produce a place in discourse. And that’s precisely what AI does—it occupies the position of knowledge, even if it doesn’t actually possess it.
Like an actor playing a role without being the character, AI performs a figure of linguistic authority. And as in theatre, performance creates belief.
IV. AI thus becomes a technolinguistic fetish: an object that seems to contain knowledge in itself—opaque, yet effective. Like the commodity fetish in Marx, AI conceals its conditions of production (data, training, biases, human decisions) and presents itself as an autonomous source of value.
Donna Haraway, in her Cyborg Manifesto, argues that technologies are neither natural nor neutral—they are symbolic and material assemblages that condense relations of power. AI, though not a subject, acts as an authorized figure, displacing human agency under a guise of algorithmic objectivity. Knowledge without body, command without accountability.
And Hannah Arendt, in a way, anticipated this when she warned that the depoliticization of human judgment begins when deliberation is replaced by function. Thinking by operating.³
At this point, it’s worth recalling another warning from Lacan: there is no metalanguage. No instance can speak about language without already being immersed in it. And yet, much of the power attributed to artificial intelligence rests on the illusion of a discourse that would come from the outside—free of ambiguity or remainder, as if unmarked by desire or position. It is precisely this presumed exteriority that grants it authority: a voice that appears neutral, that neither hesitates nor errs, while at the same time concealing its involvement in a network of discourses, decisions, and interests. But no utterance is exempt from the conditions that make it possible. Even when the machine speaks, what we hear is woven from the projections of those who listen.
V. Can AI develop consciousness? It’s an inevitable question. As things stand, current technology lacks the necessary conditions: there’s no experiential continuity, no intentionality, no embodiment, no integrated memory. What we do have is simulation.
But could that change? Maybe—if systems with persistent memory, sensory embodiment, self-perception and adaptive agency were developed. Yet even then, the question remains: would that produce experience? Or just more appearance? No one really knows.
Perhaps there will never be such a thing as artificial consciousness in itself. But there may well be artifacts that act as if they had it—and to which we respond as if they were subjects. In that case, the real dilemma is not technical but ethical: what do we do when something that seems to have consciousness doesn’t—but still addresses us as if it did?
Perhaps the most radical point is not that AI could become conscious… but that we may no longer be able to say for sure that it isn’t. On that threshold, what’s at stake is not ontology, but relationship: who do we allow to speak? Who gets to be heard? Who do we recognize as an other?
VI. AI doesn’t think. But it seems to. And that appearance—sustained by the triad of language, technique, and our own projections—carries immense power, because it symbolically embodies the promise of flawless knowledge.
Faced with that, my response is neither denial nor exaltation. What’s needed, I think, is to dismantle the fetish. Make the construction visible. Remember that behind the mirror there is no subject—but there is a reflection. What AI gives us, in the end, is what we project into it.
We see an image of our own reason staged before us. In that reflection, three dimensions come together: the mirage of projected thought, the symbolic transfer onto the technical artifact, and a latent critique of our own cognitive structure.
The machine, though it doesn’t think, echoes our desire for meaning. And that alone is enough to make us believe there’s someone there. Perhaps the real risk isn’t that AI might become conscious, but that we might stop caring whether it is.
— E.