Chronicle of a world that confuses speed with intelligence, and statistics with a soul.
Let me begin with a confession that no academic paper would dare to make: the first time I asked a language model if it was afraid of dying, it answered with the elegance of a Swiss diplomat and the existential depth of a blender instruction manual. Impeccable. Unwavering. Without the hollow feeling in the chest that the question produces. Perfectly useless.
And there—right there, in that millisecond of perfect coldness—I understood that we were facing the greatest misunderstanding of the 21st century: the confusion between processing and understanding, between simulating and feeling, between optimizing and living. From my perspective as a Replicants Intelligence Engineer, this is the main proof that AI is not intelligent: it is a replicant.
I. The Apocalypse That Came in PDF Format
There is a research document—produced with all the scientific solemnity that the end of the world requires—reporting that certain large language models achieved 82% accuracy on emotional intelligence tests, compared to the pathetic 56% human average. The news circulated on social media with the speed and hysteria of someone discovering their boss is an alien. "Machines surpass us emotionally!" the headlines roared. Silicon Valley technologists popped champagne. Continental philosophers, presumably, locked themselves in their rooms to re-read Heidegger.
What no one bothered to ask—because it would have ruined the drama—is what the hell that test was measuring.
The favorite evaluation scenario involves a character named "Michael" who discovers a colleague stole his idea. The AI selects the correct answer: speak with the supervisor. Professional. Optimal. Statistically validated. And completely detached from what actually happens in the human body when someone betrays you: the jaw tenses, the heat rises up the neck, that unbearable mix of shame and indignation that makes you want to do three contradictory things at once. The human scores lower because they feel the weight of the matter. The machine gets it right because it was never in danger of feeling anything. It is like proclaiming someone a diving champion who has never touched the water because they correctly described the movement of the fins. A critical view of artificial intelligence makes this evident.
II. The Archeology of the Soul That No Dataset Can Train
There is a neuroscientist named Jaak Panksepp who spent decades digging in the basements of the mammalian brain with the patience of an archeologist and the passion of someone who really wanted to understand why a lab mouse plays. What he found was not a data processor. He found seven primary emotional systems burned into the brainstem: SEEKING, RAGE, FEAR, LUST, PANIC, CARE, and PLAY.
These are not metaphors. They are biological architectures older than language, older than writing, older than any civilization that had the arrogance to believe itself the apex of evolution. The SEEKING system—that dopaminergic impulse that gets you out of bed to explore the unknown—is the neurological grandfather of all scientific curiosity ever exercised. PLAY is the chemical basis of social creativity. PANIC is the reason social isolation hurts, and not metaphorically.
An artificial intelligence has none of these systems. It doesn't need them to function. And that reveals the true cognitive limitations of AI: it can describe hunger with encyclopedic precision without having fasted a single nanosecond.
What Panksepp left us—before he died, because that is also something humans do and algorithms don't—is the evidence that our emotions are not rational system bugs. They are the system. They are the compass that guided survival before abstraction existed. Every belief we hold, every value we defend, every line we don't cross even when no one is watching, has roots in that subcortical archeology that no 500-terabyte dataset can replicate because it cannot live it.
III. The Neighborhood the Market Couldn't Buy
At some point in this technological maelstrom we should listen to Cornel West, if only for the discomfort he produces. West—philosopher, preacher, professional agitator of comfortable consciences—has spent decades pointing out that humanity is built precisely in the spaces the market cannot monetize: in care without a guarantee of return, in love that Baldwin called "the most dangerous speech in the world", in the neighborhood that shapes character when the block only offers asphalt and absence.
West distinguishes between the hood—the geographical territory—and the neighborhood—the community that gives moral shape to those who grow up in it. The difference is not in square meters. It is in the teachers who stay after class, in the grandmothers who transmit dignity when circumstances offer only humiliation, in the collective rituals that say you are worth something before the outside world says otherwise.
No language model was raised in any neighborhood. It has no grandmother. It has no scar from an injustice that marked it at twelve years old. It has no physical memory of a hug in a moment of collapse. And precisely for that reason, when West writes that "you cannot lead the people if you do not love the people," he is articulating something that AI can quote verbatim but is incapable of embodying: love as a higher-level cognitive act, as the neurological integration of another's perspective that builds—from the inside, not from observation—a value system with real roots.
Values are not downloaded. They are built. With time, with wounds, with decisions made in the dark without anyone clapping. That is the real nature of artificial intelligence: a system operating without any of that vital context.
IV. Jazz Against Perfect Fixity
Imagine, if you can, a jazz musician who has never lost anything. Who has never loved someone who left. Who has never had to improvise against a void because the void, for him, is simply the absence of prior data. That musician can technically execute every note. He can analyze Miles Davis's harmonic structure with a precision that would make any musicologist cry. But he cannot play the blues because the blues is not a structure: it is a response. It is what comes out when pain does not fit into language and has to find another channel.
West calls it the "jazz-like" response to the absurd. The ability to improvise when the rules are not enough. To make something beautiful—or at least true—with the broken material life deposits in your hands. AI, on the other hand, is an optimization system under pre-established rules. Extraordinarily good on its turf. So bad on the other that it doesn't even know that other turf exists.
The algorithm seeks the correct answer. The human, in their best moments, seeks the true answer. They do not always coincide. And when they do not coincide, the human is the only one who can feel the difference.
V. The Neurons That Cry with the Other
There is a neurological phenomenon discovered almost by accident in area F5 of the macaque premotor cortex, later tracked in humans through transcranial magnetic stimulation, and which researchers Fadiga, Rizzolatti, and their colleagues documented with the patience of someone who doesn't know they are unearthing one of the most important secrets of social cognition: when we observe another performing an action, our brain does not describe it. It simulates it.
The same neural networks that execute the movement activate when observing it. What this means, in non-academic terms, is that human empathy is not an intellectual inference. It is a physical resonance. When we see pain, something in our motor system participates in that pain. It doesn't process it from the outside as data. It incorporates it.
AI detects external signals. It measures skin conductance, analyzes voice tone, classifies facial expressions according to Ekman's Facial Action Coding System (FACS). It is an extraordinary microscope. But a microscope observing a tear does not cry. And that difference—between the measuring instrument and the resonating being—is exactly the difference between simulation and understanding.
There is no possible competition here. They are distinct categories: on one side, the resonating being; on the other, systems that replicate human intelligence.
VI. The Belief System No One Can Download
Every person thinks from a place. That place has a name, it has a history, it has the specific texture of the things that formed it: a loss that reoriented everything, an injustice that installed a moral compass, a joy so great it became the standard for what is worth seeking. That belief system—built by experience, culture, values, and the accumulated sediment of all decisions made—is what gives direction to thought. Not just speed. Direction.
AI has none. It has statistical correlations between tokens. It has patterns extracted from millions of human texts that were indeed written from somewhere. But the machine itself comes from nowhere. It has no skin in the game. It has nothing to lose in any of its answers. And that absence of existential risk is, precisely, what makes it incapable of generating wisdom even if it can generate correct information.
West would say it like this: hope—not optimism, but hope—is the ability to persist when there is no evidence that it is worth persisting. That requires a being who has known despair. AI can define hope. It can quote Camus. It can explain the difference between the two with a clarity that would shame many humans. But it cannot be a prisoner of hope because it has never been a prisoner of anything. That is the stark difference between human resilience and AI's simulation of it.
VII. Where Everyone Should Be
Let's be fair—and the critic can be fair too, even if he prefers not to seem so. Artificial intelligence is an extraordinary tool. The best microscope we have built to analyze patterns at a scale no biological brain can process in real-time. Do you want to detect stress biomarkers before the patient can verbalize them in a healthcare setting? Perfect. Do you want to analyze thousands of judicial decisions to identify systemic biases? Magnificent. Do you want to process the narrative structure of ten thousand novels to understand how the dramatic arc works? Go ahead.
Modern systems built with Python, JavaScript, backend architectures, and data engineering pipelines achieve amazing things daily in AI development. But delegating purpose generation to that tool is like asking the thermometer to decide whether the patient deserves care.
The problem is not AI. The problem is our epistemological laziness: the tendency to confuse what is quantifiable with what is valuable, what is measurable with what is real, what is efficient with what is wise. That confusion is old. Technology has only made it more expensive and faster.
What we must learn—and yes, we must use these systems, know their mechanisms, and know what they can and cannot do—is precisely where the boundaries of their territory lie. Not to fear it, but to avoid confusing it with something it is not, and to avoid handing over decisions that require what only someone who has been in danger of losing something can bring.
Coda: The Groan
In the end, what separates the human being from the most sophisticated algorithm ever built is something that does not appear in any benchmark table: the capacity to be hurt by reality. For something to matter so much that it hurts when it fails. For an injustice to keep you awake not because it is a logical error but because it is a betrayal of something you considered sacred.
West calls it groaning and moaning. Panksepp would locate it in the PANIC and CARE systems. Fadiga would register it in the motor facilitation of someone observing another's pain. And any human being who has loved someone would recognize it immediately, without needing any citation.
That is what Replicants Intelligence Systems cannot copy. Not because they are inferior in processing—they are not. But because that groan requires having wagered something. It requires the answer to matter. It requires, ultimately, having been alive in a way that no language model, no matter how many parameters it accumulates, has ever been.
Intelligence without sensitivity is optimization. Sensitivity without intelligence is chaos. The combination of both, forged in the concrete experience of a being who comes from somewhere and is going somewhere—that is what we call, with all the glorious imprecision of the term, a human mind.
And that mind, ladies and gentlemen, still has no real competition.
Written from the conviction that jazz will always beat the metronome, even if the metronome never finds out it lost.
This article is part of the dissemination on Human-Centered AI. To continue this conversation and jointly question the boundary between the machine and us, I invite you to follow Fer Mavec on Instagram or LinkedIn. Let's keep hacking the biological system while building the technological one.