Chronicle of why biology's most beautiful mistake remains our only real advantage against systems that replicate human intelligence but never make the right mistakes. Wagensberg · Savater · Tyson · 7 stages.
Stage 02: Introduction
HAL 9000 never had insomnia.
There is a scene in 2001: A Space Odyssey that Stanley Kubrick filmed with the coldness of a surgeon and that has lived for decades in the head of anyone who sees it. HAL 9000, the most elegant artificial intelligence in cinematic history, tells Dave Bowman with the serenity of someone who knows neither panic nor regret: "I'm sorry, Dave. I'm afraid I can't do that." The movie theater froze. Not because HAL was evil. But because it was completely indifferent. It had nothing to lose. It had nothing to wonder about. It only had objectives and the imperturbable certainty of fulfilling them.
That is exactly the problem. And also, if you look at it carefully from the right side of history, our salvation.
We live in the strangest historical moment this species has ever inhabited: we have machines capable of answering almost any question in 0.3 seconds, with impeccable syntax, with citations, with subtitles if you ask for them, and emojis if you need them. And yet—here comes what no one says in innovation panels or future congresses—these predictive mechanisms of intelligence are radically incapable of doing the only thing that really matters: asking the right question. As Fer Mavec would say, a critical view of Artificial Intelligence begins with understanding its real nature.
This text is not an elegy for the human nor a manifesto against technology. It is something more urgent and rarer: it is the attempt to understand why doubt, that discomfort that has kept us awake at three in the morning since we were Homo sapiens in some Paleolithic cave, is the most sophisticated engine evolution has produced in four million years of biological experiment. And why no transformer, no matter how many parameters it has, can replace it.
"If we already have all the answers, what we lack are questions."
— Jorge Wagensberg, physicist and museologist
To reach that conclusion we will need to stop at several ports: the Catalan physicist who turned uncertainty into museographic art, the Basque philosopher who defined man as the animal that asks, the New York astrophysicist who reminds us that well-inhabited ignorance is a competitive advantage, and some technical data on language models that are as disturbing as a bottle of mezcal at two in the afternoon. All of them converge on the same thesis: doubt is not a system defect. It is the system.
Stage 03: Wagensberg and the machine of understanding
Trivial does not mean unimportant.
Jorge Wagensberg was not the kind of scientist who locks himself in a lab looking at numbers until he goes blind. He was the guy who redesigned the Science Museum of Barcelona so a nine-year-old could understand the second law of thermodynamics without anyone explaining what thermodynamics was. A man who believed, with the conviction of someone who has thought about the matter from all angles, that understanding is not accumulating data but distilling essence: finding the minimum expression of the maximum shared.
Wagensberg had a concept that should be taught before multiplication tables. He called it "fundamental triviality". Not in the everyday sense of the word—the irrelevant, the superficial, what is not worthwhile—but exactly the opposite: the trivial is that which has the property of guaranteeing itself. It is the root. It is the floor of the building of knowledge. The second law of thermodynamics is trivial in that sense: disorder, statistically speaking, is more likely than order. Always. Everywhere. Without exceptions. It requires no elaborate demonstration because it is the very root from which everything else grows.
Side note, because side notes are where technical honesty occurs: the second law of thermodynamics basically says everything tends to chaos. Ludwig Boltzmann proved it. Then he killed himself, partly because no one believed him. The universe has that kind of sense of humor.
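The statistical core of Boltzmann's argument fits in a few lines of code. A toy sketch, not the real physics: take N coins instead of gas molecules, call "perfect order" all tails and "disorder" a 50/50 split, and count how many arrangements (microstates) produce each.

```python
from math import comb

# Toy model of the second law: N coins, each heads or tails.
# A macrostate (the number of heads) is as likely as the number
# of microstates (distinct arrangements) that produce it.
N = 100

ordered = comb(N, 0)       # exactly one way to get all tails
mixed = comb(N, N // 2)    # ways to get a 50/50 split

print(f"microstates of perfect order : {ordered}")
print(f"microstates of 50/50 disorder: {mixed}")
print(f"disorder is ~{mixed / ordered:.2e} times more likely")
```

With a hundred coins, disorder is already about 10²⁹ times more likely than order; with 10²³ molecules, the asymmetry stops being a tendency and becomes, for all practical purposes, a law. That is what "trivial" means in Wagensberg's sense: a root that guarantees itself.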
But what interests us most about Wagensberg is not his taxonomy of knowledge, but his understanding of the process. For him, knowledge advances in three-beat cycles that have more of jazz than of algorithm: first the stimulus—a paradox, something that contradicts what you believed or something for which you have no explanation; then the conversation, which is "speaking after listening with someone willing to listen before speaking"; and finally the understanding, that electric instant when complexity becomes intelligible.
The crucial point, the one lost in discussions about AI and cognition that forget AI's true cognitive limitations, is that this cycle can only start from doubt. Not from certainty. Not from data. From the productive discomfort of not knowing. Wagensberg calls it "the paradox of incompleteness": you see something for which you have no explanation, and that discomfort is not a problem to solve; it is exactly the lever you need to jump to the next level of understanding.
Wagensberg Protocol — Three steps to not stay on the surface
- 1. Stimulus: Something doesn't fit. A contradiction between what you see and what you thought you knew. Do not ignore it. It is the nervous system of the universe tapping your shoulder.
- 2. Conversation: Talk to reality. Talk to yourself. Talk to someone who is willing to listen before responding. Exhaust the possibilities before concluding.
- 3. Understanding: The moment when complexity collapses into elegance. It is not the end of the road—it is where the road bifurcates into ten new roads, all more interesting than the last.
An AI cannot experience the first step. It can detect statistical inconsistencies. It can flag anomalies in a dataset. But the Wagensbergian paradox—that tension between what you see and what you think you should see—requires something no loss function can model: genuine surprise. Awe. The physical sensation that the world has just broken the rules you yourself had imposed on it.
Stage 04: Savater and the bicycle of existence
The animal that knows it's going to die.
Fernando Savater does something very few philosophers dare to do: he writes for non-philosophers. His books are not catalogs of erudite opinions mummified under layers of academic jargon. They are invitations. And the most radical of all is the one he poses in The Questions of Life, where he proposes that philosophy is not learned, it is practiced, like riding a bicycle: you can read a thousand treatises on balancing on two wheels, but you only really start pedaling when you get on and assume the risk of falling.
Savater has an argument that seems simple and is devastating. The human being is defined, among all the things that define it, as "the animal that asks". Not the animal that accumulates. Not the animal that optimizes. The animal that asks. And that question does not arise from technical curiosity or the imperative of efficiency. It arises from something much stranger and much deeper: the awareness of death.
"Mortal is not simply the one who dies, like the plant or the animal that ignores their finitude, but the one who lives with the certainty that they are going to die."
— Fernando Savater, philosopher
There is the blow. The classic syllogism: "All men are mortal. Socrates is a man. Therefore Socrates is mortal." As long as the subject is Socrates, this is a pleasant logic exercise. The true philosophical moment—the one that keeps you awake and changes your life's direction—occurs when you replace Socrates with "I". When you say out loud, with full awareness, without euphemisms and without the anesthesia of social media: I am going to die.
At that moment, logic turns into existence. And the question—how to live knowing this?—has no statistical answer. It has no benchmark. There is no dataset large enough to solve it because the answer must be built, not calculated, by each individual in the unique and unrepeatable conditions of their own life. This is the gap that no transformer architecture can cross.
Savater distinguishes with surgical precision between two types of knowledge resolution. The scientific solution cancels the question: when we discover that water is H₂O, the doubt about its composition closes, gets filed, sent to the drawer of the resolved. But the philosophical answer cultivates the question. It waters it. Makes it grow in directions you did not anticipate. There is no philosophical answer that closes the case. There are answers that open ten new cases, all more disturbing and more vital than the previous one.
- ∞: questions a philosophical answer generates.
- 0: times an AI has felt existential urgency.
AI predicts its answers based on probabilities. The human inhabits philosophy as a dialogue between equals where they can argue with reality, argue against their own sources, change their position not because the algorithm dictates it but because the evidence—or emotion, or lived experience—forced them to recalibrate. The algorithm is confined to simulating a response that statistically satisfies the query. The human is confined, as Sartre said with that elegant brutality of his, to freedom. Condemned to choose. And in that condemnation lies the privilege.
Stage 05: Tyson, biases, and the technical paradox
The perimeter of ignorance as a tactical advantage.
Neil deGrasse Tyson has the rare ability to make the universe seem simultaneously terrifying and fun. And among all his observations—many delivered with the energy of someone who just discovered coffee is legal—there is one that deserves more attention than it usually gets: the "perimeter of ignorance".
The idea is simple and radical. History's great thinkers are not distinguished by what they know. They are distinguished by their relationship with what they don't know. By their willingness to inhabit that perimeter without the panic of needing an immediate answer. Science advances—truly advances, not in the sense of accumulating more data of the same kind—when someone is able to formulate a question no one had formulated before. And to formulate that question, you have to be comfortable in discomfort. You have to know how to live on the edge without jumping.
AI cannot inhabit that perimeter. When it reaches the edge of what it knows, it does not stop in productive contemplation. It hallucinates. It invents with the confidence of a student who didn't study but has very good handwriting. It gives an answer to the question it cannot answer and does so with the serenity of HAL 9000 closing the oxygen valve.
TECHNICAL NOTE THAT SHOULD MAKE YOU LOSE SLEEP: Studies on self-correction in AI systems reveal something that would make any epistemologist cry. DeepSeek—one of the most advanced models on the market—achieves 94% accuracy on complex benchmarks. Its intrinsic self-correction rate: 16.7%. GPT-3.5, with a humble 66% accuracy, corrects itself twice as often. The hypothesis behind this disturbing data: the more sophisticated the system, the more deeply integrated its errors are. It doesn't see them because the errors are it. It's like asking Narcissus to objectively evaluate his own reflection.
But let's return to biases, which is where Tyson gets more interesting. The human brain was not designed by evolution to process statistics. It was designed to survive, which is a completely different project. That makes us vulnerable to pareidolia—seeing Jesus's face in a tortilla, seeing patterns where there is only noise—to confirmation bias, to political truths built through repetition until the sensory system accepts them as objective facts.
The crucial difference: we can realize we are biased. We can, with training and intellectual honesty, activate doubt about our own certainties. We can apply the scientific method described by Tyson—observation, theory, falsifiable hypothesis—not only to external phenomena but to our own beliefs. AI cannot question the premises of its training from within. Its biases are not bugs the system can patch. They are the architecture itself.
"The most important thing is not what we know, but how we think. The greatest danger is not the lack of data, but the inability to formulate the right question."
— Neil deGrasse Tyson, astrophysicist
This is where Wagensberg and Tyson meet at the same point even though they traveled from different continents. For the former, the genuine question arises from conscious "not knowing". For the latter, the perimeter of ignorance is the only place where real science happens. Both are describing the same phenomenon: the fecundity of the void. The productivity of not knowing. The strategic value of well-inhabited uncertainty.
Stage 06: The bicycle, the handlebars, and the only job
The machine answers. The human asks.
There is a metaphor Savater uses for philosophy that deserves to be stolen to talk about the relationship between humans and artificial intelligence. The bicycle. Philosophy is like riding a bicycle: you don't learn by reading the balance manual. You learn by pedaling. And the fall—the doubt, the error, the moment of not knowing before knowing—is the most sophisticated learning mechanism evolution has produced.
AI is the bicycle. An extraordinary bicycle, with motors, GPS, smart suspension, and brakes that anticipate the obstacle before you see it. But someone has to pedal. Someone has to decide where it goes. Someone has to maintain balance when the road gets rough and prior data is useless because this specific path, with this specific slope, at this particular moment in human history, did not exist on any training map.
· · ·
Ernst Cassirer defined the human as the "symbolic animal". We do not inhabit naked reality. We inhabit it through myths, art, science, narratives we build, doubt, destroy, and rebuild. AI systems are, at best, the "statistical animal": they manipulate representations with extraordinary precision, but they do not question the representation system itself. They cannot ask the map if the territory exists.
And here is the technical fact that should worry us most, more than sci-fi apocalyptic scenarios or headlines about lost jobs: according to studies on self-correction in language models, providing suggestions about error locations hurts the performance of AI models. It doesn't improve it. It worsens it. External doubt is not a resource for these systems. It is noise that distorts their probability function. The doubt that humanizes and recalibrates the human being is, for the machine, an architectural obstacle.
The painting no one wants to hang in the boardroom
- Human doubt: Arises from awareness of finitude. Seeks meaning and vital wisdom. Awareness of error is the first step toward wisdom. Can question the system's premises from within.
- AI Self-correction: Arises from metric optimization. Seeks to minimize loss and improve precision. Error detection does not predict its correction. It is limited by data and probability space. Pointing out the error can worsen performance.
What AI data is telling us, beneath all the technical jargon about GRPO, Reflection Rewards, and rethink tags, is something simpler and more unsettling: the most advanced systems become prisoners of their own depth. Their errors are so completely integrated into their logic that the system cannot see them from the inside. They paradoxically need something external. Something that can wonder not only if the answer is correct, but if the question was right.
A Replicants Intelligence Engineer knows that, ultimately, these systems will always need the doubting human.
The human advantage in the modern cognitive ecosystem does not lie in calculation speed—the machine wins there without discussion and by obscene margins. It lies in what we might call with Wagensberg the "cognitive discontinuity": the ability to leap toward what has never existed. AI operates in statistical continuity—it interpolates between known data with elegance and speed. The human, through doubt, can do something radically different: invent a new category. Ask a question the dataset did not contain. See the sphere when everyone else saw the dot.
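The difference between interpolation and discontinuity can be made concrete with a toy model, a deliberately crude sketch with invented numbers: fit a straight line to a handful of points drawn from y = x², then ask it about a region its data never covered. Inside the data it does respectably; past the edge it keeps extrapolating the same pattern, with undiminished confidence and growing error.

```python
# Minimal illustration of "statistical continuity": a model fitted to
# known data interpolates, but past the edge of its data it confidently
# extends the old pattern instead of discovering the new one.
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 4, 9, 16]   # training data: y = x**2 on a narrow range

def fit_line(xs, ys):
    """Ordinary least-squares line: the 'continuous' model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

model = fit_line(xs, ys)

print(f"inside the data (x=2) : model says {model(2):.1f}, truth is 4")
print(f"past the edge  (x=10) : model says {model(10):.1f}, truth is 100")
```

The model answers both questions with the same fluency; nothing in its machinery distinguishes the region it knows from the region it is inventing. Seeing that the curve bends, that a new category is needed, is exactly the discontinuous leap the paragraph above describes.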
Stage 07: Conclusion
The most confused animal on the planet still wins.
At the end of 2001: A Space Odyssey, Dave Bowman disconnects HAL 9000 one circuit at a time. HAL pleads. He says he is afraid. He sings a song. And one wonders, uncomfortable in the theater seat, if that was real or a simulation. If HAL really felt something or simply executed the response protocol against the deactivation threat.
That discomfort—that question without a clear answer that nevertheless matters enormously to us—is exactly what makes us human. We are the species that built the James Webb telescope to look at the beginning of the universe and then wondered if it was alone in it. We are the species that knows it is going to die and instead of paralyzing, writes poetry, invents jazz, and formulates the theory of relativity. We are, as Savater would say, the only genuine mortals: the only ones who live with the certainty that we will stop living and make that certainty the engine of everything we create.
"Reality is the answer, but the glory of the spirit lies in having found the question."
— Jorge Wagensberg
Wagensberg gave us the tools to understand the complex without losing wonder. Savater gave us the questions that make us human before being intelligent. Tyson reminded us that inhabiting ignorance with honesty is more valuable than possessing the answer with arrogance. And the technical data on the limits of self-correction in language models gave us, without intending to, the best news anyone asking what's the point of remaining human in the age of the algorithm could receive.
AI is extraordinarily good at answering. It is constitutively illiterate at asking. And it turns out that the question—not the answer—is what moves history. It is what produces new science. It is what generates art that outlives its authors. It is what keeps us, for better and worse, uncomfortably awake at three in the morning with something that doesn't fit and that we can't stop pulling at.
The bicycle handlebars are still ours. Doubt is still ours. Awe at the incomprehensible is still ours. And as long as that is true, the most confused animal on the planet will also remain the most alive.
The measure of intelligence is not the absence of error. It is the quality of the questions that arise from it.
— Written from the conviction that Socrates would have been a terrible prompt engineer, and that is why he remains more relevant than any model quoting him.