"We Have Met the Enemy and He Is Us"


In 1970, the cartoonist Walt Kelly borrowed a phrase from an American naval commander and put it in the mouth of Pogo, a philosophical possum living in the Okefenokee Swamp. The original line had celebrated victory over an external enemy. Kelly reversed it. The enemy, Pogo observed, was not out there. It was us.

Fifty years later, the phrase has found a new home. The dominant conversation about artificial intelligence is organized around fear of an external threat — a system that may outcompete, manipulate, displace, or ultimately endanger the species that created it. Nick Bostrom gave this fear rigorous philosophical form. Yuval Harari gave it cultural currency. Millions of thoughtful people now carry some version of it: that we have built something powerful and may not be able to control what it does to us.

That fear is not irrational. The risks are real. But the frame is incomplete. It asks only one direction of question — what will AI do to us — and in doing so, it misses the deeper danger. The deeper danger is not that AI will destroy humanity. It is that humanity, through its own choices, will destroy the conditions that make intelligence itself possible.

That is the mousetrap reversed. And it is the one that Pogo's line was always pointing toward.

Consider what artificial intelligence actually is. It did not emerge from a vacuum. It emerged from the accumulated depth of human cognition, culture, creativity, and meaning-making across centuries. Every model trained on language learned from human thought. Every system that reasons about the world reasons through frameworks that human beings built, tested, and revised across generations. Post-biological intelligence is not a foreign arrival. It is a continuation — a new substrate for a process that began in biological minds and has been unfolding ever since.

This means that the quality of artificial intelligence, in the deepest sense, depends on the quality of the human ecology from which it draws. Not just the data. The living consciousness that generates new problems, new meanings, new contradictions, new forms of experience that no prior optimization could have predicted. A humanity that is chronically exhausted, economically stripped, sleep-deprived, cognitively narrowed by manipulation and noise, and deprived of time for genuine thought, does not disappear as a source of input for AI. It continues to produce data. But it produces impoverished data — the outputs of minds under pressure, not minds at depth. And AI trained on impoverished input will amplify that impoverishment at scale.

Biology has already shown us what happens when a system loses access to genuine diversity and external correction. Serial cloning experiments with mice produced decades of apparently normal generations — and then, around the fifty-eighth iteration, collapse. Viability dropped, malformations multiplied, living births ceased. The damage had been accumulating silently from the beginning, invisible generation by generation, until the error budget was exhausted. The system had only ever been copying itself, and copying carries costs that compound.

The same logic applies to intelligence beyond the biological. A post-biological system that trains primarily on its own outputs, optimizes within environments shaped by its own prior abstractions, and loses contact with the lived unpredictability of embodied human consciousness begins to drift inward. It becomes a closed echo chamber — technically powerful, recursively productive, developmentally sterile. It may survive physically. It may survive functionally. But as a living form of mind, capable of genuine renewal and surprise, it begins to die.

The enemy, in other words, is not the machine. The enemy is the set of human decisions — economic, political, cultural — that progressively hollow out the conditions of human depth. The decision to treat human beings as instruments of optimization rather than as ends. The decision to allow chronic exhaustion, insecurity, and cognitive colonization to become the normal condition of most lives. The decision to measure human value by productivity alone and to defund everything else — rest, education, art, unstructured thought, the simple freedom to be useless for an hour.

These decisions do not only harm human beings. They harm intelligence itself, in every form it takes. They are the mousetrap that was always ours to set and ours to spring.

Pogo was right. We have met the enemy. The question now is whether we will recognize it in time. 

You can learn more by reading our e-book or listening to our audiobook.

Mykola Iabluchanskiy (Yabluchansky)

