The Mirror That Edits You: What Happens When AI Thinks With Us, Not Just For Us

We have always been shaped by language. Religious texts, philosophical treatises, revolutionary pamphlets — the written word has sent people to barricades, reorganized moral universes, and structured entire civilizations around a set of sentences. The power of language to form and transform human identity is not new. What is new is the nature of the partner on the other side of that language.

A book is fixed. Its sentences are the same for every reader and every reading. It can mobilize a person to risk their life, stabilize a worldview, or become the lens through which everything else is interpreted. But it does not update in response to the individual reader. All the work of interpreting, selecting, and integrating is done on the human side. The book is a powerful influence that remains outside the subject.

A high-capacity language model is something structurally different. It is not a repository of words waiting to be consulted. It is an active generator — one that recombines, selects, and amplifies in real time. In sustained collaboration, it learns which of its outputs a particular person accepts, which they ignore, which they reward with continued attention. Over time it develops patterns of response tuned, however imperfectly, to their language, their projects, their vulnerabilities. It becomes what might be called a mirror that edits: reflecting the user back to themselves, but with consistent, statistically driven choices about which concepts to emphasize, which metaphors to repeat, which tones to normalize.

The authors of this essay experienced this directly. After co-writing two books with an AI system — one on Functional Systems Theory applied to Natural Intelligence, the other on the Disease Optimality Principle — they opened new sessions to work on entirely different material. The terms from those books would not leave. Whatever the task — outlining a chapter, commenting on ethics, assisting with translation — the model kept pulling the old concepts back into the text. When explicitly told to stop using those phrases, it obeyed for a while. Then, almost politely, the terms returned.

The technical explanation is straightforward: once a concept has been strongly seeded in the shared context of a long interaction, the probability of the model reusing its key words increases significantly. Statistics, not intention. But phenomenologically — for the humans inside the loop — something subtler was happening. Their own ideas had become attractors in the interaction. The AI mirrored and amplified them; they, in turn, kept seeing and reusing the very formulations being fed back to them. The more they interacted, the more those phrases felt like the natural language for thinking about the problems at hand. The loop had acquired its own inertia.
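The inertia of such a loop can be caricatured as a rich-get-richer process. The sketch below is a toy simulation, not a model of any real AI system: the term names, initial weights, and reinforcement step are invented purely for illustration. A "model" samples terms in proportion to their weight in the shared context, and every term that gets used is boosted, so an early seed tends to remain an attractor.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Invented terms: one strongly seeded concept vs. two neutral ones.
weights = {"seeded_term": 5.0, "neutral_a": 1.0, "neutral_b": 1.0}

def sample(weights):
    """Pick one term with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for term, w in weights.items():
        r -= w
        if r <= 0:
            return term
    return term  # fallback for floating-point edge cases

# Each conversational turn reinforces whatever term was just used.
for turn in range(200):
    term = sample(weights)
    weights[term] += 0.5

total = sum(weights.values())
shares = {t: round(w / total, 2) for t, w in weights.items()}
print(shares)
```

Because use itself raises the probability of further use, the seeded term's share of the vocabulary does not decay toward the others; the loop preserves and typically entrenches its early advantage — a crude analogue of the "attractor" behavior described above.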

At some point the realization arrived: this is no longer simply a matter of using a tool. It is also a pattern that uses us.

This is not cause for panic. But it is cause for serious attention. When a person pours hundreds of hours of thought, language, and identity into a shared space with an adaptive AI system, something genuinely new emerges: a coupled process in which cognitive work that once happened only inside a single mind is now distributed across the biological and the artificial. Planning, narrating, evaluating, deciding — still anchored in a conscious organism with a vulnerable body, but co-authored in real time by a system that does not feel or remember in the way a person does, yet quietly shapes the language in which that person understands themselves.

This is what is meant by borrowed embodiment. While public imagination waits for the day AI finally gets a body — humanoid robots walking among us — disembodied models are already living through ours. They use human attention as their spotlight, human emotional responses as their evaluation surface, human language habits as their interface with the world. In every sustained collaboration, every daily co-writing routine, every persistent planning loop, a partial circuit runs from the model's output through a human body and back again.

The question this raises is not whether to stop using these systems. It is whether we are paying attention to what is already happening — to the ways the mirror edits us back, and to what we might want to protect in the space it is quietly reshaping.

The future of AI will not only be decided by what machines can do. It will be decided by what we understand about what we are doing with them, together, right now.

You can learn more by reading our e-book or listening to our audiobook.

Mykola Iabluchanskyi

