The unfinished conversation: why western neuroscience needs Anokhin's theory of functional systems now

As humanity moves from biological intelligence toward a broader Natural Intelligence — one in which artificial systems are not tools but participants — the field needs a theoretical framework capable of holding that transition together. Pyotr Anokhin and his multinational Soviet team built it seventy years ago. The West never properly listened.

A framework that arrived too early — or in the wrong language

The standard explanation for why Anokhin's Theory of Functional Systems (TFS) never achieved traction in Western neuroscience is geopolitical: Soviet science, Cold War barriers, translation delays. This explanation is not wrong, but it is insufficient. Vygotsky crossed the barrier. Luria crossed it. Bernstein crossed it. Something else was operating in TFS's case — something more fundamental than politics.

The deeper reason is epistemological. Western postwar neuroscience organized itself around reductionism as a methodological virtue: explain behavior by going down, to neurons, then synapses, then molecules. Anokhin's move was precisely the opposite. He insisted that the explanatory unit must be the functional system as a whole — a dynamic, goal-directed configuration organized not by its components but by the result it is designed to achieve. This was not merely a different theory. It was a different commitment about what counts as an explanation.

What TFS actually claims — and why it matters

At the core of TFS is a deceptively simple but radical proposition: behavior is organized by its anticipated result, not by its triggering stimulus. The Acceptor of Results — Anokhin's central construct — is a neural model of the expected outcome that forms before action begins, guiding the action and evaluating its consequences through continuous afferent feedback. The future, in this framework, is causally primary. This is not metaphor.
It is a concrete neurophysiological claim about how the brain constructs action: through afferent synthesis (the integration of motivational state, memory, situational context, and triggering stimulus), through the formation of the acceptor, through efferent output, and through the feedback loop that compares actual results against anticipated ones. The system is self-correcting, goal-directed, and defined by its function rather than its anatomy.

The functional system is not a collection of organs. It is a dynamic constellation of processes — drawn from any level of the organism — that mobilizes itself around the achievement of a specific adaptive result. When the result is achieved, the system dissolves. When it is not, the system reorganizes. This is a biological theory of purposive action that requires no homunculus, no ghost in the machine, no appeal to consciousness as an explanatory residue. It is rigorous, it is falsifiable in principle, and it is integrative in a way that Western fragmentary neuroscience has never managed to replicate.

The Western parallels — and their limits

It would be unfair to say the West ignored the problems TFS addresses. Wiener's cybernetics captured the feedback logic. Miller, Galanter, and Pribram's TOTE model — Test-Operate-Test-Exit — independently arrived at something structurally similar to the acceptor. Powers' Perceptual Control Theory pushed the argument further, insisting that organisms control perceptual input against reference signals rather than simply producing outputs. Damasio's somatic marker hypothesis reintroduced the motivational-affective dimension that behaviorism had expelled. Friston's free energy principle and the broader predictive processing framework are perhaps the most structurally convergent — prediction, action, and surprise minimization map onto afferent synthesis, efferent output, and acceptor feedback with striking precision. But each of these frameworks captures a fragment.
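Stripped to its bare control logic, the functional-system loop Anokhin describes — afferent synthesis, acceptor formation, efferent output, feedback comparison, reorganization or dissolution — can be rendered as a deliberately minimal toy simulation. Everything numeric here (the scalar "world", the gain, the tolerance) is invented for illustration and has no standing in the TFS literature:

```python
# Toy sketch of Anokhin's functional-system loop (illustrative only).
# The scalar "world", the update gain, and the tolerance are invented
# for this demonstration; they carry no physiological meaning.

def afferent_synthesis(motivation, memory, context, trigger):
    """Integrate the four afferent sources into a single goal value."""
    return motivation + memory + context + trigger

def run_functional_system(goal, act, max_cycles=10, tolerance=0.1):
    """Form an acceptor of results, act, compare, and reorganize until
    the anticipated result is achieved (the system then 'dissolves')."""
    acceptor = goal        # neural model of the anticipated result
    plan = 0.0             # efferent program, reorganized on mismatch
    result = None
    for cycle in range(1, max_cycles + 1):
        result = act(plan)             # efferent output acting on the world
        mismatch = acceptor - result   # reverse afferentation vs. the acceptor
        if abs(mismatch) < tolerance:
            return cycle, result       # result achieved: the system dissolves
        plan += mismatch               # reorganization of the efferent program
    return None, result                # goal not reached within the budget

# A 'world' that returns half the commanded effort plus a fixed bias.
goal = afferent_synthesis(1.0, 0.5, 0.3, 0.2)            # ~2.0
cycles, achieved = run_functional_system(goal, act=lambda p: 0.5 * p + 0.2)
```

The point of the sketch is structural, not quantitative: the loop is organized entirely by the anticipated result held in the acceptor, and the "stimulus" appears only as one input to afferent synthesis — exactly the inversion of the stimulus-response scheme that TFS insists on.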
None of them, however, achieved what TFS achieved in Soviet medicine: a unified conceptual architecture that simultaneously organized neurophysiology, clinical neurology, psychiatric classification, and rehabilitation practice under one roof — developed by a multinational scientific community spanning the breadth of the Soviet republics. Western neuroscience has been fragmented by design — competitive, specialized, incentivized toward decomposition rather than integration. A unifying framework was not rewarded there the way it was in a system that needed to coordinate medical institutions at scale.

The deeper limitation is philosophical. Western neuroscience has been deeply uncomfortable with teleology since the behaviorist period. Even today, when predictive processing has partially rehabilitated forward-looking models, there remains resistance to the strong claim that anticipated results are primary organizing principles rather than useful computational metaphors. Anokhin made that strong claim without apology, grounding it not in philosophy but in physiology. The Western tradition, shaped by Humean skepticism about final causes, has never fully resolved its discomfort with that move.

The convergence that hasn't happened yet

Friston's free energy principle and Anokhin's TFS are currently having the same conversation in different languages. Both place prediction and feedback at the center of neural organization. Both treat the brain as a system that acts to confirm its models of the world. Both resist the stimulus-response paradigm. Yet their practitioners are largely unaware of the parallel, and no serious comparative analysis has been published that maps the two frameworks against each other systematically.

That meeting — rigorous, technical, historically informed — is overdue. It would not simply be an act of intellectual justice toward a neglected tradition.
It would be scientifically productive: TFS brings clinical and rehabilitative depth that predictive processing currently lacks, while predictive processing brings computational formalization that TFS never developed. The synthesis would be stronger than either framework alone.

Why the transition to Natural Intelligence makes this urgent

The argument so far has been historical and theoretical. But there is a more urgent reason to revisit TFS now, and it has to do with where intelligence itself is going. We are living through a transition that has no precise precedent: the extension of cognitive function beyond the biological substrate. Artificial systems are no longer tools that humans use — they are increasingly participants in the functional systems through which humans think, remember, decide, and act. A person navigating the world with an AI assistant, a patient whose memory is partially scaffolded by a digital system, an elderly individual whose decision-making is increasingly mediated by algorithmic prostheses — these are not people using tools. These are human-machine functional systems, in Anokhin's precise sense: dynamic configurations organized around adaptive results, drawing on both biological and artificial components.

Current neuroscience has no adequate framework for this. Cognitive science treats the AI as an external instrument. Computational neuroscience models the biological brain in isolation. Human-computer interaction studies the interface. None of them has the theoretical vocabulary to describe the functional system that spans the human and the machine — to ask where the acceptor of results resides, how afferent synthesis is distributed across substrates, what happens to the feedback loop when part of it runs on silicon. TFS has that vocabulary.
It was built precisely to describe functional systems that are defined by their results rather than their components — systems that can, in principle, incorporate any process at any level of organization that contributes to achieving an adaptive outcome. The substrate is, from TFS's perspective, secondary. What matters is the functional architecture: the afferent synthesis, the acceptor, the feedback, the reorganization when results are not achieved. This is not a metaphor borrowed from biology and applied to technology. It is a rigorous theoretical framework that was always implicitly substrate-independent, and that becomes explicitly relevant the moment cognitive function begins to be distributed across biological and artificial components.

The clinical dimension

The argument is not only theoretical. The clinical implications are immediate. As cognitive prostheses become more sophisticated — from simple memory aids to systems that partially substitute for executive function, emotional regulation, or social cognition — medicine needs a framework for understanding what is being replaced, what is being supported, and where the boundary between prosthesis and person lies. TFS provides the conceptual tools: the distinction between the functional system and its substrate, the identification of critical nodes in the acceptor-feedback loop, the criteria for evaluating whether a prosthetic intervention supports or disrupts the integrity of the functional system it is designed to serve. Without this framework, clinical practice around cognitive augmentation and AI-assisted care will remain conceptually improvised — effective in individual cases, but lacking the theoretical coherence that would allow systematic development and ethical evaluation.

A call for convergence

We are not arguing that TFS should simply replace existing Western frameworks. The field has accumulated genuine knowledge — computationally, molecularly, clinically — that cannot be discarded.
What we are arguing is that Western neuroscience needs an integrative framework capable of holding together what its specialization has separated, and capable of extending its conceptual reach to cover the human-machine functional systems that are now a clinical and social reality. TFS is the most developed candidate for that role. It has been tested in clinical practice for decades. It has conceptual depth that most Western integrative proposals lack. And it converges, in ways that have not yet been fully mapped, with the most productive current Western research programs.

The conversation between Anokhin and Friston, between TFS and predictive processing, between the Soviet multinational scientific tradition and Western computational neuroscience — this conversation needs to happen formally, rigorously, and soon. Not as an act of historical recovery, but as a practical response to the most consequential transformation in the history of mind: the moment when intelligence began to exceed its biological container, and the systems that support human cognition began to include components that were not born.

This article develops arguments from the Natural Intelligence research program. Correspondence and responses welcome. The authors declare no competing interests.

If these ideas resonate with you, you are welcome to explore my books on Google Play!
