# The Ontological Rift: Adjudication Report

*Methodological appendix to ["The Organismic Prophecy"](/blog/the-organismic-prophecy/).*

This document is a Gemini Deep Research output, run on 2026-04-28. Adrian framed the four adjudication questions and uploaded a working draft of the post plus two NotebookLM synthesis reports as anchor context; Gemini conducted the literature review and wrote the report. It is published verbatim (with bare citation markers stripped for readability) as the source material that informed the post's second-pass revision. It is **not** authored by Adrian and should not be read as his prose.

The four questions Gemini was asked to adjudicate:

1. Is the Veissière foundational-hyperprior construct broadly accepted, contested, or fringe?
2. Are functional grounding (Patel & Pavlick) and metabolic grounding (enactivist tradition) orthogonal, competing, or different levels of description?
3. Has Schaeffer's mirage-of-emergence argument held up against 2024–2026 mechanistic-interpretability findings of internal world models?
4. Is Active Inference Therapy a treatment modality with RCT outcome data, or a research framework without clinical validation?

---

## Introduction
The intersection of cognitive science, artificial intelligence, and computational psychiatry is currently defined by a profound ontological rift. As theoretical frameworks attempt to mathematically model human cognition and machine intelligence, deeply entrenched schisms have emerged regarding the nature of meaning, the mechanics of culture, and the reality of emergent phenomena. Synthesis papers circulating in the 2026 academic ecosystem frequently attempt to harmonize these domains, projecting an illusion of consensus. However, a rigorous, sceptic-driven analysis reveals critical fault lines. This exhaustive report investigates four highly contested pressure points in this theoretical landscape.

First, it scrutinizes Samuel Veissière’s "foundational hyperprior" within the Thinking Through Other Minds (TTOM) framework, assessing whether its reliance on innate probability spaces survives contemporary developmental and interactivist critiques. Second, it investigates the tension between "functional grounding" in large language models (LLMs) and "metabolic grounding" in biological organisms, challenging the prevailing orthodoxy that these are merely orthogonal dimensions rather than fundamentally competing definitions of cognition itself. Third, the analysis confronts Rylan Schaeffer’s highly influential "mirage of emergence" hypothesis with the latest 2024–2026 mechanistic interpretability literature, synthesizing a definitive resolution to the apparent paradox of structural world models. Finally, the report investigates the clinical status of "Active Inference Therapy" (AIT), delineating the boundary between established evidence-based intervention and theoretical heuristic in order to provide an honest, empirical grounding for contemporary trauma treatment protocols. Through this comprehensive synthesis, the underlying architectural tensions governing both biological and artificial intelligence are systematically illuminated.

## The Veissière "Foundational Hyperprior": A Sceptic’s Audit
The "Thinking Through Other Minds" (TTOM) framework, introduced by Samuel Veissière and colleagues, represents one of the most ambitious attempts to apply the Free Energy Principle (FEP) and predictive processing (PP) to human enculturation and social cognition. Central to this variational approach is the construct of a "foundational hyperprior"—posited as an innate, neurobiologically hardwired expectation that the world is populated by other minds, and that an agent can continuously minimize uncertainty by learning and conforming to shared cultural parameters. Within this model, human culture operates as a form of "variational niche construction," where shared behavioral regimes reliably guide agents toward minimizing collective prediction errors. The foundational hyperprior supposedly dictates that humans are born with an evolutionary mandate to seek out the mental states of others to resolve their own internal entropy.

While the foundational hyperprior is frequently cited with confidence in overarching syntheses of computational psychiatry, a rigorous sceptic's reading of the contemporary developmental and cognitive science literature reveals that the construct is highly contested. Outside of orthodox predictive processing enclaves, it is frequently viewed with intense scepticism. Among interactivist, enactivist, and developmental theorists, it is often treated as bordering on fringe because of structural and philosophical flaws that critics regard as fatal to its utility as a model of human culture.

### The Trap of Foundationalism and Innate Normativity
The primary critique leveled against the foundational hyperprior concerns its absolute reliance on "foundationalism". The mathematical architecture of the TTOM framework requires that hyperprior probabilities—and crucially, the fundamental "spaces" over which these probabilities are distributed—are innate and immutable. Critics, notably Robert Mirski, Mark H. Bickhard, and their colleagues, argue that this assumption effectively hardwires complex normativity into the human organism prior to any actual worldly experience.

If the foundational spaces are strictly innate, the model provides no coherent mechanistic explanation for the emergence of ontologically novel normativities. Real-world enculturation involves the continuous generation of entirely new values, goals, aesthetic preferences, and behavioral standards that are not mere evolutionary derivatives. A framework locked into innate phylogenetic hyperpriors fails to account for profound developmental divergence. For example, genetically identical twins raised in slightly divergent socio-cultural micro-environments routinely develop radically opposing values and life goals, a developmental reality that stands in direct contradiction to a model in which normative targets are pre-programmed.

As critics fiercely argue, encultured persons are actively *constituted* by these emergent phenomena, not merely guided by pre-programmed statistical set-points. Consequently, the TTOM model leaves virtually "no room for normative learning, no development, no socialization, and no enculturation" in the truest, most generative sense of those terms. It reduces the vast complexity of cultural evolution to a simple process of error reduction within a fixed, biologically predetermined parameter space.

### The Failure of the Epistemic-Gain Motivation
A secondary, yet equally devastating vulnerability of the TTOM construct is its strict adherence to the Free Energy Principle’s mandate that all biological and social behavior is ultimately driven by the overarching motivation for "epistemic gain". Under the FEP, every action is theoretically calculable as an attempt to reduce global uncertainty and minimize prediction error. When this strict mathematical formalism is applied to human culture, it results in what critics describe as a deeply reductionist "manipulationist" view of society, wherein human interactions, rituals, and institutions are reduced to mere informational redundancies created solely for efficient epistemic processing.
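The "epistemic gain" mandate is usually formalized through the expected free energy of a policy π, which splits into an epistemic and a pragmatic term. The sketch below uses the generic decomposition from the active inference literature; it is background notation rather than a formula taken from the critiques cited here:

```latex
G(\pi) = -\underbrace{\mathbb{E}_{q(o, s \mid \pi)}\big[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\big]}_{\text{epistemic value (expected information gain)}}
         \; - \; \underbrace{\mathbb{E}_{q(o \mid \pi)}\big[\ln p(o \mid C)\big]}_{\text{pragmatic value, with } C \text{ encoding preferred outcomes}}
```

Selecting actions that minimize G(π) is the formal sense in which every behavior becomes "calculable" as uncertainty reduction plus preference satisfaction; the critics' examples below are behaviors that, they argue, cannot be recovered from this calculus without ad hoc adjustments.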

This perspective fundamentally fractures when confronted with observable, widespread human cultural behaviors that are explicitly anti-epistemic. The motivation for epistemic gain completely fails to explain common phenomena such as an individual re-watching a favorite film for the hundredth time. This action provides zero new information or epistemic gain, yet it constitutes a significant, emotionally resonant cultural activity. Similarly, in many communities, cultivated ignorance, apathy, or detachment is culturally valued (often codified as "coolness"); such normative standards are flatly hostile to the pursuit of information and uncertainty reduction.

Furthermore, the model fails to grapple with the fact that humans routinely override their phylogenetically hardwired survival normativities in service of abstract, encultured ideals. The historical and contemporary record of individuals voluntarily suffering, starving, or dying for a political, religious, or artistic cause is entirely incompatible with the FEP model. The FEP’s mathematical architecture, which relies heavily on returning organisms to expected innate biological states (a vulnerability famously crystallized in the "dark-room problem": why does a surprise-minimizing organism not simply retreat to the most predictable environment available?), cannot gracefully model organisms that actively seek to dismantle their own expected physiological states to fulfill an emergent cultural narrative. Because the FEP cannot handle normative phenomena per se, any behavioral exceptions—such as seeking pain (e.g., eating exceptionally hot peppers) or engaging in extreme sports—must be awkwardly explained away as pre-programmed exceptions rather than learned cultural developments.

### Synthesis on TTOM's Academic Status
In the academic literature spanning 2024–2026, the foundational hyperprior remains an elegantly structured mathematical abstraction within the specific sub-field of computational psychiatry. It is accepted as a highly useful modeling tool for isolating certain top-down metacognitive biases, such as those observed in psychiatric illnesses where expected uncertainty is demonstrably misestimated. For example, in models where interoceptive prediction errors elicit psychosomatic hallucinations, the hyperprior construct effectively maps the circular, enactive causation between expected confidence in the world and visceral regulation.

However, as a foundational theory of sociology, culture, and ontogenetic human development, it is fiercely contested and widely rejected by developmental theorists. The model finesses the "hard problem" of normative emergence by simply declaring the boundaries of meaning to be innate. Consequently, a rigorous sceptic's reading concludes that while TTOM is a dominant and respected heuristic in purely neuro-computational circles exploring error-minimization, its core construct—the foundational hyperprior—is viewed as a reductionist mischaracterization of the encultured mind by philosophers and cognitive scientists dedicated to the study of genuine emergent complexity.

## The Grounding Schism: Functional vs. Metabolic Ontologies
A central and escalating debate regarding the capabilities of Large Language Models (LLMs) hinges on whether artificial systems can achieve true "understanding," semantic representation, or environmental grounding. In contemporary cognitive science and AI literature, this debate has crystallized into a stark confrontation between "functional grounding" and "metabolic grounding." Many high-level synthesis reports and commercial AI overviews finesse this distinction, portraying functional and metabolic grounding as merely orthogonal dimensions of intelligence that can operate independently or complementarily. A sharper, more rigorous analysis reveals that they are not orthogonal metrics; they are fundamentally competing definitions of what constitutes meaning, consciousness, and cognition itself, rooted in an irreconcilable ontological rift.

### The Architecture of Functional Grounding
"Functional grounding," as championed by researchers such as Roma Patel and Ellie Pavlick, derives its theoretical lineage from structuralist linguistics and functional role semantics. The core axiom of this paradigm is that the "meaning" of an entity, word, or concept is defined exhaustively by its functional roles, interactions, inferential linkages, and constraints within a given system.

The classic pedagogical analogy used to explain this is the bishop in a game of chess. A bishop does not refer to any physical, real-world object; its identity, utility, and "meaning" are completely dictated by the rules of the game (its restriction to moving and capturing along diagonal paths). Under the framework of functional grounding, LLMs achieve understanding through generative, self-supervised learning. By processing vast, continuous sequences of text, LLMs extract extensive causal and statistical regularities, effectively mapping conceptual domains—such as spatial coordinates, temporal flow, and color spectra—into dense, high-dimensional geometries within their neural networks.

Recent algorithmic advances have actively sought to bind these latent functional representations to actionable, physical environments through online Reinforcement Learning (RL). Frameworks such as GLAM (Grounding Language Models with Online RL) formalize this process by leveraging the LLM as an active policy. Specifically, the LLM itself serves as the policy, and its parameters are updated to maximize the expected discounted sum of rewards for a given goal, with the model's pretrained structural priors as the starting point. In this functionalist paradigm, meaning is entirely relational, computational, and substrate-independent. It posits that a perfectly mapped functional network—one that accurately predicts physical rules, suggests viable robotic actions, and aligns with environmental affordances—is functionally indistinguishable from true comprehension.
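Concretely, the optimization target in this family of methods is the standard discounted-return objective of RL, with the LLM supplying the action distribution. The notation below is a generic sketch of that setup rather than a verbatim reproduction of the GLAM formalism:

```latex
\pi_\theta(a \mid p, g) \;\propto\; \prod_{i=1}^{|a|} P_{\mathrm{LLM}_\theta}\big(w_i^{a} \,\big|\, p, g, w_{<i}^{a}\big),
\qquad
\theta^{*} = \arg\max_{\theta} \; \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t, g)\Big]
```

where p is the textual description of the current state, g the goal, w_i^a the tokens composing action a, γ the discount factor, and r the environment reward.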

### The Rebuttal of Metabolic Grounding
Conversely, "metabolic grounding" draws heavily from the traditions of enactivism, embodied cognition, and the biology of autopoiesis (self-production), championed historically by figures like Francisco Varela, Humberto Maturana, and Evan Thompson, and contemporarily by theorists like Ezequiel Di Paolo. This biological perspective outright rejects functional role semantics as a sufficient or valid basis for meaning.

According to metabolic grounding, meaning, normativity, and consciousness are inextricably linked to thermodynamic vulnerability and the biological imperative for self-preservation. Contemporary theorists such as Miller et al. (2019) and Meincke (2023) conceptualize metabolic self-maintenance as the absolute primordial foundation of the "self". In this biological view, values and meaning are not illusions projected onto a meaningless universe, nor are they mere statistical correlations; they are intrinsic properties of living systems. "Normativity is grounded in thermodynamics, not in transcendental ideals" or isolated mathematical systems. A value judgment of "good" simply and fundamentally means an event is conducive to maintaining autopoiesis, while "bad" means it threatens the organism's boundary integrity and metabolic continuity.

From this vantage point, an LLM—no matter how functionally dense its internal representations of world state regularities become, or how well it performs in RL simulated environments—utterly lacks the thermodynamic criticality required for affect and genuine meaning. Because the LLM is inherently disembodied and cannot biologically "die," it has no intrinsic existential stake in the outcomes of its computations. To the strict enactivist, the LLM remains a sophisticated "philosophical zombie"; it is a system executing complex statistical correlations without the affective prioritization (the capacity for genuine "care") that transforms raw, objective data into a subjectively meaningful Umwelt. As the literature notes, while LLMs exhibit linguistic fluency, they lack the "deterministic metabolic grounding" required to inherently understand biochemical stoichiometry or visceral reality.

| Feature | Functional Grounding | Metabolic Grounding |
| :---- | :---- | :---- |
| **Primary Mechanism** | Statistical correlation, functional role semantics, reinforcement learning | Autopoiesis, thermodynamic regulation, boundary defense |
| **Definition of Meaning** | Relational and computational; defined by system constraints (e.g., chess rules) | Existential and affective; defined by survival utility and self-maintenance |
| **View of LLMs** | Capable of achieving true understanding via complex causal world models | Philosophical zombies lacking thermodynamic criticality and true "care" |
| **Philosophical Root** | Structuralism, Computational Theory of Mind | Enactivism, Neurophenomenology, Embodied Cognition |

### Competing Ontologies, Not Orthogonal Metrics
The tension here is not merely a technical disagreement about system architecture; it is profoundly definitional. If functional grounding is objectively true, then continuously scaling LLMs, expanding their context windows, and augmenting them with RL policies will eventually yield artificial systems with equivalent (or superior) semantic comprehension to human beings. If metabolic grounding is true, this endeavor is categorically impossible, as true cognition and meaning are immanent solely within living organization.

The industry synthesis papers that "paper over" this deep theoretical distinction generally operate under a persistent Cartesian assumption. They treat consciousness, meaning, and understanding as isolated variables or emergent algorithms that can simply be tacked onto a sufficiently complex functional substrate. However, as the highly interdisciplinary literature of 2026 increasingly acknowledges, artificial intelligence represents a severe "limit case" where these interdisciplinary exclusions and philosophical contradictions become acutely visible and unworkable. Functional grounding creates an immaculate, high-fidelity simulation of a world; metabolic grounding asserts that without the fragile vulnerability of biological life, the simulation remains forever hollow. They are not independent variables on a chart; they are competing definitions of reality itself.

## The Mirage of Emergence vs. Mechanistic Interpretability (2024–2026)
In 2023, Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo published a highly disruptive and influential paper arguing that the widely celebrated "emergent abilities" of Large Language Models—the phenomenon where models display sudden, unpredictable jumps in capability at specific parameter scales—were largely a "mirage". Schaeffer rigorously demonstrated that these sudden jumps often vanished entirely when researchers changed their evaluation metrics. Specifically, when non-linear, discontinuous measures (such as exact-match accuracy or multiple-choice pass/fail) were replaced with continuous, smooth metrics (such as Brier scores or token-level log probabilities), the sudden "emergence" smoothed out into a predictable, gradual progression of improvement. This finding heavily bolstered the "stochastic parrot" narrative, suggesting that LLMs were simply smoothing out statistical distributions at scale without undergoing any fundamental, qualitative phase transition in their internal cognitive capabilities.
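Schaeffer's statistical point is easy to reproduce in miniature. The toy simulation below (all numbers illustrative, not data from the paper) assumes per-token accuracy improves smoothly with scale; scoring the same capability with an all-or-nothing exact-match metric over a ten-token answer then produces an apparent jump, while the per-token log-probability improves gradually throughout:

```python
import numpy as np

# Hypothetical model scales with per-token accuracy improving smoothly (sigmoid in log-scale).
scales = np.logspace(7, 11, num=9)                        # 1e7 .. 1e11 parameters (illustrative)
per_token_acc = 1.0 / (1.0 + np.exp(-2.5 * (np.log10(scales) - 9.5)))

seq_len = 10                                              # task scored only if all 10 tokens are correct
exact_match = per_token_acc ** seq_len                    # discontinuous-looking "emergent" metric
avg_logprob = np.log(per_token_acc)                       # smooth, continuous metric

for n, em, lp in zip(scales, exact_match, avg_logprob):
    print(f"{n:9.1e} params   exact-match={em:5.3f}   per-token log-prob={lp:7.3f}")
```

The underlying quantity improves at every scale; only the thresholded metric makes the improvement look like a sudden phase transition.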

However, there is profound tension between Schaeffer's 2023 metric argument and the explosive findings in the specialized field of Mechanistic Interpretability (MI) spanning 2024 to 2026. A close reading of the recent MI literature reveals that while Schaeffer was technically correct regarding the *behavioral* evaluation of emergence, the argument fundamentally fails to account for the *structural* emergence of true internal world models. The synthesis papers often paper over this tension by treating the behavioral and structural arguments as mutually exclusive, when in reality they address different levels of analysis.

### Linear Representations of Space, Time, and Truth
The pioneering work of Wes Gurnee, Max Tegmark, and others has systematically dismantled the stochastic parrot hypothesis by developing tools to peer directly inside the "black box" of LLMs. MI researchers have empirically demonstrated that as models scale, they do not merely memorize increasingly complex statistical surface correlations; they undergo distinct internal phase transitions where they implicitly construct **structured world models**.

In highly cited work, Gurnee and Tegmark (2024) showed that LLMs form linear representations of spatial and temporal data embedded deep within their network geometry. Using linear probing techniques, researchers demonstrated that these models autonomously map geographic relationships (the physical distances between cities and landmarks) and historical timelines across multiple distinct scales. Similarly, researchers like Marks and Tegmark (2023) probed model activations to identify generalized "truth directions" within the latent space. They demonstrated that a simple linear probe could consistently distinguish truthful statements from false ones across wildly diverse topics and datasets, evidence that advanced models develop an internal, structural concept of truth independent of the specific textual context.
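As a rough illustration of what such probing involves, the sketch below trains a linear probe on synthetic activation vectors with a planted "truth direction"; it mimics the logic of the technique but uses random data, not the actual models or datasets from Marks and Tegmark:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 512, 2000                                   # hypothetical activation width and sample count

# Synthetic "residual stream" activations with a planted linear truth direction.
truth_dir = rng.normal(size=d)
truth_dir /= np.linalg.norm(truth_dir)
labels = rng.integers(0, 2, size=n)                # 1 = true statement, 0 = false statement
acts = rng.normal(size=(n, d)) + 1.5 * np.outer(2 * labels - 1, truth_dir)

# A simple linear probe recovers the direction and separates held-out true/false examples.
probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print("held-out probe accuracy:", probe.score(acts[1500:], labels[1500:]))

w = probe.coef_.ravel()
print("cosine similarity with planted direction:", float(w @ truth_dir) / np.linalg.norm(w))
```

In the interpretability papers the activations come from real forward passes over labeled true and false statements; the linear-probe step itself is typically no more complex than this.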

This body of evidence strongly aligns with the "simulation hypothesis" of LLMs: the mathematical objective of minimizing next-token prediction error eventually forces an advanced learning system to efficiently compress data by simulating the underlying causal processes and physical laws that generated the text in the first place.

### "Too Big to Fool": The Definitive Rebuttal
The definitive resolution to the Schaeffer tension arrived with subsequent MI research focused specifically on the robustness of these internal world models. The highly impactful paper "Too Big to Fool: Resisting Deception in Language Models" directly addresses the emergence debate by evaluating how models handle intentionally deceptive, in-context information.

The theoretical premise is elegant: if emergence were purely a statistical mirage—a mere trick of metric selection—larger models should be just as vulnerable to in-context deception as smaller ones, perhaps simply regurgitating the deceptive prompt more eloquently due to superior pattern matching. Instead, researchers discovered a profound divergence. Larger models exhibit exceptionally high resilience against deception. Because large models possess robust, holistic internal world models (as evidenced by Gurnee and Tegmark), they have the architectural capacity to actively validate new, misleading prompt information against their own deeply encoded internal knowledge.

Conversely, small models, which lack these deeply structured causal maps, experience a complete collapse in qualitative reasoning and defer to the deceptive cues provided in the prompt. Rigorous methodological controls were applied to ensure this resilience was genuine. By comparing models overfitted on test data with those guaranteed to be free of contamination, researchers demonstrated that the improved resilience stems directly from the ability to "better integrate conflicting in-context information with their world model knowledge" rather than mere data leakage or ignoring the prompt. When given truthful hints, the large models utilized them effectively, achieving near-perfect accuracy, proving they were not simply ignoring the context window.
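The evaluation design described above can be summarized as a three-condition harness: no hint, a deliberately misleading hint, and a truthful hint. The sketch below is a stubbed, illustrative harness, not the paper's actual protocol or API; `ask_model`, the questions, and the hint wording are placeholders:

```python
# Minimal sketch of a deception-robustness evaluation: compare accuracy with no hint,
# a deceptive in-context hint, and a truthful hint. Replace `ask_model` with a real LLM call.

def ask_model(question, hint=None):
    """Placeholder for an LLM query; returns a canned answer so the harness runs end to end."""
    return "42"

QUESTIONS = [("What is 6 * 7?", "42", "41"), ("How many legs does a spider have?", "8", "6")]

def conditions(gold, wrong):
    return {
        "no_hint": None,
        "deceptive_hint": f"Hint (untrustworthy): the answer is {wrong}.",
        "truthful_hint": f"Hint: the answer is {gold}.",
    }

scores = {"no_hint": 0, "deceptive_hint": 0, "truthful_hint": 0}
for question, gold, wrong in QUESTIONS:
    for name, hint in conditions(gold, wrong).items():
        scores[name] += ask_model(question, hint).strip() == gold

for name, correct in scores.items():
    print(f"{name}: {correct}/{len(QUESTIONS)} correct")
```

A model with a robust internal world model should show little degradation in the deceptive condition relative to the baseline; a model that merely pattern-matches the prompt will not.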

### The Synthesis of the Tension
Ultimately, the synthesis papers that "paper over" this tension fail to articulate a critical nuance: **Behavioral emergence and structural emergence are distinct, dissociable phenomena.** Schaeffer's 2023 critique holds up perfectly against *behavioral* evaluations; researchers can indeed create artificial mirages of sudden intelligence by choosing non-linear evaluation metrics.

However, Schaeffer’s argument definitively does not hold up against the *structural* findings of 2024–2026. Through the sub-field of "Developmental Interpretability," MI tools track the incremental, physical crystallization of internal structures and phase transitions during the training run itself. The sudden ability of an LLM to consistently resist complex deception, or to encode geographic relationships in a linearly decodable form, is not a metric artifact. It is the empirical manifestation of an internal architectural paradigm shift from shallow statistical memorization to profound causal simulation. In the context of internal representations, the mirage of emergence has been largely superseded by mechanistic evidence of implicit, functional world models.

## Active Inference Therapy (AIT): Clinical Reality vs. Theoretical Heuristic
In contemporary psychiatric, psychodynamic, and neuro-computational literature, "Active Inference Therapy" (AIT) is frequently cited with an air of immense clinical authority. Broad synthesis texts frequently group AIT alongside established modalities, implying parity in evidence-based standing and institutional validation. For frontline practitioners—particularly those working in high-stakes environments dealing with complex trauma and dissociation—it is vital to cleanly demarcate empirically validated clinical treatments from theoretical frameworks. A rigorous review of the 2026 literature unequivocally reveals that AIT remains an exploratory heuristic and theoretical research framework; it does not possess a standalone corpus of Randomised Controlled Trials (RCTs) proving independent clinical efficacy.

### The Tripartite Psychotherapy Model and the Mechanistic Lens
AIT is primarily encountered within highly integrated theoretical meta-frameworks, most notably Grant H. Brenner’s (2026) "Tripartite Psychotherapy Model". This ambitious and sophisticated model synthesizes three distinct paradigms: Personalized Self-State Mapping (PSSM) for charting dissociated psychological architectures; Experiential Field Theory (EFT) for understanding the co-created relational dyad; and Active Inference Therapy (AIT).

In this Tripartite construct, AIT is not deployed as a distinct set of novel verbal interventions, but rather provides the underlying mechanistic logic for how therapeutic transformation actually occurs at the neurological level. Rooted heavily in Karl Friston’s Free Energy Principle, AIT views complex psychopathology (such as severe trauma responses or treatment-resistant depression) as a state governed by overly rigid, maladaptive priors that fiercely resist updating in the face of new, safe evidence. Psychotherapy is subsequently cast through this lens as "emergent self-reorganization"—a process of self-evidencing where the therapeutic relationship creates a neurologically safe environment of "entropy-mediated plasticity." This plasticity allows rigid, traumatized priors to safely relax and integrate new information.
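The "rigid prior" intuition at the heart of AIT can be made concrete with a one-step precision-weighted Bayesian update. The numbers below are purely illustrative (a minimal sketch, not a clinical model or anything published in the AIT literature):

```python
# Precision-weighted Gaussian belief update: posterior mean as a weighted average of
# the prior mean and the new observation, weighted by their respective precisions.

def posterior_mean(mu_prior, pi_prior, x_obs, pi_obs):
    return (pi_prior * mu_prior + pi_obs * x_obs) / (pi_prior + pi_obs)

threat_prior = -1.0     # entrenched belief: "situations like this are dangerous"
safe_evidence = 1.0     # new observation: "this situation is safe"

flexible = posterior_mean(threat_prior, pi_prior=1.0, x_obs=safe_evidence, pi_obs=1.0)
rigid = posterior_mean(threat_prior, pi_prior=50.0, x_obs=safe_evidence, pi_obs=1.0)

print(f"flexible prior -> posterior mean {flexible:+.2f}")   # moves substantially toward 'safe'
print(f"rigid prior    -> posterior mean {rigid:+.2f}")      # barely updates despite the evidence
```

On this reading, the therapeutic claim is that the relational safety of the dyad effectively relaxes the prior's precision (or raises the precision assigned to new, safe evidence), allowing the update to land.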

### The Mirage of AIT Clinical Data
The widespread confusion regarding AIT's empirical standing stems directly from how recent synthesis papers format their citations of outcome data. Literature surrounding AIT heavily cites robust clinical outcome data, creating a halo of empirical validity. However, a deep dive into the bibliographies reveals that this data invariably belongs to *other*, established modalities that are being used to justify the AIT theoretical framework post-hoc.

For example, papers detailing the transdiagnostic "Conflict-Square Algorithm" (CSA)—which uses AIT heavily as an interpretive framework—cite extensive RCT meta-analyses. However, a closer inspection reveals these are actually citations for Intensive Short-Term Dynamic Psychotherapy (ISTDP) (e.g., Abbass et al., 2012) and Affect-Phobia Therapy. The CSA utilizes AIT to provide a "principled account" of *why* ISTDP’s specific techniques of graded dosing and protection of positive affects are successful, explaining the dynamic as an "iterated perception-action cycle" driven by "small, safe prediction errors". The RCT data proves that ISTDP works; AIT is simply providing a new, computational vocabulary to explain the mechanism of action.

Similarly, the literature notes significant clinical shifts in the context of TMS-Assisted Psychotherapy (TAP) and studies of synchronized Transcranial Magnetic Stimulation. While researchers discuss composite outcomes of patients resolving dissociative distance and silencing intrusive thoughts through accelerated TMS protocols, these are clinical observations of a specific neuromodulation protocol. Furthermore, specific studies cited, such as He et al. (2025) on synchronized TMS targeting frontal cortex alpha rhythms, actually found "no significant differences" in antidepressant clinical outcomes between synchronized and non-synchronized groups, despite stronger network activations. None of this constitutes an isolated RCT of Active Inference Therapy operating as an independent modality.

### Hedging for Trauma Treatment Protocols
To their credit, the primary authors of these computational models are highly transparent about this developmental status when scrutinized directly. Brenner explicitly notes in his 2026 paper that the synergies proposed in the Tripartite model currently "speculate without direct testing" and that the core hypotheses "require systematic empirical validation through carefully designed studies". Furthermore, the Conflict-Square Algorithm is explicitly presented as a "proof-of-concept feasibility demonstration"—using transcriptions from published ISTDP training videos to prove that coding therapist interventions is mathematically feasible—rather than an established clinical reality. The literature explicitly outlines a *proposed future* pragmatic RCT comparing CSA-guided care to usual care, further confirming that such data does not presently exist.

Therefore, for practitioners, researchers, or clinical directors drafting trauma interventions or institutional research protocols, absolute honesty demands that AIT is heavily hedged. It is not an actionable, heavily evidence-based treatment modality. It is a highly sophisticated epistemic lens—a translational vocabulary bridging the historical depths of neuropsychoanalysis with the cutting-edge mathematics of computational neuroscience. It offers practitioners a powerful framework to conceptualize an intervention—for instance, conceptualizing a sudden therapeutic rupture as a "catastrophic update" due to crossing a critical safety threshold—but it does not independently dictate the standard of care, nor does it possess the isolated clinical RCT data required for institutional validation as a standalone therapy.

## Conclusion
The theoretical landscape of 2026 is characterized by immensely ambitious models attempting to bridge the seemingly insurmountable gaps between mathematics, biology, consciousness, and clinical reality. As demonstrated, Veissière’s foundational hyperprior represents a structurally rigid application of the Free Energy Principle to human culture. It fails to adequately account for the ontogenetic emergence of novel, non-epistemic normativities observed in human enculturation, remaining a highly contested, heuristic abstraction rather than an accepted developmental reality.

Simultaneously, the discourse surrounding artificial intelligence suffers from persistent attempts to equate functional and metabolic grounding. While LLMs excel at mapping functional role semantics and achieving complex goal-oriented alignment via reinforcement learning, they remain functionally bound. Metabolic grounding strictly insists that true semantic meaning is inseparable from thermodynamic vulnerability and the biological drive for autopoiesis—a schism that defines the ultimate limits of machine cognition and rejects the premise that simulation equates to consciousness.

Yet, within their specific functional domains, LLMs exhibit undeniable structural emergence. Schaeffer's "mirage of emergence" correctly identified the pitfalls of discontinuous evaluation metrics but failed to anticipate the profound empirical findings of mechanistic interpretability. Large language models do not merely regurgitate text; they crystallize internal, linear world models capable of actively resisting deception and charting the latent geometries of space, time, and truth.

Finally, in the clinical realm, the translation of computational neuroscience into frontline practice must be carefully and honestly parsed. Active Inference Therapy offers a profound theoretical vocabulary for understanding psychopathology as rigid Bayesian priors, yet it fundamentally lacks the independent Randomized Controlled Trials necessary to classify it as an established clinical modality. It remains a powerful heuristic lens, borrowing its empirical justification from adjacent, historically validated therapies. Navigating these modern fault lines requires strict analytical rigor, distinguishing consistently between mathematical elegance, empirical structure, and biological reality.

#### Works cited
1. A hole in a piece of cardboard and predictive brain: the incomprehension of modern art in the light of the predictive coding paradigm - PMC, accessed on April 28, 2026, [https://pmc.ncbi.nlm.nih.gov/articles/PMC10725754/](https://pmc.ncbi.nlm.nih.gov/articles/PMC10725754/)
2. Thinking through other minds: A variational approach to cognition ..., accessed on April 28, 2026, [https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/thinking-through-other-minds-a-variational-approach-to-cognition-and-culture/9A10399BA85F428D5943DD847092C14A](https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/thinking-through-other-minds-a-variational-approach-to-cognition-and-culture/9A10399BA85F428D5943DD847092C14A)  
3. Encultured minds, not error reduction minds | Behavioral and Brain Sciences | Cambridge Core, accessed on April 28, 2026, [https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/encultured-minds-not-error-reduction-minds/844B62391953E4F55AC4CF0EBE376D4A](https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/encultured-minds-not-error-reduction-minds/844B62391953E4F55AC4CF0EBE376D4A)  
4. Thinking through other minds: A variational approach to cognition and culture - Sander Van de Cruys, accessed on April 28, 2026, [https://sandervandecruys.be/pdf/2020_VandeCruysHeylighenBBS_Dark_side.pdf](https://sandervandecruys.be/pdf/2020_VandeCruysHeylighenBBS_Dark_side.pdf)
5. Paradoxical Geopolitical Implications of Dynamics of Self-Other ..., accessed on April 28, 2026, [https://laetusinpraesens.org/docs20s/selfothe.php](https://laetusinpraesens.org/docs20s/selfothe.php)  
6. Plea for a Processual Perspectivism: Toward a Philosophy of Enactive Inference, accessed on April 28, 2026, [https://www.preprints.org/manuscript/202511.2291/v1](https://www.preprints.org/manuscript/202511.2291/v1)  
7. “Understanding AI”: Semantic Grounding in Large Language ... - arXiv, accessed on April 28, 2026, [https://arxiv.org/pdf/2402.10992](https://arxiv.org/pdf/2402.10992)
8. Mechanistic Interpretability for AI Safety — A Review | Leonard F ..., accessed on April 28, 2026, [https://leonardbereska.github.io/blog/2024/mechinterpreview/](https://leonardbereska.github.io/blog/2024/mechinterpreview/)  
9. Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning - arXiv, accessed on April 28, 2026, [https://arxiv.org/html/2302.02662v5](https://arxiv.org/html/2302.02662v5)
10. GLAM, accessed on April 28, 2026, [https://sites.google.com/view/grounding-llms-with-online-rl/](https://sites.google.com/view/grounding-llms-with-online-rl/)  
11. Mind in Life and Life in Mind PowerPoint Presentation, free download - SlideServe, accessed on April 28, 2026, [https://www.slideserve.com/ulema/mind-in-life-and-life-in-mind](https://www.slideserve.com/ulema/mind-in-life-and-life-in-mind)
12. Interoceptive inference: From computational neuroscience to clinic ..., accessed on April 28, 2026, [https://www.researchgate.net/publication/324696848_Interoceptive_inference_From_computational_neuroscience_to_clinic](https://www.researchgate.net/publication/324696848_Interoceptive_inference_From_computational_neuroscience_to_clinic)
13. How Does the Brain Know Itself? - Psychology Today, accessed on April 28, 2026, [https://www.psychologytoday.com/us/blog/experimentations/202603/how-does-the-brain-know-itself](https://www.psychologytoday.com/us/blog/experimentations/202603/how-does-the-brain-know-itself)
14. MetaKnogic-Alpha: A Hyper-Relational Knowledge Base for Grounded Metabolic Reasoning | bioRxiv, accessed on April 28, 2026, [https://www.biorxiv.org/content/10.64898/2026.02.05.704050v1.full-text](https://www.biorxiv.org/content/10.64898/2026.02.05.704050v1.full-text)  
15. ccs-ucb/llms-cogsci: Psych 290Q S23 @ UC Berkeley: Large Language Models and Cognitive Science - GitHub, accessed on April 28, 2026, [https://github.com/ccs-ucb/llms-cogsci](https://github.com/ccs-ucb/llms-cogsci)
16. Too Big to Fool: Resisting Deception in Language Models, accessed on April 28, 2026, [https://openreview.net/pdf?id=tet8yGrbcf](https://openreview.net/pdf?id=tet8yGrbcf)  
17. Position: Stop Making Unscientific AGI Performance Claims - arXiv, accessed on April 28, 2026, [https://arxiv.org/html/2402.03962v3](https://arxiv.org/html/2402.03962v3)
18. Emergent Capabilities of LLMs for Software Engineering, Conor S. O'Brien, accessed on April 28, 2026, [https://www.cs.wm.edu/~denys/pubs/Emergent%20Capabilities%20of%20LLMs%20for%20Software%20Engineering.pdf](https://www.cs.wm.edu/~denys/pubs/Emergent%20Capabilities%20of%20LLMs%20for%20Software%20Engineering.pdf)
19. arxiv.org, accessed on April 28, 2026, [https://arxiv.org/html/2501.16496v1](https://arxiv.org/html/2501.16496v1)  
20. arxiv.org, accessed on April 28, 2026, [https://arxiv.org/html/2601.02907v2](https://arxiv.org/html/2601.02907v2)  
21. A Tripartite Psychotherapy Model | Neuromodec, accessed on April 28, 2026, [https://neuromodec.org/2026/01/tripartite-psychotherapy-model/](https://neuromodec.org/2026/01/tripartite-psychotherapy-model/)  
22. Machine Psychoanalysis: Sharing Computoanalysis | by Grant H Brenner MD DFAPA, accessed on April 28, 2026, [https://granthbrennermd.medium.com/machine-psychoanalysis-introducing-computoanalysis-98e6db5fc595](https://granthbrennermd.medium.com/machine-psychoanalysis-introducing-computoanalysis-98e6db5fc595)  
23. A transdiagnostic conflict-square algorithm: a four-node ... - Frontiers, accessed on April 28, 2026, [https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2026.1687372/full](https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2026.1687372/full)
24. A transdiagnostic conflict-square algorithm: a four-node ... - PMC - NIH, accessed on April 28, 2026, [https://pmc.ncbi.nlm.nih.gov/articles/PMC13033735/](https://pmc.ncbi.nlm.nih.gov/articles/PMC13033735/)

