How does FC relate to the "Big Five" theories of consciousness?
The Short Answer
FC was not designed to compete with or replace IIT, GWT, HOT, PP, or AST. It was designed to produce a "consciousness meter" for real systems. In building it, however, we found that FC captures core tenets of all five theories while deliberately leaving their metaphysical superstructure untouched. For each theory, there is a part FC covers and a part that sticks out beyond FC's scope.
Integrated Information Theory (IIT) — Tononi et al.
What IIT claims: Consciousness is identical to integrated information (Φ) — a measure of how much a system’s causal structure works as a unified whole that cannot be decomposed into independent parts. The integration must be intrinsic to the system’s causal architecture, not imposed by an external observer.
What FC covers: The intuition that integration matters. In FC, the FCS score rises exponentially when multiple self-models are cross-linked into a single reasoning space. A Waymo taxi that can jointly reason over its energy, trajectory, and passenger self-models scores far higher than one reasoning over each in isolation. This is FC’s engineering analogue of Φ.
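To make the intuition concrete, here is a toy sketch of an integration term that grows exponentially with cross-linked self-models. The function name, the base of 2, and the pairwise-link counting are illustrative assumptions, not FC's published scoring rule.

```python
# Toy illustration only: integration that grows exponentially with the number
# of self-model pairs sharing one reasoning space. Not the actual FCS formula.
from itertools import combinations

def integration_score(self_models: list[str], cross_links: set[frozenset]) -> float:
    """Count cross-linked pairs and score 2^links (illustrative base)."""
    linked_pairs = sum(
        1 for pair in combinations(self_models, 2)
        if frozenset(pair) in cross_links
    )
    return 2.0 ** linked_pairs  # fully isolated self-models score 1.0

# Waymo-style example: energy, trajectory, and passenger self-models.
models = ["energy", "trajectory", "passenger"]
print(integration_score(models, cross_links=set()))  # 1.0: each model reasoned over alone
print(integration_score(models, cross_links={
    frozenset({"energy", "trajectory"}),
    frozenset({"energy", "passenger"}),
    frozenset({"trajectory", "passenger"}),
}))  # 8.0: all three jointly reasoned over
```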
What sticks out: IIT’s integration is intrinsic and causal — it must be a property of the system’s own causal structure across time, not an observer’s description of it. FC’s integration is observer-described and reasoning-based. IIT also requires architectural statefulness: the system’s state at time t must genuinely arise from, and be irreducible to, its own previous state. FC makes no such claim. Finally, Φ is famously computationally intractable — FC trades that intractability for the problem of enumerating self-models completely, which is hard but tractable.
See deep-dive on the FC analogue of Φ →
Global Workspace Theory (GWT) — Baars, Dehaene
What GWT claims: Consciousness arises when information is broadcast widely across the brain — made globally available to many different cognitive processes simultaneously. The “workspace” is a central bottleneck through which selected information passes and becomes conscious.
What FC covers: Global availability. In FC, a self-model becomes functionally active when attention selects it and makes its contents available to the global reasoning engine (P). This is structurally equivalent to GWT’s broadcast: the self-model’s content enters the global reasoning space and influences behavior across subsystems.
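A minimal sketch of this availability mechanism, assuming nothing about FC's real interfaces (the class and method names below are hypothetical): attention selects a self-model and places its contents into a shared space that downstream subsystems consult.

```python
# Hypothetical sketch of "global availability": attention selects a self-model
# and copies its contents into a space every subsystem can read. Illustrative
# names only; this is not FC's actual API.
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    name: str
    contents: dict

@dataclass
class GlobalReasoningSpace:
    active: dict = field(default_factory=dict)

    def make_available(self, model: SelfModel) -> None:
        # The self-model becomes functionally active: its contents are now
        # visible to any subsystem that consults the shared space.
        self.active[model.name] = model.contents

space = GlobalReasoningSpace()
energy = SelfModel("energy", {"battery_pct": 41, "range_km": 118})
space.make_available(energy)      # attention selects the energy self-model
assert "energy" in space.active   # now influences reasoning across subsystems
```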
What sticks out: GWT is specifically about a broadcasting mechanism and a bottleneck architecture — a central workspace with a specific neural implementation. FC uses attention and reasoning as the availability mechanism, which is compatible with GWT but not equivalent. GWT also has strong neuroscientific grounding (ignition events, P300 signatures) that FC makes no claims about.
Higher-Order Theories (HOT/HOP) — Rosenthal, Lycan
What HOT claims: A mental state is conscious only when it becomes the object of a higher-order representation — the mind must represent its own states in order for those states to be conscious. Consciousness requires meta-representation.
What FC covers: This is the tightest overlap. Self-models in FC are precisely higher-order representations: they encode first-order internal states (s_i) in a form that can be processed by the reasoning system. The meta-attention and meta-self-awareness self-models in the SBR catalog explicitly model the system’s own ongoing cognitive processes, instantiating the recursive structure HOT requires.
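The recursive structure can be shown with plain nested representations; the names below are hypothetical stand-ins for illustration, not FC's data structures.

```python
# Illustrative only: first-order states, a self-model that represents them,
# and a meta-level self-model that represents the self-model itself.

# First-order internal states s_i: raw values nothing has yet represented.
first_order_states = {"s_battery": 0.41, "s_speed": 12.3}

# Higher-order representation: a self-model encoding the first-order states
# in a form the reasoning system can process.
self_model = {
    "represents": first_order_states,
    "summary": "low battery while in motion",
}

# Meta-self-awareness: a representation whose object is the self-model itself,
# i.e. the system modeling its own modeling.
meta_self_model = {
    "represents": self_model,
    "summary": "currently reasoning about my energy self-model",
}
```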
What sticks out: HOT theories are typically framed as accounts of phenomenal consciousness — the higher-order representation is what makes a state feel like something. FC explicitly does not make this claim. FC's self-models produce meta-access, not phenomenal experience. The HOT literature also debates whether the higher-order state must be actual or merely dispositional, a distinction FC does not address.
See deep-dive on HOT operationalization →
Predictive Processing (PP) — Friston, Clark, Hohwy
What PP claims: The brain is a prediction machine that continuously generates and updates a generative model of itself and the world, minimizing prediction error (free energy). Consciousness is the process of this self-modeling under uncertainty.
What FC covers: FC’s representational capacity (R) is explicitly grounded in predictive information theory (Bialek et al.) — it measures the subset of information a self-model encodes that actually helps predict future states. Richer self-models reduce prediction error about the system’s own future states. FC’s state-space expansion (P) is also a direct measure of how much a reasoning cycle extends the system’s predictive reach.
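As a rough sketch of what grounding R in predictive information can mean (the estimator, the toy chain, and the variable names are assumptions for illustration, not FC's actual computation): measure how much mutual information a self-model's encoding carries about the system's next state.

```python
# Rough sketch, not FC's estimator: "predictive information" in the spirit of
# Bialek et al., here a plug-in mutual-information estimate on a toy two-state
# Markov chain. A self-model that faithfully tracks the state carries more
# information about the future than one that encodes it noisily.
import numpy as np

def mutual_information_bits(x: np.ndarray, y: np.ndarray) -> float:
    """Plug-in estimate of I(X; Y) in bits for small discrete sequences."""
    joint = np.zeros((int(x.max()) + 1, int(y.max()) + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(joint.shape[0]):
        for j in range(joint.shape[1]):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

rng = np.random.default_rng(0)
n = 10_000
state = np.zeros(n, dtype=int)
for t in range(1, n):
    # Sticky chain: the present state is genuinely predictive of the next one.
    state[t] = state[t - 1] if rng.random() < 0.9 else 1 - state[t - 1]

past, future = state[:-1], state[1:]
faithful = past  # a self-model that tracks the state exactly
noisy = np.where(rng.random(n - 1) < 0.5, past, rng.integers(0, 2, n - 1))  # half the bits are noise

print(mutual_information_bits(faithful, future))  # ~0.53 bits: most of the available predictive information
print(mutual_information_bits(noisy, future))     # much lower: the encoding wastes predictive capacity
```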
What sticks out: PP is a broad framework for all cognition, not only consciousness. It also carries strong commitments about hierarchical generative models, precision-weighted prediction error, and active inference that FC does not replicate. PP’s account of phenomenal consciousness — that experience just is the content of the brain’s generative model — is a claim FC neither endorses nor disputes.
Attention Schema Theory (AST) — Graziano, Webb
What AST claims: The brain builds a simplified internal model — an “attention schema” — of its own attention process. Consciousness is the brain’s model of itself attending. This model is necessarily imprecise, which is why our introspective reports are systematically inaccurate.
What FC covers: AST is the most direct inspiration for FC’s architecture. The meta-attention self-model in the SBR catalog is exactly an attention schema: an internal representation of what the system is currently attending to and why. FC formalizes this by scoring it like any other self-model — breadth, depth, and reasoning power.
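One way to picture "scoring it like any other self-model"; the field names, numeric ranges, and multiplicative aggregation below are assumptions for illustration, not FC's actual scoring rule.

```python
# Illustrative only: a meta-attention self-model (an attention schema) scored
# along breadth, depth, and reasoning power. Fields and the aggregation rule
# are hypothetical, not FC's definitions.
from dataclasses import dataclass

@dataclass
class MetaAttentionModel:
    current_target: str      # what the system is attending to right now
    reason: str              # why attention selected that target
    breadth: float           # how many attention-relevant variables it covers (0..1)
    depth: float             # how finely it represents the attention process (0..1)
    reasoning_power: float   # how much downstream reasoning it supports (0..1)

    def score(self) -> float:
        # One plausible aggregation; FC may weight or combine these differently.
        return self.breadth * self.depth * self.reasoning_power

schema = MetaAttentionModel(
    current_target="pedestrian at crosswalk",
    reason="trajectory self-model flagged a collision risk",
    breadth=0.7, depth=0.5, reasoning_power=0.8,
)
print(schema.score())  # 0.28 on this illustrative scale
```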
What sticks out: Graziano’s claim is not just functional but explanatory: the attention schema is what causes the illusion of phenomenal experience. FC makes no claim about whether self-models cause experience, generate illusions of experience, or are entirely unrelated to phenomenal experience. AST is also committed to a specific neural architecture (cortical attention circuits) that FC does not require.
Summary Table
| Theory | Core claim | What FC covers | What sticks out |
|---|---|---|---|
| IIT | Consciousness = intrinsic integrated information (Φ) | Integration rises exponentially with cross-linked self-models | Intrinsic causality, architectural statefulness, temporal constitutivity |
| GWT | Consciousness = global broadcast of information | Self-models made globally available via attention + reasoning | Broadcasting mechanism, neural bottleneck architecture |
| HOT | Consciousness = higher-order meta-representation | Self-models are higher-order representations of internal states | Phenomenal claims, actual vs. dispositional HOT debate |
| PP | Consciousness = minimizing prediction error via self-model | R grounded in predictive information; P measures predictive reach | Hierarchical generative models, active inference, phenomenal claims |
| AST | Consciousness = brain’s model of its own attention | Meta-attention self-model directly instantiates the attention schema | Phenomenal illusion claim, specific neural architecture |
The bottom line
FC captures the functional substrate that all five theories treat as necessary for consciousness: internal states becoming richly represented and available for further reasoning. Whether that substrate is sufficient for consciousness — in any of its forms — is precisely what the five theories disagree about. FC does not resolve that disagreement. What it does is give you a number, so the disagreement can be grounded in something measurable rather than argued purely in the abstract.