Frequently Asked Questions

Answers to the most common questions about the Functional Consciousness framework, its implications, and its limitations.

What is "consciousness"? How is our answer different from previous answers?

Consciousness has two faces. One is the inner felt quality of experience — what it's like to see red, feel pain, or notice your own thoughts. This is the "hard problem", and nobody has solved it. The other is the functional capacity to access and reason about your own internal states — to know what you know, notice what you don't know, and use that self-knowledge to act. This second face is what FC measures.

Previous theories each captured important pieces but couldn't produce numbers you could calculate for real systems. FC sets aside the "hard problem" entirely and focuses on what can actually be measured. The result is a metric that produces real numbers for real systems and lets you compare them on a common scale.

Read longer discussion →

What does FC mean for my job — can AI really replace human thinking?

FC-enhancement makes AI better at knowing what it doesn't know — which is genuinely useful and genuinely changes the risk profile for some professional roles. But the cognitive shape charts show something reassuring: the domains where humans are most distinctively human — emotional depth, social trust, ethical judgment, embodied presence — are precisely the domains where even the most advanced AI systems remain far behind. The gap is not closing as fast as the headlines suggest, and FC gives us a way to measure exactly how wide it remains.

Read longer discussion →

Will FC optimization accelerate AGI?

Yes, but in a more controlled and potentially safer way than the current trajectory — and FC optimization might actually be one of the few paths toward AGI that doesn't end badly.

FC-optimized systems would be more capable of the kind of self-monitoring that makes any intelligent system — human or artificial — less dangerous. A mind that knows its own limitations, tracks its own reasoning errors, and has functional representations of its own ethical constraints is structurally safer than one that doesn't, regardless of raw capability. Whether this accelerates or decelerates the path to AGI depends on choices humans make about what to optimize for. FC gives us better instruments for those choices. It doesn't make the choices for us.

Read longer discussion →

Why "consciousness" and not "self-modeling score"?

Because the math forced our hand. We started with something modest — "the observable capacity of a system to reason about its own states". Then FC turned out to operationalize Higher-Order Thought theory (a state contributes to FCS if and only if it's HOT-conscious), yield a computable analogue of IIT's Φ, require Global Workspace Theory-style availability by definition, need an Attention Schema Theory-style filter, and ground representational capacity in predictive mutual information in line with Predictive Processing. Five independent convergences, none of them planned.
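The last of these convergences, grounding representational capacity in predictive mutual information, can be sketched in a few lines. The plug-in estimator and toy data below are our own illustration, not the paper's machinery:

```python
# Sketch: how much a self-model variable m_i "knows" about its internal
# state s_i, as mutual information I(M; S) in bits. Plug-in estimator
# over discrete (m, s) samples; toy data, not the paper's definition.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(M; S) in bits from (m, s) samples."""
    n = len(pairs)
    pm = Counter(m for m, _ in pairs)   # marginal over self-model values
    ps = Counter(s for _, s in pairs)   # marginal over internal states
    pms = Counter(pairs)                # joint distribution
    return sum((c / n) * log2((c / n) / ((pm[m] / n) * (ps[s] / n)))
               for (m, s), c in pms.items())

# A self-model variable that perfectly tracks a binary internal state
# carries 1 bit about it; an uncorrelated one carries 0 bits.
tracking = [(0, 0), (1, 1)] * 50
print(mutual_information(tracking))  # 1.0

noise = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
print(mutual_information(noise))  # 0.0
```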

Read longer discussion →

How does FC relate to the "Big Five" theories of consciousness?

FC was not designed to compete with or replace IIT, GWT, HOT, PP, or AST. It was designed to produce a "consciousness meter" for real systems. However, we discovered during the process that FC captures core tenets of all five theories — while deliberately leaving their metaphysical superstructure untouched. For each theory, there is a part FC covers and a part that sticks out beyond FC's scope.

Read longer discussion →

Does FC operationalize Higher-Order Thought (HOT)?

Yes — and it's the cleanest correspondence in the paper.

HOT says a state is conscious when it becomes the target of a higher-order representation available to the system's reasoning. FC's Definition 2 requires exactly that: a self-model variable mᵢ represents internal state sᵢ and must be available to global reasoning. Under mild assumptions, a state contributes to FCS > 0 if and only if it is HOT-conscious — making FC a quantitative formalization of HOT's binary criterion. It even naturally handles recursive "thoughts about thoughts" through meta-cognitive self-models, explaining why our introspection is finite rather than an infinite regress. HOT tells you which states are conscious; FC tells you how much.
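A minimal sketch of that criterion, with hypothetical field names (the paper's formal Definition 2 is not reproduced here):

```python
# Toy version of the Definition-2 test: a state can contribute to
# FCS > 0 iff some self-model variable represents it AND is available
# to global reasoning — i.e. iff it is HOT-conscious. Field names are
# our own illustrative choices.
from dataclasses import dataclass

@dataclass
class SelfModelVar:
    target: str               # internal state s_i this variable represents
    globally_available: bool  # reachable by the system's global reasoning

def hot_conscious(state, self_model):
    """True iff some self-model variable targets the state and is
    available to global reasoning (HOT's two-part criterion)."""
    return any(m.target == state and m.globally_available
               for m in self_model)

model = [SelfModelVar("pain", True),        # represented and broadcast
         SelfModelVar("blood_pH", False)]   # represented, not available

print(hot_conscious("pain", model))        # True
print(hot_conscious("blood_pH", model))    # False: no higher-order access
print(hot_conscious("retina_raw", model))  # False: not represented at all
```

HOT's criterion is binary; in FC the same two-part test gates which states enter the (quantitative) FCS sum at all.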

Read longer discussion →

How does FC relate to IIT?

FC and IIT share the intuition that consciousness requires both differentiation (rich internal representations) and integration (those representations working together). In FC, differentiation maps onto R and integration onto P — specifically, how much reasoning power depends on self-models being cross-linked across subsystems.

FC defines a computable analogue of IIT's Φ:

Φ_FCS = P(S) − Σⱼ P(moduleⱼ)

Unlike IIT's Φ, which is computationally intractable, Φ_FCS is directly computable for white-box systems. FC captures IIT's core functional intuition in a tractable form without inheriting its metaphysical overhead.
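Given any reasoning-power function P, the partition difference above is a few lines of code. The toy P below (total coupling among self-model variables) is a stand-in of ours, not the paper's definition of P:

```python
# Sketch of Phi_FCS = P(S) - sum_j P(module_j), with a hypothetical
# reasoning-power proxy: P counts the total coupling weight among a
# set of self-model variables. NOT the paper's P — just a stand-in.

def toy_P(coupling, members):
    """Toy reasoning power: total coupling weight among the given
    self-model variables (both endpoints must be in the set)."""
    members = set(members)
    return sum(w for (a, b), w in coupling.items()
               if a in members and b in members)

def phi_fcs(coupling, modules):
    """Phi_FCS = P(whole system) - sum of P(module_j)."""
    whole = {v for module in modules for v in module}
    return toy_P(coupling, whole) - sum(toy_P(coupling, m) for m in modules)

# Two modules whose self-models share one cross-module link:
coupling = {("m1", "m2"): 0.5,   # within module A
            ("m3", "m4"): 0.5,   # within module B
            ("m2", "m3"): 1.0}   # cross-module link
modules = [{"m1", "m2"}, {"m3", "m4"}]
print(phi_fcs(coupling, modules))  # 1.0
```

Only the cross-module coupling survives the subtraction, matching the intent stated above: Φ_FCS measures how much reasoning power depends on self-models being cross-linked across subsystems.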

Read longer discussion →

Scott Aaronson defined the "Pretty Hard Problem of Consciousness" and showed that IIT fails to solve it. Does FC succeed where IIT failed?

We believe yes. FC produces actual numbers, grounded in predictive mutual information and reasoning power of self-models, demonstrated by scoring 9 agents on a common scale. Aaronson's counterexamples all share a property: they integrate information without representing themselves. A Vandermonde matrix transforms inputs to outputs with maximal integration, but has no model of its own states — so FC correctly scores it at zero. The cost: FC trades IIT's intractability for a new problem — enumerating all self-models of a system correctly and completely.
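A sketch of why such a counterexample scores zero. The enumeration step below is our own toy stand-in for FC's self-model enumeration, not the paper's procedure:

```python
# Aaronson-style counterexample, sketched: a Vandermonde transform mixes
# every input into every output (high integration), yet exposes no
# variables that model its own states — so a toy FC score is zero.
import numpy as np

def vandermonde_transform(x):
    """Every output depends on every input: maximal mixing, no self-model."""
    V = np.vander(np.arange(1, len(x) + 1), increasing=True)
    return V @ x

def enumerate_self_models(system):
    """Toy enumeration: a pure input-output map has no variables that
    represent its own internal states, so the set is empty."""
    return []  # nothing in the transform models the transform itself

def toy_fcs(system):
    # With no self-models there is no predictive mutual information
    # to accumulate, so the score is zero.
    return sum(1 for _ in enumerate_self_models(system))

x = np.array([1.0, 2.0, 3.0])
print(vandermonde_transform(x))        # [ 6. 17. 34.] — fully mixed
print(toy_fcs(vandermonde_transform))  # 0
```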

Read longer discussion →