What is "consciousness"? How is FC's answer different from previous definitions?

The Short Answer

Consciousness has two faces. One is the inner felt quality of experience — what it's like to see red, feel pain, or notice your own thoughts. This is the "hard problem", and nobody has solved it. The other is the functional capacity to access and reason about your own internal states — to know what you know, notice what you don't know, and use that self-knowledge to act. This second face is what FC measures.

Previous theories — IIT, Global Workspace Theory, Higher-Order Theories — each captured important pieces but couldn't produce numbers you could calculate for real systems. FC makes a deliberate trade: it sets aside the "hard problem" entirely and focuses on what can actually be measured. The result is a metric that produces real numbers for real systems — a Waymo taxi, a language model, a human being — and lets you compare them on a common scale.

What FC cannot tell you is whether any of these systems feel anything. That question remains open. What it can tell you is how richly each system models itself and how powerfully it reasons over those models. For engineers building AI, that turns out to be exactly the right question to ask.

The honest landscape

Consciousness is one of the oldest and least resolved questions in human thought. Despite centuries of philosophy and decades of neuroscience, there is still no agreed definition, no agreed measurement, and no agreed explanation. This is not a failure of intelligence — it reflects genuine difficulty. The question resists easy answers because it sits at the intersection of subjective experience and objective description.

Most serious attempts to define consciousness cluster around two very different things that often get confused.

The first is phenomenal consciousness — what it feels like to be something. The redness of red. The painfulness of pain. The specific quality of your experience right now reading these words. Philosophers call this qualia. David Chalmers called the difficulty of explaining it the hard problem — hard not because it requires more research but because it is unclear whether any scientific explanation could ever fully capture it. Even a complete map of every neuron firing when you see red would not obviously explain why it feels like something rather than nothing.

The second is access consciousness — the availability of information within a system for reasoning, reporting, and behavioral control. When you can describe what you're thinking, correct your own errors, report your own limitations, and use knowledge of your own states to guide decisions — that is access consciousness. It is functional, observable, and in principle measurable.

Most people, when they ask "is AI conscious?", are actually asking about phenomenal consciousness — does it feel like something to be ChatGPT? But most scientific and engineering work on consciousness actually addresses access consciousness, because that's the part you can study from the outside.

Previous answers and their problems

The history of consciousness theories is a graveyard of partial answers, each capturing something real while leaving something essential unexplained.

Descartes proposed a sharp separation between mind and body — consciousness was a non-physical substance. This raised more questions than it answered, particularly about how a non-physical mind could interact with a physical brain.

Behaviorism said consciousness was just behavior — what a system does, not what it experiences. This was scientifically tractable but felt obviously wrong. A philosophical zombie that behaves exactly like a human but experiences nothing would be conscious by this definition.

Global Workspace Theory (Baars, 1988) proposed that consciousness arises when information is broadcast widely across the brain — made globally available to many different cognitive processes simultaneously. This is a rich and influential idea with real neuroscientific support, but it describes the mechanism of consciousness more than its nature, and it doesn't easily yield a measurement.

Integrated Information Theory (Tononi, 2004 onwards) proposed that consciousness is identical to integrated information — a quantity called Φ that measures how much a system's parts work together as a unified whole rather than independently. IIT has the virtue of being mathematically precise. It has the vice of being computationally intractable — calculating Φ for any system of biological interest is effectively impossible. It also produces counterintuitive results, such as simple grid networks scoring higher than the human brain in some configurations, which led 124 prominent neuroscientists to sign an open letter questioning whether it should be treated as a leading theory.

Higher-Order Theories (Rosenthal and others) proposed that a mental state is conscious only when it becomes the object of another mental state — when the mind thinks about its own thinking. This captures something important about metacognition but is difficult to operationalize.

Predictive Processing (Friston, Clark and others) describes the brain as a prediction machine that constantly models itself and the world, minimizing the gap between predictions and reality. It is deeply influential in contemporary neuroscience, but it is more a framework for cognition generally than a specific theory of consciousness.

What all of these share is a gap between theoretical elegance and practical measurement. None of them produce a number you can calculate for a Waymo taxi or a Generative Agent and compare to a human baseline.

How FC's answer is different

FC does something none of the above theories do: it gives up on answering the hard problem entirely, and in doing so becomes useful.

This is not intellectual cowardice — it is a deliberate methodological choice with a specific payoff. By focusing exclusively on access consciousness — the observable, functional capacity to model and reason about internal states — FC becomes computable. You get actual numbers. You can benchmark real systems. You can watch the metric change as architectures improve.

The first innovation is the self-model as the unit of analysis. Previous theories treated consciousness as a single property of a whole system: IIT assigns the system one Φ, Global Workspace Theory asks whether information reaches one global workspace. FC disaggregates this into 46 distinct self-models across ten functional domains. A system can have rich body self-models and no emotional self-models. It can have high reasoning power over narrow self-models or weak reasoning over broad ones. This granularity lets you describe the shape of a system's self-awareness, not just its presence or absence.
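To make the disaggregation concrete, here is a minimal sketch of a per-domain self-model profile in Python. Everything in it is illustrative: the domain names, the scores, and the per-model representation-times-reasoning product are placeholders, not FC's canonical ten domains, 46 self-models, or scoring rules.

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    name: str
    representation: float  # how rich this self-model is (illustrative 0-1 scale)
    reasoning: float       # how much reasoning power is applied to it (illustrative 0-1 scale)

@dataclass
class SelfModelProfile:
    # Maps a functional domain to the self-models a system maintains in it.
    # The domain names used below are placeholders, not FC's canonical ten domains.
    domains: dict[str, list[SelfModel]] = field(default_factory=dict)

    def shape(self) -> dict[str, float]:
        """Per-domain average of representation * reasoning; 0.0 where a domain is empty."""
        return {
            domain: sum(m.representation * m.reasoning for m in models) / len(models)
            if models else 0.0
            for domain, models in self.domains.items()
        }

# A system with rich body self-models but no emotional self-models at all:
robot = SelfModelProfile(domains={
    "body":    [SelfModel("joint_state", 0.9, 0.7), SelfModel("battery", 0.8, 0.6)],
    "emotion": [],
})
print(robot.shape())  # prints something like {'body': 0.555, 'emotion': 0.0} -- the gap stays visible
```

The point of the structure is that nothing gets averaged away into a single verdict: an empty domain shows up as a zero next to a rich one, which is exactly the "shape" the text describes.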

The second innovation is the multiplicative FCS formula. R times P. Representational Capacity times Reasoning Power. This single formula explains several things that were previously just observed empirically: why legacy AI with detailed databases was still brittle, why stateless LLMs are powerful but unreliable in agentic settings, why adding memory and reflection to an LLM produces such a dramatic capability jump.
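A back-of-the-envelope illustration of why the multiplication matters. Only the formula itself, FCS = R × P, comes from FC; the systems and numbers below are invented purely to show the pattern.

```python
def fcs(representational_capacity: float, reasoning_power: float) -> float:
    """FC score as the product R * P: weakness in either factor caps the whole score."""
    return representational_capacity * reasoning_power

# Invented profiles on an arbitrary scale, chosen only to illustrate the pattern:
systems = {
    "legacy expert system (rich databases, shallow inference)":     (900.0, 2.0),
    "stateless LLM (strong reasoning, thin persistent self-model)": (5.0, 400.0),
    "LLM + memory + reflection (both factors raised together)":     (600.0, 300.0),
}

for name, (r, p) in systems.items():
    print(f"{name}: R={r:g}, P={p:g}, FCS={fcs(r, p):g}")

# Because R and P multiply rather than add, raising both together gives the
# dramatic jump the text describes: 180000 versus 1800 and 2000.
```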

The third innovation is tractability. Unlike IIT's Φ, which cannot be calculated for any system of practical interest, FCS is directly computable for white-box systems and estimable for black-box systems through behavioral analysis. You can actually use it.
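One way the white-box/black-box distinction could look as an interface. This is a hypothetical sketch under stated assumptions, not FC's published estimation procedure: measure_r, measure_p, and the per-probe estimates are placeholders for whatever instrumentation or behavioral test battery you actually have.

```python
from typing import Protocol

class WhiteBoxSystem(Protocol):
    """A system whose internal state and traces you can inspect directly."""
    def measure_r(self) -> float: ...  # hypothetical hook: score self-models from inspectable state
    def measure_p(self) -> float: ...  # hypothetical hook: score reasoning from inspectable traces

def fcs_white_box(system: WhiteBoxSystem) -> float:
    # Direct computation: read R and P off the architecture itself.
    return system.measure_r() * system.measure_p()

def fcs_black_box(probe_estimates: list[tuple[float, float]]) -> float:
    # Estimation: infer R and P from behavior alone, here by averaging
    # hypothetical per-probe (R, P) estimates from a battery of tasks.
    r = sum(e[0] for e in probe_estimates) / len(probe_estimates)
    p = sum(e[1] for e in probe_estimates) / len(probe_estimates)
    return r * p
```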

What FC does not answer

FC does not answer whether AI can suffer. That question belongs to phenomenal consciousness, which FC explicitly sets aside. A system could score 13.9 million FC points and feel nothing — or feel everything. FC cannot distinguish between these cases, and anyone who claims otherwise is overclaiming.

FC does not answer whether human consciousness is fully captured by access consciousness. Most contemplative traditions, and many philosophers, would say it isn't. The hard problem remains hard. FC is a precise, useful, honest answer to a specific subset of the consciousness question — not to the whole question.
