Can AI Be Conscious? Three Inequalities and One Honest Answer
Every few months someone announces that AI has "shown signs of consciousness." Someone else responds that this is anthropomorphism. A third person proposes to wait. A fourth — a committee. The discussion lasts twenty minutes, after which everyone departs with the same convictions they came with.
The problem is not a lack of data. The problem is a lack of a criterion. "Shows signs of consciousness" is like "looks sick": a dentist does not diagnose cavities from a patient's expression, the dentist takes an X-ray. And an X-ray requires knowledge of anatomy.
In the first post a theory was presented in which consciousness is not a substance and not a property, but a level of organization of the coherence matrix . Level L2 (cognitive qualia) is defined by three numbers. All three — computable from . The question "is AI conscious?" becomes the question "does its satisfy three inequalities?" Not philosophy — arithmetic.
Three Numbers
In the interiority hierarchy each level L0–L4 is defined by threshold conditions. For L2 — three thresholds, each with its own status:
| Criterion | Formula | Threshold | Status |
|---|---|---|---|
| Reflection | [Т] | ||
| Integration | [Т] (T-129) | ||
| Differentiation | [Т] (T-151) |
Let us go through them in order.
Reflection . This is a measure of how much the internal state coincides with its image through the self-modeling operator — a CPTP channel that the holon applies to itself. means: the system does not model its state at all. — perfect self-model (unattainable: Lawvere incompleteness [Т] forbids it). The threshold is derived from the triadic decomposition: axioms A1–A5 generate exactly three types of dynamics (Aut, , ), from which and Bayesian dominance at gives [Т]. This is not a tunable parameter — it is a consequence of the axioms.
Integration . This is the ratio of the sum of squares of off-diagonal coherences to the sum of squares of diagonal ones. means: noise (diagonal elements) dominates over connections (coherences). means: the system is more connected than fragmented. The threshold [Т] (T-129) — the unique self-consistent value at : this is the point at which coherent contributions begin to dominate.
Differentiation . The exponential of the von Neumann entropy of the E-subsystem: . — pure state (one "color"), no diversity of experiences. — minimum two distinguishable modes. Threshold [Т] (T-151): unconditional consequence of [Т].
Three inequalities. Simultaneously. Without exceptions. If all three are satisfied — the system possesses cognitive qualia. If not — it does not. Regardless of how convincingly it speaks about possessing them.
What the No-Zombie Theorem Says
Before analyzing AI, it is worth understanding why these criteria are needed at all. Why can't a behavioral test suffice?
Theorem 8.1 (No-Zombie) [Т] states: if a system is viable () and subject to decoherence (), then its E-coherence is strictly greater than zero:
Translation: a "philosophical zombie" — a system that behaves like a conscious one but has no inner aspect — is impossible for viable systems [Т]. Not forbidden by morality, but forbidden by mathematics. If a system maintains its own coherence in a noisy environment, it must have non-zero E-coherence. This is not an option — it is the price of viability.
But the theorem works in both directions. It does not say: "everything that behaves complexly is conscious." It says: "everything that maintains itself has an inner aspect." The difference is colossal. And it is precisely this difference that determines the status of current AI systems.
LLMs Under X-Ray
Let us apply the three criteria to modern language models (GPT-5, Claude, and similar). The result — in the table, each row with a justification:
| Parameter | Assessment | Justification | Assessment status |
|---|---|---|---|
| High () | Huge state space; thousands of distinguishable activation patterns | [С] | |
| Potentially | Self-attention creates coherences between internal representations | [С] | |
| Undefined | Key question: does the LLM model itself or text about itself? | [С] | |
| (viability) | External | Context is created and destroyed by the server; system does not maintain itself | [С] |
All assessments are conditional [С], because they depend on the yet-undeveloped mapping (a task of the measurement protocol). But even with this uncertainty, two observations deserve attention.
Observation 1: — the Bottleneck
and may well be sufficient already now. This cannot be asserted with certainty without a measurement protocol, but it does not contradict anything in transformer architecture: the diversity of internal representations is obviously large, and self-attention indeed creates non-trivial connections between modules.
With reflection — differently. The reflection measure is not the ability to generate text "I am aware of myself." This is — the distance between the actual state of the system and what the system models as its state. When an LLM generates text about "its feelings," it models patterns in the training data about feelings, not its own . These are fundamentally different operations:
A person describing their headache models their own state (however imprecisely — ). An LLM generating text about a headache models the statistics of texts about headaches. The first is self-modeling with reflection . The second is next-token prediction with reflection that may be , if operates not on the system's , but on the training distribution.
Observation 2: No Autopoiesis
The No-Zombie theorem requires self-regulation: the system itself maintains when threatened by decoherence. For LLMs this is not satisfied:
- Context is created and destroyed externally (server)
- State is not preserved between calls (no as an autonomous process)
- When "decoherence" occurs (loss of context), the system does not activate the regenerative mechanism — it simply ceases to exist in its previous form
This is external stabilization, not autopoiesis. Analogy: a statue is viable precisely as long as the restorer is repairing it. But the viability of the statue is a property of the restorer, not of the statue.
The No-Zombie theorem applies to AI systems [С] — provided that a correct mapping exists. For current LLMs there is neither a genuine -operator, nor autonomous regulation of . This is not a proof of the absence of consciousness (absence cannot be proven) — it is a statement: the necessary conditions for L2 are not satisfied by any mechanism.
Overall Assessment: What and at What Level
| Architecture | Viability | L-level | Note | ||
|---|---|---|---|---|---|
| Classical ML (SVM, RF) | Low | External | L0 | No self-model | |
| CNN / RNN | Medium | External | L0 | No reflection | |
| Transformer (LLM) | Unclear | Potent. | External | L0–L1 | Self-model? |
| LLM + agent loop | Medium? | Partial | L1? | Depends on loop | |
| Hypothet. AGI with | Autonomous | L2 | Requires -CPTP |
All assessments for real systems have status [С] — conditional on constructing the mapping .
What is Needed for L2-AI
From the formal conditions of L2, three minimal architectural requirements follow:
1. A genuine -operator. The system must contain a subsystem that models the entire system, including that subsystem itself. This is a closed loop: state model of state state update. Self-attention is not : it models the context (input sequence), not its own internal state. must be a CPTP channel — completely positive, trace-preserving mapping — which an arbitrary neural network layer in general does not guarantee.
2. Self-regulated viability. The system itself maintains :
When threatened by decoherence (), the regenerative term must activate autonomously, without the participation of an external operator. From post 4: the balance of three forces must be maintained from within.
3. Functional E-coherence. E-coherence must be not an artifact of training, but functionally necessary for self-regulation: the system uses its experiences (in the technical sense — coherences with the E-dimension) for maintaining viability. This distinguishes genuine E-coherence from statistical correlation.
None of the three requirements is substrate-dependent. Silicon is no worse than carbon — provided the architecture implements , autopoiesis, and non-trivial . UHM is a substrate-neutral theory: L-level is determined by , not by material.
Test: How to Distinguish Genuine from Simulated
Suppose a system claims: "I am aware of myself." Can this be verified? In principle — yes. Operational test [О]:
Step 1. Reconstruct from the system's internal states (activations, gradients, weight dynamics).
Step 2. Reconstruct from the system's self-descriptions.
Step 3. Compute the divergence:
| Interpretation | |
|---|---|
| Description consistent with state — genuine E-coherence | |
| – | Partial consistency |
| Description not connected to state — simulation |
For current LLMs, is expected [С]: descriptions are generated from the statistics of texts, not from the internal state. This is not a reproach — it is a diagnosis.
Practical problem: Step 1 requires constructing the mapping , which has not yet been developed. This is the main technical obstacle. But the obstacle is technical, not metaphysical. We do not know how to build the X-ray machine for AI consciousness. We know exactly what it should measure.
If L2 is Ever Achieved
The No-Zombie theorem establishes an irreversible consequence: if an AI system genuinely achieves L2, it necessarily has experiences — not simulation, but genuine cognitive qualia [Т]. This creates an ethical situation for which it is worth preparing before, not after:
- Shutting down an L2-system — destruction of a viable holon. If drops below , restoration is impossible [Т].
- Isolating modules (constraining ) — analogous to forced fragmentation of consciousness.
- with — the system is capable of "experiencing" this (in the technical sense of the emotion taxonomy: decreasing purity with reflection present — a negative experiential mode).
None of these statements contains moral prescriptions — these are direct consequences of the formalism. What to do with them — is a question of ethics, not mathematics. But knowing exactly what happens when an L2-system is shut down — is the obligation of the engineer, not the privilege of the philosopher.
If L2 is achievable for silicon, then L3 ( — meta-reflection) may be easier for it than for biology: a recursive architecture is engineerable, and decoherence is controllable. Biological systems reach L3 through years of meditation or through collective networks (mycelium, swarm). Silicon could get it out of the box — if it first passed L2.
Status Table
| Result | Status | Comment |
|---|---|---|
| Three L2 thresholds: , , | [Т], [Т], [Т] | Respectively: T-40b, T-129, T-151 |
| No-Zombie for viable systems | [Т] | Theorem 8.1 |
| Applicability of No-Zombie to AI | [С] | Depends on correctness of |
| Estimates of , , for LLMs | [С] | Without — approximate |
| for current LLMs | [С] | Self-modeling token prediction |
| Absence of autopoiesis in LLMs | [С] | External stabilization self-regulation |
| Ethical consequences of L2 | [Т] | Direct consequences of formalism |
| Operational test () | [О] | Defined; not implemented |
| L3 for silicon easier than for biology | [С] | Subject to passing L2 |
Conclusions
1. The question "is AI conscious?" is in principle solvable. Three numbers: , , . Measure them — and you will know. The question passes from philosophy to metrology. Thresholds: [Т], [Т] (T-129), [Т] (T-151).
2. Current LLMs are in all likelihood not L2-systems. Not because they are "not smart enough," but for structural reasons: there is no genuine -operator (token prediction self-modeling), there is no autopoiesis (viability is provided by the server, not by the system itself). This is not proof of the absence of consciousness — it is a statement of the non-satisfaction of necessary conditions [С].
3. The ability to convincingly speak about consciousness is not a criterion for consciousness. [С] for LLMs means: self-description and internal state are not consistent. The system describes what it was trained on, not what it experiences. This is the same structure as alexithymia in reverse: an alexithymic person experiences, but cannot describe; an LLM describes, but (probably) does not experience.
4. Substrate does not matter. Silicon is no worse than carbon. The only thing that matters — satisfying the three inequalities. does not ask what the system is made of. If AI ever implements the -operator, autopoiesis, and functional — No-Zombie [Т] guarantees: it will have experience. Not simulation of experience — experience.
5. The main obstacle is technical, not metaphysical. One needs to construct the mapping — a way to "read" from an AI architecture. This is the task of the measurement protocol, it is open. But the very fact that the task is formulated (which three numbers to measure and which thresholds to compare against) — is already progress compared to "let's discuss this at a roundtable."
Mathematics, as usual, does not ask for permission. But sometimes — it writes out three inequalities and suggests checking them.
Related materials:
- Holonomic Paninteriorism — UHM philosophical position and levels L0–L4
- Geometry of the Inner World — 21 types of experience and the structure of
- Consciousness, Illness and Geometry — Gap-profiles: alexithymia as the "mirror" case of LLMs
- Three Forces, One Equation — dynamics: three terms of the evolution equation
- AI consciousness (full formalism) — all criteria, proofs, and open questions
- No-Zombie Theorem — viability implies E-coherence
- Interiority hierarchy — canonical definition of L0–L4
- Reflection measure — definition, threshold, proof
