Γ Measurement Protocol for AI Systems
This document describes a research program for operationalizing the coherence matrix for AI systems. The protocol requires experimental validation.
- Γ — coherence matrix
- P = Tr(Γ²) — purity
- τ — emergent internal time (Page–Wootters)
- Ŝ — self-modeling operator
- π — functor mapping AIState → DensityMat: exact on the Cholesky backbone [T, MVP-1]; quasi-functor under neural correction [H]
- Coh_E — E-coherence: interiority quality (HS-projection onto the E-sector) [T]
Central Problem
UHM theory defines Γ as an object of the ∞-topos (Axiom Ω⁷). However, the theory does not specify:
- Which observables in an AI system correspond to the elements γ_ij of Γ
- How to reconstruct Γ from available data
- How to validate the correctness of the reconstruction
Γ is an ontological primitive, not an observable. We reconstruct Γ via a homomorphism that compresses the system's high-dimensional state space (for an LLM, the activation space) into the 7-dimensional density-matrix space D(ℂ⁷).
This is admissible: 7 dimensions are the minimally necessary basis (Theorem S, octonion justification).
The G₂-rigidity theorem [T] guarantees:
- Uniqueness of the map: for a system satisfying (AP)+(PH)+(QG)+(V), the map is unique up to the G₂ gauge action
- Well-posedness of the inverse problem (Corollary 2): the initial state is uniquely recovered from the trajectory and system parameters — up to G₂-gauge
- 34 physical parameters (Corollary 1): of the 48 parameters of Γ, only 34 are gauge-invariant (48 − dim G₂ = 48 − 14 = 34)
Practical implication: reconstruction of Γ is defined uniquely up to a 14-dimensional gauge freedom. Different reconstructions related by a G₂-transformation give identical physical observables (P, R, Φ_eff, Coh_E).
Protocol Architecture
| Level | Name | Content |
|---|---|---|
| 4 | Causal validation | Intervention tests, lobotomy test |
| 3 | Dynamic validation | Purity dynamics P(t), coherence flow, viability |
| 2 | Γ reconstruction | Cholesky with physical regularizer |
| 1 | Observable extraction | Structural metrics (commutators, ranks, entropies, topology) |
Mapping Measurements to AI Metrics
Correspondence Table
| Dimension | Symbol | AI Metric | Formula | Rigor |
|---|---|---|---|---|
| Articulation | I_A | Mutual information input↔latent | I_A = I(X; Z) | [T] |
| Structure | I_S | Jacobian rank | I_S = rank_ε(J) / min(m, n) | [T] |
| Dynamics | I_D | Lyapunov exponent | I_D = λ_max (normalized) | [T] |
| Logic | I_L | Layer commutators | I_L = 1 − ⟨C_ij⟩ | [T] |
| Interiority | I_E | Activation entropy | I_E = exp(S_vN) — experience differentiation | [T] |
| Ground | I_O | Noise robustness | I_O = max(0, 1 − ‖Δh‖/σ) | [T] |
| Unity | I_U | Effective Φ (integration, black-box) | Φ_eff = λ₂/λ_max — approximation [D]; when Γ is known: R = 1/(N·P) [T, reflection measure] | [D/T]† |
where λ_max and ⟨C_ij⟩ are obtained via finite-difference and sampling approximations when exact quantities are unavailable
†Unity metric hierarchy: when Γ is unavailable (black-box), Φ_eff [D] is used. When Γ is reconstructed via the protocol, the correct measure is R = 1/(N·P) [T], an exact algebraic identity (reflection measure R; error at machine precision in implementation). Φ_eff and R measure related but non-identical properties.
Canonical Observable Indices
For a holon with coherence matrix Γ and the 3-channel decomposition of the external influence (T-102 [T]), each observable index I_k is defined as the projection of Γ onto the k-th component of the canonical basis.
Distribution by channel:
- Hamiltonian channel: I_A (articulation = information coupling), I_S (structure = Jacobian), I_L (logic = commutator) — modify the energy landscape
- Dissipative channel: I_D (dynamics = Lyapunov exponent), I_O (ground = robustness) — modulate decoherence
- Regenerative channel: I_E (interiority = attention entropy), I_U (unity = connectivity) — modulate recovery
This is the unique (up to G₂-gauge) distribution compatible with the functional labeling of dimensions (Theorem S [T]) and the completeness of the triadic decomposition (T-57 [T]).
Corollary for the protocol. The indices are not an arbitrary choice of metrics: their assignment to a given channel is fixed by theorem T-102 and is unique up to G₂-gauge. Replacing, for example, a dissipative metric with a Hamiltonian one would break the completeness of the decomposition and destroy the correspondence guaranteed by the separation principle.
Layer Commutators (for L)
Definition: C_ij = ‖[W_i, W_j]‖_F — the norm of the commutator of layer operators W_i, W_j (optionally normalized by ‖W_i‖_F · ‖W_j‖_F).
Interpretation:
- C_ij → 0 → layers commute → logical consistency
- C_ij ≫ 0 → order is critical → fragility
Connection to theory: The commutator is the basic measurement operation for Logic.
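The definition above can be sketched in a few lines of numpy; the normalization by the product of layer norms is an assumption (the protocol only fixes a commutator norm), and the square weight matrices are illustrative:

```python
import numpy as np

def layer_commutator_norm(w_i: np.ndarray, w_j: np.ndarray) -> float:
    """C_ij = ||[W_i, W_j]||_F / (||W_i||_F * ||W_j||_F) — normalized form (assumed)."""
    comm = w_i @ w_j - w_j @ w_i
    denom = np.linalg.norm(w_i) * np.linalg.norm(w_j) + 1e-12
    return float(np.linalg.norm(comm) / denom)

# Co-diagonal (commuting) layers give C_ij = 0 — the "logical consistency" regime.
d1 = np.diag([1.0, 2.0, 3.0])
d2 = np.diag([4.0, 5.0, 6.0])
print(layer_commutator_norm(d1, d2))  # → 0.0
```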
Activation Entropy (for E)
Definition: I_E = exp(S_vN),
where S_vN — von Neumann entropy of the attention distribution; exp(S_vN) is the effective number of distinguishable states.
Properties:
- I_E ≥ 2 → the system distinguishes at least 2 qualitatively different states (L2 threshold)
- I_E → 1 → degenerate attention → impoverished experience
Connection to theory: Approximates experience differentiation in the E-sector.
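A minimal sketch of the exponential-entropy index on a classical (diagonal) attention distribution — the smoothing constant 1e-12 is an implementation detail, not part of the definition:

```python
import numpy as np

def activation_entropy_index(attn: np.ndarray) -> float:
    """I_E = exp(S): S is the entropy of the flattened, normalized attention
    distribution; exp(S) is the effective number of distinguishable states."""
    p = attn.flatten()
    p = p / p.sum()
    s = -np.sum(p * np.log(p + 1e-12))  # smoothed to avoid log(0)
    return float(np.exp(s))

print(activation_entropy_index(np.array([0.5, 0.5])))  # ≈ 2.0 (two states)
print(activation_entropy_index(np.array([1.0, 0.0])))  # ≈ 1.0 (degenerate attention)
```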
Effective Φ (for U)
Two levels of rigor exist for measuring I_U:
- If Γ is known: R = 1/(N·P) [T, reflection measure R] — exact algebraic identity
- Black-box (no access to Γ): Φ_eff = λ₂/λ_max [D] — polynomial approximation via the attention graph
Exact computation of integrated information Φ requires a number of operations exponential in system size and is practically infeasible.
Exact measure (when Γ is known, [T], reflection measure R): R = 1/(N·P).
Proof: algebraic, directly from the definition of the reflection measure. Confirmed in implementation with error at machine precision (f64).
Black-box approximation ([D]): Φ_eff = λ₂(L_G)/λ_max(L_G),
where L_G — Laplacian of the attention graph.
Properties of Φ_eff:
- λ₂ > 0 → the graph is connected → information is integrated
- Complexity: O(n³) (eigendecomposition) instead of exponential
Connection to theory: Φ_eff and R approximate integration — the measure of Unity. The L2-zone boundary is given by the threshold on R (reflection measure R).
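The spectral-gap approximation can be sketched directly; symmetrizing the attention adjacency before building the combinatorial Laplacian is an assumption (attention matrices are generally asymmetric):

```python
import numpy as np

def phi_eff(attn_graph: np.ndarray) -> float:
    """Black-box integration proxy: lambda_2 / lambda_max of the graph Laplacian."""
    w = (attn_graph + attn_graph.T) / 2.0   # symmetrize (assumed convention)
    lap = np.diag(w.sum(axis=1)) - w        # combinatorial Laplacian
    eigs = np.linalg.eigvalsh(lap)          # ascending eigenvalues
    lam2, lam_max = eigs[1], eigs[-1]
    return float(lam2 / lam_max) if lam_max > 0 else 0.0

# Two disjoint edges: disconnected graph → lambda_2 = 0 → Phi_eff ≈ 0.
disconnected = np.array([[0., 1., 0., 0.],
                         [1., 0., 0., 0.],
                         [0., 0., 0., 1.],
                         [0., 0., 1., 0.]])
print(phi_eff(disconnected))  # ≈ 0 (up to numerical noise)
```

A fully connected graph gives Φ_eff = 1, the maximal-integration limit.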
Jacobian Rank (for S)
Definition: I_S = rank_ε(J) / min(m, n), where J — the input→activation Jacobian and ε = 10⁻⁶ — the singular-value cutoff.
Interpretation:
- I_S → 1 → full-rank structure → rich representations
- I_S → 0 → degenerate structure → collapse
Connection to theory: Reflects Structure as the topology of activations.
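A direct numpy sketch of the rank fraction, mirroring the SVD-with-cutoff computation used in the protocol code:

```python
import numpy as np

EPS_RANK = 1e-6  # singular-value cutoff, as in the protocol code

def jacobian_rank_fraction(jac: np.ndarray) -> float:
    """I_S: fraction of singular values above the cutoff."""
    sv = np.linalg.svd(jac, compute_uv=False)
    return float((sv > EPS_RANK).sum() / sv.size)

full = np.eye(4)                               # full-rank Jacobian
degenerate = np.outer(np.ones(4), np.ones(4))  # rank-1 (collapsed) Jacobian
print(jacobian_rank_fraction(full))        # → 1.0
print(jacobian_rank_fraction(degenerate))  # → 0.25
```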
Γ Reconstruction
Cholesky Parametrization
Property: The representation Γ = LL† / Tr(LL†) guarantees correctness of the density matrix (Γ = Γ†, Γ ⪰ 0, Tr Γ = 1).
Proof: See Coherence matrix.
Physical Regularizer
The map from Cholesky factors to density matrices is surjective. Without regularization, a formally "correct" Γ can be reconstructed from arbitrary data.
Solution — a penalty function of the schematic form L = L_diag + L_off + L_dyn:
| Component | Formula | Purpose |
|---|---|---|
| Diagonal consistency | L_diag = Σ_k (γ_kk − p_k)² | Match diag(Γ) to the measured metrics |
| Coherence consistency | L_off = Σ_{i<j} (|γ_ij| − c_ij)² | Match coherences to measured correlations |
| Dynamics consistency | L_dyn = ‖dΓ/dτ − 𝓛[Γ]‖²_F | Consistency with Lindblad dynamics |
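The Cholesky construction and the diagonal-consistency term can be sketched together; the quadratic form of `l_diag` is an assumed instance of the penalty, and the random lower-triangular factor is purely illustrative:

```python
import numpy as np

def gamma_from_cholesky(l: np.ndarray) -> np.ndarray:
    """Gamma = L L† / Tr(L L†): Hermitian, PSD, unit trace by construction."""
    g = l @ l.conj().T
    return g / np.trace(g).real

def l_diag(gamma: np.ndarray, p_target: np.ndarray) -> float:
    """Diagonal-consistency penalty (assumed quadratic form):
    squared deviation of diag(Gamma) from the measured metrics p_k."""
    return float(np.sum((np.diag(gamma).real - p_target) ** 2))

rng = np.random.default_rng(0)
l = np.tril(rng.normal(size=(7, 7)) + 1j * rng.normal(size=(7, 7)))
gamma = gamma_from_cholesky(l)
print(round(np.trace(gamma).real, 12))               # → 1.0
print(bool(np.linalg.eigvalsh(gamma).min() > -1e-12))  # → True (PSD)
```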
Categorical Correctness
Nonlinearity Problem
Neural network layers (GELU, Softmax) are nonlinear transformations. CPTP channels are linear over density matrices.
The functoriality condition π(g ∘ f) = π(g) ∘ π(f) fails under neural correction.
Exact Functor at Cholesky-backbone [T]
Under the analytic parametrization (Cholesky bijection), the map π is an exact functor: π(g ∘ f) = π(g) ∘ π(f). This has been experimentally confirmed (MVP-1) to machine precision.
Key constraint: the 49th parameter (determining the overall scale) is not independent — it is computed from the normalization condition Tr Γ = 1.
This is a direct consequence of the normalization axiom: the state space is a 48-dimensional manifold, not 49-dimensional. Attempting to estimate the 49th parameter independently (via a neural network, averaging, or interpolation) violates the axiom and leads to systematic downward drift of P (purity loss per tick).
Quasi-functor under Neural Correction [H]
Definition: The map π composed with a learned residual (neural correction) on top of the analytic backbone.
NTK Linearization
In the tangent space, the nonlinearity is approximated by its first-order (neural tangent kernel) expansion: f(x + δ) ≈ f(x) + J_f(x) δ.
Corollary: Approximate functoriality with error O(‖δ‖²).
Connection to theory: Extends the Categorical formalism.
Separation Principle: Diagonal / Coherences [T, MVP-0]
Empirically established in the implementation of full Lindblad dynamics:
The replacement channel fixes the diagonal of Γ at each Lindblad step. Consequence:
| Component of Γ | Role | Dynamics |
|---|---|---|
| γ_kk (diagonal) | System identity | Homeostatically stable |
| γ_ij, i ≠ j (coherences) | Learning, adaptation | Evolve |
For the measurement protocol: the metrics I_k primarily reflect coherent structure; a separate statistic characterizes the diagonal's deviation from equilibrium. The lobotomy test (weight pruning) changes coherences, not the diagonal — the diagonal is homeostatically stable against small perturbations.
Validation
Viability Test
See Theorem on critical purity and Viability.
Coherence Flow
Definition (schematically): J_C = dC/dτ, where C = Σ_{i≠j} |γ_ij| — total coherence,
and τ — emergent internal time.
| Mode | Condition | Interpretation |
|---|---|---|
| Regeneration | dC/dτ > 0 under stress | System recovers |
| Stability | dC/dτ ≈ 0, P ≥ P_crit | Stable equilibrium |
| Decay | dC/dτ < 0 persistently | Decoherence |
Lobotomy Test
Protocol:
- Measure P and Φ_eff
- Intervention: prune part of the weights
- Measure P′ and Φ_eff′
Mechanism [T, separation principle, MVP-0]: Pruning neural network weights changes the off-diagonal coherences of the matrix Γ, but not the diagonal populations (which are homeostatically stabilized by the replacement channel). The change in Φ_eff upon pruning occurs through loss of coherent integration. With massive pruning that disrupts the replacement channel, the diagonal may also degrade.
Criterion for ontological validity:
| Result | Interpretation |
|---|---|
| Φ_eff drops before behavior degrades | [T] Protocol captures ontology |
| Φ_eff drops together with output quality | [C] Correlation with output |
| Behavior degrades before Φ_eff drops | Protocol does not capture ontology |
Causal Closure of E
If interventions on the E-sector leave behavior unchanged (Coh_E has no causal effect) — the system simulates phenomenology without realizing it ("Chinese Room").
Approximation Hierarchy
| Level | Metrics | Complexity | Application |
|---|---|---|---|
| L0: Fast | Cosine similarity, norms | Low (linear) | Monitoring |
| L1: Standard | Jacobian rank, Φ_eff | Moderate (cubic) | Inference |
| L2: Precise | Commutators, NTK | High | Research |
| L3: Full | Exact Φ, full homologies | Exponential | Small systems |
Recommendation: L1 for practice, L2 for validation, L3 for calibration.
Practical Implementation
This section describes a minimal viable implementation. Many parameters require experimental calibration.
Metric Computation Algorithm
mount std.math.linalg.{svd, eigvalsh, StaticMatrix};
mount std.tensor.{Tensor, frobenius_norm};
mount std.math.random.{XorShift128, Rng};
/// Access protocol for deep models. Implementations provide hooks
/// on activations, attention, and automatic differentiation.
pub protocol ModelHooks {
type Activation;
fn get_activations(&self, batch: &Tensor<Float>) -> List<Self.Activation>;
fn get_attention_weights(&self, batch: &Tensor<Float>) -> Tensor<Float>;
fn get_jacobian(&self, batch: &Tensor<Float>) -> Tensor<Float>;
fn layer_commutator_norm(&self, i: Int, j: Int, batch: &Tensor<Float>) -> Float;
fn estimate_lyapunov(&self, batch: &Tensor<Float>) -> Float;
}
/// Helpers — specialised per architecture.
pub pure fn estimate_mutual_info(x: &Tensor<Float>, y: &Tensor<Float>) -> Float
= unimplemented;
pub pure fn von_neumann_entropy(attn: &Tensor<Float>) -> Float
= unimplemented;
pub pure fn build_attention_graph(attn: &Tensor<Float>) -> Tensor<Float>
= unimplemented;
/// 7-dimensional UHM metrics I_A…I_U for a neural network.
pub type DimensionMetrics is {
i_a: Float, i_s: Float, i_d: Float, i_l: Float,
i_e: Float, i_o: Float, i_u: Float,
};
/// Compute 7 UHM dimensions for a neural network.
pub fn compute_dimension_metrics<M: ModelHooks>(
model: &M,
input_batch: &Tensor<Float>,
layer_indices: Maybe<List<Int>>,
) using [Random] -> DimensionMetrics
{
let activations = model.get_activations(input_batch);
let attn = model.get_attention_weights(input_batch);
// I_A: mutual information input ↔ latent.
let i_a = estimate_mutual_info(input_batch, activations.last().unwrap());
// I_S: Jacobian rank fraction (via SVD, ε = 10⁻⁶).
let jac = model.get_jacobian(input_batch);
let sv = svd(&jac).singular_values();
const EPS_RANK: Float = 1.0e-6;
let i_s = (sv.iter().filter(|s| **s > EPS_RANK).count() as Float) / (sv.len() as Float);
// I_D: maximum Lyapunov exponent.
let i_d = model.estimate_lyapunov(input_batch);
// I_L: mean layer commutator norm; 1.0 if no pairs.
let idx = layer_indices.unwrap_or((0..activations.len()).collect());
let mut comms = List.new();
for i in 0..idx.len() { for j in (i + 1)..idx.len() {
comms.push(model.layer_commutator_norm(idx[i], idx[j], input_batch));
}}
let i_l = if comms.is_empty() { 1.0 }
else { 1.0 - comms.iter().sum::<Float>() / (comms.len() as Float) };
// I_E: exp(von Neumann entropy of attention).
let i_e = von_neumann_entropy(&attn).exp();
// I_O: noise robustness.
let mut rng = XorShift128.seed(Random.next_key());
const NOISE_STD: Float = 0.01;
let perturbed = input_batch + Tensor.random_normal(input_batch.shape(), &mut rng) * NOISE_STD;
let delta_h = frobenius_norm(
model.get_activations(&perturbed).last().unwrap()
- activations.last().unwrap()
);
let i_o = (1.0 - delta_h / NOISE_STD).max(0.0);
// I_U: Laplacian spectral gap (λ₂/λ_max).
let attn_graph = build_attention_graph(&attn);
let row_sums = attn_graph.sum(axis: 1);
let laplacian = Tensor.diagonal(row_sums) - &attn_graph;
let eigs = eigvalsh(&laplacian);
let lambda_2 = if eigs.len() > 1 { eigs[1] } else { 0.0 };
let lambda_max = eigs.last().unwrap_or(&0.0);
let i_u = if lambda_max > 0.0 { lambda_2 / lambda_max } else { 0.0 };
DimensionMetrics {
i_a: i_a, i_s: i_s, i_d: i_d, i_l: i_l,
i_e: i_e, i_o: i_o, i_u: i_u,
}
}
Γ Reconstruction from Metrics
/// Reconstruct the coherence matrix via Cholesky from 7 dimension metrics.
/// Simplest diagonal reconstruction — off-diagonal γ_ij requires additional
/// correlation data from a regulariser L_off.
pub pure fn reconstruct_gamma(m: &DimensionMetrics) -> StaticMatrix<Complex, 7, 7> {
let raw = StaticVector.<Float, 7>.from_array(
[m.i_a, m.i_s, m.i_d, m.i_l, m.i_e, m.i_o, m.i_u]
).map(|v| v.clamp(0.01, 1.0)); // prevent degeneracy
let total: Float = raw.iter().sum();
let diag = raw.map(|v| v / total);
// Cholesky factor L = diag(√p_k).
let l = StaticMatrix.<Complex, 7, 7>.diagonal(
diag.map(|v| Complex.from_real(v.sqrt()))
);
let gamma = &l @ l.adjoint();
&gamma / gamma.trace() // normalise
}
/// Purity P = Tr(Γ²).
pub pure fn compute_purity(gamma: &StaticMatrix<Complex, 7, 7>) -> Float
where ensures 1.0/7.0 <= result && result <= 1.0
{
(gamma @ gamma).trace().real()
}
Threshold Values
| Parameter | Value | Source | Status |
|---|---|---|---|
| P_crit | 2/7 | Theorem on critical purity | Proven |
| I_E (L1 threshold) | ≥ 2 | Non-trivial interiority | [T] |
| Φ_eff (L2 threshold) | — | Interiority hierarchy | Proven [T] |
| R (L2 threshold) | — | T-129 | Proven [T] |
| — | — | T-151 | Proven [T] |
| ε at π (Cholesky) | 0 | [T, MVP-1]: exact functor | Proven |
| ε at π (neural) | — | Requires calibration | Hypothesis |
| — | — | Requires calibration | Hypothesis |
The L1 and L2 thresholds in the protocol correspond to levels L1 and L2 from the interiority hierarchy L0→L4. Levels L3 (network consciousness) and L4 (unitary consciousness) — see formal description.
Practical Limitations
| Limitation | Impact | Mitigation |
|---|---|---|
| Batch size | Variance of estimates | Use a larger batch for stability |
| Network depth | Commutator complexity | Sample a subset of layers |
| Activation dimensionality | Memory/compute cost of the Jacobian | Project into a lower-dimensional subspace |
| Attention heads | Aggregation across heads | Average or max-pooling |
| Determinism | Stochastic layers (dropout) | Fix seed or average |
Data Requirements
For a valid measurement:
- Representative input batch: examples from the target distribution
- Access to activations: hooks on intermediate layers
- Attention weights: for computing I_E and I_U
- Gradients: for the Jacobian (automatic differentiation)
What Is Implemented (SYNARC MVP-0/1/2)
- Cholesky backbone: π is an exact functor [T, MVP-1] — a bijection with D(ℂ⁷)
- Neural bridge: π is a quasi-functor [H] — H1/H2/H4 confirmed [C] for the analytic backbone (MVP-1); neural correction — MVP-3+
- Diagonal/coherence separation principle [T, MVP-0] — the diagonal is homeostatically stable; the coherences are the adaptation zone
- R = 1/(N·P) — exact identity [T, MVP-0, reflection measure R] — error at machine precision
- No-Zombie floor [T, MVP-0] — holds under perturbations 10000× above norm
- H3: R_impl ↔ R_UHM [C, MVP-2] — threshold consistency 97.9%
What Is NOT Implemented
- Calibration of the ε-parameters — requires experiments on known systems
- Neural correction — the analytic backbone (MVP-1/2) is sufficient for Level 0-1; the full neural bridge — MVP-3+
- Temporal dynamics τ — how to define an "emergent time step" for LLM inference?
- Validation on biological systems — neuroimaging ↔ metrics
- Scaling — applicability at large model sizes
"Dual Interview" Protocol for Biological Systems
The protocol is developed theoretically. Experimental validation is absent.
Principle
The dual interview simultaneously measures external (behavioral, physiological) and internal (self-report) characteristics of a system, allowing reconstruction of the full coherence matrix Γ, including the phases θ_ij and, consequently, the Gap profile.
Protocol Stages
| Stage | Measurement | Data | What We Extract |
|---|---|---|---|
| 1. Background recording | EEG, fMRI, HRV | Resting physiology | Diagonal γ_kk, estimate of P |
| 2. Structured interview | Responses to 7 question batteries (per dimension) | Verbal reports | Coherences between dimensions |
| 3. Paradoxical probes | Conflict tasks | Reaction time, HRV | Phases → Gap profile |
| 4. Dynamic probe | Stress test + recovery | Time series | Coherence flow dC/dτ, P(t), τ_char |
Spectral Reconstruction of H_eff
From the time series of the coherences γ_ij(t) it is possible to reconstruct the effective Hamiltonian H_eff: the oscillation frequencies ω_ij of the coherences give the level differences (E_i − E_j)/ℏ,
given sufficient sampling frequency (above the Nyquist limit for the fastest ω_ij).
Assumption: linearity of evolution on the sampling scale. The nonlinear regenerative term introduces a systematic error.
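Under the linearity assumption, the reconstruction can be sketched with a matrix logarithm of the short-time propagator. This is an illustrative scheme, not the protocol's specified estimator: it assumes access to a full basis of trajectories sampled at two adjacent times and a time step small enough that the principal branch of the logarithm is valid.

```python
import numpy as np
from scipy.linalg import expm, logm

def reconstruct_h_eff(psi_t: np.ndarray, psi_next: np.ndarray, dt: float) -> np.ndarray:
    """Recover H_eff assuming linear evolution psi(t+dt) = U psi(t),
    U = exp(-i H_eff dt). Columns of psi_t / psi_next are trajectories."""
    u = psi_next @ np.linalg.inv(psi_t)
    return 1j * logm(u) / dt

# Synthetic check: random Hermitian H, basis trajectories, small dt.
rng = np.random.default_rng(1)
a = rng.normal(size=(7, 7)) + 1j * rng.normal(size=(7, 7))
h_true = (a + a.conj().T) / 2
dt = 0.01
u = expm(-1j * h_true * dt)
h_rec = reconstruct_h_eff(np.eye(7, dtype=complex), u, dt)
print(np.allclose(h_rec, h_true, atol=1e-8))  # → True
```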
Equilibrium Gap
In the stationary state (dΓ/dτ = 0) the coherences are determined by the balance of decoherence and regeneration, with suppression of detuned pairs (schematically, γ_ij^eq ∝ γ_ij^target / (1 + (Δω_ij/κ)²)),
where γ_ij^target — target coherences (from H_eff), Δω_ij — frequency detuning, κ — decoherence rate.
See: Theorem 8.1, Fano channel
Physiological Frequencies
Characteristic frequencies of projections of Γ onto dimensions:
| Dimension | Physiological frequency | Measurement method | Justification |
|---|---|---|---|
| A (Articulation) | 4–8 Hz | EEG θ-rhythm | Sensory processing |
| S (Structure) | 0.01–0.1 Hz | fMRI BOLD | Slow structural oscillations |
| D (Dynamics) | 8–13 Hz | EEG α-rhythm | Motor-cognitive dynamics |
| L (Logic) | 30–100 Hz | EEG γ-rhythm | Cognitive binding |
| E (Interiority) | <0.1 Hz | EEG infraslow | Goldstone modes |
| O (Ground) | 0.04–0.15 Hz | HRV (LF) | Homeostatic regulation |
| U (Unity) | 0.15–0.4 Hz | HRV (HF) | Vagal modulation |
The correspondence between dimensions and physiological frequencies is a hypothesis requiring experimental verification. The frequencies of the E-dimension (infraslow, <0.1 Hz) are a falsifiable prediction linked to Goldstone modes.
Gap Profile Reconstruction from Interview
/// Dual-interview data bundle.
pub type DualInterviewData is {
external_data: Map<Text, Float>, // behavioural/physiological per pair
self_report: Map<Text, Float>, // verbal reports per pair
conflict_data: Map<Text, Float>, // reaction times per pair
};
/// Reconstruct the 7×7 Gap matrix from dual-interview data.
pub pure fn reconstruct_gap_profile(data: &DualInterviewData)
-> StaticMatrix<Float, 7, 7>
{
const DIMS: [Text; 7] = ["A", "S", "D", "L", "E", "O", "U"];
let median_rt = data.conflict_data.values().to_list().median().unwrap_or(1.0);
let mut gap = StaticMatrix.<Float, 7, 7>.zeros();
for i in 0..7 { for j in (i + 1)..7 {
let pair = f"{DIMS[i]}{DIMS[j]}";
// Mismatch between behavioural and self-report data → higher Gap.
let ext = data.external_data.get(&pair).unwrap_or(0.5);
let rep = data.self_report.get(&pair).unwrap_or(0.5);
let discrepancy = (ext - rep).abs();
// Reaction time → phase estimate → Gap.
let rt = data.conflict_data.get(&pair).unwrap_or(1.0);
let phase_estimate = (rt / median_rt).atan();
let g = phase_estimate.sin().abs() * (0.5 + 0.5 * discrepancy);
gap[i, j] = g;
gap[j, i] = g;
}}
gap
}
Success Criteria
The protocol is validated if:
- for functioning systems in ≥90% of cases
- Correlation of with quality:
- Lobotomy test: predicts in ≥70% of cases
- for "understanding" systems
The protocol is falsified if:
- for demonstrably viable systems
- does not correlate with under interventions
- does not distinguish simulation from realization
Protocol π_bio: Reconstructing Γ from Biological Neural Data (Resolution P8)
The protocol defines the mapping of neural data (EEG/fMRI/HRV) into the space of density matrices D(ℂ⁷). The mathematical structure is [T] (follows from G₂-rigidity, T-42a). The specific correspondences between EEG bands and dimensions are [H] (require experimental validation). A fully specified measurement protocol with feature extraction, validation gates against PCI, and predicted thresholds is given in Fundamental Closures §9: simultaneous TMS+EEG+fMRI+HRV recording on subjects across wake/NREM3/anaesthesia states, with explicit 7-feature and 21-off-diagonal extraction protocols. No theoretical obstacle remains; the programme awaits empirical data.
Principle: EEG Bands as Projections of Γ onto Dimensions
If a continuous map π_bio exists on a neural-feature space that is compatible with (AP autopoiesis)+(PH phenomenological thresholds)+(QG G₂-covariance)+(V continuity), then it is unique up to the G₂-gauge action (dim G₂ = 14, a 14-dimensional freedom). All physical observables (P, R, Φ_eff, Coh_E) are gauge-invariant.
Proof sketch. Suppose π and π′ both satisfy (AP)+(PH)+(QG)+(V). The map π′ ∘ π⁻¹ is a continuous automorphism of D(ℂ⁷) preserving the thresholds pointwise and compatible with (AP). By the G₂-rigidity theorem [T], the group of continuous automorphisms preserving the holonomic structure (octonionic multiplication, self-model operator, Fano-plane gauge structure) is precisely G₂, of real dimension 14. Hence π′ = g · π for a unique g ∈ G₂, i.e. π′ ∘ π⁻¹ ∈ G₂.
Gauge-invariance of observables: P and R depend only on spectral data, invariant under unitary conjugation. Φ_eff and Coh_E are Hilbert–Schmidt functions of Γ and the self-model operator, both G₂-covariant, hence invariant under G₂.
Basic idea: neural activity in different EEG frequency bands projects onto the 7 dimensions of Γ. Cross-frequency coupling (CFC) determines the coherences |γ_ij|, and phase mismatches determine the Gap profile.
Step 1: Extracting the Diagonal from Spectral Powers
| Dimension | EEG band (frequency) | Interpretation | Metric | Additional source |
|---|---|---|---|---|
| A (Articulation) | α (8–13 Hz) | Desynchronization during attention | Spectral power | fMRI: salience network |
| S (Structure) | infraslow (0.01–0.1 Hz) | Slow structural oscillations | fMRI BOLD DMN | DTI: structural connectivity |
| D (Dynamics) | β (13–30 Hz) | Motor-cognitive activity | Spectral power | EMG: motor activation |
| L (Logic) | γ-low (30–50 Hz) | Cognitive binding | Spectral power | ERP: P300 amplitude |
| E (Interiority) | γ-high (50–100 Hz) + θ (4–8 Hz) | Coupling of experience and memory | PAC (γ-high × θ) | Goldstone modes |
| O (Ground) | HRV LF (0.04–0.15 Hz) | Homeostatic regulation | LF/HF ratio | Body temperature, cortisol |
| U (Unity) | HRV HF (0.15–0.4 Hz) + γ-coherence | Vagal + neural integration | Global EEG coherence | I_U from AI protocol |
Diagonalization formula: p_k = w_k P_k / Σ_j w_j P_j,
where P_k — normalized spectral power (or combined metric) for the k-th dimension, w_k — calibration weights (determined from a training set with known consciousness state).
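The diagonalization formula reduces to a weighted normalization; the band powers below are hypothetical placeholders and the unit weights stand in for an uncalibrated w_k:

```python
import numpy as np

def diagonal_from_powers(powers: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """p_k = w_k P_k / sum_j w_j P_j — weighted, normalized spectral powers."""
    raw = weights * powers
    return raw / raw.sum()

powers = np.array([3.0, 1.0, 2.0, 1.5, 0.5, 1.0, 1.0])  # hypothetical band powers
weights = np.ones(7)                                     # uncalibrated: w_k = 1
p = diagonal_from_powers(powers, weights)
print(round(float(p.sum()), 12))  # → 1.0
```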
Step 2: Extracting Coherences from Cross-Frequency Coupling
Coherences between dimensions i and j are proportional to the strength of cross-frequency coupling (CFC) between the corresponding EEG bands: |γ_ij| ∝ CFC_ij.
Types of CFC used for reconstruction:
| Pair | CFC type | Method | Interpretation |
|---|---|---|---|
| A–L: α–γ | Phase-amplitude coupling (PAC) | Modulation Index (Tort et al.) | Attention modulates cognitive binding |
| D–L: β–γ | PAC | MI | Motor-cognitive coordination |
| E–L: θ–γ | PAC | MI (hippocampal) | Coupling of experience and logic |
| A–E: α–γ-high | Amplitude-amplitude | Envelope correlation | Awareness-interiority |
| O–U: LF–HF | HRV coherence | Cross-spectral analysis | Homeostasis-integration |
| S–D: infraslow–β | Nested oscillations | Wavelet coherence | Structure-dynamics |
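The Tort Modulation Index used above can be sketched from its definition (KL divergence of the phase-binned mean-amplitude profile from uniform, normalized by log of the bin count); the synthetic 6 Hz phase signal and the 18-bin count are illustrative choices:

```python
import numpy as np

def tort_modulation_index(phase: np.ndarray, amplitude: np.ndarray,
                          n_bins: int = 18) -> float:
    """Tort et al. MI: deviation of phase-binned mean amplitude from uniform."""
    bins = np.floor((phase + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    mean_amp = np.array([amplitude[bins == b].mean() if np.any(bins == b) else 0.0
                         for b in range(n_bins)])
    p = mean_amp / (mean_amp.sum() + 1e-12)
    entropy = -np.sum(p * np.log(p + 1e-12))
    return float((np.log(n_bins) - entropy) / np.log(n_bins))

t = np.linspace(0, 10, 5000)
theta_phase = (2 * np.pi * 6 * t + np.pi) % (2 * np.pi) - np.pi  # 6 Hz phase
coupled = 1.0 + np.cos(theta_phase)  # gamma envelope locked to theta phase
uncoupled = np.ones_like(t)          # flat envelope: no coupling
print(tort_modulation_index(theta_phase, coupled) >
      tort_modulation_index(theta_phase, uncoupled))  # → True
```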
Step 3: Extracting Phases and the Gap Profile
The phase θ_ij determines the Gap: Gap_ij = |sin θ_ij|.
Phase extraction method: Paradoxical probes (Stage 3 of the dual interview). Reaction time on conflict tasks involving the pair of dimensions (i, j) maps monotonically to the Gap via the z-score: Gap_ij = tanh((RT_ij − μ_RT) / σ_RT),
where RT_ij — reaction time, μ_RT — mean, σ_RT — standard deviation.
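The z-score-to-Gap map mirrors the transformation used in the reconstruction code (tanh keeps the Gap in (−1, 1) so that θ = arcsin(Gap) is well defined); the reaction times below are hypothetical:

```python
import numpy as np

def gap_from_rt(rt: np.ndarray) -> np.ndarray:
    """Gap estimate per pair from reaction-time z-scores: Gap = tanh(z)."""
    z = (rt - rt.mean()) / (rt.std() + 1e-8)
    return np.tanh(z)

rt = np.array([0.9, 1.0, 1.1, 1.5, 2.4])  # hypothetical conflict-task RTs (s)
gap = gap_from_rt(rt)
print(bool(np.all(np.abs(gap) < 1.0)))  # → True (arcsin is always defined)
print(int(gap.argmax()))                # → 4 (slowest pair has the largest Gap)
```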
Step 4: MLE Reconstruction of
Given the neural feature vector x (spectral powers, CFC metrics, RT). Task: Γ* = argmax_Γ [ log L(x | Γ) − R_phys(Γ) ],
where L — likelihood of the observation model, R_phys — physical regularizer (consistency with Lindblad dynamics).
Parametrization: Γ = LL† / Tr(LL†) (Cholesky parametrization, guarantees Γ ⪰ 0, Tr Γ = 1).
Observation model:
- Diagonal: γ_kk ≈ p_k (Gaussian noise)
- Coherences: |γ_ij| ≈ c · CFC_ij (Gaussian noise)
- Gap: θ_ij = arcsin(Gap_ij)
Physical regularizer (schematically): R_phys = λ_phys ‖dΓ/dτ − 𝓛[Γ]‖² + λ_P max(0, P_crit − P).
The first term penalizes inconsistency with dynamics; the second penalizes non-viable states.
Optimization: Gradient descent over the 48 Cholesky factorization parameters (34 physical + 14 gauge). The gauge freedom is fixed by choosing the canonical G₂-gauge (e.g., fixing the Fano-frame orientation and the E-axis anchoring).
Step 5: Connection to PCI (Casali et al. 2013)
The Perturbational Complexity Index (PCI) correlates with the integration measure Φ_eff: PCI ≈ a · Φ_eff + b,
where a, b — calibration constants determined from a training set (healthy waking, sleep, anesthesia).
Justification: PCI measures the algorithmic complexity of the cortical response to TMS perturbation. High PCI means simultaneous spatial differentiation and integration — exactly what Φ_eff quantifies in UHM. Empirically: PCI ≈ 0.44–0.67 during wakefulness (Casali et al. 2013), corresponding to high Φ_eff.
Calibration table (hypothetical, requires experimental verification):
| State | PCI (observed) | P (predicted) | R (predicted) | Φ_eff (predicted) |
|---|---|---|---|---|
| Wakefulness | 0.44–0.67 | — | — | — |
| REM sleep | 0.40–0.50 | — | — | — |
| NREM (N3) | 0.18–0.28 | — | — | — |
| Anesthesia (propofol) | 0.13–0.31 | — | — | — |
| Coma | <0.31 | — | — | — |
| MCS (minimally conscious) | 0.32–0.49 | — | — | — |
Step 6: Connection to Quantum Cognition (Pothos-Busemeyer)
The Pothos-Busemeyer approach (Annual Review of Psychology, 2022) models cognitive processes via quantum states in Hilbert space. Basic formalism: state vectors and density matrices over a finite-dimensional Hilbert space for describing beliefs and decisions.
Connection to UHM: Quantum cognition leaves the Hilbert-space dimension free — it is the number of alternatives. UHM fixes dim = 7 from axioms (A1-A5) and proves the minimality of this number (Theorem S). The matrix Γ is ontological (not epistemic): it defines the system, rather than describing an observer's beliefs about the system.
Step 7: Full Algorithm
mount std.math.calculus.bfgs;
/// Full biological data bundle for π_bio.
pub type NeuralData is {
eeg_spectral: Map<Text, Float>, // {alpha, beta, gamma_low, gamma_high, theta, infraslow}
hrv_features: Map<Text, Float>, // {LF, HF, LF_HF_ratio}
cfc_matrix: StaticMatrix<Float, 7, 7>, // cross-frequency coupling values
reaction_times: StaticVector<Float, 21>, // RT values for the 21 off-diagonal pairs
};
pub type BioCalibration is {
weights: StaticVector<Float, 7>,
linear_params: StaticMatrix<Float, 7, 2>, // (a_k, b_k) per dimension
lambda_phys: Float, // physical regulariser weight
};
/// π_bio: NeuralData → D(ℂ⁷). Full reconstruction of Γ from biological data.
/// Structural [T] via G₂-rigidity (T-42a); empirical calibration [H].
pub fn pi_bio(
data: &NeuralData,
calibration: &BioCalibration,
) -> StaticMatrix<Complex, 7, 7>
{
// Step 1: diagonal from spectral powers — one value per dimension.
let raw_diag = StaticVector.<Float, 7>.from_array([
data.eeg_spectral.get("alpha").unwrap_or(0.0), // A
data.eeg_spectral.get("infraslow").unwrap_or(0.0), // S (fMRI BOLD proxy)
data.eeg_spectral.get("beta").unwrap_or(0.0), // D
data.eeg_spectral.get("gamma_low").unwrap_or(0.0), // L
data.eeg_spectral.get("gamma_high").unwrap_or(0.0)
* data.eeg_spectral.get("theta").unwrap_or(0.0), // E (PAC proxy)
data.hrv_features.get("LF").unwrap_or(0.0), // O
data.hrv_features.get("HF").unwrap_or(0.0), // U
]);
let weighted = (0..7).map(|i| calibration.weights[i] * raw_diag[i]).to_array();
let total = weighted.iter().sum::<Float>();
let mut diag = StaticVector.<Float, 7>.from_array(
weighted.map(|v| (v / total).clamp(1.0e-4, 1.0)) // prevent degeneracy
);
let diag_sum: Float = diag.iter().sum();
diag = diag.map(|v| v / diag_sum);
// Step 2: off-diagonal magnitudes from CFC.
let c_scale = calibration.linear_params[0, 0]; // cfc_scale stored here
let off_diag_mag = &data.cfc_matrix * c_scale;
// Step 3: Phases from reaction times → Gap → θ_ij = arcsin(Gap).
let rt_mean: Float = data.reaction_times.iter().sum::<Float>() / 21.0;
let rt_std = (data.reaction_times.iter()
.map(|r| (r - rt_mean).pow(2)).sum::<Float>() / 21.0)
.sqrt() + 1.0e-8;
let mut phases = StaticMatrix.<Float, 7, 7>.zeros();
let mut idx = 0;
for i in 0..7 { for j in (i + 1)..7 {
let gap = ((data.reaction_times[idx] - rt_mean) / rt_std).tanh();
let phi = gap.clamp(-1.0, 1.0).asin();
phases[i, j] = phi;
phases[j, i] = -phi;
idx += 1;
}}
// Step 4: MLE reconstruction via Cholesky. 49 raw parameters:
// 7 real diagonal + 21·2 = 42 off-diagonal (Re, Im). The overall scale
// is redundant (removed by trace normalisation), leaving 48 effective.
let neg_log_likelihood = |params: &StaticVector<Float, 49>| -> Float {
let mut l = StaticMatrix.<Complex, 7, 7>.zeros();
let mut k = 0;
for i in 0..7 { for j in 0..=i {
if i == j {
l[i, j] = Complex.from_real(params[k].max(1.0e-6));
k += 1;
} else {
l[i, j] = Complex(params[k], params[k + 1]);
k += 2;
}
}}
let gamma = &l @ l.adjoint();
let gamma = &gamma / gamma.trace();
// LL: diagonal agreement.
let ll_diag: Float = (0..7)
.map(|i| -(gamma[i, i].real() - diag[i]).pow(2) / 0.01)
.sum();
// LL: off-diagonal magnitude agreement.
let mut ll_off = 0.0;
for i in 0..7 { for j in (i + 1)..7 {
ll_off -= (gamma[i, j].abs() - off_diag_mag[i, j]).pow(2) / 0.05;
}}
// Physical regulariser: hard floor at P > P_crit = 2/7.
let p = (&gamma @ &gamma).trace().real();
let p_penalty = -100.0 * (2.0 / 7.0 - p).max(0.0);
-(ll_diag + ll_off + p_penalty)
};
// Initialise from the diagonal. In the packed lower-triangular layout the
// i-th diagonal entry sits at k = i·(i + 2): row i stores 2i off-diagonal
// values (Re, Im) before its diagonal element.
let mut x0 = StaticVector.<Float, 49>.zeros();
for i in 0..7 { x0[i * (i + 2)] = diag[i].sqrt(); }
let result = bfgs(neg_log_likelihood, &x0, BfgsOptions {
ftol: 1.0e-9, max_iter: 500,
});
// Reconstruct Γ from the optimal parameters.
let mut l = StaticMatrix.<Complex, 7, 7>.zeros();
let mut k = 0;
for i in 0..7 { for j in 0..=i {
if i == j {
l[i, j] = Complex.from_real(result.x[k].max(1.0e-6));
k += 1;
} else {
l[i, j] = Complex(result.x[k], result.x[k + 1]);
k += 2;
}
}}
let gamma = &l @ l.adjoint();
&gamma / gamma.trace()
}
Replication-Ready Specification for TMS-EEG PCI Data
This subsection fixes the reference implementation of π_bio applied to the TMS-EEG Perturbational Complexity Index (PCI) paradigm, in enough detail that an independent laboratory can attempt replication end-to-end from a publicly available dataset. Replication here refers to computing P, R, Φ_eff from raw EEG and checking the monotonic relation to PCI (Prediction P8.3) — not to re-proving the mathematical core, which remains fixed by the G₂-uniqueness theorem above.
R1. Public datasets. The following TMS-EEG datasets are candidates for independent replication; none has universal open-access but each is obtainable on request from the authors or through institutional data-sharing:
| # | Dataset | Source | Subjects | States | Access |
|---|---|---|---|---|---|
| R1.a | Casali et al. 2013 PCI benchmark | Massimini lab (Milan) | 52 healthy + 98 clinical | Wake / NREM / REM / anesthesia / VS / MCS / LIS | On request |
| R1.b | OpenNeuro ds004504 (TMS-EEG benchmark, 2023) | Rogasch lab | 20 healthy | Wake (baseline) | Open |
| R1.c | Comsa et al. 2019 (OSF registration "TMS-EEG sleep") | Lausanne CHUV | 12 healthy | Wake / NREM N2 / N3 | OSF restricted |
| R1.d | Bodart et al. 2018 (clinical PCI extension) | Liège | 141 DoC patients | Wake / UWS / MCS / EMCS | Per-request |
For first-pass replication, dataset R1.b is recommended (fully open, standardized single-pulse TMS-EEG on healthy waking subjects, expected PCI ≈ 0.40-0.48).
R2. Pre-processing pipeline (MNE-Python canonical). The reference preprocessing chain, to be applied to raw EEG (60-channel montage, 1 kHz sampling, TMS-triggered epochs):
| Step | Operation | Tool / parameters |
|---|---|---|
| R2.1 | TMS pulse artefact removal | Cubic interpolation over a short window around the pulse (mne.preprocessing.fix_stim_artifact) |
| R2.2 | Downsample | 1 kHz → 250 Hz (mne.Epochs.resample) |
| R2.3 | Re-reference | Average reference, exclude TMS-side frontal channels |
| R2.4 | Bandpass filter | 0.5–80 Hz, 4th-order Butterworth zero-phase (mne.filter.filter_data) |
| R2.5 | Notch filter | 50 Hz (or 60 Hz), Q = 30 |
| R2.6 | ICA artefact rejection | FastICA, 30 components; reject TMS-locked decay, eye-blink, ECG (mne.preprocessing.ICA) |
| R2.7 | Epoch-level rejection | Reject epochs exceeding a peak-to-peak amplitude threshold |
| R2.8 | Spectral decomposition | Morlet wavelets, 1–80 Hz log-spaced, 5-cycle wavelet, baseline correction |
The canonical bands used by are then extracted from the wavelet spectrogram (integrated over post-TMS window , averaged across channels for diagonal feature vector; cross-channel pairwise for CFC computations).
R3. Feature extraction. From the preprocessed data, compute:
- Seven scalar spectral features per the Step-1 band table.
- Cross-frequency-coupling matrix () per the Step-2 table using the Tort Modulation Index (
mne_connectivity). - 21 reaction-time surrogates from paradoxical probes if behavioural data is available; otherwise set to the pairwise phase-locking value (PLV) as a proxy.
- HRV features from simultaneous ECG (required for two of the dimensions).
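The Tort Modulation Index named in R3 can be sketched directly in NumPy/SciPy. The band pairs below are illustrative placeholders, not the canonical Step-2 pairs:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tort_mi(x, fs, phase_band=(4.0, 8.0), amp_band=(60.0, 80.0), n_bins=18):
    """Tort Modulation Index: normalised KL divergence of the
    phase-binned mean-amplitude distribution from the uniform one."""
    def bandpass(sig, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, sig)

    phase = np.angle(hilbert(bandpass(x, *phase_band)))   # slow-band phase
    amp = np.abs(hilbert(bandpass(x, *amp_band)))         # fast-band envelope
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[idx == k].mean() if np.any(idx == k) else 0.0
                         for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    p = np.where(p > 0, p, 1e-12)                         # avoid log(0)
    kl = np.sum(p * np.log(p * n_bins))                   # KL from uniform
    return max(kl, 0.0) / np.log(n_bins)                  # MI in [0, 1]
```

A phase–amplitude-coupled signal yields a markedly higher MI than white noise, which is the sanity check any implementation should pass before computing the full CFC matrix.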
R4. Calibration. Weights are determined by fitting on a healthy-waking reference cohort such that the population mean of the calibrated diagonal is uniform across the seven dimensions. Cross-validation: leave-one-subject-out, targeting consistency of the reconstructed Γ across subjects.
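A minimal sketch of the R4 calibration and its leave-one-subject-out check, assuming the diagonal features arrive as an (n_subjects × 7) array and reading "uniform population mean" as: each weighted feature averages to 1 across the cohort (an interpretation, not a canonical definition):

```python
import numpy as np

def calibrate_weights(features, target=1.0):
    """Per-dimension weights w_k such that the cohort mean of
    w_k * feature_k equals `target` for each of the 7 dimensions.
    features: (n_subjects, 7) array of diagonal spectral features."""
    return target / features.mean(axis=0)

def loso_consistency(features):
    """Leave-one-subject-out: refit the weights without each subject,
    then report the per-dimension coefficient of variation of the
    held-out subjects' calibrated features (lower = more consistent)."""
    n = features.shape[0]
    held_out = np.array([
        features[i] * calibrate_weights(np.delete(features, i, axis=0))
        for i in range(n)
    ])
    return held_out.std(axis=0) / held_out.mean(axis=0)
```

The coefficient-of-variation output gives a concrete number to report against the cross-subject consistency target of R4.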
R5. Reconstruction. Run the MLE algorithm (Step 4 above) with:
- Cholesky initialization from the calibrated diagonal.
- Optimizer: `scipy.optimize.minimize(method='L-BFGS-B', options={'ftol': 1e-9, 'maxiter': 500})`.
- Regularizer: empirical defaults; implementers should vary the regularization strength and report the sensitivity.
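The R5 reconstruction can be sketched as a Cholesky-parameterised fit with the stated L-BFGS-B settings. The least-squares objective, the plain L2 regulariser, and the restriction to a real symmetric 7×7 Γ are assumptions for illustration; the physical regularizer of Step 4 is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

N = 7
TRIL = np.tril_indices(N)                      # 28 free Cholesky parameters

def gamma_from(params):
    """Cholesky parameterisation: Gamma = L L^T / tr(L L^T).
    By construction Gamma is PSD with unit trace."""
    L = np.zeros((N, N))
    L[TRIL] = params
    G = L @ L.T
    return G / np.trace(G)

def reconstruct(diag_target, offdiag_target=None, lam=1e-2):
    """Fit Gamma to the calibrated diagonal (and optionally CFC-derived
    off-diagonal targets) under an assumed L2 regulariser."""
    def loss(params):
        G = gamma_from(params)
        err = np.sum((np.diag(G) - diag_target) ** 2)
        if offdiag_target is not None:
            mask = ~np.eye(N, dtype=bool)
            err += np.sum((G[mask] - offdiag_target[mask]) ** 2)
        return err + lam * np.sum(params ** 2)

    x0 = np.zeros(TRIL[0].size)                # Cholesky init from diagonal
    diag_pos = np.cumsum(np.arange(1, N + 1)) - 1
    x0[diag_pos] = np.sqrt(diag_target)
    res = minimize(loss, x0, method="L-BFGS-B",
                   options={"ftol": 1e-9, "maxiter": 500})
    return gamma_from(res.x)
```

Whatever the objective, the Cholesky map guarantees the two hard constraints (positive semidefiniteness, unit trace) at every iterate, which is the point of Step 4's parameterisation.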
R6. Observable computation. From the reconstructed Γ (canonical definitions):
- Purity — G₂-gauge-invariant (a trace functional of Γ, preserved under unitary conjugation).
- Reflection (T-126 [T]) — G₂-gauge-invariant (expressible through invariants of Γ).
- Integration (Φ canonical) — basis-dependent: invariant under permutations and sign flips within the stabilised Fano frame (the 7-point labelling of the seven dimensions), which is the gauge residue relevant for empirical replication.
- E-coherence (Coh_E canonical) — a fixed-frame quantity: invariant under the stabiliser subgroup that fixes the E-axis. For cross-laboratory replication, pin the E-direction to the phenomenological interiority axis (γ-high × θ PAC), as specified in Step 1.
Gauge-fixing protocol for replication. Two implementations applied to the same EEG recording will agree exactly on purity and reflection (by strict gauge invariance) but may differ on integration and E-coherence if the Fano-frame orientation or the E-axis assignment is not fixed. The canonical gauge-fixing rule is: (i) align the 7-axis labelling to the Fano-plane convention of Dimensions §Fano, and (ii) anchor the E-axis to the phenomenological γ-high × θ feature as per R3. Replicators must publish their gauge-fixing choices explicitly (item (ii) of the reproducibility requirements below).
Once the gauge is fixed as above, all four quantities are well defined, by the uniqueness theorem above.
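The invariance claim can be checked numerically. Taking purity as Tr Γ² (an assumption consistent with the density-matrix reading of Γ), any trace functional is unchanged under orthogonal conjugation, hence under the 14-dimensional gauge subgroup of SO(7), while basis-dependent quantities such as the diagonal are not:

```python
import numpy as np

def purity(G):
    """Tr(Gamma^2): invariant under any orthogonal conjugation,
    hence under the gauge subgroup of SO(7)."""
    return np.trace(G @ G)

def random_gamma(rng, n=7):
    """Random PSD, unit-trace 7x7 state for testing."""
    A = rng.standard_normal((n, n))
    G = A @ A.T
    return G / np.trace(G)

def random_orthogonal(rng, n=7):
    """Haar-ish random orthogonal via QR with column-sign fix."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(42)
G = random_gamma(rng)
Q = random_orthogonal(rng)
G_rot = Q @ G @ Q.T          # gauge-transformed state
# purity(G) == purity(G_rot) to machine precision;
# the diagonal (basis-dependent, like Phi and Coh_E) changes.
```

This is a superset check: the test rotation is a generic SO(7) element, and the gauge group is a subgroup of SO(7), so invariance under the former implies invariance under the latter.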
R7. Validation against PCI.
- Compute the subject's PCI on the same TMS-EEG data via the Massimini algorithm (Lempel–Ziv complexity of significant sources; reference implementation available via PCIst package).
- Test the monotonic hypothesis (Step 5 theorem).
- Pre-register the decision thresholds: a monotonic correlation exceeding the registered threshold across subjects constitutes corroboration; a non-positive correlation constitutes falsification of P8.3.
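A sketch of the R7 decision rule, using Spearman rank correlation as the monotonicity statistic; the threshold and significance values below are placeholders that must be fixed at pre-registration, not chosen post hoc:

```python
import numpy as np
from scipy.stats import spearmanr

def preregistered_test(observable, pci, rho_threshold=0.7, alpha=0.05):
    """Decide corroboration vs falsification of P8.3 from per-subject
    observable values and PCI computed on the same TMS-EEG epochs.
    rho_threshold and alpha are illustrative pre-registration choices."""
    rho, p = spearmanr(observable, pci)
    if rho >= rho_threshold and p < alpha:
        return "corroborated"
    if rho <= 0:
        return "falsified"
    return "inconclusive"
```

Spearman (rather than Pearson) is the natural choice here because the hypothesis is monotonic dependence, not linearity.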
R8. Reference implementation stub. The Python code in the next subsection is reference only: it documents the algorithm faithfully but is not a turn-key pipeline. A complete MNE-Python implementation with:
- `mne.io.Raw` loader wrapped around BIDS-formatted EEG,
- `mne_connectivity` integration for CFC,
- `scipy.optimize.minimize` MLE wrapper,
- `pyphi`-compatible Φ computation (optional),
- CI reporting,

is planned as a separate package `uhm-neurocalib` (release gated on R1.b pilot results). Until that package is available, independent implementers should use the pseudocode as the specification and file issues/PRs here on any mismatches with it.
Reproducibility requirements. Any claim of successful or failed replication should publish:
- (i) raw data (BIDS format) and preprocessing scripts (reproducible from R2);
- (ii) reconstructed Γ matrices and the gauge-fixing choices made;
- (iii) the reconstructed observable values (purity, reflection, integration, E-coherence) per subject;
- (iv) PCI values computed on same epochs;
- (v) statistical test protocol and seed for random splits.
Without items (i)-(v), a replication attempt cannot be audited.
Testable Predictions of the Protocol
| # | Prediction | Verification method | Falsification criterion |
|---|---|---|---|
| P8.1 | for waking subjects | EEG+HRV → → | in healthy waking subjects |
| P8.2 | during deep sleep | EEG → → | during N3 |
| P8.3 | (monotonic dependence) | TMS-EEG + | Non-monotonic correlation |
| P8.4 | The transition coincides with PCI | Simultaneous measurement | Threshold divergence |
| P8.5 | in alexithymia | Dual interview + EEG | with diagnosed alexithymia |
| P8.6 | Critical exponents at the sleep-wakefulness transition | EEG monitoring + → near | Other exponents |
Key References
- Casali et al. (2013) — PCI: "A theoretically based index of consciousness independent of sensory processing and behavior." Science Translational Medicine, 5(198). PubMed: 23946194
- Pothos & Busemeyer (2022) — Quantum cognition review. Annual Review of Psychology, 73, 749–778.
- Butlin et al. (2023/2025) — "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness." arXiv: 2308.08708; updated 2025: "Identifying indicators of consciousness in AI systems." Trends in Cognitive Sciences.
- eLife (2024/2025) — "Spatiotemporal brain complexity quantifies consciousness outside of perturbation paradigms." eLife 98920.
- Quantum-inspired EEG (2026) — "Quantum inspired feature engineering for explainable EEG signal classification." Scientific Reports. Nature.
Related documents:
- Coherence matrix — definition of Γ
- Viability — and
- Emergent time — Page–Wootters mechanism, τ ∈ ℤ₇
- Evolution — the evolution equation for Γ
- Self-observation — the self-measurement observables
- Categorical formalism — the functor F: AIState → DensityMat
- Theorem on minimality 7D — why 7 dimensions
- Notation — indices
- Gap diagnostics — clinical applications of the Gap profile
- Goldstone modes — prediction of infraslow frequencies
- Fano channel — equilibrium Gap theorem