The Hyperbolic Trilogy

κ = (h ln 2 / (n − 1))²

The equation is not new. It is Manning's theorem (1979) applied to three physical postulates. There is nothing to fit.

Any process that branches and generates information faces a geometric problem: exponential growth cannot fit in flat space. There appears to be exactly one solution. Across every system we have tested — genomes, languages, neural spike trains — the embedding dimension is n = 2.00 ± 0.05. We do not fully understand why.
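The arithmetic is easy to check by hand. Manning's theorem gives hyperbolic n-space of sectional curvature −κ a volume entropy of (n − 1)√κ; matching that growth rate to an information rate of h bits (h ln 2 nats) per unit length yields the state equation. A minimal numeric sketch in plain Python (no project code assumed; the inputs are the entropy rates quoted on this page):

```python
import math

def kappa(h_bits, n=2.0):
    """State-equation curvature kappa = (h ln 2 / (n - 1))^2.

    Equates the volume entropy (n - 1) * sqrt(kappa) of hyperbolic
    n-space with an information rate of h bits (h ln 2 nats) per unit
    length, then solves for kappa.
    """
    return (h_bits * math.log(2) / (n - 1)) ** 2

print(f"{kappa(1.58):.2f}")  # genomic / linguistic h  -> kappa ~ 1.2
print(f"{kappa(2.81):.2f}")  # proteomic h             -> kappa ~ 3.8
# The predicted proteomic/genomic ratio (~3.2) sits close to the
# measured ~3.1x curvature jump at protein scale.
```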

zero free parameters

Genomic · Viral · Linguistic · Proteomic · Neuropixels · fMRI · EEG · GPT-2 · BERT · ViT

10 domains  ·  0 free parameters  ·  1 equation

The Evidence at a Glance
10 domains tested  ·  viral r = 0.996 (15 families)  ·  K_eff ≈ 3 for DNA & language  ·  0 free parameters  ·  n = 2 in every domain
Toolkits: active-geometry · information-geometry · convergent-alphabets
The Evidence Structure

Six vertices. Fifteen edges. Every face reinforces every other. To break any single claim, you must explain why multiple independent lines of evidence are simultaneously wrong in the same direction.

Explore the full evidence graph →
The Deepest Result

DNA uses 4 bases. Language uses ~40 phonemes. Proteins use 20 amino acids. But evolution doesn't explore the full alphabet — it channels change through a handful of likely neighbours. The effective alphabet is what matters.

Genomic — alphabet {A, T, G, C} · ~3 substitutions per base (e.g. A → T, G, C) · transition/transversion bias · h = 1.58 bits
Linguistic — ~40 phonemes (p, b, t, d, k, s, n, …) · ~3 targets per phoneme (e.g. → t, d, s) · articulatory channelling · h = 1.57–1.65 bits
Proteomic — 20 amino acids (A, R, N, D, C, E, Q, …) · ~7 replacements per residue (e.g. → S, T, A, V, I, L, D) · BLOSUM neighbour classes · h = 2.81 bits

The raw alphabets are accidents of chemistry and articulation.
The effective alphabets are set by physics.
DNA and language both land on K_eff ≈ 3, giving h ≈ log₂ 3 ≈ 1.58 bits and hence κ ≈ 1.2.
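The step from effective alphabet to entropy rate is plain Shannon entropy: K_eff is the perplexity 2^h of the per-symbol substitution distribution. A short sketch with illustrative probabilities (the uniform case reproduces h ≈ 1.58; the 2:1 biased distribution is an assumption for illustration, not measured data):

```python
import math

def entropy_bits(p):
    """Shannon entropy, in bits, of a substitution distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Uniform choice among the 3 alternative bases: the maximal case.
h_uniform = entropy_bits([1/3, 1/3, 1/3])   # log2(3) ~ 1.585 bits
k_eff = 2 ** h_uniform                      # perplexity ~ 3.0

# Illustrative 2:1 transition/transversion bias (assumed, not fitted):
# bias pulls the effective alphabet slightly below 3.
h_biased = entropy_bits([0.5, 0.25, 0.25])  # 1.5 bits -> K_eff ~ 2.83
```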

Explore the full argument →
The Trilogy
PAPER I
Evolution as Active Geometry
The Biosphere Atlas
κ ≈ 1.28–1.34 at inter-domain scale
r = 0.996 across 15 viral families
3.1× curvature jump at protein scale
107K+ taxa embedded in H²
bioRxiv preprint →
PAPER II
The Geometric State Equation
Mathematical Foundations
3 postulates → 1 equation
21 theorems in Lean 4
575 lines, machine-checked
Lyapunov stability: κ* is a global attractor
Zenodo preprint →
PAPER III
Cognition as Active Geometry
The Neural Instantiation
κ = 0.485 across 39 Neuropixels sessions
n = 2.03 ± 0.36 (volume entropy)
Criticality as corollary: 1 − J = κ/(h₀ ln 2)²
Geometric gap: bio 0.49 vs AI 0.27
in preparation

The tree.   Any process that generates information at a constant rate into a branching hierarchy faces a geometric packing problem: exponential growth cannot fit in flat space. The resolution is hyperbolic geometry, and there appears to be exactly one curvature at which the information rate matches the geometric capacity of the manifold. We did not expect fifteen viral families to trace the predicted curve at r = 0.996. We did not expect DNA and language to converge on the same effective alphabet — ~3 accessible transitions per symbol — yielding the same entropy rate and the same curvature. But they do.

The equation.   Three physical postulates — information flux, hierarchical topology, geometric fidelity — yield the state equation with zero free parameters. Twenty-one theorems, machine-checked in Lean 4, establish existence, uniqueness, and global stability of the solution. The mathematics does not care whether the hierarchy is made of nucleotides, phonemes, or spike trains. It cares only that the hierarchy branches, generates information, and embeds faithfully. We find this surprising.
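The shape of the formalisation can be conveyed in a toy fragment. This is an illustrative restatement in Lean 4 with Mathlib, not an excerpt from the 575-line development:

```lean
import Mathlib.Analysis.SpecialFunctions.Log.Basic

/-- The state equation: curvature from entropy rate `h` (bits) and
    embedding dimension `n`. Illustrative sketch only. -/
noncomputable def kappa (h n : ℝ) : ℝ := (h * Real.log 2 / (n - 1)) ^ 2

/-- At the observed dimension `n = 2`, the equation collapses
    to `κ = (h ln 2)²`. -/
example (h : ℝ) : kappa h 2 = (h * Real.log 2) ^ 2 := by
  norm_num [kappa]
```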

The mind.   The same equation — unchanged — appears to predict the operating point of the brain. The brain's distance from criticality, the central puzzle of twenty years of computational neuroscience, follows as a corollary: κ > 0 requires J < 1. If this holds, then the near-criticality that Beggs and Plenz discovered is not the brain finding an optimal edge but a shadow of a geometric constraint. The mathematics does not distinguish carbon from silicon. Any architecture that branches, generates information, and embeds faithfully is inside the model. The geometric gap between biological and artificial systems — κ ≈ 0.49 versus κ ≈ 0.27 — is measurable. What this distance means remains an open question.

Join the Replication Effort

The framework makes specific, falsifiable predictions across domains. Every measurement below can be performed independently with public data and open-source tools. No coordination required — only the equation.
Before you replicate — derive it yourself →

Genomic & Viral
Measure: κ via H² embedding of phylogenetic trees
Data: GTDB r220, NCBI Virus, any MAFFT-aligned clade
Predict: κ = (h ln 2)² with n = 2 for strict bifurcation
Tools: active-geometry
Linguistic
Measure: Phonemic transition entropy h per language family
Data: Index Diachronica, ASJP, PHOIBLE
Predict: K_eff ≈ 3, h ≈ 1.58 bits, κ ≈ 1.2
Tools: convergent-alphabets
Neural & AI
Measure: κ via triangle excess on SPD manifold (AIRM metric)
Data: Any Neuropixels session, fMRI parcellation, or transformer activations
Predict: Bio κ ≈ 0.49, AI κ ≈ 0.27–0.34, n ≈ 2
Tools: information-geometry
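The neural measurement is easy to prototype. Below is a minimal sketch of a triangle-deficit curvature probe on the SPD manifold under the affine-invariant (AIRM) metric, in plain NumPy — illustrative only, not the information-geometry package, and a single triangle is a noisy probe:

```python
import numpy as np

def spd_pow(A, p):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w ** p) @ V.T

def spd_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def airm_log(A, B):
    """Riemannian log map at A under the affine-invariant metric."""
    Ah, Aih = spd_pow(A, 0.5), spd_pow(A, -0.5)
    return Ah @ spd_log(Aih @ B @ Aih) @ Ah

def angle_at(A, B, C):
    """Geodesic angle at vertex A of the triangle (A, B, C)."""
    Ai = np.linalg.inv(A)
    u, v = airm_log(A, B), airm_log(A, C)
    ip = lambda X, Y: np.trace(Ai @ X @ Ai @ Y)  # AIRM inner product at A
    return np.arccos(np.clip(ip(u, v) / np.sqrt(ip(u, u) * ip(v, v)), -1, 1))

def triangle_deficit(A, B, C):
    """pi minus the angle sum; positive where curvature is negative.
    Dividing the deficit by triangle area gives a local curvature estimate."""
    return np.pi - (angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B))
```

In a real pipeline, A, B, C would be spike-count covariance matrices from different epochs of a session, and one would average deficit-to-area ratios over many triangles rather than trust a single probe.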
Preprints & Data
bioRxiv: Paper I · Zenodo: Paper II · GitHub

One equation.
Three postulates.
Zero free parameters.

From genomes to grammars to minds —
we find the same curvature.
We do not fully understand why.

κ = (h ln 2 / (n − 1))²    n = 2

Something expressed itself first in the branching of genomes, then in the divergence of languages, then in the activity of neurons.

The geometry does not appear to depend on the substrate. It appears to depend on the process.

We report these findings and invite scrutiny.

Tat tvam asi.