The equation is not new. It is Manning's theorem (1979) applied to three physical postulates. There is nothing to fit.
Any process that branches and generates information faces a geometric problem: exponential growth cannot fit in flat space. There appears to be exactly one solution. Across every system we have tested — genomes, languages, neural spike trains — the embedding dimension is n = 2.00 ± 0.05. We do not fully understand why.
Six vertices. Fifteen edges. Every claim reinforces every other. To break any single claim, you must explain why multiple independent lines of evidence are simultaneously wrong in the same direction.
DNA uses 4 bases. Language uses ~40 phonemes. Proteins use 20 amino acids. But evolution doesn't explore the full alphabet — it channels change through a handful of likely neighbours. The effective alphabet is what matters.
The raw alphabets are accidents of chemistry and articulation.
The effective alphabets are set by physics.
DNA and language both land on ~3 accessible transitions per symbol, giving h ≈ log₂ 3 ≈ 1.58 bits per symbol and κ ≈ 1.2.
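That arithmetic is easy to reproduce. The sketch below is illustrative only: the transition matrix is hypothetical, built so that each symbol reaches exactly three equally likely neighbours, and it is not the framework's estimator for any real dataset.

```python
import numpy as np

# Hypothetical first-order transition matrix over a 4-letter alphabet in which
# each symbol can move to exactly 3 neighbours with equal probability.
# The numbers are illustrative, not fitted to any data.
P = np.array([
    [0.0, 1/3, 1/3, 1/3],
    [1/3, 0.0, 1/3, 1/3],
    [1/3, 1/3, 0.0, 1/3],
    [1/3, 1/3, 1/3, 0.0],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1
# (uniform here, since P is doubly stochastic).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Entropy rate h = -sum_i pi_i sum_j P_ij log2 P_ij, in bits per symbol,
# with zero-probability transitions contributing nothing.
logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
h = float(-np.sum(pi[:, None] * P * logP))

# The "effective alphabet" is the perplexity 2^h.
print(f"h   ≈ {h:.2f} bits/symbol")        # log2(3) ≈ 1.58
print(f"2^h ≈ {2 ** h:.2f} effective symbols")
```

The same two lines apply to any alphabet; only the transition matrix changes, which is why the raw alphabet sizes above drop out and the effective alphabet is what matters.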
The tree. Any process that generates information at a constant rate into a branching hierarchy faces a geometric packing problem: exponential growth cannot fit in flat space. The resolution is hyperbolic geometry, and there appears to be exactly one curvature at which the information rate matches the geometric capacity of the manifold. We did not expect fifteen viral families to trace the predicted curve at r = 0.996. We did not expect DNA and language to converge on the same effective alphabet — ~3 accessible transitions per symbol — yielding the same entropy rate and the same curvature. But they do.
The equation. Three physical postulates — information flux, hierarchical topology, geometric fidelity — yield the state equation with zero free parameters. Twenty-one theorems, machine-checked in Lean 4, establish existence, uniqueness, and global stability of the solution. The mathematics does not care whether the hierarchy is made of nucleotides, phonemes, or spike trains. It cares only that the hierarchy branches, generates information, and embeds faithfully. We find this surprising.
The mind. The same equation — unchanged — appears to predict the operating point of the brain. The brain's distance from criticality, the central puzzle of twenty years of computational neuroscience, follows as a corollary: κ > 0 requires J < 1. If this holds, then the near-criticality that Beggs and Plenz discovered is not the brain finding an optimal edge but a shadow of a geometric constraint. The mathematics does not distinguish carbon from silicon. Any architecture that branches, generates information, and embeds faithfully is inside the model. The geometric gap between biological and artificial systems — κ ≈ 0.49 versus κ ≈ 0.27 — is measurable. What this distance means remains an open question.
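J is not defined in this section. If it is read as the avalanche branching ratio of the Beggs and Plenz literature, the subcriticality claim J < 1 can at least be sanity-checked on any binned population recording. The sketch below makes that assumption explicit and runs on synthetic data with a naive regression estimator; real analyses must correct for subsampling, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binned population activity: a driven branching process with a
# true branching ratio of 0.9 (subcritical). Illustrative only, not real data.
T, drive, j_true = 50_000, 2.0, 0.9
a = np.zeros(T)
for t in range(1, T):
    a[t] = rng.poisson(j_true * a[t - 1] + drive)

# Naive estimator: slope of the regression of A(t+1) on A(t),
# since E[A(t+1) | A(t)] = J * A(t) + drive for a driven branching process.
x, y = a[:-1], a[1:]
J_hat = np.polyfit(x, y, 1)[0]
print(f"estimated branching ratio J ≈ {J_hat:.3f}  (< 1 means subcritical)")
```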
The framework makes specific, falsifiable predictions across domains.
Every measurement below can be performed independently with public data and open-source tools. No coordination required — only the equation.
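As one illustration of what "open-source tools" can mean here: Gromov's four-point condition gives a quick, generic check of how tree-like (negatively curved) a set of pairwise distances is. To be clear, this is not the framework's κ estimator, and both inputs below are toys: leaf distances on a perfect binary tree versus a random Euclidean point cloud.

```python
import numpy as np
from itertools import combinations

def gromov_delta(D):
    """Four-point hyperbolicity delta of a distance matrix (0 for exact tree metrics).

    For every quadruple, sort the three pairwise-distance sums; delta is half the
    gap between the two largest. Exact but O(n^4), so keep n small.
    """
    delta = 0.0
    for i, j, k, l in combinations(range(len(D)), 4):
        s = sorted([D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]])
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta

def tree_dist(i, j, depth=3):
    """Leaf-to-leaf path length in a perfect binary tree with unit edge lengths."""
    if i == j:
        return 0
    common = 0
    for b in range(depth - 1, -1, -1):   # compare leaf codes from the root down
        if (i >> b) & 1 != (j >> b) & 1:
            break
        common += 1
    return 2 * (depth - common)

# A tree metric is exactly 0-hyperbolic; a generic Euclidean point cloud is not.
D_tree = np.array([[tree_dist(i, j) for j in range(8)] for i in range(8)], float)
X = np.random.default_rng(1).standard_normal((12, 3))
D_cloud = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

print("delta, binary-tree leaves :", round(gromov_delta(D_tree), 3))   # 0.0
print("delta, Euclidean cloud    :", round(gromov_delta(D_cloud), 3))  # > 0
```

The same check runs unchanged on patristic distances from a published phylogeny or on lexical distances between languages; the point is only that no proprietary tooling is needed to start.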
Before you replicate — derive it yourself →
One equation.
Three postulates.
Zero free parameters.
From genomes to grammars to minds —
we find the same curvature.
We do not fully understand why.
Something expressed itself first in the branching of genomes,
then in the divergence of languages,
then in the activity of neurons.
The geometry does not appear to depend on the substrate.
It appears to depend on the process.
We report these findings and invite scrutiny.
Tat tvam asi: thou art that.