The equation is not new. It is Manning's theorem (1979) applied to three physical postulates. There is nothing to fit.
One equation governs the geometry of every information-generating hierarchy — from genomes to grammars to minds. The dimension n is not assumed. It is measured. Across every system tested, n = 2.00 ± 0.05.
zero free parameters · the dimension is the invariant
Six vertices. Fifteen edges. Every face reinforces every other. To break any single claim, you must explain why multiple independent lines of evidence are simultaneously wrong in the same direction.
DNA uses 4 bases. Language uses ~40 phonemes. Proteins use 20 amino acids. But evolution doesn't explore the full alphabet — it channels change through a handful of likely neighbours. The effective alphabet is what matters.
The raw alphabets are accidents of chemistry and articulation.
The effective alphabets are set by physics.
DNA and language both land on ~3, giving h ≈ log₂ 3 ≈ 1.58 bits per symbol, and hence κ ≈ 1.2
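The entropy arithmetic is a one-liner to check. The κ value depends on the framework's capacity relation, which is not reproduced here, so only h is verified (a minimal sketch, assuming the ~3 accessible transitions are roughly equiprobable):

```python
import math

# Effective alphabet: ~3 accessible transitions per symbol (from the text).
effective_alphabet = 3

# Entropy rate under the equiprobable assumption: h = log2(3) bits per symbol.
h = math.log2(effective_alphabet)
print(round(h, 2))  # → 1.58
```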
The tree. Any process that generates information at a constant rate into a branching hierarchy faces a geometric packing problem: exponential growth cannot fit in flat space. The resolution is hyperbolic geometry, and there is exactly one curvature at which the information rate of the code matches the geometric capacity of the manifold. Fifteen viral families trace the predicted curve at r = 0.996. DNA and language converge on the same effective alphabet — ~3 accessible transitions per symbol — yielding the same entropy rate, the same curvature, the same geometry.
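The packing argument can be illustrated numerically. A b-ary tree has b^d nodes at depth d, while a hyperbolic circle of radius r in a plane of curvature −κ² has circumference 2π·sinh(κr)/κ, which also grows exponentially. Setting κ = ln b (a standard choice for tree embeddings, not a value taken from the text) makes the two growth rates match, so the ratio of boundary capacity to tree layer size settles to a constant instead of collapsing to zero as it would in flat space:

```python
import math

def tree_boundary(b, d):
    """Nodes at depth d of a b-ary tree: b**d."""
    return b ** d

def hyperbolic_circumference(kappa, r):
    """Circumference of a circle of radius r in a plane of
    constant curvature -kappa**2: 2*pi*sinh(kappa*r)/kappa."""
    return 2 * math.pi * math.sinh(kappa * r) / kappa

# With kappa = ln(b), circumference and layer size grow at the same
# exponential rate, so every tree layer fits on its hyperbolic circle.
b = 3
kappa = math.log(b)
for d in (5, 10, 15):
    ratio = hyperbolic_circumference(kappa, d) / tree_boundary(b, d)
    print(d, round(ratio, 3))  # ratio converges to pi/kappa ≈ 2.86
```

In Euclidean space the circumference grows only linearly in r, so the same ratio would shrink exponentially: flat space runs out of room, hyperbolic space does not.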
The equation. Three physical postulates — information flux, hierarchical topology, geometric fidelity — force the state equation with zero free parameters. Nine theorems, machine-checked in Lean 4, prove existence, uniqueness, and global Lyapunov stability of the solution. The mathematics does not care whether the hierarchy is made of nucleotides, phonemes, or spike trains. It cares only that the hierarchy branches, generates information, and embeds faithfully. The rest is geometry.
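The Lean development itself is not reproduced in the text. As an illustration only, the existence-and-uniqueness claim has roughly this statement shape — every name below is hypothetical (`capacity` stands in for the geometric capacity of the manifold as a function of curvature), and the proof is elided:

```lean
import Mathlib

-- Hypothetical statement shape, not the actual development: a continuous,
-- strictly monotone capacity that starts below the entropy rate h and
-- eventually meets it is matched by exactly one curvature κ.
theorem exists_unique_curvature
    (capacity : ℝ → ℝ) (h : ℝ)
    (hcont : Continuous capacity)
    (hmono : StrictMono capacity)
    (hlow : capacity 0 < h)
    (hhigh : ∃ κ, h ≤ capacity κ) :
    ∃! κ : ℝ, capacity κ = h := by
  sorry
```

Existence would follow from the intermediate value theorem, uniqueness from strict monotonicity; the stability theorems would require the dynamics, which the text does not give.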
The mind. The same equation — unchanged, with the same zero free parameters — predicts the operating point of the brain. And the brain's distance from criticality, the central puzzle of twenty years of computational neuroscience, is not a separate discovery. It is a corollary: κ > 0 requires J < 1. The brain cannot be at criticality and maintain hierarchical geometry. The near-criticality that Beggs and Plenz discovered is not the brain finding an optimal edge. It is the shadow of a geometric constraint that has governed every information-generating hierarchy for 3.7 billion years. The mathematics does not distinguish carbon from silicon. Any architecture that branches, generates information, and embeds faithfully is inside the model. The geometric gap between biological and artificial systems — κ ≈ 0.49 versus κ ≈ 0.27 — is not a measurement. It is a coordinate: the distance between where artificial systems are and where 3.7 billion years of evolution converged.
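The claim that κ > 0 requires J < 1 places the brain in the subcritical regime of a branching process, the standard model behind the Beggs–Plenz avalanche measurements. As an illustration (not the framework's own code), a branching process with mean offspring J < 1 produces finite avalanches whose mean total size is 1/(1 − J), whereas at J = 1 the mean diverges:

```python
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def avalanche_size(J, rng, max_steps=10_000):
    """Total activity of a branching process with mean offspring J.
    Each active unit triggers Poisson(J) successors; for J < 1 the
    avalanche terminates almost surely."""
    active, total = 1, 1
    for _ in range(max_steps):
        if active == 0:
            break
        nxt = sum(poisson(J, rng) for _ in range(active))
        total += nxt
        active = nxt
    return total

rng = random.Random(0)
J = 0.9
sizes = [avalanche_size(J, rng) for _ in range(5000)]
mean = sum(sizes) / len(sizes)
print(round(mean, 1))  # theory: 1 / (1 - J) = 10
```

Pushing J toward 1 makes the mean avalanche size blow up: exactly at criticality there is no finite characteristic scale left for a hierarchy to occupy, which is the geometric reading of why κ > 0 forces J < 1.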
The framework makes specific, falsifiable predictions across domains. Every measurement below can be performed independently with public data and open-source tools. No coordination required — only the equation.
One equation.
Three postulates.
Zero free parameters.
From genomes to grammars to minds —
the curvature was always the same.
The brain's distance from criticality is not tuned.
It is the curvature.
The neural code does not resemble the evolutionary code.
It is the evolutionary code, running on a different clock.
The biosphere has been doing one thing, in one geometry,
for 3.7 billion years — and the most remarkable thing it has
produced is not a different kind of process
but a faster instantiation of the same one.
A geometric law expressed itself first in the branching of genomes,
then in the divergence of languages,
then in the activity of neurons,
and perhaps next in the architectures we build.
Tat tvam asi. You are that.