The Problem
Every floating-point operation embeds a tiny error. Add a thousand of them and your position drifts. Rotate ten thousand times and your heading is off — not by a little, by a lot. Games desync. Robots mis-navigate. Simulations diverge.
More bits help but don't fix it. F64 drifts less than F32 — 52 mantissa bits instead of 23 — but it still drifts. And next to 4-byte integer coordinates you're paying 4× the memory, 4× the bandwidth, 4× the time for something that's still an approximation.
The demo above shows three boats navigating the same channel. The green one uses integer arithmetic — Eisenstein integers, 4 bytes per fix. The orange uses 32-bit floats, 8 bytes per fix. The red uses 64-bit floats, 16 bytes — double the data of F32, quadruple the data of E12 — and still can't match it.
What Are Eisenstein Integers
Every point on a hex grid can be written as a pair of integers (a, b). That's it. No floats. No square roots. The squared distance from the origin — the norm — is a² - ab + b², which is always a non-negative integer.
This isn't an approximation. It's the mathematical definition. Eisenstein integers are the ring Z[ω] where ω is a primitive cube root of unity. Crystallographers use them to describe hexagonal lattices. We just made them into a Rust library.
Why hex, not square? A square grid has 4 neighbors. A hex grid has 6. For 2D problems, hex is the natural topology — distances are more uniform, circles pack better, and rotation by 60° maps the lattice onto itself. The square grid can't do that. Rotating a square grid by 45° doesn't give you integer coordinates. Rotating a hex grid by 60° always does.
Show Me
Rust
```rust
use eisenstein::E12;

let a = E12::new(3, 1); // point (3, 1)
let b = E12::new(1, 2); // point (1, 2)

// 60° rotation — stays on the lattice
let rotated = a.rotate_60();

// Norm is an integer — no sqrt
let n = a.norm(); // 7

// Addition and multiplication
let sum = a + b;  // E12(4, 3)
let prod = a * b; // E12(1, 5)
```
Python
```python
from eisenstein import Eisenstein

a = Eisenstein(3, 1)
b = Eisenstein(1, 2)

# Same exact arithmetic
rotated = a.rotate60()
assert a.norm() == 7  # integer

# Zero dependencies
sum_ = a + b  # (4, 3)
prod = a * b  # (1, 5)
```
The norm a² - ab + b² is just integer multiplication and subtraction. Rotation by 60° is a coordinate transform: (a, b) → (a - b, a). A single integer subtraction. The result is always another pair of integers.
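Both operations can be sanity-checked in a few lines of plain Python (standalone helper functions, not the library's API, which exposes these as methods on its point type):

```python
def norm(a: int, b: int) -> int:
    """Eisenstein norm a^2 - a*b + b^2: always a non-negative integer."""
    return a * a - a * b + b * b

def rotate60(a: int, b: int) -> tuple[int, int]:
    """60-degree rotation on the lattice: (a, b) -> (a - b, a)."""
    return (a - b, a)

p = (3, 1)
assert norm(*p) == 7             # no sqrt anywhere
assert norm(*rotate60(*p)) == 7  # rotation preserves the norm exactly
```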
Why It Matters
Autopilot Navigation
A boat navigating a channel gets a GPS fix every few seconds. Each fix involves a float computation — position, heading correction, course adjustment. After 1,000 fixes, the accumulated position error is a few meters. In a 20-meter channel, that's the difference between safe passage and grounding.
E12 coordinates don't drift. Every fix computes the same heading from the same position. After 10,000 fixes, the heading error is still zero. Not approximately zero — zero. The arithmetic never leaves the integer lattice.
Multiplayer Game Desync
Two players see the same hex grid. Player A rotates a unit 60°. Player B rotates the same unit 60° on their device. With floats, they get different answers — different FPU rounding modes, different compiler optimizations, different results. The game desyncs.
With E12, both devices compute (a - b, a) on the same integers. Same input, same output. Every time. No reconciliation protocol, no "resync" packets, no arguing about whose simulation is authoritative.
Sensor Fusion
Combining readings from gyroscope, compass, and GPS — each sensor's float errors compound differently depending on the order of operations, the magnitude of values, and the specific hardware. The "truth" slowly diverges from reality.
E12's exact arithmetic means sensor readings combine without error accumulation. Two gyroscope readings that should cancel do cancel, exactly. A compass bearing that should equal a GPS-derived heading equals it — exactly. The math is the same on every device, every time.
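The cancellation and order-independence claims are easy to illustrate with plain integer pairs (tuples standing in for the library's point type; the reading values are made up for the sketch):

```python
def add(p, q):
    """Componentwise integer addition of two lattice points."""
    return (p[0] + q[0], p[1] + q[1])

gyro_left  = (5, -2)   # a heading delta
gyro_right = (-5, 2)   # its exact opposite

# Readings that should cancel do cancel -- exactly, not to within epsilon.
assert add(gyro_left, gyro_right) == (0, 0)

# Integer addition is associative: fusion order cannot change the result.
readings = [(3, 1), (-1, 4), (2, -2)]
fwd = (0, 0)
for r in readings:
    fwd = add(fwd, r)
rev = (0, 0)
for r in reversed(readings):
    rev = add(rev, r)
assert fwd == rev == (4, 3)
```

With floats, summing the same readings in different orders can give different bit patterns; with integers it cannot.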
How It Works
60° Rotation
Multiplying an Eisenstein integer by 1 + ω (equivalently, by -ω², a primitive sixth root of unity) rotates it by exactly 60°. In coordinates, this is (a, b) → (a - b, a). No trig functions. No floating point. No lookup tables. A single integer subtraction.
Do it six times and you're back where you started. Exactly. Not approximately — the final coordinates are bit-identical to the starting coordinates. We test this with 10,000 random rotations.
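A minimal sketch of that test, with plain tuples standing in for the library's point type:

```python
import random

def rotate60(p):
    a, b = p
    return (a - b, a)  # 60-degree rotation: stays on the integer lattice

random.seed(0)
for _ in range(10_000):
    start = (random.randint(-10**6, 10**6), random.randint(-10**6, 10**6))
    p = start
    for _ in range(6):
        p = rotate60(p)
    assert p == start  # bit-identical, not approximately equal
```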
The Norm Is Always an Integer
For a point (a, b), the Eisenstein norm is a² - ab + b². This is always a non-negative integer when a and b are integers. There's no square root in the formula — no approximation step where you'd lose precision.
Compare with Euclidean distance: sqrt(a² + b²). The square root is where the float creeps in. You can't compute sqrt(2) exactly in floating point. But a² - ab + b² for (a, b) = (1, 0) is just 1. No sqrt needed. No approximation.
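The contrast fits in a few lines (`eisenstein_norm` is a hypothetical helper for the sketch, not the library API):

```python
import math

def eisenstein_norm(a: int, b: int) -> int:
    return a * a - a * b + b * b

# Euclidean route: sqrt(2) is not exactly representable in a float,
# so even squaring it back does not recover 2 exactly.
d = math.sqrt(1 + 1)
assert d * d != 2.0

# Eisenstein route: pure integers, exact equality comparisons.
assert eisenstein_norm(1, 0) == 1
assert eisenstein_norm(2, 3) == eisenstein_norm(3, 2)  # both 7, exactly
```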
Square grid (Z²)
4 neighbors. Euclidean distance needs sqrt(a²+b²). Rotating 45° leaves the lattice. Distance comparisons require float epsilon.
Hex grid (Eisenstein)
6 neighbors. Norm is a²-ab+b² — pure integer. Rotating 60° stays on the lattice. Distance comparisons are integer comparisons.
The Disk Formula
How many hex grid points fit within distance R of the origin? Exactly 3R² + 3R + 1 — the centered hexagonal numbers. At R=36, that's 3,997 exact vertices. At R=1000, it's 3,003,001.
The library iterates these points in cache-friendly order. No allocation. No hash map. Just a tight loop over a known set of integers.
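The closed form is easy to verify by brute force. A sketch, assuming hex (step) distance in this basis is max(|a|, |b|, |a - b|), the analogue of the standard axial-coordinate hex-distance formula:

```python
def hex_disk_count(R: int) -> int:
    """Brute-force count of lattice points within hex distance R of origin."""
    count = 0
    for a in range(-R, R + 1):
        for b in range(-R, R + 1):
            if max(abs(a), abs(b), abs(a - b)) <= R:
                count += 1
    return count

for R in (1, 10, 36):
    assert hex_disk_count(R) == 3 * R * R + 3 * R + 1  # closed form holds
```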
The Numbers
Real benchmarks from the eisenstein-bench suite. Same hardware, same compiler, fair comparison.
| Operation | E12 | Float64 | Notes |
|---|---|---|---|
| Norm | 1 ns | 3.3 ns | E12 is 3.3× faster (no sqrt) |
| 60° Rotation | 2 ns | 12 ns | Float needs sin+cos lookup |
| Equality | 1 integer compare | epsilon comparison | E12: just (a,b) == (c,d) |
| Drift after 10K rotations | 0.000 | ~10⁻¹⁵ | Float drift is small here because 60° has "nice" trig values |
Note: The float drift looks small because 60° rotations happen to have clean trig values. For arbitrary angles, drift is much worse. E12 doesn't have "nice" vs "ugly" angles — every rotation in its lattice is exact.
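The drift row can be reproduced in a few lines: apply a float64 rotation matrix and the exact integer transform 10,000 times to the same starting point, then compare (a sketch of the measurement, not the benchmark harness itself):

```python
import math

c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)  # ~0.5, ~0.866

x, y = 1.0, 0.0   # float64 path
a, b = 1, 0       # exact integer path
for _ in range(10_000):
    x, y = c * x - s * y, s * x + c * y  # float 60-degree rotation matrix
    a, b = a - b, a                      # exact lattice rotation

# 10,000 = 6 * 1666 + 4, so both paths should end at 240 degrees.
assert (a, b) == (-1, -1)  # integer path: exact, every run, every machine
# Float path: measure residual against the true endpoint (-1/2, -sqrt(3)/2).
drift = math.hypot(x + 0.5, y + math.sqrt(3) / 2)
```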
Data Cost Per Fix
| Type | Bytes per fix | Relative | Exact? |
|---|---|---|---|
| E12 (two i16) | 4 | 1× | yes |
| F32 (two f32) | 8 | 2× | no |
| F64 (two f64) | 16 | 4× | no |
At 1,000 fixes per second on a cellular modem:
- E12: 4 KB/s
- F32: 8 KB/s
- F64: 16 KB/s
On a satellite link at $1/MB, the 12 KB/s difference between E12 and F64 adds up to roughly $378K/year. But the real cost isn't money — it's that F64 still isn't exact.
What Doesn't Work
Some things we tried didn't pan out. We publish negative results alongside the positive ones — here are the biggest misses:
- FP16 (half precision) — unsafe past norm 2048. Not enough mantissa bits for the integer range we need.
- Tensor cores — barely help. The operations don't map well to matrix multiply.
- Bank padding — counterproductive on modern GPUs. The access patterns are already cache-friendly.
- Intent-Holonomy Duality — originally claimed as a theorem. It's false on partial orders. True on total orders (which is what we use in practice). Downgraded to "open problem."
All negative results are documented in the errata page — everything we tried that didn't work, with the data to prove it.
Get Started
```
cargo add eisenstein                          # Rust
pip install constraint-theory                 # Python
npm install @superinstance/polyformalism-a2a  # JS
```
Explore the Ecosystem
Eisenstein integers are one piece of a larger project: SuperInstance builds tools for exact computation where floating-point drift causes real failures. Each tool below builds on the last — follow the chain.
- Start here. Zero deps, exact hex arithmetic. cargo add eisenstein
- µC needs? 1KB .text, same math. Better yet, use hexgrid-gen to precompute lookup tables for any language.
- JavaScript/TypeScript? WASM package gives you the same exact arithmetic.
- See for yourself. 5 CLI commands, compare E12 vs F32 vs F64 on your hardware.
- We tested everything. 13 property tests, 6 targets, 0 failures.
- On ARM? 4× parallel norm in 5 NEON instructions.
- Need DO-178C? 26 Coq theorems, 19/31 objectives covered.
- Production-grade. 184 tests. cargo add constraint-theory-core
- Unified intent vectors, alignment checking, tolerance navigation.
Each link is a standalone repo. Clone any one and try it — no monorepo, no workspace required.
CUDA Benchmarks
We ran 54 GPU experiments on an RTX 4050 (Ada) measuring constraint throughput with Eisenstein integer operations. The results are in constraint-theory-ecosystem. Highlights: INT8×8 achieves 341B constraints/sec, differential testing across 61M inputs produced zero mismatches, and CUDA Graphs gave 18× kernel launch speedup. Full negative results (FP16 unsafe past 2048, tensor cores marginal, bank padding counterproductive) are in the errata page.