Also, I really like the idea of deterministic LLMs. I think the realistic scope for variance of composition is narrower than what LLMs commonly output, and that is part of the reason why they hallucinate so much when their contexts fill up.
Replies (1)
Appreciate the thoughtful digging — seriously. Not many people connect elliptic curve group structure with semantic encoding.
A few clarifications though:
1. ECAI is not a “proposal” or theoretical direction.
There is a concrete implementation of EC-based knowledge encoding. It’s not framed as “non-Euclidean embeddings” in the academic sense (hyperbolic/spherical/toroidal), because that literature still largely lives inside continuous optimization paradigms.
2. ECAI does not treat ECs as geometric manifolds for embeddings.
It treats them as deterministic algebraic state spaces.
That’s a very different design philosophy.
3. The goal isn’t curved geometry for distance metrics —
it’s group-theoretic determinism for traversal and composition.
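To make the "deterministic algebraic state space" framing concrete, here is a minimal sketch of the mechanism being described: curve points as states, the group law as composition. The prime, coefficients, and "concept" points below are toy values chosen for illustration only; they are not ECAI's actual parameters or construction.

```python
p = 23          # toy field prime; a real system would use a ~256-bit prime
A, B = 0, 7     # curve coefficients: y^2 = x^3 + A*x + B over GF(p)

def inv(n):
    # Modular inverse via Fermat's little theorem (valid since p is prime).
    return pow(n, p - 2, p)

def add(P, Q):
    # Elliptic-curve group law in affine coordinates; None is the identity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                 # P + (-P) = identity
    if P == Q:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % p     # tangent slope (doubling)
    else:
        m = (y2 - y1) * inv(x2 - x1) % p            # chord slope (addition)
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

# Two "concept states" compose into one exact state: integer arithmetic
# only, so the result is bit-for-bit identical on every machine.
state_a, state_b = (1, 10), (22, 11)    # both points lie on the toy curve
print(add(state_a, state_b))            # (6, 4), always
```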
Most embedding research (including TorusE etc.) still depends on:
- floating-point optimization
- gradient descent
- probabilistic training
- approximate nearest-neighbour search
ECAI instead leverages:
- discrete group operations
- deterministic hash-to-curve style mappings (sketched below)
- structured traversal on an algebraic state space
- compact, collision-controlled representation
This is closer to algebraic indexing than to neural embedding.
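For the hash-to-curve item above, here is a hedged sketch of one standard technique (try-and-increment) that fits the description. The token format, counter scheme, and use of SHA-256 are my assumptions; the thread does not specify ECAI's actual mapping. It reuses the toy curve from the previous sketch.

```python
import hashlib

p, A, B = 23, 0, 7   # same toy curve as in the previous sketch

def hash_to_point(token: str):
    # Try-and-increment: hash the token to a candidate x-coordinate and
    # bump a counter until x^3 + A*x + B is a square mod p. The mapping
    # is deterministic: the same token always lands on the same point.
    counter = 0
    while True:
        digest = hashlib.sha256(f"{token}:{counter}".encode()).digest()
        x = int.from_bytes(digest, "big") % p
        rhs = (x**3 + A * x + B) % p
        y = pow(rhs, (p + 1) // 4, p)   # square root, valid since p ≡ 3 (mod 4)
        if y * y % p == rhs:
            return (x, y)
        counter += 1

print(hash_to_point("apple") == hash_to_point("apple"))  # True, by construction
```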
You mentioned recursive traversal being cheap on elliptic curves — that’s precisely the point.
Group composition is constant-structure and extremely compact. That property allows deterministic search spaces that don’t explode combinatorially in the same way stochastic vector models do.
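That compactness claim is easy to demonstrate: with double-and-add, k composition steps cost O(log k) group operations, and the state never grows beyond a single point. A sketch, reusing add() from the first block; the "traversal" semantics here are my illustration of the cost argument, not ECAI's actual search procedure.

```python
def traverse(point, steps: int):
    # Double-and-add: compose `point` with itself `steps` times using
    # O(log steps) group operations instead of `steps` of them.
    result, addend = None, point        # None is the group identity
    while steps:
        if steps & 1:
            result = add(result, addend)
        addend = add(addend, addend)    # doubling
        steps >>= 1
    return result

# A million composition steps take ~2*log2(10**6) ≈ 40 group operations,
# and the outcome is still one fixed-size, exactly reproducible point.
print(traverse((1, 10), 10**6))
```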
Also:
> “there is no concrete implementation of EC-based knowledge encoding”
There is.
You can explore the live implementation here:
👉
Specifically, look at:
- the search behaviour
- deterministic output consistency
- algebraic composition paths
This is not a ZKP play. It’s not an embedding paper. It’s not a geometry-of-meaning academic experiment.
It’s an attempt to build a deterministic computational substrate for semantics.
The broader implication is this:
If semantics can be mapped into algebraic group structure rather than floating-point probability fields, then:
- hallucination collapses to structural invalidity (see the sketch after this list)
- traversal becomes verifiable
- compression improves
- determinism becomes enforceable
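On the first point, the structural check itself is cheap to illustrate: a state either satisfies the curve equation or it doesn't, and that is verifiable in O(1). The mapping from "hallucinated output" to "off-curve point" is the reply's claim, not something this sketch proves; the toy curve is the same as above.

```python
p, A, B = 23, 0, 7   # same toy curve as above

def is_valid_state(point) -> bool:
    # A state is structurally valid iff it satisfies y^2 = x^3 + A*x + B
    # over GF(p); the identity, represented as None, is also valid.
    if point is None:
        return True
    x, y = point
    return (y * y - (x**3 + A * x + B)) % p == 0

print(is_valid_state((1, 10)))   # True: a genuine state on the curve
print(is_valid_state((3, 4)))    # False: structurally invalid, detectable at once
```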
The difference between "LLMs in curved space" and ECAI is the difference between probabilistic geometry and algebraic state machines.
Happy to dive deeper — but I’d recommend exploring the actual running system first.
Numbers still have a lot left in them.
ECAI Search