The Takeaway: Eve argues that if correctness matters, language-model-style guessing is the wrong tool; you want models that can verify themselves as they reason.
- Energy-based models (EBMs) are built to be inspectable and non-autoregressive, so they don’t “guess the next token” the way LLMs do.
- For mission-critical systems, external checks aren’t enough: the model itself should expose structure you can verify in real time.
- The claimed advantages are efficiency and fit: fewer tokens and less compute, plus better handling of sparse data, spatial reasoning, and hardware/software correctness.
Eve, founder and CEO of Logical Intelligence, is pushing a blunt thesis: AI should stop pretending everything is a language problem. Her company builds both LLM prototypes and energy-based models, but the long game is EBMs—systems designed for “deterministic AI” and “verifiable AI” in places like code generation, chip design, and control systems. Her core complaint is that LLMs are black boxes that play a costly guessing game, even when you bolt on external verifiers like Lean4. That may be fine for drafting text; it’s shaky for a plane, a car, or a circuit.
Her analogy is simple and memorable: an LLM is like navigating one turn at a time, while an EBM has the bird’s-eye view. “If you see there’s a hole, you’re gonna choose a different route.” EBMs, she says, build an energy landscape of possible states, then minimize it to find the most likely outcome. That makes them better for non-language tasks like spatial reasoning, where the world is better represented as structure than as tokens.
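To make the bird’s-eye-view idea concrete, here is a minimal sketch, not Logical Intelligence’s code: a hand-written energy function stands in for a learned one, the “state” is an entire route, and plain gradient descent minimizes the energy over all waypoints at once instead of committing to one turn at a time. The obstacle, waypoint count, weights, and optimizer settings are all invented for the example.

```python
import numpy as np

def energy(path, obstacle=np.array([0.5, 0.5]), radius=0.2, weight=10.0):
    """Energy of an entire candidate route: shorter/smoother paths score lower,
    and waypoints inside the obstacle radius (the "hole") are penalized."""
    segments = np.diff(path, axis=0)
    length_term = np.sum(segments ** 2)
    dist = np.linalg.norm(path - obstacle, axis=1)
    obstacle_term = np.sum(np.maximum(0.0, radius - dist) ** 2)
    return length_term + weight * obstacle_term

def minimize_energy(path, steps=800, lr=0.01, eps=1e-4):
    """Finite-difference gradient descent over all interior waypoints jointly;
    the start and goal stay fixed."""
    path = path.copy()
    for _ in range(steps):
        grad = np.zeros_like(path)
        for i in range(1, len(path) - 1):
            for j in range(2):
                bump = np.zeros_like(path)
                bump[i, j] = eps
                grad[i, j] = (energy(path + bump) - energy(path - bump)) / (2 * eps)
        path -= lr * grad
    return path

if __name__ == "__main__":
    start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    # Initial guess: a straight line that runs through the obstacle.
    initial = np.linspace(start, goal, 12)
    best = minimize_energy(initial)
    print(f"energy before: {energy(initial):.3f}  after: {energy(best):.3f}")
```

Because the whole route is scored and adjusted jointly, a “hole” anywhere along the way raises the energy of that entire plan up front, rather than surprising a step-by-step decoder halfway through.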
She also leans hard on latent variables as a kind of internal knowledge store—less a rulebook than a compact model of how the world works. The point isn’t just prediction; it’s understanding enough to adapt when the environment changes.
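In the same toy setting, a rough sketch of the latent-variable idea, again mine rather than the company’s method: the latent is the unknown obstacle position, inferred by minimizing energy against a few observed boundary sightings; change the sightings and the same inference adapts the latent with no retraining. The observations, radius, and optimizer settings are invented for illustration.

```python
import numpy as np

def energy(latent_center, observations, radius=0.2):
    """Low energy when every observed boundary point sits at `radius`
    from the latent obstacle center."""
    dist = np.linalg.norm(observations - latent_center, axis=1)
    return np.sum((dist - radius) ** 2)

def infer_latent(observations, steps=300, lr=0.1, eps=1e-4):
    """Infer the latent by gradient descent on the energy, starting from
    the centroid of the observations."""
    z = observations.mean(axis=0)
    for _ in range(steps):
        grad = np.zeros(2)
        for j in range(2):
            bump = np.zeros(2)
            bump[j] = eps
            grad[j] = (energy(z + bump, observations) - energy(z - bump, observations)) / (2 * eps)
        z -= lr * grad
    return z

if __name__ == "__main__":
    # Three sightings of the obstacle's edge; swap these out and the same
    # inference adapts the latent without retraining anything.
    sightings = np.array([[0.7, 0.5], [0.5, 0.7], [0.3, 0.5]])
    print("inferred obstacle center:", np.round(infer_latent(sightings), 2))
```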