The Takeaway: Eve is betting that the next useful AI won’t be a better guesser — it’ll be a model you can inspect, verify, and trust.
- LLMs are great at generating output, but they’re still “playing a guessing game,” which makes them expensive and shaky for mission-critical tasks.
- Energy-based models (EBMs) skip token-by-token generation entirely; they score whole candidate states at once instead of predicting one word or action at a time.
- The real edge is not just accuracy — it’s visibility: EBMs can be checked internally during training and externally after the fact.
Eve, founder and CEO of Logical Intelligence, is building around a blunt premise: if AI is going to touch software, chips, cars, or anything else where failure matters, "correctness" can't be an afterthought. Her company works on both LLMs and EBMs, but the long-term bet is on energy-based reasoning models with latent variables, a mouthful she also calls Kona. Her argument is that LLMs are black boxes that only let you judge the final answer, while EBMs let you inspect the model's internal state as it learns.
She keeps coming back to a simple contrast: language models are forced to translate everything into tokens, even when the task is spatial, physical, or mechanical. That's why she says using LLMs for things like driving or circuit control is like using "the literature department everywhere." EBMs, by contrast, build an energy landscape and search for the lowest-energy, most probable state. In her couch example, the model isn't guessing the next word; it's learning the most likely configuration.
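The energy-landscape idea she describes can be sketched in a few lines: assign an energy score to every whole candidate state, then pick the lowest-energy one, since in an EBM low energy corresponds to high probability. The toy quadratic-plus-ripple energy function and the 1-D grid of candidates below are illustrative assumptions for the sketch, not Logical Intelligence's actual Kona model.

```python
import math

def energy(state: float) -> float:
    # Toy energy function: a quadratic well around an assumed "plausible"
    # configuration at 2.0, with a small sinusoidal ripple added so the
    # landscape isn't trivially smooth. Purely illustrative.
    return (state - 2.0) ** 2 + 0.1 * math.sin(5 * state)

def lowest_energy_state(candidates: list[float]) -> float:
    # Score each whole candidate state and return the minimum-energy one.
    # This is the EBM contrast with token-by-token prediction: the model
    # evaluates complete configurations, not a sequence of next steps.
    return min(candidates, key=energy)

# A coarse grid of candidate states stands in for a real search procedure.
candidates = [i / 100 for i in range(-500, 501)]
best = lowest_energy_state(candidates)
```

With this landscape, `best` lands near the bottom of the well around 2.0; a production EBM would use gradient-based or learned search over a high-dimensional state instead of a grid, but the objective is the same.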
Her core philosophy is practical, not mystical: if you need millisecond latency, have only sparse data, or require verifiable output, don't ask a language model to pretend it's a physics engine. "We don't have to," she says. "There are EBMs."