The Takeaway: AI gets truly useful when it stops reading the internet and starts running experiments.
- Liam Fedus argues the real bottleneck in science isn’t model size, it’s grounding: literature is noisy, but experimental loops give you truth.
- Periodic Labs is betting on a hybrid stack: language models as the orchestration layer, plus specialized models of atomic-scale systems and lab automation underneath.
- He thinks the biggest gains will come from domains with tight feedback loops and high-value physical constraints, not from chasing one giant general model.
Liam Fedus, co-founder of Periodic Labs and a former VP of post-training at OpenAI, comes at this from physics, not hype. He studied dark matter, drifted toward machine learning in grad school, then helped build some of the core infrastructure behind the transformer era at Google Brain before working on ChatGPT. That path explains his obsession: intelligence only matters if it can touch reality.
His core point is blunt. Language models are powerful, but science doesn’t happen in a chat window. “Science ultimately isn’t sitting in a room thinking really hard,” he says. “You have to conduct experiments.” Periodic Labs is built around that idea: use AI to design, direct, and learn from experiments in materials science and chemistry, then feed the results back into the system in a closed loop.
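The design-experiment-feedback loop he describes can be sketched in a few lines. This is a toy illustration only: the quadratic "experiment," the `propose` heuristic, and all names here are hypothetical stand-ins, not Periodic Labs' actual system.

```python
import random

def run_experiment(x: float) -> float:
    """Stand-in for a physical measurement: noisy response peaking at x = 2."""
    return -(x - 2.0) ** 2 + random.gauss(0, 0.05)

def propose(history):
    """Stand-in for the design step: perturb the best candidate seen so far."""
    if not history:
        return random.uniform(-5, 5)
    best_x, _ = max(history, key=lambda h: h[1])
    return best_x + random.gauss(0, 0.5)

random.seed(0)
history = []  # the loop's memory: (design, measured result) pairs
for _ in range(50):
    x = propose(history)   # design an experiment
    y = run_experiment(x)  # conduct it
    history.append((x, y)) # feed the result back into the system

best_x, best_y = max(history, key=lambda h: h[1])
print(f"best design ~ {best_x:.2f}")  # drifts toward the true optimum near 2
```

Even this crude hill-climber shows the point: the ground truth comes from the experiment, not from anything the proposer "read."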
The contrarian bit is that they’re not trying to replace everything with one model. They’re using LLMs as the control plane, while specialized neural nets handle atomic systems with symmetry-aware architectures and low-latency inference. They also lean heavily on existing foundation models and coding tools, but only where those tools are already strong. The real frontier, Fedus says, is where data is scarce, noisy, and physically grounded.
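The "LLM as control plane" idea reduces to routing: a general model plans and delegates, while specialist models handle the physics. A minimal sketch, with all handler names and behaviors invented for illustration (they are not Periodic Labs' APIs):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    kind: str     # e.g. "plan" or "atomic_sim" (hypothetical task types)
    payload: str

def llm_plan(payload: str) -> str:
    # Stand-in for an LLM planning/orchestration call.
    return f"plan for: {payload}"

def atomic_model(payload: str) -> str:
    # Stand-in for a specialized, symmetry-aware atomistic model.
    return f"energy/forces for: {payload}"

# The control plane: route each task to the specialist suited for it,
# rather than forcing one general model to do everything.
HANDLERS: Dict[str, Callable[[str], str]] = {
    "plan": llm_plan,
    "atomic_sim": atomic_model,
}

def dispatch(task: Task) -> str:
    return HANDLERS[task.kind](task.payload)

result = dispatch(Task("atomic_sim", "Fe2O3 supercell"))
```

The design choice being illustrated: the orchestration layer needs breadth, but the per-domain models need physical priors and low-latency inference, so they stay separate.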
If they’re right, the payoff is huge: faster discovery in semiconductors, aerospace, energy, and manufacturing—basically, giving software-like iteration speed to the physical world.