The Takeaway: AI gets truly useful in science only when it stops guessing from text and starts learning from experiments.
- Physicists are a natural fit for AI work because they’re trained to be principled, skeptical, and comfortable with hard systems rather than vibes.
- For materials and chemistry, the literature is noisy; the real edge comes from a closed loop of simulation, experiment, and error-checking.
- The winning architecture isn’t one giant model—it’s a language-model orchestrator calling specialized atomic models and tools.
Liam Fedus, co-founder of Periodic Labs and one of the creators of ChatGPT, keeps circling back to the same idea: language models were the start, not the destination. His path runs from physics and dark matter research to Google Brain, then OpenAI, where he worked on productionizing GPT-4 and helped build ChatGPT. That background matters, because his view of AI is unusually grounded: the next leap won’t come from bigger chatbots alone, but from systems that can touch reality.
His core argument is blunt: “Science ultimately isn’t sitting in a room thinking really hard. You have to conduct experiments.” Periodic is built around that premise. Instead of treating data as a static corpus, it uses experimental results as part of an active loop—spotting anomalies, comparing against simulations and literature, then choosing the next experiments. That’s a very different game from training on the internet.
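That loop maps naturally onto code. Here is a minimal, runnable sketch of the simulate → measure → error-check → follow-up cycle; the candidate formulas, the `ANOMALY_THRESHOLD`, and the toy simulation and experiment stubs are all hypothetical illustrations, not Periodic's actual pipeline.

```python
import random
from dataclasses import dataclass

ANOMALY_THRESHOLD = 0.5  # hypothetical tolerance between simulation and measurement


@dataclass
class Result:
    candidate: str
    predicted: float  # from the (toy) simulation
    measured: float   # from the (toy) experiment


def run_simulation(candidate: str) -> float:
    # Stand-in for a cheap in-silico estimate (e.g., a DFT-style calculation);
    # here just a deterministic toy value derived from the name.
    return (hash(candidate) % 100) / 100


def run_experiment(candidate: str) -> float:
    # Stand-in for an expensive lab measurement: the simulated value plus noise.
    return run_simulation(candidate) + random.gauss(0, 0.3)


def closed_loop(candidates: list[str], budget: int) -> list[Result]:
    """Simulate, measure, error-check, and pick follow-ups: the active loop above."""
    results: list[Result] = []
    queue = list(candidates)
    for _ in range(budget):
        if not queue:
            break
        c = queue.pop(0)
        sim = run_simulation(c)
        obs = run_experiment(c)
        results.append(Result(c, sim, obs))
        # Error-check: a large sim/experiment gap flags either a modeling error
        # or a genuine anomaly, and either way it earns a follow-up experiment.
        if abs(sim - obs) > ANOMALY_THRESHOLD:
            queue.append(c + "-variant")  # hypothetical follow-up candidate
    return results


if __name__ == "__main__":
    for r in closed_loop(["NaCl", "MgB2", "LaH10"], budget=5):
        print(f"{r.candidate}: sim={r.predicted:.2f} obs={r.measured:.2f}")
```

The point of the sketch is the control flow, not the stubs: the experiment is not a dataset to train on once, it is a feedback signal that decides what gets run next.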
Fedus is also skeptical of the idea that intelligence just scales smoothly. In his view, these systems are “spiky”: world-class in one domain, surprisingly weak in another. That’s why Periodic leans on language models as an orchestration layer, while specialized neural nets handle atomic systems, symmetry, and control. The bigger vision is simple and ambitious: give humanity “agency for atomic rearrangement synthesis” and speed up the physical world the way software has already accelerated the digital one.
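That orchestration pattern, a language model routing requests to narrow specialized tools, is easy to sketch. In the toy version below, the tool names and the keyword router are illustrative assumptions standing in for an actual LLM tool-calling layer and real atomic models.

```python
from typing import Callable


# Specialized "tools": in a real system these would be neural interatomic
# potentials, symmetry analyzers, and lab control; here they are stubs.
def predict_formation_energy(formula: str) -> str:
    return f"[atomic-model] predicted formation energy for {formula}"


def search_literature(query: str) -> str:
    return f"[literature] top results for '{query}'"


def schedule_synthesis(formula: str) -> str:
    return f"[lab-control] queued synthesis run for {formula}"


TOOLS: dict[str, Callable[[str], str]] = {
    "predict": predict_formation_energy,
    "search": search_literature,
    "synthesize": schedule_synthesis,
}


def orchestrate(request: str) -> str:
    """Stand-in for the language-model layer: parse intent, route to a tool.

    A real orchestrator would be an LLM choosing tools from their
    descriptions; a keyword match plays that role here.
    """
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            argument = request.split()[-1]  # naive argument extraction
            return tool(argument)
    return "No specialized tool matched; answering from the language model alone."


if __name__ == "__main__":
    print(orchestrate("predict the stability of MgB2"))
    print(orchestrate("search prior work on LaH10"))
    print(orchestrate("synthesize a sample of NaCl"))
```

The design choice mirrors the "spiky" observation: the general model decides what to do, while each narrow model does the one thing it is actually world-class at.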