AI Builders Brief

Follow builders, not influencers.

2026-04-05

25+ builders tracked

TL;DR

Karpathy pushed “your data, your files, your AI.” Levie argued context beat raw model IQ in enterprise AI. Garry Tan said GStack kept shipping security fixes fast, while No Priors spotlighted Periodic Labs’ bet on atoms, not just text.

BUILDER INSIGHTS
3
01
Andrej Karpathy, CTO

Your data, your files, your AI

He’s pushing a simple thesis: personal AI should be explicit, portable, and under your control, not an opaque memory blob inside a vendor’s system. His “file over app” take says the right unit is a local wiki of universal files, with any model plugged in on top. He also thinks agent fluency is becoming a core skill.
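
To make the shape of that concrete, here’s a minimal sketch assuming nothing but a folder of markdown notes and any chat-completion callable. The ~/wiki path and the load_wiki/ask helpers are hypothetical illustrations, not anything Karpathy shipped:

    from pathlib import Path

    def load_wiki(root: str) -> str:
        """Concatenate every markdown note into one portable context string."""
        notes = sorted(Path(root).glob("**/*.md"))
        return "\n\n".join(f"# {p.name}\n{p.read_text()}" for p in notes)

    def ask(model_fn, question: str, wiki_root: str = "~/wiki") -> str:
        """model_fn is any chat-completion callable; the notes stay yours."""
        context = load_wiki(str(Path(wiki_root).expanduser()))
        prompt = f"Answer from these notes:\n\n{context}\n\nQ: {question}"
        return model_fn(prompt)  # swap in any vendor API or a local model

The point of the shape: the notes are readable without any app, and the model is a parameter, so no vendor owns the memory.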

X
02
Aaron Levie, CEO, Box

Context, not raw model IQ, wins enterprise AI

He argues the real bottleneck in enterprise AI is the context layer: models can’t know what each user is allowed to see, and continual learning at the model level breaks on access controls and private workflows. That’s why applied AI at Box has to behave more like a human searcher-reader than a chunk-and-pray retriever. He also says better tool use, bigger context windows, and stronger reasoning are pushing agents into a qualitatively new tier.
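
A rough sketch of what “the context layer is the bottleneck” means in practice, with an invented Doc shape and a naive keyword ranker standing in for real retrieval; none of this is Box’s API:

    from dataclasses import dataclass

    @dataclass
    class Doc:
        id: str
        text: str
        allowed: set[str]  # user ids permitted to open this doc

    def rank(query: str, docs: list[Doc]) -> list[Doc]:
        # naive keyword overlap stands in for a real ranker
        terms = set(query.lower().split())
        return sorted(docs, key=lambda d: -len(terms & set(d.text.lower().split())))

    def retrieve_for(user: str, query: str, corpus: list[Doc]) -> list[Doc]:
        """Filter by permission *before* ranking, so the model never
        sees content the user couldn't open themselves."""
        visible = [d for d in corpus if user in d.allowed]
        return rank(query, visible)

The enforcement point is the retrieval call, not the weights, which is exactly why per-user continual learning at the model level breaks on access controls.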

X
03
Garry Tan, CEO, Y Combinator

GStack keeps shipping security fixes fast

He says GStack just landed 14 security bug fixes, and half came from community PRs — a nice sign the project’s getting real outside help. He’s also pushing toward more adaptive reviews and a smarter “L8 software factory,” but this batch is mostly about steady, unglamorous hardening.

X
PODCAST HIGHLIGHTS
1

Periodic Labs wants AI to learn from atoms, not just text

The Takeaway: AI gets truly useful when it stops reading the internet and starts running experiments.

  • Liam Fedus argues the real bottleneck in science isn’t model size, it’s grounding: literature is noisy, but experimental loops give you truth.
  • Periodic Labs is betting on a hybrid stack: language models as the orchestration layer, plus specialized atomic models and lab automation underneath.
  • He thinks the biggest gains will come from domains with tight feedback loops and high-value physical constraints, not from chasing one giant general model.

Liam Fedus, co-founder of Periodic Labs and a former VP of post-training at OpenAI, comes at this from physics, not hype. He studied dark matter, drifted toward machine learning in grad school, then helped build some of the core infrastructure behind the transformer era at Google Brain before working on ChatGPT. That path explains his obsession: intelligence only matters if it can touch reality.

His core point is blunt. Language models are powerful, but science doesn’t happen in a chat window. “Science ultimately isn’t sitting in a room thinking really hard,” he says. “You have to conduct experiments.” Periodic Labs is built around that idea: use AI to design, direct, and learn from experiments in materials science and chemistry, then feed the results back into the system in a closed loop.
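
That loop fits in a few lines. A schematic sketch where every callable is a placeholder, not Periodic Labs code:

    def closed_loop(propose, run_experiment, budget: int):
        """propose: history -> candidate; run_experiment: candidate -> result."""
        history = []
        for _ in range(budget):
            candidate = propose(history)         # the model designs the next trial
            result = run_experiment(candidate)   # atoms, not text: measured truth
            history.append((candidate, result))  # the result feeds the next design
        return history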

The contrarian bit is that they’re not trying to replace everything with one model. They’re using LLMs as the control plane, while specialized neural nets handle atomic systems with symmetry-aware architectures and low-latency inference. They also lean heavily on existing foundation models and coding tools, but only where those tools are already strong. The real frontier, Fedus says, is where data is scarce, noisy, and physically grounded.
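
That division of labor reads roughly like a dispatch table: generality on top, physics underneath. The task names and stub backends here are invented for illustration:

    def atomic_model(payload: dict) -> dict:
        # stand-in for a symmetry-aware, low-latency physics net
        return {"prediction": "relaxed structure", **payload}

    def language_model(payload: dict) -> dict:
        # stand-in for the LLM that plans and writes
        return {"plan": f"analyze {payload.get('goal')}"}

    SPECIALISTS = {"simulate": atomic_model, "plan": language_model}

    def control_plane(task: str, payload: dict) -> dict:
        """Route each step to the narrow model built for it."""
        return SPECIALISTS[task](payload)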

If they’re right, the payoff is huge: faster discovery in semiconductors, aerospace, energy, and manufacturing—basically, giving software-like iteration speed to the physical world.

STAY UPDATED

Daily builder insights, straight to your inbox.

Prefer RSS? Subscribe via RSS

ARCHIVE
2026-04-04 9 items

Claude plugged into Microsoft 365 everywhere, Swyx said Devin one-shot blog-to-code, and Peter Steinberger called out GitHub’s API as still not built for agents. Aaron Levie hit the context wall, while Garry Tan shipped a DX review tool from his own stack.

2026-04-03 10 items

Claude landed computer use on Windows, Karpathy argued LLMs should build your wiki, and Amjad Masad pushed Replit deeper into enterprise sales. Peter Yang said Cursor 3 got out of the agent’s way, while Peter Steinberger warned AI slop was flooding kernel security with real bugs.

2026-04-02 12 items

Steinberger called plan mode training wheels, while Thariq gave Claude Code a mouse-friendly renderer and Cat Wu showed sessions jumping phone-to-laptop. Masad framed Replit as an OS for agents, Rauch said Vercel signups compounded fast, and Anthropic’s infra tweaks swung coding scores by 6 points.

2026-04-01 4 items

Levie said AI productivity hit the enterprise risk wall, while Weil argued proofs got cleaner, not just better. Agarwal floated public source code as the new prod debugging, and Data Driven NYC claimed one founder could run a company if agents handled the layers below.

2026-03-31 15 items

Karpathy warned unpinned deps can turn one hack into mass pwnage, while Rauch and Levie said agents still need human guardrails and redesigned workflows. Meanwhile Claude Code got enterprise auto mode, Replit added built-in monetization, and Swyx spotted “Sign in with ChatGPT” already live.

2026-03-29 7 items

Andrej Karpathy highlighted how LLMs can argue any side and suggested treating that as a feature. Guillermo Rauch finally shipped his dream text layout. Meanwhile, Amjad Masad claimed AI is democratizing app building and elevating top engineers.

2026-03-28 7 items

Andrej Karpathy suggested leveraging LLMs' ability to argue any side as a feature. Guillermo Rauch turned text layout dreams into reality with Vercel's latest feature. Meanwhile, Amjad Masad claimed AI is democratizing app building, liberating top engineers for bigger challenges.