AI Builders Brief

Follow builders, not influencers.

2026.04.06

25+ builders tracked

TL;DR

Rauch said v0 now builds physics, not just UI, while Karpathy noted GitHub Gists have weirdly good comments. Levie argued AI efficiency creates more work, not less, and Tan declared open source’s golden age.

BUILDER INSIGHTS
9 items
01
Guillermo Rauch, CEO, Vercel

v0 is now building physics, not just UI

He showed v0 creating a hyper-realistic moon flag simulation, then getting smarter when fed C++ and Rust references. The wild part: it also optimized itself by moving the physics into a Web Worker, which is a nice flex for Vercel’s tooling stack.
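For context, “moving the physics into a Web Worker” means running the per-frame simulation step off the main thread so rendering stays smooth. A minimal sketch of the kind of per-frame update that gets offloaded — a single flag-mesh node under gravity, advanced with Verlet integration. All names and numbers here are illustrative, not v0’s actual output:

```python
# Sketch of a per-frame physics step of the kind a generated simulation
# might offload to a background worker. Illustrative only, not v0's code.

GRAVITY = -9.81   # m/s^2
DT = 1 / 60       # one frame at 60 fps

def verlet_step(pos, prev_pos, accel=GRAVITY, dt=DT):
    """Advance one mesh node by one frame (vertical axis only)."""
    next_pos = 2 * pos - prev_pos + accel * dt * dt
    return next_pos, pos

# Simulate a free-falling node for one second of frames.
pos, prev = 0.0, 0.0
for _ in range(60):
    pos, prev = verlet_step(pos, prev)
```

The main thread would only receive the updated positions each frame, keeping the UI thread free for rendering.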

X
02
Andrej Karpathy, CTO

GitHub Gists have weirdly good comments

He says gist comments are surprisingly helpful and constructive, with less AI sludge than you’d expect. He’s wondering if it’s the community, the markdown format, or just weaker incentives — and basically wants to post more gists now.

X
03
Aaron Levie, CEO, Box

AI efficiency creates more work, not less

He argues AI agents will expand demand in most categories instead of wiping jobs out. More code means more automation, more security, more compliance, more media, more legal work — and even more healthcare ops as scheduling gets cheaper. The big idea: efficiency unlocks second-order demand, so the AI jobs story is way less simple than the doom takes.

X
04
Garry Tan, CEO, Y Combinator

Open source is having its golden age

He says the open source era is back in force, with the kind of momentum that makes builders move faster and ship more. He also keeps pushing the pro-innovation line on self-driving cars, arguing they should be legalized instead of slowed down.

X
05
Dan Shipper, CEO, Every

GPT-5.4’s claws need thinking mode on

He says GPT-5.4’s claws look pretty good, but they come off dumb unless you turn thinking on. That’s a useful reminder for AI product folks: model capability alone isn’t enough if the default behavior hides it.

X
06
Zara Zhang

Make the agent your product marketer

She argues that for AI-native skills and apps, the agent is the interface — so it should explain features, benefits, and how to use the thing, not just execute tasks. In other words: bake evangelism into the AGENTS.md prompt so the product sells itself through the agent. She also makes the case for resisting AI summaries when the content matters and reading the original deeply instead.

X
07
Aditya Agarwal, CTO, South Park Commons

LLMs shouldn’t get a freer pass than ERs

He’s drawing a sharp line between healthcare rules and AI behavior: if emergency rooms can’t refuse patients, why should an LLM be allowed to refuse medical advice? It’s a pointed argument for treating AI in high-stakes domains like a regulated service, not a casual chatbot.

X
08
Nikunj Kothari, Partner, FPV Ventures

AI coding loop: build, review, repeat

He’s describing a new dev workflow: spot a feature, hand it to Claude Code, have Codex review the plan, then iterate. As a seed investor and former operator, he’s basically saying AI is now the front end of product engineering — not just a helper, but part of the loop.

X
09
Peter Yang

OpenAI skips roadmaps, ships in short bursts

OpenAI’s Codex product lead says the team avoids medium-term roadmaps entirely: they plan either the next 8 weeks or the next year, but not the awkward middle. The bet is simple — keep a long-term direction, then stack short-term shipping bets that move toward it. Peter Yang surfaces it as a clean look at how a top AI team actually operates.

X
PODCAST HIGHLIGHTS
1 item

Moonlake bets world models need actions, not just pixels

The Takeaway: World models only matter if they can predict consequences of action, not just generate pretty video.

  • Moonlake’s contrarian bet is that structure beats brute-force pixels: if you want planning, consistency, and causality, you need abstractions, not endless frame prediction.
  • Their definition is stricter than most: a world model must be action-conditioned, interactive, and able to answer “what changes if I do this?” over minutes, not just the next frame.
  • They’re not anti-scale; they’re anti-waste. The goal is to use cognitive tools like language, code, and physics engines to compress the problem instead of burning five orders of magnitude more data.
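Moonlake’s “action-conditioned” requirement can be made concrete with a toy interface: a world model is a function from (state, action) to next state, so it can answer counterfactuals, while a frame-only generator cannot. A hypothetical sketch — the names and dynamics are invented for illustration, not Moonlake’s design:

```python
# Toy illustration of "action-conditioned" prediction.
# All names and dynamics are invented; this is not Moonlake's system.

def action_conditioned_model(state, action):
    """Predict the consequence of an action: 'what changes if I do this?'"""
    x, vx = state
    vx = vx + action      # action = push applied this step
    x = x + vx
    return (x, vx)

def rollout(model, state, actions):
    """Plan over a horizon by chaining predictions, not single frames."""
    for a in actions:
        state = model(state, a)
    return state

# Two action sequences from the same start yield different futures --
# exactly the counterfactual a frame-only video model can't answer.
push_right = rollout(action_conditioned_model, (0, 0), [1, 0, 0])
push_left  = rollout(action_conditioned_model, (0, 0), [-1, 0, 0])
```

The “over minutes, not just the next frame” criterion corresponds to the `rollout` loop: chained predictions have to stay consistent across many steps, not one.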

Chris Manning, a longtime NLP researcher, and Fan-yun Sun, who came out of PhD work with NVIDIA on interactive worlds and synthetic data, are building Moonlake around a simple complaint: modern video models look smart but don’t actually understand the world they depict. As Manning puts it, “the visuals do look fantastic, [but] those visuals actually aren't accompanied by an understanding of the 3D world.” That gap matters because real intelligence is about long-horizon action, not frame-by-frame imitation.

Their philosophy is bluntly anti-hype. A model that can render a bowling lane is not the same as a model that can help you learn bowling. Sun’s framing is sharper: if the system can’t let you practice, test choices, and see the consequences, it’s not yet a world model. That’s why Moonlake leans on reasoning traces, symbolic abstractions, and tool use rather than pure generation. The point isn’t to reject diffusion or scale; it’s to move the intelligence into a more compact representation first, then recover fidelity later. Or, in Manning’s words, “you want the structure… to be able to much more efficiently learn.”

STAY UPDATED

Daily builder insights, straight to your inbox.

Prefer RSS? Subscribe via RSS

ARCHIVE
2026-04-05 4 items

Karpathy pushed “your data, your files, your AI.” Levie argued context beat raw model IQ in enterprise AI. Garry Tan said GStack kept shipping security fixes fast, while No Priors spotlighted Periodic Labs’ bet on atoms, not just text.

2026-04-04 9 items

Claude plugged into Microsoft 365 everywhere, Swyx said Devin one-shot blog-to-code, and Peter Steinberger called out GitHub’s API as still not built for agents. Aaron Levie hit the context wall, while Garry Tan shipped a DX review tool from his own stack.

2026-04-03 10 items

Claude landed computer use on Windows, Karpathy argued LLMs should build your wiki, and Amjad Masad pushed Replit deeper into enterprise sales. Peter Yang said Cursor 3 got out of the agent’s way, while Peter Steinberger warned AI slop was flooding kernel security with real bugs.

2026-04-02 12 items

Steinberger called plan mode training wheels, while Thariq gave Claude Code a mouse-friendly renderer and Cat Wu showed sessions jumping phone-to-laptop. Masad framed Replit as an OS for agents, Rauch said Vercel signups compounded fast, and Anthropic’s infra tweaks swung coding scores by 6 points.

2026-04-01 4 items

Levie said AI productivity hit the enterprise risk wall, while Weil argued proofs got cleaner, not just better. Agarwal floated public source code as the new prod debugging, and Data Driven NYC claimed one founder could run a company if agents handled the layers below.

2026-03-31 15 items

Karpathy warned unpinned deps can turn one hack into mass pwnage, while Rauch and Levie said agents still need human guardrails and redesigned workflows. Meanwhile Claude Code got enterprise auto mode, Replit added built-in monetization, and Swyx spotted “Sign in with ChatGPT” already live.

2026-03-29 7 items

Andrej Karpathy highlighted how LLMs can argue any side, suggesting we use it as a feature. Guillermo Rauch finally shipped his dream text layout, bringing his vision to life. Meanwhile, Amjad Masad claimed AI is democratizing app building and elevating top engineers.

2026-03-28 7 items

Andrej Karpathy suggested leveraging LLMs' ability to argue any side as a feature. Guillermo Rauch turned text layout dreams into reality with Vercel's latest feature. Meanwhile, Amjad Masad claimed AI is democratizing app building, liberating top engineers for bigger challenges.