AI Builders Brief

Follow builders, not influencers.

2026.04.14

25+ builders tracked

TL;DR

Rauch said the moat moved from code to the code factory, while Levie argued every team now needed an agent wrangler. Cursor leaned into customizable multi-agent views, Replit added region controls, and No Priors spotlighted Periodic Labs’ bet that AI can learn atoms by running experiments.

BUILDER INSIGHTS
9
01
Guillermo Rauch, CEO, Vercel

The moat moves from code to the code factory

He says off-the-shelf coding agents break down in big monorepos, so companies are building their own AI software factories with custom knowledge, workflows, and integrations. Vercel just open-sourced Open Agents, a reference platform for cloud coding agents, and he’s betting the real advantage now lies in the means of production, not the code itself.

X
02
Aaron Levie, CEO, Box

Every team needs an agent wrangler now

He says enterprises are about to create a new role: the agent deployer/manager, someone who finds the highest-leverage workflows and wires agents into them. Think 100x faster lead triage, contract review, onboarding, and knowledge ops, with the hard part being context, evals, human handoffs, and ongoing KPI management. As Box’s CEO, he’s effectively arguing this becomes a core operating function, not a side quest for a central AI team.

X
03
Swyx (dxtipshq)

Agent engineering is clustering in a tiny hotspot

He says ~80% of the world’s agent and AI engineering happens in just three square miles — a blunt reminder that the ecosystem is still highly concentrated. He also notes Cognition usage has roughly doubled globally after two launches, with people getting creative once they can compose agents and make them proactive.

X
04
Dan Shipper, CEO, Every

AI coding splits into pirates and architects

Software engineering in 2026, he says, needs two roles: the pirate who hacks fast to find value, and the architect who turns the mess into something durable. As CEO of Every, he’s basically arguing that AI speeds up exploration, but the real edge is still in cleaning up and systematizing what works.

X
05
Ryo Lu (Cursor_ai)

Cursor is leaning into customizable multi-agent views

They teased more ways to split the workspace up, down, left, and right — plus more customizations and multi-agent views coming to Cursor. It reads like a push to make the editor feel less like one chat box and more like a configurable command center for serious AI coding.

X
06
Amjad Masad, CEO, Replit

Replit adds region control for compliance-heavy apps

You can now configure app hosting region in Replit, a practical move for teams dealing with privacy rules and data residency. It’s a small feature with big enterprise implications: less friction for regulated customers, more reason to trust the platform.

X
07
Nikunj Kothari, Partner, fpvventures

Unlimited tokens may be the best retention perk

He says frontier labs have a sneaky advantage: truly unlimited tokens. In his telling, people join a hot agentic startup, hit token caps and cost constraints, then bounce back to the lab where they can just build without watching every inference bill.

X
08
Garry Tan, CEO, Y Combinator

GBrain gets voice, search, and security polish

He says GBrain is basically his own OpenClaw/Hermes agent setup, now with opinionated search, skill packs, and a voice agent built on OpenAI Realtime — with Gemini Live next. He also shipped v0.9.3, adding search tuning, evals, CJK query support, better health checks, and security hotfixes.

X
09
Peter Yang

AI tooling is becoming a productivity trap

He says the real question with OpenClaw, Claude Code, and similar tools is whether they’re actually getting work done — or just making the setup itself feel productive. He also argues OpenAI has a problem if GPT integration isn’t as good as Opus inside OpenClaw, because that’s now the baseline users expect.

X
PODCAST HIGHLIGHTS
1

Periodic Labs wants AI to learn atoms by running experiments

The Takeaway: AI gets truly useful in science only when it stops guessing from text and starts learning from experiments.

  • Physicists fit AI well because they’re trained to be principled, skeptical, and comfortable with hard systems, not vibes.
  • For materials and chemistry, literature is noisy; the real edge comes from a closed loop of simulation, experiment, and error-checking.
  • The winning architecture isn’t one giant model—it’s a language-model orchestrator calling specialized atomic models and tools.

Liam Fedus, co-founder of Periodic Labs and one of the creators of ChatGPT, keeps circling back to the same idea: language models were the start, not the destination. His path runs from physics and dark matter research to Google Brain, then OpenAI, where he worked on productionizing GPT-4 and helped build ChatGPT. That background matters, because his view of AI is unusually grounded: the next leap won’t come from bigger chatbots alone, but from systems that can touch reality.

His core argument is blunt: “Science ultimately isn’t sitting in a room thinking really hard. You have to conduct experiments.” Periodic is built around that premise. Instead of treating data as a static corpus, it uses experimental results as part of an active loop—spotting anomalies, comparing against simulations and literature, then choosing the next experiments. That’s a very different game from training on the internet.

Fedus is also skeptical of the idea that intelligence just scales smoothly. In his view, these systems are “spiky”: world-class in one domain, surprisingly weak in another. That’s why Periodic leans on language models as an orchestration layer, while specialized neural nets handle atomic systems, symmetry, and control. The bigger vision is simple and ambitious: give humanity “agency for atomic rearrangement synthesis” and speed up the physical world the way software has already accelerated the digital one.
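The orchestration pattern Fedus describes can be sketched in a few lines: a language-model layer that routes tasks to narrow specialist models instead of answering everything itself. This is a minimal illustration of that architecture, not Periodic Labs’ actual system; all names here (`Task`, `orchestrate`, the tool functions) are hypothetical, and a dict lookup stands in for the routing a real language model would do.

```python
# Sketch of an LLM-orchestrator architecture: a routing layer dispatches
# tasks to specialized tools, each strong in one narrow domain.
# Hypothetical names throughout; not Periodic Labs' implementation.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Task:
    kind: str       # e.g. "simulate", "check_anomaly", "plan_experiment"
    payload: dict


# Specialized "atomic" tools: each handles one narrow job well.
def simulate(payload: dict) -> str:
    return f"simulated {payload['material']} at {payload['temp_k']} K"


def check_anomaly(payload: dict) -> str:
    # Flag results that deviate >10% from the simulated expectation.
    expected, observed = payload["expected"], payload["observed"]
    delta = abs(expected - observed) / expected
    return "anomaly" if delta > 0.1 else "consistent"


def plan_experiment(payload: dict) -> str:
    return f"next: vary {payload['parameter']}"


TOOLS: Dict[str, Callable[[dict], str]] = {
    "simulate": simulate,
    "check_anomaly": check_anomaly,
    "plan_experiment": plan_experiment,
}


def orchestrate(task: Task) -> str:
    # In a real system, a language model would pick the tool from the task
    # description; a dict lookup stands in for that routing step here.
    tool = TOOLS.get(task.kind)
    if tool is None:
        raise ValueError(f"no specialist for task kind {task.kind!r}")
    return tool(task.payload)
```

The point of the shape is the loop Fedus outlines: run an experiment, check the result against simulation, and let the anomaly decide the next task, with the orchestrator never doing the domain work itself.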

STAY UPDATED

Daily builder insights, straight to your inbox.

Prefer RSS? Subscribe via RSS

ARCHIVE
2026-04-15 15 items

Woodward said Gemini’s turning into a test-prep machine, Albert called Claude Code the whole workspace, and Cat Wu shipped a desktop control center with parallel sessions and review tools. Rauch also argued agent builders need elastic Postgres, not vibes.

2026-04-13 10 items

Amjad Masad said Apple’s 50th has turned into a PR disaster, while Aaron Levie argued agents would create more work, not cut jobs. Rauch pushed engineers into the customer hot seat, and Claude warned teams to harden security fast.

2026-04-12 11 items

Thariq said Claude Code now handles TurboTax pain, while Rauch called microVM sandboxes the new compute layer. Aditya Agarwal pushed memory over loops, and Levie argued AI won’t shrink law—it’ll inflate it.

2026-04-11 16 items

Claude pushed into Word with tracked edits, and Claude Code moved planning to the web with auto mode approvals. Garry Tan called agents the Altair BASIC era, while Aaron Levie warned software without a real API gets left behind.

2026-04-10 12 items

Karpathy said free ChatGPT lagged while frontier coding models didn’t. Albert pushed cheap-to-smart escalation, Rauch said cloud infra went agent-native, and OpenAI’s next leap looked like autonomy—not chat.

2026-04-09 16 items

Woodward gave Gemini a second brain with Notebooks, while Anthropic shipped Managed Agents to move Claude from prompt to production. Rauch called the web AI’s native OS, and Levie, Masad, and Shipper all bet agents will do the work, not the people.

2026-04-08 12 items

Albert teased Anthropic’s Mythos Preview, Cat Wu juiced Claude Code’s CLI tricks, and Peter Steinberger patched CodexBar with 2 providers plus billing fixes. Levie said agents are eating knowledge work, while Nikunj Kothari preached retention over launch hype.

2026-04-07 8 items

Levie said agents won’t erase work, just push it up a layer; Yang argued they’ll shrink teams, not ambition. Garry Tan flagged an unpatched file leak in Claude’s coding env, while Kothari called Anthropic’s revenue ramp absurdly fast.

2026-04-06 10 items

Rauch said v0 now builds physics, not just UI, while Karpathy noted GitHub Gists have weirdly good comments. Levie argued AI efficiency creates more work, not less, and Tan said open source had entered its golden age.

2026-04-05 4 items

Karpathy pushed “your data, your files, your AI.” Levie argued context beat raw model IQ in enterprise AI. Garry Tan said GStack kept shipping security fixes fast, while No Priors spotlighted Periodic Labs’ bet on atoms, not just text.

2026-04-04 9 items

Claude plugged into Microsoft 365 everywhere, Swyx said Devin one-shot blog-to-code, and Peter Steinberger called out GitHub’s API as still not built for agents. Aaron Levie hit the context wall, while Garry Tan shipped a DX review tool from his own stack.

2026-04-03 10 items

Claude landed computer use on Windows, Karpathy argued LLMs should build your wiki, and Amjad Masad pushed Replit deeper into enterprise sales. Peter Yang said Cursor 3 got out of the agent’s way, while Peter Steinberger warned AI slop was flooding kernel security with real bugs.

2026-04-02 12 items

Steinberger called plan mode training wheels, while Thariq gave Claude Code a mouse-friendly renderer and Cat Wu showed sessions jumping phone-to-laptop. Masad framed Replit as an OS for agents, Rauch said Vercel signups compounded fast, and Anthropic’s infra tweaks swung coding scores by 6 points.

2026-04-01 4 items

Levie said AI productivity hit the enterprise risk wall, while Weil argued proofs got cleaner, not just better. Agarwal floated public source code as the new prod debugging, and Data Driven NYC claimed one founder could run a company if agents handled the layers below.

2026-03-31 15 items

Karpathy warned unpinned deps can turn one hack into mass pwnage, while Rauch and Levie said agents still need human guardrails and redesigned workflows. Meanwhile Claude Code got enterprise auto mode, Replit added built-in monetization, and Swyx spotted “Sign in with ChatGPT” already live.

2026-03-29 7 items

Andrej Karpathy highlighted how LLMs can argue any side, suggesting we use it as a feature. Guillermo Rauch finally shipped his dream text layout, bringing his vision to life. Meanwhile, Amjad Masad claimed AI is democratizing app building and elevating top engineers.

2026-03-28 7 items

Andrej Karpathy suggested leveraging LLMs' ability to argue any side as a feature. Guillermo Rauch turned text layout dreams into reality with Vercel's latest feature. Meanwhile, Amjad Masad claimed AI is democratizing app building, liberating top engineers for bigger challenges.