AI Builders Brief

Follow builders, not influencers.

2026.04.12

25+ builders tracked

TL;DR

Thariq said Claude Code now handles TurboTax pain, while Rauch called microVM sandboxes the new compute layer. Aditya Agarwal pushed memory over loops, and Levie argued AI won’t shrink law—it’ll inflate it.

BUILDER INSIGHTS (10)
01
Thariq (anthropicai)

Claude Code now handles TurboTax pain

Claude Code added a TurboTax connector, so tax procrastination just got a lot less painful. It’s a small but very real example of AI agents moving from chat to actually doing annoying life admin inside the tools people already use.

02
Guillermo Rauch (CEO, vercel)

MicroVM sandboxes are the new compute layer

Rauch says Vercel Sandbox is now the fastest microVM-based sandbox, and that customers are seeing the real win in production: better performance and reliability, not just lab benchmarks. The pitch is bigger than sandboxes: this is the foundation for coding agents, parallel compute, and whatever weird workloads come next.

03
Nikunj Kothari (Partner, fpvventures)

VCs want cheap in, rich out — avoid the mercenaries

He says some VCs preach low valuations when they’re investing, then flip and push for the highest mark possible once they’re on the cap table. His advice: back missionaries, not mercenaries, and backchannel anyone joining your round because the goodwill disappears fast when things get rough.

04
Peter Steinberger (openclaw)

Make the agent keep working, not just planning

He’s testing a stricter agent mode in OpenClaw that forces GPT-5.x to keep reading code, calling tools, and making changes instead of stopping at a polite plan. He’s also making the harness pluggable, so Codex or other SDKs can run the agent loop — with Codex taking over threads, resume, compaction, and app-server execution in one setup.

05
Aaron Levie (CEO, box)

AI won’t shrink law — it’ll inflate it

AI will likely create more lawyers, not fewer: more people will ask legal questions, more exotic issues will need review, and AI itself is spawning fresh IP, privacy, and compliance work. He points to the PC/internet era as precedent — when professions get more efficient, demand often rises instead of falling.

06
Aditya Agarwal (CTO, SouthPkCommons)

Agents need memory, not just loops

He says the real line between a true agent and a looped LLM is long-horizon memory management. The most interesting part of Claude Code, in his view, is its 3-tier memory architecture — a reminder that agent quality is mostly a memory problem, not a prompt problem.

07
Peter Yang

Figma’s AI-era design play is coming into focus

He teased a chat with Figma CEO Dylan Field on whether AI can learn design taste, plus the bigger question of whether design systems help or hurt creativity. The real hook: how Figma plans to compete once AI starts eating more of the design workflow.

08
Garry Tan (CEO, ycombinator)

Thin harnesses, fat skills win agentic work

He argues agentic systems should keep the harness thin: memory and skills live in markdown, and the brain sits in a git repo the harness just reads. Three months in, with the open-source project used by tens of thousands of agentic engineers a day, his counterpoint is blunt: don't let the wrapper become the product.
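To make the thin-harness idea concrete, here is a minimal sketch of what "skills live in markdown, the harness just reads them" might look like. This is an illustration only, not Tan's actual design: the `skills/*.md` layout and both function names are hypothetical.

```python
from pathlib import Path

def load_skills(repo_dir: str) -> str:
    """Concatenate every markdown 'skill' file in the repo into one
    prompt section. The harness stays thin: it only reads text, while
    all behavior lives in the versioned markdown files themselves."""
    parts = []
    for md in sorted(Path(repo_dir).glob("skills/*.md")):
        parts.append(f"## {md.stem}\n{md.read_text().strip()}")
    return "\n\n".join(parts)

def build_system_prompt(repo_dir: str, task: str) -> str:
    # The "brain" is the repo content; the harness just splices it
    # into the prompt for whatever model runs the agent loop.
    return f"{load_skills(repo_dir)}\n\n# Task\n{task}"
```

The point of the sketch: upgrading the agent means editing and committing markdown, not redeploying harness code, which is exactly why the wrapper stays too small to become the product.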

09
Zara Zhang

Agents make collaboration optional, not central

She argues the most efficient team structure is basically anti-teamwork: one person owns a task end-to-end and works with agents. Human communication, in her view, should shrink to deciding what to build, defining what “good” looks like, and the parts that actually need empathy or creativity — not status updates and handoffs.

10
Matt Turck (FirstMarkCap)

AI customer service is basically solved

He says the AI customer service market is already crowded enough that it’s basically a solved problem. That’s a blunt take from a FirstMark VC, and it suggests the real differentiation now is elsewhere in the stack, not in yet another support bot.

PODCAST HIGHLIGHTS (1)

Stablecoins are becoming the rails for machine-to-machine finance

The Takeaway: Jeremy Allaire thinks the next financial system won’t be built for humans first — it’ll be built for AI agents.

  • Stablecoins aren’t a crypto side quest; they’re a full-reserve, internet-native dollar system designed to be safer and more useful than legacy banking.
  • The real unlock isn’t speculation, it’s programmable money: software can now move value, settle contracts, and coordinate economic activity in real time.
  • Circle’s bet with Arc is contrarian to crypto’s old “anti-government” vibe: mainstream finance needs known validators, deterministic finality, and compliance baked in.

Allaire, co-founder and CEO of Circle, has spent more than a decade chasing the same idea: “a protocol for dollars on the internet.” That started as a way to move money instantly and globally, but his philosophy has sharpened into something bigger. He argues that stablecoins like USDC are the practical answer to the old banking problem of leverage and fragility — a safer, full-reserve form of money backed by short-duration Treasuries, repos, and cash. In his view, that’s why stablecoins matter more than most crypto assets: they’re not trying to escape the system, they’re trying to fix it.

His sharper point is about what comes next. As AI agents begin doing real work, buying services from each other, and coordinating across companies and borders, they’ll need financial infrastructure that works “globally, interoperably, instantly.” That’s where blockchain becomes less like a casino and more like an operating system. Allaire sees Arc as an “economic operating system” built for this machine economy, with USDC as the native money layer.

The twist: he’s not romantic about decentralization for its own sake. He wants infrastructure that financial institutions can actually trust. Or as he put it, the goal is to support “the real economy’s activity, not a kind of shadow economy.”

STAY UPDATED

Daily builder insights, straight to your inbox.

Prefer RSS? Subscribe via RSS

ARCHIVE
2026-04-11 16 items

Claude pushed into Word with tracked edits, and Claude Code moved planning to the web with auto mode approvals. Garry Tan called agents the Altair BASIC era, while Aaron Levie warned software without a real API gets left behind.

2026-04-10 12 items

Karpathy said free ChatGPT lagged while frontier coding models didn’t. Albert pushed cheap-to-smart escalation, Rauch said cloud infra went agent-native, and OpenAI’s next leap looked like autonomy—not chat.

2026-04-09 16 items

Woodward gave Gemini a second brain with Notebooks, while Anthropic shipped Managed Agents to move Claude from prompt to production. Rauch called the web AI’s native OS, and Levie, Masad, and Shipper all bet agents will do the work, not the people.

2026-04-08 12 items

Albert teased Anthropic’s Mythos Preview, Cat Wu juiced Claude Code’s CLI tricks, and Peter Steinberger patched CodexBar with 2 providers plus billing fixes. Levie said agents are eating knowledge work, while Nikunj Kothari preached retention over launch hype.

2026-04-07 8 items

Levie said agents won’t erase work, just push it up a layer; Yang argued they’ll shrink teams, not ambition. Garry Tan flagged an unpatched file leak in Claude’s coding env, while Kothari called Anthropic’s revenue ramp absurdly fast.

2026-04-06 10 items

Rauch said v0 now builds physics, not just UI, while Karpathy noted GitHub Gists have weirdly good comments. Levie argued AI efficiency creates more work, not less, and Tan called it open source's golden age.

2026-04-05 4 items

Karpathy pushed “your data, your files, your AI.” Levie argued context beat raw model IQ in enterprise AI. Garry Tan said GStack kept shipping security fixes fast, while No Priors spotlighted Periodic Labs’ bet on atoms, not just text.

2026-04-04 9 items

Claude plugged into Microsoft 365 everywhere, Swyx said Devin one-shot blog-to-code, and Peter Steinberger called out GitHub’s API as still not built for agents. Aaron Levie hit the context wall, while Garry Tan shipped a DX review tool from his own stack.

2026-04-03 10 items

Claude landed computer use on Windows, Karpathy argued LLMs should build your wiki, and Amjad Masad pushed Replit deeper into enterprise sales. Peter Yang said Cursor 3 got out of the agent’s way, while Peter Steinberger warned AI slop was flooding kernel security with real bugs.

2026-04-02 12 items

Steinberger called plan mode training wheels, while Thariq gave Claude Code a mouse-friendly renderer and Cat Wu showed sessions jumping phone-to-laptop. Masad framed Replit as an OS for agents, Rauch said Vercel signups compounded fast, and Anthropic’s infra tweaks swung coding scores by 6 points.

2026-04-01 4 items

Levie said AI productivity hit the enterprise risk wall, while Weil argued proofs got cleaner, not just better. Agarwal floated public source code as the new prod debugging, and Data Driven NYC claimed one founder could run a company if agents handled the layers below.

2026-03-31 15 items

Karpathy warned unpinned deps can turn one hack into mass pwnage, while Rauch and Levie said agents still need human guardrails and redesigned workflows. Meanwhile Claude Code got enterprise auto mode, Replit added built-in monetization, and Swyx spotted “Sign in with ChatGPT” already live.

2026-03-29 7 items

Andrej Karpathy highlighted how LLMs can argue any side, suggesting we use it as a feature. Guillermo Rauch finally shipped his dream text layout, bringing his vision to life. Meanwhile, Amjad Masad claimed AI is democratizing app building and elevating top engineers.

2026-03-28 7 items

Andrej Karpathy suggested leveraging LLMs' ability to argue any side as a feature. Guillermo Rauch turned text layout dreams into reality with Vercel's latest feature. Meanwhile, Amjad Masad claimed AI is democratizing app building, liberating top engineers for bigger challenges.