AI & I

Dan Shipper

Details

Learn how the smartest people in the world are using AI to think, create, and relate. Each week I interview founders, filmmakers, writers, investors, and others about how they use AI tools like ChatGPT, Claude, and Midjourney in their work and in their lives. We screen-share through their historical chats and then experiment with AI live on the show. Join us to discover how AI is changing how we think about our world—and ourselves. For more essays, interviews, and experiments at the forefront of AI: https://every.to/chain-of-thought?sort=newest.

Recent Episodes

The Secrets of Claude's Platform From the Team Who Built It
MAY 8, 2026
In the future, you’ll be able to accomplish a goal by just giving Claude an outcome and a budget. That’s the direction Anthropic is building in with its new Managed Agents features, announced at this week’s Code with Claude developer event. The basic idea: Claude, wrapped in a computer in the cloud, that you can spin up, scale, and manage as needed. Anthropic is taking on the infrastructure that kills most agent products, and making sure that it scales to meet the needs of agents running 24/7.

On this week’s AI & I from @every, I talk with Angela Jiang (@angjiang), head of product for the Claude platform, and Katelyn Lesse (@katelyn_lesse), head of engineering for the Claude platform, about what Anthropic is building and what it takes to make agents reliable in production.

If you found this episode interesting, please like, subscribe, comment, and share!

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Timestamps:
00:01:48 - How the Claude platform evolved from API to agents
00:04:09 - The primitives that make up Claude Managed Agents
00:10:37 - Why the harness and the model are becoming a single unit
00:18:49 - The infrastructure wall that kills most agent projects in production
00:24:49 - Why team agents need a different shape than individual productivity tools
00:26:36 - How Anthropic's legal team uses an agent to review marketing copy
00:34:24 - Using multi-agent orchestration for advisor strategies, adversarial pairs, and swarms
00:35:50 - How to measure agent success with outcome and budget as the end state
00:39:11 - What the platform looks like a year from now, when Claude writes its own harness
43 MIN
Why We Switched From Claude Code to Codex
MAY 6, 2026
In January, Dan Shipper wrote that whoever wins vibe coding wins how you work on your computer—and OpenAI had some serious catching up to do. Three months and the release of GPT-5.5 later, Codex has more than caught up. Austin Tedesco, Every's head of growth, now spends about 80 percent of his working time inside the Codex desktop app, doing everything from drafting go-to-market plans from a stack of meeting transcripts to rebuilding the company's KPI dashboard.

On this episode of AI & I, Dan sat down with Austin to discuss why the agent management interface—a desktop app built on top of a coding agent—is becoming the new operating system for knowledge work, and why Codex has become his daily driver.

If you found this episode interesting, please like, subscribe, comment, and share!

To hear more from Dan Shipper:
Subscribe to Every: every.to/subscribe
Follow him on X: twitter.com/danshipper
Join the membership for Where You Live at joinbilt.com/dan

Timestamps for YouTube:
00:00:00 Introduction
00:00:57 How Codex went from a tool for senior engineers to a daily driver for knowledge work
00:02:42 How Claude Code proved that a great coding agent works for any knowledge work
00:07:24 Austin's switch to Codex
00:13:48 How Austin set up Codex with folders, keys, and reviewer agents
00:18:24 Using Codex to brainstorm automations across Gmail, Slack, and Notion
00:22:42 How Austin manages the human review step when Codex is drafting communications
00:28:54 Using Codex to build specialized agents inspired by product executive Claire Vo
00:31:09 Synthesizing meeting transcripts and Slack threads into a go-to-market plan
00:40:15 Building a live KPI tracker in Notion that agents can read
00:44:54 Using Codex for recruiting

Links to resources mentioned in the episode:
Austin on X: @tedescau
Dan's January essay on OpenAI's catch-up problem: every.to/chain-of-thought/openai-has-some-catching-up-to-do
Every's vibe check on GPT-5.5: every.to/vibe-check/gpt-5-5
58 MIN
How Stripe Is Building for an Agent-native World
APR 29, 2026
Emily Glassberg Sands leads data and AI at Stripe, which processes roughly 2% of global GDP, giving her a bird’s-eye view into how AI is upending the internet economy. Dan Shipper talked with Glassberg Sands for Every's AI & I about what the data on Stripe's network actually shows: AI companies are scaling three times faster than the top SaaS cohort of 2018, fraud has moved from the checkout to the full funnel, and agents have started buying things, although mostly low-stakes commodities like Halloween costumes.

The conversation covers the new fraud types unique to AI companies, the AI-on-AI arms race between bad actors and fraud detectors, where AI revenue growth is actually coming from, and how Stripe is rebuilding the payments infrastructure for a world where the buyer is an agent.

If you found this episode interesting, please like, subscribe, comment, and share!

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Head to http://granola.ai/every and get 3 months free with the code EVERY

Timestamps:
00:00:45 Introduction
00:01:27 New rules for an agent-driven economy
00:03:57 Compute theft is the new payment fraud
00:10:00 How Stripe expanded fraud detection from checkout to the full customer lifecycle
00:19:48 Why AI companies are scaling way faster than top SaaS companies
00:23:27 Outcome-based billing is replacing seat-based pricing
00:29:57 Where AI spending is coming from
00:36:45 How the developer experience changes when agents are the builders
00:41:00 The agentic commerce spectrum, from assisted buying to autonomous purchasing
00:51:06 Meet Link, a consumer wallet for delegated agent purchases

Links to resources mentioned in the episode:
Emily Glassberg Sands on X: https://x.com/emilygsands
Stripe: https://stripe.com
Stripe Radar: https://stripe.com/radar
Stripe Link: https://link.com
Lovable: https://lovable.dev
53 MIN
The AI Sandwich: Where Humans Excel in an AI World
APR 22, 2026
Most frameworks for working with AI agents assume humans should stay in the loop at every phase. That’s the wrong approach, says Cora general manager Kieran Klaassen.

Kieran is the creator of Every's AI-native engineering methodology, compound engineering. His four-step framework—plan, work, review, compound—rebuilds how engineers work with agents. The insight, worked out with collaborator Trevin Chow, is about when to be in the loop and when to step away and let the model handle it. "LLMs are very good at just following steps, doing deep work, working for hours—days even now," Kieran says. "That thing is kind of solved."

Kieran and Trevin describe an AI workflow as a sandwich. Agents are the workhorse filling, and humans are the bread, responsible for framing the problem at the start and reviewing the outputs at the end. Every CEO Dan Shipper talked with Kieran for AI & I about why setting the frame of a problem is still hard for agents, why simulated personas won't replace human judgment, Dan's bar for AGI—an agent worth running 24/7 with no off switch—and what Kieran's background as a classical composer taught him about performance, polish, and finding the parts of work that bring you joy.

If you found this episode interesting, please like, subscribe, comment, and share!

Head to http://granola.ai/every and get 3 months free with the code EVERY

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Discover more resources in the episode:
Compound engineering plugin: https://github.com/EveryInc/compound-engineering-plugin
Compound engineering guide: https://every.to/source-code/compound-engineering-the-definitive-guide
Compound engineering camp: https://every.to/source-code/compound-engineering-camp-every-step-from-scratch

Timestamps:
00:00:00 – Introduction and the AI sandwich metaphor
00:02:33 – What compound engineering is and how it’s evolved
00:04:27 – The "work" phase of agentic coding is essentially solved
00:06:27 – Why humans belong at the beginning and the end of an AI workflow
00:11:06 – Dan's argument for why agents can't change frames—and how this will keep us employed
00:16:51 – Full automation is a moving target
00:23:21 – Musical composition as a model for human-AI collaboration
00:26:39 – Find your place in an AI-accelerated world by leaning into what brings you joy
28 MIN
The AI Model Built for What LLMs Can't Do
APR 15, 2026
Most AI companies are racing to build bigger LLMs. Eve Bodnia thinks that's the wrong approach.

Eve is the founder and CEO of Logical Intelligence, which is developing an alternative to the transformer-based models dominating the industry. Her argument: LLMs’ architecture makes them fundamentally unsuited for some mission-critical tasks. A system that generates output one token at a time, with no ability to inspect its own reasoning mid-process or guarantee its results, shouldn't be trusted to design chips, analyze financial data, or even fly a plane.

Her alternative is the energy-based model (EBM), a form of AI rooted in the physics principle of energy minimization, not language prediction. Rather than guessing the next probable word, an EBM maps every possible outcome across a mathematical landscape, where likely states settle into valleys and improbable ones sit on peaks.

Dan Shipper talked with Bodnia for AI & I about why she believes LLM progress is plateauing, what it means for AI to actually understand data rather than just pattern-match across it, and how her team is building toward formally verified code generated in plain English—no C++ required.

If you found this episode interesting, please like, subscribe, comment, and share!

Head to http://granola.ai/every and get 3 months free with the code EVERY

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Timestamps:
00:00:51 - Introduction
00:02:09 - Why correctness and verifiability matter in AI
00:09:33 - What an energy-based model is
00:14:21 - How EBMs construct energy landscapes to understand data
00:19:00 - Why modeling intelligence through language alone is a flawed approach
00:26:54 - What it means for a model to "understand" data
00:37:21 - How EBMs solve the vibe coding problem and enable formally verified code
00:43:21 - Why LLM progress is plateauing
00:49:54 - Mission-critical industries haven't adopted LLMs, and how EBMs could fill that gap
53 MIN