Fireside Product Management

Tom Leung


Details

Product Management podcast where 20-year PM veteran Tom Leung interviews VPs, CPOs, and CEOs who rose up from product to talk about their careers, the art and science of product management, and advice for other PMs. Watch video on YouTube: firesidepm.co. Learn more about host Tom Leung at http://tomleungcoaching.com and firesidepm.substack.com

Recent Episodes

I Tested 5 AI Tools to Write a PRD—Here's the Winner
DEC 15, 2025
<p>TLDR: It was Claude :-) When I set out to compare ChatGPT, Claude, Gemini, Grok, and ChatPRD for writing Product Requirement Documents, I figured they’d all be roughly equivalent. Maybe some subtle variations in tone or structure, but nothing earth-shattering. They’re all built on similar transformer architectures, trained on massive datasets, and marketed as capable of handling complex business writing.</p><p>What I discovered over 45 minutes of hands-on testing revealed not just which tools are better for PRD creation, but why they’re better, and more importantly, how you should actually be using AI to accelerate your product work without sacrificing quality or strategic thinking.</p><p>If you’re an early or mid-career PM in Silicon Valley, this matters to you. Because here’s the uncomfortable truth: your peers are already using AI to write PRDs, analyze features, and generate documentation. The question isn’t whether to use these tools. The question is whether you’re using the right ones most effectively.</p><p>So let me walk you through exactly what I did, what I learned, and what you should do differently.</p><p>The Setup: A Real-World Test Case</p><p>Here’s how I structured the experiment. As I said at the beginning of my recording, “We are back in the Fireside PM podcast and I did that review of the ChatGPT browser and people seemed to like it and then I asked, uh, in a poll, I think it was a LinkedIn poll maybe, what should my next PM product review be? And, people asked for ChatPRD.”</p><p>So I had my marching orders from the audience. But I wanted to make this more comprehensive than just testing ChatPRD in isolation. I opened up five tabs: ChatGPT, Claude, Gemini, Grok, and ChatPRD.</p><p>For the test case, I chose something realistic and relevant: an AI-powered tutor for high school students. Think Khanmigo or similar edtech platforms. 
This gave me a concrete product scenario that’s complex enough to stress-test these tools but straightforward enough that I could iterate quickly.</p><p>But here’s the critical part that too many PMs get wrong when they start using AI for product work: I didn’t just throw a single sentence at these tools and expect magic.</p><p>The “Back of the Napkin” Approach: Why You Still Need to Think</p><p>“I presume everybody agrees that you should have some formulated thinking before you dump it into the chatbot for your PRD,” I noted early in my experiment. “I suppose in the future maybe you could just do, like, a one-sentence prompt and come out with the perfect PRD because it would just know everything about you and your company in the context, but for now we’re gonna do this more, a little old-school AI approach where we’re gonna do some original human thinking.”</p><p>This is crucial. I see so many PMs, especially those newer to the field, treat AI like a magic oracle. They type in “Write me a PRD for a social feature” and then wonder why the output is generic, unfocused, and useless.</p><p>Your job as a PM isn’t to become obsolete. It’s to become more effective. And that means doing the strategic thinking work that AI <em>cannot</em> do for you.</p><p>So I started in Google Docs with what I call a “back of the napkin” PRD structure. Here’s what I included:</p><p><strong>Why:</strong> The strategic rationale. In this case: “Want to complement our existing edtech business with a personalized AI tutor, uh, want to maintain position industry, and grow through innovation. on mission for learners.”</p><p><strong>Target User:</strong> Who are we building for? “High school students interested in improving their grades and fundamentals. Fundamental knowledge topics. Specifically science and math. Students who are not in the top ten percent, nor in the bottom ten percent.”</p><p>This is key—I got specific. Not just “students,” but students in the middle 80%. 
Not just “any subject,” but science and math. This specificity is what separates useful AI output from garbage.</p><p><strong>Problem to Solve:</strong> What’s broken? “Students want better grades. Students are impatient. Students currently use AI just for finding the answers and less to, uh, understand concepts and practice using them.”</p><p><strong>Key Elements:</strong> The feature set and approach.</p><p><strong>Success Metrics:</strong> How we’d measure success.</p><p>Now, was this a perfectly polished PRD outline? Hell no. As you can see from my transcript, I was literally thinking out loud, making typos, restructuring on the fly. But that’s exactly the point. I put in maybe 10-15 minutes of human strategic thinking. That’s all it took to create a foundation that would dramatically improve what came out of the AI tools.</p><p>Round One: Generating the Full PRD</p><p>With my back-of-the-napkin outline ready, I copied it into each tool with a simple prompt asking them to expand it into a more complete PRD.</p><p>ChatGPT: The Reliable Generalist</p><p>ChatGPT gave me something that was... fine. Competent. Professional. But also deeply uninspiring.</p><p>The document it produced checked all the boxes. It had the sections you’d expect. The writing was clear. But when I read it, I couldn’t shake the feeling that I was reading something that could have been written for literally any product in any company. It felt like “an average of everything out there,” as I noted in my evaluation.</p><p>Here’s what ChatGPT did well: It understood the basic structure of a PRD. It generated appropriate sections. The grammar and formatting were clean. If you needed to hand something in by EOD and had literally no time for refinement, ChatGPT would save you from complete embarrassment.</p><p>But here’s what it lacked: Depth. Nuance. Strategic thinking that felt connected to real product decisions. 
When it described the target user, it used phrases that could apply to any edtech product. When it outlined success metrics, they were the obvious ones (engagement, retention, test scores) without any interesting thinking about leading indicators or proxy metrics.</p><p>The problem with generic output isn’t that it’s wrong, it’s that it’s invisible. When you’re trying to get buy-in from leadership or alignment from engineering, you need your PRD to feel specific, considered, and connected to your company’s actual strategy. ChatGPT’s output felt like it was written by someone who’d read a lot of PRDs but never actually shipped a product.</p><p>One specific example: When I asked for success metrics, ChatGPT gave me “Student engagement rate, Time spent on platform, Test score improvement.” These aren’t wrong, but they’re lazy. They don’t show any thinking about what specifically matters for an AI tutor versus any other educational product. Compare that to Claude’s output, which got more specific about things like “concept mastery rate” and “question-to-understanding ratio.”</p><p><strong>Actionable Insight:</strong> Use ChatGPT when you need fast, serviceable documentation that doesn’t need to be exceptional. Think: internal updates, status reports, routine communications. Don’t rely on it for strategic documents where differentiation matters. If you do use ChatGPT for important documents, treat its output as a starting point that needs significant human refinement to add strategic depth and company-specific context.</p><p>Gemini: Better Than Expected</p><p>Google’s Gemini actually impressed me more than I anticipated. The structure was solid, and it had a nice balance of detail without being overwhelming.</p><p>What Gemini got right: The writing had a nice flow to it. The document felt organized and logical. It did a better job than ChatGPT at providing specific examples and thinking through edge cases. 
For instance, when describing the target user, it went beyond demographics to consider behavioral characteristics and motivations.</p><p>Gemini also showed some interesting strategic thinking. It considered competitive positioning more thoughtfully than ChatGPT and proposed some differentiation angles that weren’t in my original outline. Good AI tools should add insight, not just regurgitate your input with better formatting.</p><p>But here’s where it fell short: the visual elements. When I asked for mockups, Gemini produced images that looked more like stock photos than actual product designs. They weren’t terrible, but they weren’t compelling either. They had that AI-generated sheen that makes it obvious they came from an image model rather than a designer’s brain.</p><p>For a PRD that you’re going to use internally with a team that already understands the context, Gemini’s output would work well. The text quality is strong enough, and if you’re in the Google ecosystem (Docs, Sheets, Meet, etc.), the integration is seamless. You can paste Gemini’s output directly into Google Docs and continue iterating there.</p><p>But if you need to create something compelling enough to win over skeptics or secure budget, Gemini falls just short. It’s good, but not great. It’s the solid B+ student: reliably competent but rarely exceptional.</p><p><strong>Actionable Insight:</strong> Gemini is a strong choice if you’re working in the Google ecosystem and need good integration with Docs, Sheets, and other Google Workspace tools. The quality is sufficient for most internal documentation needs. It’s particularly good if you’re working with cross-functional partners who are already in Google Workspace. You can share and collaborate on AI-generated drafts without friction. 
But don’t expect visual mockups that will wow anyone, and plan to add your own strategic polish for high-stakes documents.</p><p>Grok: Not Ready for Prime Time</p><p>Let’s just say my expectations were low, and Grok still managed to underdeliver. The PRD felt thin, generic, and lacked the depth you need for real product work.</p><p>“I don’t have high expectations for grok, unfortunately,” I said before testing it. Spoiler alert: my low expectations were validated.</p><p><strong>Actionable Insight:</strong> Skip Grok for product documentation work right now. Maybe it’ll improve, but as of my testing, it’s simply not competitive with the other options. It felt like it was 1-2 years behind the others.</p><p>ChatPRD: The Specialized Tool</p><p>Now this was interesting. ChatPRD is purpose-built for PRDs, using foundational models underneath but with specific tuning and structure for product documentation.</p><p>The result? The structure was logical, the depth was appropriate, and it included elements that showed understanding of what actually matters in a PRD. As I reflected: “Cause this one feels like, A human wrote this PRD.”</p><p>The interface guides you through the process more deliberately than just dumping text into a general chat interface. It asks clarifying questions. It structures the output more thoughtfully.</p><p><strong>Actionable Insight:</strong> If you’re a technical lead without a dedicated PM, or you’re a PM who wants a more structured approach to using AI for PRDs, ChatPRD is worth trying for its specialized focus. It’s particularly good when you need something that feels authentic enough to share with stakeholders without heavy editing.</p><p>Claude: The Clear Winner</p><p>But the standout performer, and I’m ranking these, was Claude.</p><p>“I think we know that for now, I’m gonna say Claude did the best job,” I concluded after all the testing. Claude produced the most comprehensive, thoughtful, and strategically sound PRD. 
But what really set it apart were the concept mocks.</p><p>When I asked each tool to generate visual mockups of the product, Claude produced HTML prototypes that, while not fully functional, looked genuinely compelling. They had thoughtful UI design, clear information architecture, and felt like something that could actually guide development.</p><p>“They were, like, closer to, like, what a Lovable would produce or something like that,” I noted, referring to the quality of low-fidelity prototypes that good designers create.</p><p>The text quality was also superior: more nuanced, better structured, and with more strategic depth. It felt like Claude understood not just what a PRD should contain, but why it should contain those elements.</p><p><strong>Actionable Insight:</strong> For any PRD that matters, meaning anything you’ll share with leadership, use to get buy-in, or guide actual product development, you might as well start with Claude. The quality difference is significant enough that it’s worth using Claude even if you primarily use another tool for other tasks.</p><p>Final Rankings: The Definitive Hierarchy</p><p>After testing all five tools on multiple dimensions: initial PRD generation, visual mockups, and even crafting a pitch paragraph for a skeptical VP of Engineering, here’s my final ranking:</p><p>* <strong>Claude</strong> - Best overall quality, most compelling mockups, strongest strategic thinking</p><p>* <strong>ChatPRD</strong> - Best for structured PRD creation, feels most “human”</p><p>* <strong>Gemini</strong> - Solid all-around performance, good Google integration</p><p>* <strong>ChatGPT</strong> - Reliable but generic, lacks differentiation</p><p>* <strong>Grok</strong> - Not competitive for this use case</p><p>“I’d probably say Claude, then chat PRD, then Gemini, then chat GPT, and then Grock,” I concluded.</p><p>The Deeper Lesson: Garbage In, Garbage Out (Still Applies)</p><p>But here’s what matters more than which tool wins: the realization 
that hit me partway through this experiment.</p><p>“I think it really does come down to, like, you know, the quality of the prompt,” I observed. “So if our prompt were a little more detailed, all that were more thought-through, then I’m sure the output would have been better. But as you can see we didn’t really put in brain trust prompting here. Just a little bit of, kind of hand-wavy prompting, but a little better than just one or two sentences.”</p><p>And we still got pretty good results.</p><p>This is the meta-insight that should change how you approach AI tools in your product work: <strong>The quality of your input determines the quality of your output, but the baseline quality of the tool determines the ceiling of what’s possible.</strong></p><p>No amount of great prompting will make Grok produce Claude-level output. But even mediocre prompting with Claude will beat great prompting with lesser tools.</p><p>So the dual strategy is:</p><p>* Use the best tool available (currently Claude for PRDs)</p><p>* Invest in improving your prompting skills, grounding each prompt in as much original, insightful, company-aware, and context-aware human thinking as possible.</p><p>Real-World Workflows: How to Actually Use This in Your Day-to-Day PM Work</p><p>Theory is great. Here’s how to incorporate these insights into your actual product management workflows.</p><p>The Weekly Sprint Planning Workflow</p><p>Every PM I know spends hours each week preparing for sprint planning. You need to refine user stories, clarify acceptance criteria, anticipate engineering questions, and align with design and data science. AI can compress this work significantly.</p><p><strong>Here’s an example workflow:</strong></p><p><strong>Monday morning (30 minutes):</strong></p><p>* Review upcoming priorities and open your rough notes/outline in Google Docs</p><p>* Open Claude and paste your outline with this prompt:</p><p>“I’m preparing for sprint planning. 
Based on these priorities [paste notes], generate detailed user stories with acceptance criteria. Format each as: User story, Business context, Technical considerations, Acceptance criteria, Dependencies, Open questions.”</p><p><strong>Monday afternoon (20 minutes):</strong></p><p>* Review Claude’s output critically</p><p>* Identify gaps, unclear requirements, or missing context</p><p>* Follow up with targeted prompts:</p><p>“The user story about authentication is too vague. Break it down into separate stories for: social login, email/password, session management, and password reset. For each, specify security requirements and edge cases.”</p><p><strong>Tuesday morning (15 minutes):</strong></p><p>* Generate mockups for any UI-heavy stories:</p><p>“Create an HTML mockup for the login flow showing: landing page, social login options, email/password form, error states, and success redirect.”</p><p>* Even if the HTML doesn’t work perfectly, it gives your designers a starting point</p><p><strong>Before sprint planning (10 minutes):</strong></p><p>* Ask Claude to anticipate engineering questions:</p><p>“Review these user stories as if you’re a senior engineer. What questions would you ask? What concerns would you raise about technical feasibility, dependencies, or edge cases?”</p><p>* This preparation makes you look thoughtful and helps the meeting run smoothly</p><p>Total time investment: ~75 minutes. Typical time saved: 3-4 hours compared to doing this manually.</p><p>The Stakeholder Alignment Workflow</p><p>Getting alignment from multiple stakeholders (product leadership, engineering, design, data science, legal, marketing) is one of the hardest parts of PM work. 
AI can help you think through different stakeholder perspectives and craft compelling communications for each.</p><p><strong>Here’s how:</strong></p><p><strong>Step 1: Map your stakeholders (10 minutes)</strong></p><p>Create a quick table in a doc:</p><p>Stakeholder | Primary Concern | Decision Criteria | Likely Objections
VP Product | Strategic fit, ROI | Company OKRs, market opportunity | Resource allocation vs other priorities
VP Eng | Technical risk, capacity | Engineering capacity, tech debt | Complexity, unclear requirements
Design Lead | User experience | User research, design principles | Timeline doesn’t allow proper design process
Legal | Compliance, risk | Regulatory requirements | Data privacy, user consent flows</p><p><strong>Step 2: Generate stakeholder-specific communications (20 minutes)</strong></p><p>For each key stakeholder, ask Claude:</p><p>“I need to pitch this product idea to [Stakeholder]. Based on this PRD, create a 1-page brief addressing their primary concern of [concern from your table]. Open with the specific value for them, address their likely objection of [objection], and close with a clear ask. Tone should be [professional/technical/strategic] based on their role.”</p><p>Then you’ll have customized one-pagers for your pre-meetings with each stakeholder, dramatically increasing your alignment rate.</p><p><strong>Step 3: Synthesize feedback (15 minutes)</strong></p><p>After gathering stakeholder input, ask Claude to help you synthesize:</p><p>“I got the following feedback from stakeholders: [paste feedback]. Identify: (1) Common themes, (2) Conflicting requirements, (3) Legitimate concerns vs organizational politics, (4) Recommended compromises that might satisfy multiple parties.”</p><p>This pattern-matching across stakeholder feedback is something AI does really well and saves you hours of mental processing.</p><p>The Quarterly Planning Workflow</p><p>Quarterly or annual planning is where product strategy gets real. 
You need to synthesize market trends, customer feedback, technical capabilities, and business objectives into a coherent roadmap. AI can accelerate this dramatically.</p><p><strong>Six weeks before planning:</strong></p><p>* Start collecting input (customer interviews, market research, competitive analysis, engineering feedback)</p><p>* Don’t wait until the last minute</p><p><strong>Four weeks before planning:</strong></p><p>Dump everything into Claude with this structure:</p><p>“I’m creating our Q2 roadmap. Context:</p><p>* Business objectives: [paste from leadership]</p><p>* Customer feedback themes: [paste synthesis]</p><p>* Technical capabilities/constraints: [paste from engineering]</p><p>* Competitive landscape: [paste analysis]</p><p>* Current product gaps: [paste from your analysis]</p><p>Generate 5 strategic themes that could anchor our Q2 roadmap. For each theme:</p><p>* Strategic rationale (how it connects to business objectives)</p><p>* Key initiatives (2-3 major features/projects)</p><p>* Success metrics</p><p>* Resource requirements (rough estimate)</p><p>* Risks and mitigations</p><p>* Customer segments addressed”</p><p>This gives you a strategic framework to react to rather than starting from a blank page.</p><p><strong>Three weeks before planning:</strong></p><p>Iterate on the most promising themes:</p><p>“Deep dive on Theme 3. Generate:</p><p>* Detailed initiative breakdown</p><p>* Dependencies on platform/infrastructure</p><p>* Phasing options (MVP vs full build)</p><p>* Go-to-market considerations</p><p>* Data requirements</p><p>* Open questions requiring research”</p><p><strong>Two weeks before planning:</strong></p><p>Pressure-test your thinking:</p><p>“Play devil’s advocate on this roadmap. What are the strongest arguments against each initiative? What am I likely missing? 
What failure modes should I plan for?”</p><p>This adversarial prompting forces you to strengthen weak points before your leadership reviews it.</p><p><strong>One week before planning:</strong></p><p>Generate your presentation:</p><p>“Create an executive presentation for this roadmap. Structure: (1) Market context and strategic imperative, (2) Q2 themes and initiatives, (3) Expected outcomes and metrics, (4) Resource requirements, (5) Key risks and mitigations, (6) Success criteria for decision. Make it compelling but data-driven. Tone: confident but not overselling.”</p><p>Then add your company-specific context, visual brand, and personal voice.</p><p>The Customer Research Workflow</p><p>AI can’t replace talking to customers, but it can help you prepare better questions, analyze feedback more systematically, and identify patterns faster.</p><p><strong>Before customer interviews:</strong></p><p>“I’m interviewing customers about [topic]. Generate:</p><p>* 10 open-ended questions that avoid leading the witness</p><p>* 5 follow-up questions for each main question</p><p>* Common cognitive biases I should watch for</p><p>* A framework for categorizing responses”</p><p>This prep work helps you conduct better interviews.</p><p><strong>After interviews:</strong></p><p>“I conducted 15 customer interviews. Here are the key quotes: [paste anonymized quotes]. Identify:</p><p>* Recurring themes and patterns</p><p>* Surprising insights that contradict our assumptions</p><p>* Segments with different needs</p><p>* Implied needs customers didn’t articulate directly</p><p>* Recommended next steps for validation”</p><p>AI is excellent at pattern-matching across qualitative data at scale.</p><p>The Crisis Management Workflow</p><p>Something broke. The site is down. Data was lost. A feature shipped with a critical bug. You need to move fast.</p><p><strong>Immediate response (5 minutes):</strong></p><p>“Critical incident. Details: [brief description]. 
Generate:</p><p>* Incident classification (Sev 1-4)</p><p>* Immediate stakeholders to notify</p><p>* Draft customer communication (honest, apologetic, specific about what happened and what we’re doing)</p><p>* Draft internal communication for leadership</p><p>* Key questions to ask engineering during investigation”</p><p>Having these drafted in 5 minutes lets you focus on coordination and decision-making rather than wordsmithing.</p><p><strong>Post-incident (30 minutes):</strong></p><p>“Write a post-mortem based on this incident timeline: [paste timeline]. Include:</p><p>* What happened (technical details)</p><p>* Root cause analysis</p><p>* Impact quantification (users affected, revenue impact, time to resolution)</p><p>* What went well in our response</p><p>* What could have been better</p><p>* Specific action items with owners and deadlines</p><p>* Process changes to prevent recurrence Tone: Blameless, focused on learning and improvement.”</p><p>This gives you a strong first draft to refine with your team.</p><p>Common Pitfalls: What Not to Do with AI in Product Management</p><p>Now let’s talk about the mistakes I see PMs making with AI tools. </p><p>Pitfall #1: Treating AI Output as Final</p><p>The biggest mistake is copy-pasting AI output directly into your PRD, roadmap presentation, or stakeholder email without critical review.</p><p>The result? Documents that are grammatically perfect but strategically shallow. Presentations that sound impressive but don’t hold up under questioning. 
Emails that are professionally worded but miss the subtext of organizational politics.</p><p><strong>The fix:</strong> Always ask yourself:</p><p>* Does this reflect my actual strategic thinking, or generic best practices?</p><p>* Would my CEO/engineering lead/biggest customer find this compelling and specific?</p><p>* Are there company-specific details, customer insights, or technical constraints that only I know?</p><p>* Does this sound like me, or like a robot?</p><p>Add those elements. That’s where your value as a PM comes through.</p><p>Pitfall #2: Using AI as a Crutch Instead of a Tool</p><p>Some PMs use AI because they don’t want to think deeply about the product. They’re looking for AI to do the hard work of strategy, prioritization, and trade-off analysis.</p><p>This never works. AI can help you think more systematically, but it can’t replace thinking.</p><p>If you find yourself using AI to avoid wrestling with hard questions (“Should we build X or Y?” “What’s our actual competitive advantage?” “Why would customers switch from the incumbent?”), you’re using it wrong.</p><p><strong>The fix:</strong> Use AI to explore options, not to make decisions. Generate three alternatives, pressure-test each one, then use your judgment to decide. The AI can help you think through implications, but you’re still the one choosing.</p><p>Pitfall #3: Not Iterating</p><p>Getting mediocre AI output and just accepting it is a waste of the technology’s potential.</p><p>The PMs who get exceptional results from AI are the ones who iterate. They generate an initial response, identify what’s weak or missing, and ask follow-up questions. They might go through 5-10 iterations on a key section of a PRD.</p><p>Each iteration is quick (30 seconds to type a follow-up prompt, 30 seconds to read the response), but the cumulative effect is dramatically better output.</p><p><strong>The fix:</strong> Budget time for iteration. Don’t try to generate a complete, polished PRD in one prompt. 
Instead, generate a rough draft, then spend 30 minutes iterating on specific sections that matter most.</p><p>Pitfall #4: Ignoring the Political and Human Context</p><p>AI tools have no understanding of organizational politics, interpersonal relationships, or the specific humans you’re working with.</p><p>They don’t know that your VP of Engineering is burned out and skeptical of any new initiatives. They don’t know that your CEO has a personal obsession with a specific competitor. They don’t know that your lead designer is sensitive about not being included early enough in the process.</p><p>If you use AI-generated communications without layering in this human context, you’ll create perfectly worded documents that land badly because they miss the subtext.</p><p><strong>The fix:</strong> After generating AI content, explicitly ask yourself: “What human context am I missing? What relationships do I need to consider? What political dynamics are in play?” Then modify the AI output accordingly.</p><p>Pitfall #5: Over-Relying on a Single Tool</p><p>Different AI tools have different strengths. Claude is great for strategic depth, ChatPRD is great for structure, Gemini integrates well with Google Workspace.</p><p>If you only ever use one tool, you’re missing opportunities to leverage different strengths for different tasks.</p><p><strong>The fix:</strong> Keep 2-3 tools in your toolkit. Use Claude for important PRDs and strategic documents. Use Gemini for quick internal documentation that needs to integrate with Google Docs. Use ChatPRD when you want more guided structure. Match the tool to the task.</p><p>Pitfall #6: Not Fact-Checking AI Output</p><p>AI tools hallucinate. They make up statistics, misrepresent competitors, and confidently state things that aren’t true. 
If you include those hallucinations in a PRD that goes to leadership, you look incompetent.</p><p><strong>The fix:</strong> Fact-check everything, especially:</p><p>* Statistics and market data</p><p>* Competitive feature claims</p><p>* Technical capabilities and limitations</p><p>* Regulatory and compliance requirements</p><p>If the AI cites a number or makes a factual claim, verify it independently before including it in your document.</p><p>The Meta-Skill: Prompt Engineering for PMs</p><p>Let’s zoom out and talk about the underlying skill that makes all of this work: prompt engineering.</p><p>This is a real skill. The difference between a mediocre prompt and a great prompt can be a 10x difference in output quality. And unlike coding or design, where there’s a steep learning curve, prompt engineering is something you can get good at quickly.</p><p><strong>Principle 1: Provide Context Before Instructions</strong></p><p>Bad prompt:</p><p>“Write a PRD for an AI tutor”</p><p>Good prompt:</p><p>“I’m a PM at an edtech company with 2M users, primarily high school students. We’re exploring an AI tutor feature to complement our existing video content library and practice problems. Our main competitors are Khan Academy and Course Hero. Our differentiation is personalized learning paths based on student performance data.</p><p>Write a PRD for an AI tutor feature targeting students in the middle 80% academically who struggle with science and math.”</p><p>The second prompt gives Claude the context it needs to generate something specific and strategic rather than generic.</p><p><strong>Principle 2: Specify Format and Constraints</strong></p><p>Bad prompt:</p><p>“Generate success metrics”</p><p>Good prompt:</p><p>“Generate 5-7 success metrics for this feature. 
Include a mix of:</p><p>* Leading indicators (early signals of success)</p><p>* Lagging indicators (definitive success measures)</p><p>* User behavior metrics</p><p>* Business impact metrics</p><p>For each metric, specify: name, definition, target value, measurement method, and why it matters.”</p><p>The structure you provide shapes the structure you get back.</p><p><strong>Principle 3: Ask for Multiple Options</strong></p><p>Bad prompt:</p><p>“What should our Q2 priorities be?”</p><p>Good prompt:</p><p>“Generate 3 different strategic approaches for Q2:</p><p>* Option A: Focus on user acquisition</p><p>* Option B: Focus on engagement and retention</p><p>* Option C: Focus on monetization</p><p>For each option, detail: key initiatives, expected outcomes, resource requirements, risks, and recommendation for or against.”</p><p>Asking for multiple options forces the AI (and forces you) to think through trade-offs systematically.</p><p><strong>Principle 4: Specify Audience and Tone</strong></p><p>Bad prompt:</p><p>“Summarize this PRD”</p><p>Good prompt:</p><p>“Create a 1-paragraph summary of this PRD for our skeptical VP of Engineering. Tone: Technical, concise, addresses engineering concerns upfront. Focus on: technical architecture, resource requirements, risks, and expected engineering effort. Avoid marketing language.”</p><p>The audience and tone specification ensures the output will actually work for your intended use.</p><p><strong>Principle 5: Use Iterative Refinement</strong></p><p>Don’t try to get perfect output in one prompt. Instead:</p><p>First prompt: Generate rough draft
Second prompt: “This is too generic. Add specific examples from [our company context].”
Third prompt: “The technical section is weak. Expand with architecture details and dependencies.”
Fourth prompt: “Good. 
Now make it 30% more concise while keeping the key details.”</p><p>Each iteration improves the output incrementally.</p><p>Let me break down the prompting approach that worked in this experiment, because this is immediately actionable for your work tomorrow.</p><p>Strategy 1: The Structured Outline Approach</p><p>Don’t go from zero to full PRD in one prompt. Instead:</p><p>* <strong>Start with strategic thinking</strong> - Spend 10-15 minutes outlining why you’re building this, who it’s for, and what problem it solves</p><p>* <strong>Get specific</strong> - Don’t say “users,” say “high school students in the middle 80% of academic performance”</p><p>* <strong>Include constraints</strong> - Budget, timeline, technical limitations, competitive landscape</p><p>* <strong>Dump your outline into the AI</strong> - Now ask it to expand into a full PRD</p><p>* <strong>Iterate section by section</strong> - Don’t try to perfect everything at once</p><p>This is exactly what I did in my experiment, and even with my somewhat sloppy outline, the results were dramatically better than they would have been with a single-sentence prompt.</p><p>Strategy 2: The Comparative Analysis Pattern</p><p>One technique I used that worked particularly well: asking each tool to do the same specific task and comparing results.</p><p>For example, I asked all five tools: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”</p><p>This forced each tool to synthesize the entire PRD into a compelling pitch while accounting for a specific, challenging audience. The variation in quality was revealing—and it gave me multiple options to choose from or blend together.</p><p><strong>Actionable tip:</strong> When you need something critical (a pitch, an executive summary, a key decision framework), generate it with 2-3 different AI tools and take the best elements from each. 
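For the comparison step, a minimal Python sketch of the pattern (hedged: `generate_fns` is a hypothetical mapping from tool name to whatever client call you wire up for each vendor, not a real API):

```python
# Minimal sketch of the "ensemble" pattern: send one prompt to several
# tools and collect the outputs side by side for human comparison.
# `generate_fns` maps a tool name to a callable you supply (hypothetical;
# plug in each vendor's real client yourself).

def compare_outputs(prompt, generate_fns):
    """Return {tool_name: output} for the same prompt across tools."""
    results = {}
    for name, generate in generate_fns.items():
        try:
            results[name] = generate(prompt)
        except Exception as exc:  # one flaky tool shouldn't sink the comparison
            results[name] = f"[error: {exc}]"
    return results

# Example with stand-in callables instead of real API clients:
fake_tools = {
    "claude": lambda p: f"Claude draft for: {p}",
    "chatgpt": lambda p: f"ChatGPT draft for: {p}",
}
drafts = compare_outputs("Summarize this PRD for a skeptical VP", fake_tools)
```

Reading the drafts side by side and blending the strongest elements by hand is the judgment step no tool does for you.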
This “ensemble approach” often produces better results than any single tool.</p><p>Strategy 3: The Iterative Refinement Loop</p><p>Don’t treat the AI output as final. Use it as a first draft that you then refine through conversation with the AI.</p><p>After getting the initial PRD, I could have asked follow-up questions like:</p><p>* “What’s missing from this PRD?”</p><p>* “How would you strengthen the success metrics section?”</p><p>* “Generate 3 alternative approaches to the core feature set”</p><p>Each iteration improves the output and, more importantly, forces me to think more deeply about the product.</p><p>What This Means for Your Career</p><p>If you’re an early or mid-career PM reading this, you might be thinking: “Great, so AI can write PRDs now. Am I becoming obsolete?”</p><p>Absolutely not. But your role is evolving, and understanding that evolution is critical.</p><p>The PMs who will thrive in the AI era are those who:</p><p>* <strong>Excel at strategic thinking</strong> - AI can generate options, but you need to know which options align with company strategy, customer needs, and technical feasibility</p><p>* <strong>Master the art of prompting</strong> - This is a genuine skill that separates mediocre AI users from exceptional ones</p><p>* <strong>Know when to use AI and when not to</strong> - Some aspects of product work benefit enormously from AI. Others (user interviews, stakeholder negotiation, cross-functional relationship building) require human judgment and empathy</p><p>* <strong>Can evaluate AI output critically</strong> - You need to spot the hallucinations, the generic fluff, and the strategic misalignments that AI inevitably produces</p><p>Think of AI tools as incredibly capable interns. They can produce impressive work quickly, but they need direction, oversight, and strategic guidance. 
Your job is to provide that guidance while leveraging their speed and breadth.</p><p>The Real-World Application: What to Do Monday Morning</p><p>Let’s get tactical. Here’s exactly how to apply these insights to your actual product work:</p><p>For Your Next PRD:</p><p>* <strong>Block 30 minutes for strategic thinking</strong> - Write your back-of-the-napkin outline in Google Docs or your tool of choice</p><p>* <strong>Open Claude</strong> (or ChatPRD if you want more structure)</p><p>* <strong>Copy your outline with this prompt:</strong></p><p>“I’m a product manager at [company] working on [product area]. I need to create a comprehensive PRD based on this outline. Please expand this into a complete PRD with the following sections: [list your preferred sections]. Make it detailed enough for engineering to start breaking down into user stories, but concise enough for leadership to read in 15 minutes. [Paste your outline]”</p><p>* <strong>Review the output critically</strong> - Look for generic statements, missing details, or strategic misalignments</p><p>* <strong>Iterate on specific sections:</strong></p><p>“The success metrics section is too vague. Please provide 3-5 specific, measurable KPIs with target values and explanation of why these metrics matter.”</p><p>* <strong>Generate supporting materials:</strong></p><p>“Create a visual mockup of the core user flow showing the key interaction points.”</p><p>* <strong>Synthesize the best elements</strong> - Don’t just copy-paste the AI output. 
Use it as raw material that you shape into your final document</p><p>For Stakeholder Communication:</p><p>When you need to pitch something to leadership or engineering:</p><p>* <strong>Generate 3 versions</strong> of your pitch using different tools (Claude, ChatPRD, and one other)</p><p>* <strong>Compare them for:</strong></p><p>* Clarity and conciseness</p><p>* Strategic framing</p><p>* Compelling value proposition</p><p>* Addressing likely objections</p><p>* <strong>Blend the best elements</strong> into your final version</p><p>* <strong>Add your personal voice</strong> - This is crucial. AI output often lacks personality and specific company context. Add that yourself.</p><p>For Feature Prioritization:</p><p>AI tools can help you think through trade-offs more systematically:</p><p>“I’m deciding between three features for our next release: [Feature A], [Feature B], and [Feature C]. For each feature, analyze: (1) Estimated engineering effort, (2) Expected user impact, (3) Strategic alignment with making our platform the go-to solution for [your market], (4) Risk factors. 
Then recommend a prioritization with rationale.”</p><p>This doesn’t replace your judgment, but it forces you to think through each dimension systematically and often surfaces considerations you hadn’t thought of.</p><p>The Uncomfortable Truth About AI and Product Management</p><p>Let me be direct about something that makes many PMs uncomfortable: AI will make some PM skills less valuable while making others more valuable.</p><p><strong>Less valuable:</strong></p><p>* Writing boilerplate documentation</p><p>* Creating standard frameworks and templates</p><p>* Generating routine status updates</p><p>* Synthesizing information from existing sources</p><p><strong>More valuable:</strong></p><p>* Strategic product vision and roadmapping</p><p>* Deep customer empathy and insight generation</p><p>* Cross-functional leadership and influence</p><p>* Critical evaluation of options and trade-offs</p><p>* Creative problem-solving for novel situations</p><p>If your PM role primarily involves the first category of tasks, you should be concerned. But if you’re focused on the second category while leveraging AI for the first, you’re going to be exponentially more effective than your peers who resist these tools.</p><p>The PMs I see succeeding aren’t those who can write the best PRD manually. They’re those who can write the best PRD with AI assistance in one-tenth the time, then use the saved time to talk to more customers, think more deeply about strategy, and build stronger cross-functional relationships.</p><p>Advanced Techniques: Beyond Basic PRD Generation</p><p>Once you’ve mastered the basics, here are some advanced applications I’ve found valuable:</p><p>Competitive Analysis at Scale</p><p>“Research our top 5 competitors in [market]. For each one, analyze: their core value proposition, key features, pricing strategy, target customer, and likely product roadmap based on recent releases and job postings. 
Create a comparison matrix showing where we have advantages and gaps.”</p><p>Then use web search tools in Claude or Perplexity to fact-check and expand the analysis.</p><p>Scenario Planning</p><p>“We’re considering three strategic directions for our product: [Direction A], [Direction B], [Direction C]. For each direction, map out: likely customer adoption curve, required technical investments, competitive positioning in 12 months, and potential pivots if the hypothesis proves wrong. Then identify the highest-risk assumptions we should test first for each direction.”</p><p>This kind of structured scenario thinking is exactly what AI excels at—generating multiple well-reasoned perspectives quickly.</p><p>User Story Generation</p><p>After your PRD is solid:</p><p>“Based on this PRD, generate a complete set of user stories following the format ‘As a [user type], I want to [action] so that [benefit].’ Include acceptance criteria for each story. Organize them into epics by functional area.”</p><p>This can save your engineering team hours of grooming meetings.</p><p>The Tools Will Keep Evolving. Your Process Shouldn’t</p><p>Here’s something important to remember: by the time you read this, the specific rankings might have shifted. Maybe ChatGPT-5 has leapfrogged Claude. 
Maybe a new specialized tool has emerged.</p><p>But the core principles won’t change:</p><p>* Do strategic thinking before touching AI</p><p>* Use the best tool available for your specific task</p><p>* Iterate and refine rather than accepting first outputs</p><p>* Blend AI capabilities with human judgment</p><p>* Focus your time on the uniquely human aspects of product management</p><p>The specific tools matter less than your process for using them effectively.</p><p>A Final Experiment: The Skeptical VP Test</p><p>I want to share one more insight from my testing that I think is particularly relevant for early and mid-career PMs.</p><p>Toward the end of my experiment, I gave each tool this prompt: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”</p><p>This is such a realistic scenario. How many times have you needed to pitch an idea to a skeptical technical leader via Slack or email? Someone who’s brilliant, who’s seen a thousand product ideas fail, and who can spot b******t from a mile away?</p><p>The quality variation in the responses was fascinating. ChatGPT gave me something that felt generic and safe. Gemini was better but still a bit too enthusiastic. Grok was... well, Grok.</p><p>But Claude and ChatPRD both produced messages that felt authentic, technically credible, and appropriately confident without overselling. They acknowledged the engineering challenges while framing the opportunity compellingly.</p><p><strong>The lesson:</strong> When the stakes are high and the audience is sophisticated, the quality of your AI tool matters even more. That skeptical VP can tell the difference between a carefully crafted message and AI-generated fluff. So can your CEO. 
So can your biggest customers.</p><p>Use the best tools available, but more importantly, always add your own strategic thinking and authentic voice on top.</p><p>Questions to Consider: A Framework for Your Own Experiments</p><p>As I wrapped up my Loom, I posed some questions to the audience that I’ll pose to you:</p><p>“Let me know in the comments, if you do your PRDs using AI differently, do you start with back of the envelope? Do you say, oh no, I just start with one sentence, and then I let the chatbot refine it with me? Or do you go way more detailed and then use the chatbot to kind of pressure test it?”</p><p>These aren’t rhetorical questions. Your answer reveals your approach to AI-augmented product work, and different approaches work for different people and contexts.</p><p><strong>For early-career PMs:</strong> I’d recommend starting with more detailed outlines. The discipline of thinking through your product strategy before touching AI will make you a stronger PM. You can always compress that process later as you get more experienced.</p><p><strong>For mid-career PMs:</strong> Experiment with different approaches for different types of documents. Maybe you do detailed outlines for major feature PRDs but use more iterative AI-assisted refinement for smaller features or updates. Find what optimizes your personal productivity while maintaining quality.</p><p><strong>For senior PMs and product leaders:</strong> Consider how AI changes what you should expect from your PM team. Should you be reviewing more AI-generated first drafts and spending more time on strategic guidance? Should you be training your team on effective AI usage? These are leadership questions worth grappling with.</p><p>The Path Forward: Continuous Experimentation</p><p>My experiment with these five AI tools took 45 minutes. But I’m not done experimenting.</p><p>The field of AI-assisted product management is evolving rapidly. New tools launch monthly. Existing tools get smarter weekly. 
Prompting techniques that work today might be obsolete in three months.</p><p>Your job, if you want to stay at the forefront of product management, is to continuously experiment. Try new tools. Share what works with your peers. Build a personal knowledge base of effective prompts and workflows. And be generous with what you learn. The PM community gets stronger when we share insights rather than hoarding them.</p><p>That’s why I created this Loom and why I’m writing this post. Not because I have all the answers, but because I’m figuring it out in real-time and want to share the journey.</p><p>A Personal Note on Coaching and Consulting</p><p>If this kind of practical advice resonates with you, I’m happy to work with you directly.</p><p>Through my PM coaching practice, I offer 1:1 executive, career, and product coaching for PMs and product leaders. We can dig into your specific challenges: whether that’s leveling up your AI workflows, navigating a career transition, or developing your strategic product thinking.</p><p>I also work with companies (usually startups or incubation teams) on product strategy, helping teams figure out PMF for new explorations and improving their product management function.</p><p>The format is flexible. Some clients want ongoing coaching, others prefer project-based consulting, and some just want a strategic sounding board for a specific decision. Whatever works for you.</p><p>Reach out through <a target="_blank" href="http://tomleungcoaching.com">tomleungcoaching.com</a> if you’re interested in working together.</p><p>OK. Enough pontificating. Let’s ship greatness.</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://firesidepm.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">firesidepm.substack.com</a>
52 MIN
The Future of Product Management in the Age of AI: Lessons From a Five Leader Panel
DEC 8, 2025
<p>Every few years, the world of product management goes through a phase shift. When I started at Microsoft in the early 2000s, we shipped Office in boxes. Product cycles were long, engineering was expensive, and user research moved at the speed of snail mail. Fast forward a decade and the cloud era reset the speed at which we build, measure, and learn. Then mobile reshaped everything we thought we knew about attention, engagement, and distribution.</p><p>Now we are standing at the edge of another shift. Not a small shift, but a tectonic one. Artificial intelligence is rewriting the rules of product creation, product discovery, product expectations, and product careers.</p><p>To help make sense of this moment, I hosted a panel of world class product leaders on the Fireside PM podcast:</p><p>• <a target="_blank" href="https://www.linkedin.com/in/ramie-abu-zahra-95a5688/"><strong>Rami Abu-Zahra</strong></a>, Amazon product leader across Kindle, Books, and Prime Video
• <a target="_blank" href="https://www.linkedin.com/in/toddb/"><strong>Todd Beaupre</strong></a>, Product Director at YouTube leading Home and Recommendations
• <a target="_blank" href="https://www.linkedin.com/in/joecorkery/"><strong>Joe Corkery</strong></a>, CEO and cofounder of Jaide Health
• <a target="_blank" href="https://www.linkedin.com/in/toml/"><strong>Tom Leung</strong></a><strong> (me)</strong>, Partner at Palo Alto Foundry
• <a target="_blank" href="https://www.linkedin.com/in/laurenyoungernagel/"><strong>Lauren Nagel</strong></a>, VP Product at Mezmo
• <a target="_blank" href="https://www.linkedin.com/in/davidnydegger/"><strong>David Nydegger</strong></a>, Chief Product Officer at Oviva</p><p>These are leaders running massive consumer platforms, high stakes health tech, and fast moving developer tools. The conversation was rich, honest, and filled with specific examples. 
</p><p>This post summarizes the discussion, adds my own reflections, and offers a practical guide for early and mid career PMs who want to stay relevant in a world where AI is redefining what great product management looks like.</p><p><strong>Table of Contents</strong></p><p>* What AI Cannot Do and Why PM Judgment Still Matters</p><p>* The New AI Literacy: What PMs Must Know by 2026</p><p>* Why Building AI Products Speeds Up Some Cycles and Slows Down Others</p><p>* Whether the PM, Eng, UX Trifecta Still Stands</p><p>* The Biggest Risks AI Introduces Into Product Development</p><p>* Actionable Advice for Early and Mid Career PMs</p><p>* My Takeaways and What Really Matters Going Forward</p><p>* Closing Thoughts and Coaching Practice</p><p><strong>1. What AI Cannot Do and Why PM Judgment Still Matters</strong></p><p>We opened the panel with a foundational question. As AI becomes more capable every quarter, what is left for humans to do. Where do PMs still add irreplaceable value. It is the question every PM secretly wonders.</p><p>Todd put it simply: <strong>“At the end of the day, you have to make some judgment calls. We are not going to turn that over anytime soon.”</strong></p><p>This theme came up again and again. AI is phenomenal at synthesizing, drafting, exploring, and narrowing. But it does not have conviction. It does not have lived experience. It does not feel user pain. It does not carry responsibility.</p><p>Joe from Jaide Health captured it perfectly when he said: <strong>“AI cannot feel the pain your users have. 
It can help meet their goals, but it will not get you that deep understanding.”</strong></p><p>There is still no replacement for sitting with a frustrated healthcare customer who cannot get their clinical data into your system, or a creator on YouTube who feels the algorithm is punishing their art, or a devops engineer staring at an RCA output that feels 20 percent off.</p><p>Every PM knows this feeling: the moment when all signals point one way, but your gut tells you the data is incomplete or misleading. This is the craft that AI does not have.</p><p><strong>Why judgment becomes even more important in an AI world</strong></p><p>David, who runs product at a regulated health company, said something incredibly important: <strong>“Knowing what great looks like becomes more essential, not less. The PM's that thrive in AI are the ones with great product sense.”</strong></p><p>This is counterintuitive for many. But when the operational work becomes automated, the differentiation shifts toward taste, intuition, sequencing, and prioritization.</p><p>Lauren asked the million dollar question. <strong>“How are we going to train junior PMs if AI is doing the legwork. Who teaches them how to think.”</strong></p><p>This is a profound point. If AI closes the gap between junior and senior PMs in execution tasks, the difference will emerge almost entirely in judgment. Knowing how to probe user problems. Knowing when a feature is good enough. Knowing which tradeoffs matter. Knowing which flaw is fatal and which is cosmetic.</p><p>AI is incredible at writing a PRD. AI is terrible at knowing whether the PRD is any good.</p><p>Which means the future PM becomes more strategic, more intuitive, more customer obsessed, and more willing to make thoughtful bets under uncertainty.</p><p><strong>2. The New AI Literacy: What PMs Must Know by 2026</strong></p><p>I asked the panel what AI literacy actually means for PMs. Not the hype. Not the buzzwords. 
The real work.</p><p>Instead of giving gimmicky answers, the discussion converged on a clear set of skills that PMs must master.</p><p><strong>Skill 1: Understanding context engineering</strong></p><p>David laid this out clearly: <strong>“Knowing what LLMs are good at and what they are not good at, and knowing how to give them the right context, has become a foundational PM skill.”</strong></p><p>Most PMs think prompt engineering is about clever phrasing. In reality, the future is about context engineering. Feeding models the right data. Choosing the right constraints. Deciding what to ignore. Curating inputs that shape outputs in reliable ways.</p><p>Context engineering is to AI product development what Figma was to collaborative design. If you cannot do it, you are not going to be effective.</p><p><strong>Skill 2: Evals, evals, evals</strong></p><p>Rami said something that resonated with the entire panel: <strong>“Last year was all about prompts. This year is all about evals.”</strong></p><p>He is right.</p><p>• How do you build a golden dataset.
• How do you evaluate accuracy.
• How do you detect drift.
• How do you measure hallucination rates.
• How do you combine UX evals with model evals.
• How do you decide what good looks like.
• How do you define safe versus unsafe boundaries.</p><p>AI evaluation is now a core PM responsibility. Not exclusively. But PMs must understand what engineers are testing for, what failure modes exist, and how to design test sets that reflect the real world.</p><p>Lauren said her PMs write evals side by side with engineering. That is where the world is going.</p><p><strong>Skill 3: Knowing when to trust AI output and when to override it</strong></p><p>Todd noted: <strong>“It is one thing to get an answer that sounds good. It is another thing to know if it is actually good.”</strong></p><p>This is the heart of the role. AI can produce strategic recommendations that look polished, structured, and wise. 
But the real question is whether they are grounded in reality, aligned with your constraints, and consistent with your product vision.</p><p>A PM without the ability to tell real insight from confident nonsense will be replaced by someone who can.</p><p><strong>Skill 4: Understanding the physics of model changes</strong></p><p>This one surprised many people, but it was a recurring point.</p><p>Rami noted: <strong>“When you upgrade a model, the outputs can be totally different. The evals start failing. The experience shifts.”</strong></p><p>PMs must understand:</p><p>• Models get deprecated
• Models drift
• Model updates can break well tuned prompts
• API pricing has real COGS implications
• Latency varies
• Context windows vary
• Some tasks need agents, some need RAG, some need a small finetuned model</p><p>This is product work now. The PM of 2026 must know these constraints as well as a PM of the cloud era understood database limits or API rate limits.</p><p><strong>Skill 5: How to construct AI powered prototypes in hours, not weeks</strong></p><p>It now takes one afternoon to build something meaningful. Zero code required. Prompt, test, refine. Whether you use Replit, Cursor, Vercel, or sandboxed agents, the speed is shocking.</p><p>But this makes taste and problem selection even more important. The future PM must be able to quickly validate whether a concept is worth building beyond the demo stage.</p><p><strong>3. Why Building AI Products Speeds Up Some Cycles and Slows Down Others</strong></p><p>This part of the conversation was fascinating because people expected AI to accelerate everything. 
The panel had a very different view.</p><p><strong>Fast: Prototyping and concept validation</strong></p><p>Lauren described how her teams can build working versions of an AI powered Root Cause Analysis feature in days, test it with customers, and get directional feedback immediately.</p><p><strong>“You can think bigger because the cost of trying things is much lower,”</strong> she said.</p><p>For founders, early PMs, and anyone validating hypotheses, this is liberating. You can test ten ideas in a week. That used to take a quarter.</p><p><strong>Slow: Productionizing AI features</strong></p><p>The surprising part is that shipping the V1 of an AI feature is slower than most expect.</p><p>Joe noted: <strong>“You can get prototypes instantly. But turning that into a real product that works reliably is still hard.”</strong></p><p>Why. Because:</p><p>• You need evals.
• You need monitoring.
• You need guardrails.
• You need safety reviews.
• You need deterministic parts of the workflow.
• You need to manage COGS.
• You need to design fallbacks.
• You need to handle unpredictable inputs.
• You need to think about hallucination risk.
• You need new UI surfaces for non deterministic outputs.</p><p>Lauren said bluntly: <strong>“Vibe coding is fast. Moving that vibe code to production is still a four month process.”</strong></p><p>This should be printed on a poster in every AI startup office.</p><p><strong>Very Slow: Iterating on AI powered features</strong></p><p>Another counterintuitive point. Many teams ship a great V1 but struggle to improve it significantly afterward.</p><p>David said their nutrition AI feature launched well but: <strong>“We struggled really hard to make it better. Each iteration was easy to try but difficult to improve in a meaningful way.”</strong></p><p>Why is iteration so difficult.</p><p>Because model improvements may not translate directly into UX improvements. Users need consistency. Drift creates churn. 
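One concrete guardrail teams use against this kind of drift, sketched minimally in Python (hedged: `answer` is a hypothetical function wrapping your model call, and the golden set here is a toy stand-in for a hand-curated one); run it before and after any model or prompt change and compare scores:

```python
# Minimal golden-dataset regression check (sketch). A drop in pass rate
# after a model or prompt change is an early drift signal worth
# investigating before users feel it.

GOLDEN_SET = [
    {"question": "What is 2 + 2?", "expected": "4"},
    {"question": "Capital of France?", "expected": "Paris"},
]

def eval_pass_rate(answer, golden_set):
    """Fraction of golden examples whose expected string appears in the output."""
    hits = sum(
        1 for case in golden_set
        if case["expected"].lower() in answer(case["question"]).lower()
    )
    return hits / len(golden_set)

# Stand-in model that gets one of the two cases right:
stub = lambda q: "The answer is 4." if "2 + 2" in q else "I am not sure."
rate = eval_pass_rate(stub, GOLDEN_SET)
```

Real eval harnesses are far richer (graded rubrics, hallucination checks, input distributions), but even this shape makes model swaps a measured decision rather than a surprise.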
Small changes in context or prompts can cause large changes in behavior.</p><p>Teams are learning a hard truth: AI powered features do not behave like typical deterministic product flows. They require new iteration muscles that most orgs do not yet have.</p><p><strong>4. The PM, Eng, UX Trifecta in the AI Era</strong></p><p>I asked whether the classic PM, Eng, UX triad is still the right model. The audience was expecting disagreement. The panel was surprisingly aligned.</p><p><strong>The trifecta is not going anywhere</strong></p><p>Rami put it simply: <strong>“We still need experts in all three domains to raise the bar.”</strong></p><p>Joe added: <strong>“AI makes it possible for PMs to do more technical work. But it does not replace engineering. Same for design.”</strong></p><p>AI blurs the edges of the roles, but it does not collapse them. In fact, each role becomes more valuable because the work becomes more abstract.</p><p>• PMs focus on judgment, sequencing, evaluation, and customer centric problem framing
• Engineers focus on agents, systems, architecture, guardrails, latency, and reliability
• Designers focus on dynamic UX, non deterministic UX patterns, and new affordances for AI outputs</p><p><strong>What does change</strong></p><p>AI makes the PM-Eng relationship more intense. The backbone of AI features is a combination of model orchestration, evaluation, prompting, and context curation. PMs must be tighter than ever with engineering to design these systems.</p><p>David noted that his teams focus more on individual talents. Some PMs are great at context engineering. Some designers excel at polishing AI generated layouts. Some engineers are brilliant at prompt chaining. AI reveals strengths quickly.</p><p>The trifecta remains. The skill distribution within it evolves.</p><p><strong>5. The Biggest Risks AI Introduces Into Product Development</strong></p><p>When we asked what scares PMs most about AI, the conversation became blunt and honest. 
</p><p><strong>Risk 1: Loss of user trust</strong></p><p>Lauren warned: <strong>“If people keep shipping low quality AI features, user trust in AI erodes. And then your good AI product suffers from the skepticism.”</strong></p><p>This is very real. Many early AI features across industries are low quality, gimmicky, or unreliable. Users quickly learn to distrust these experiences.</p><p>Which means PMs must resist the pressure to ship before the feature is ready.</p><p><strong>Risk 2: Skill atrophy</strong></p><p>Todd shared a story that hit home for many PMs. <strong>“Junior folks just want to plug in the prompt and take whatever the AI gives them. That is a recipe for having no job later.”</strong></p><p>PMs who outsource their thinking to AI will lose their judgment. Judgment cannot be regained easily.</p><p>This is the silent career killer.</p><p><strong>Risk 3: Safety hazards in sensitive domains</strong></p><p>David was direct: <strong>“If we have one unsafe output, we have to shut the feature off. We cannot afford even small mistakes.”</strong></p><p>In healthcare, finance, education, and legal industries, the tolerance for error is near zero. AI must be monitored relentlessly. Human in the loop systems are mandatory. The cycles are slower but the stakes are higher.</p><p><strong>Risk 4: The high bar for AI compared to humans</strong></p><p>Joe said something I have thought about for years: <strong>“AI is held to a much higher standard than human decision making. Humans make mistakes constantly, but we forgive them. AI makes one mistake and it is unacceptable.”</strong></p><p>This slows adoption in certain industries and creates unrealistic expectations.</p><p><strong>Risk 5: Model deprecation and instability</strong></p><p>Rami described a real problem AI PMs face: <strong>“Models get deprecated faster than they get replaced. The next model is not always GA. Outputs change. 
Prompts break.”</strong></p><p>This creates product instability that PMs must anticipate and design around.</p><p><strong>Risk 6: Differentiation becomes hard</strong></p><p>I shared this perspective because I see so many early stage startups struggle with it.</p><p>If your whole product is a wrapper around an LLM, competitors will copy you in a week. The real differentiation will not come from using AI. It will come from how deeply you understand the customer, how you integrate AI with proprietary data, and how you create durable workflows.</p><p><strong>6. Actionable Advice for Early and Mid Career PMs</strong></p><p>This was one of my favorite parts of the panel because the advice was humble, practical, and immediately useful.</p><p><strong>A. Develop deep user empathy. This will become your biggest differentiator.</strong></p><p>Lauren said it clearly: <strong>“Maintain your empathy. Understand the pain your user really has.”</strong></p><p>AI makes execution cheap. It makes insight valuable.</p><p>If you can articulate user pain precisely.
If you can differentiate surface friction from underlying need.
If you can see around corners.
If you can prototype solutions and test them in hours.
If you can connect dots between what AI can do and what users need.</p><p>You will thrive.</p><p><strong>Tactical steps:</strong></p><p>• Sit in on customer support calls every week.
• Watch 10 user sessions for every feature you own.
• Talk to customers until patterns emerge.
• Ask “why” five times in every conversation.
• Maintain a user pain log and update it constantly.</p><p><strong>B. 
Become great at context engineering</strong></p><p>This will matter as much as SQL mattered ten years ago.</p><p><strong>Action steps:</strong></p><p>• Practice writing prompts with structured context blocks.
• Build a library of prompts that work for your product.
• Study how adding, removing, or reordering context changes output.
• Learn RAG patterns.
• Learn when structured data beats embeddings.
• Learn when smaller local models outperform big ones.</p><p><strong>C. Learn eval frameworks</strong></p><p>This is non negotiable.</p><p>You need to know:</p><p>• Precision vs recall tradeoffs
• How to build golden datasets
• How to design scenario based evals for UX
• How to test for hallucination
• How to monitor drift
• How to set quality thresholds
• How to build dashboards that reflect real world input distributions</p><p>You do not need to write the code.
You do need to define the eval strategy.</p><p><strong>D. Strengthen your product sense</strong></p><p>You cannot outsource product taste.</p><p>Todd said it best: <strong>“Imagine asking AI to generate 20 percent growth for you. It will not tell you what great looks like.”</strong></p><p>To strengthen your product sense:</p><p>• Review the best products weekly.
• Take screenshots of great UX patterns.
• Map user flows from apps you admire.
• Break products down into primitives.
• Ask yourself why a product decision works.
• Predict what great would look like before you design it.</p><p>The PMs who thrive will be the ones who can recognize magic when they see it.</p><p><strong>E. Stay curious</strong></p><p>Rami’s closing advice was simple and perfect: <strong>“Stay curious. Keep learning. It never gets old.”</strong></p><p>AI changes monthly. 
The PM who is excited by new ideas will outperform the PM who clings to old patterns.</p><p><strong>Practical habits:</strong></p><p>• Read one AI research paper summary each week. • Follow evaluation and model updates from major vendors. • Build at least one small AI prototype a month. • Join AI PM communities. • Teach juniors what you learn. Nothing accelerates mastery faster.</p><p><strong>F. Embrace velocity and side projects</strong></p><p>Todd said that some of his biggest career breakthroughs came from solving problems on the side.</p><p>This is more true now than ever.</p><p>If you have an idea, you can build an MVP over a weekend. If it solves a real problem, someone will notice.</p><p><strong>G. Stay close to engineering</strong></p><p>Not because you need to code, but because AI features require tighter PM-engineering collaboration.</p><p><strong>Learn enough to be dangerous:</strong></p><p>• How embeddings work • How vector stores behave • What latency tradeoffs exist • How agents chain tasks • How model versioning works • How context limits shape UX • Why some prompts blow up API costs</p><p>If you can speak this language, you will earn trust and accelerate cycles.</p><p><strong>H. Understand the business deeply</strong></p><p>Joe’s advice was timeless: <strong>“Know who pays you and how much they pay. Solve real problems and know the business model.”</strong></p><p>PMs who understand unit economics, COGS, pricing, and funnel dynamics will stand out.</p><p><strong>7. 
Tom’s Takeaways and What Really Matters Going Forward</strong></p><p>I ended the recording by sharing what I personally believe after moderating this discussion and working closely with a variety of AI teams over the past two years.</p><p><strong>Judgment becomes the most valuable PM skill</strong></p><p>As AI gets better at analysis, synthesis, and execution, your value shifts to:</p><p>• Choosing the right problem • Sequencing decisions • Making 55/45 calls • Understanding user pain • Making tradeoffs • Deciding when good is good enough • Defining success • Communicating vision • Influencing the org</p><p>Agents can write specs. LLMs can produce strategies. But only humans can choose the right one and commit.</p><p><strong>Learning speed becomes a competitive advantage</strong></p><p>I said this on the panel and I believe it more every month.</p><p>Because of AI, you now have:</p><p>• Infinite coaches • Infinite mentors • Infinite experts • Infinite documentation • Infinite learning loops</p><p>A PM who learns slowly will not survive the next decade.</p><p><strong>Curiosity, empathy, and velocity will separate great from good</strong></p><p>Many panelists said versions of this. The common pattern was:</p><p>• Understand users deeply • Combine multiple tools creatively • Move quickly • Learn constantly</p><p>The future rewards generalists with taste, speed, and emotional intelligence.</p><p><strong>Differentiation requires going beyond wrapper apps</strong></p><p>This is one of my biggest concerns for early-stage founders. If your entire product is a wrapper around a model, you are vulnerable.</p><p>Durable value will come from:</p><p>• Proprietary data • Proprietary workflows • Deep domain insight • Organizational trust • Distribution advantage • Safety and reliability • Integration with existing systems</p><p>AI is a component, not a moat.</p><p><strong>8. Closing Thoughts</strong></p><p>Hosting this panel made me more optimistic about the future of product management. 
Not because AI will not change the job. It already has. But because the fundamental craft remains alive.</p><p>Product management has always been about understanding people, making decisions with incomplete information, telling compelling stories, guiding teams through ambiguity, and being right more often than not.</p><p>AI accelerates the craft. It amplifies the best PMs and exposes the weak ones. It rewards curiosity, empathy, velocity, and judgment.</p><p>If you want tailored support on your PM career, leadership journey, or executive path, I offer 1:1 career, executive, and product coaching at <a target="_blank" href="https://tomleungcoaching.com/"><strong>tomleungcoaching.com</strong></a>.</p><p>OK team. Let’s ship greatness.</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://firesidepm.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">firesidepm.substack.com</a>
83 MIN
The Difference Between Encouragement and Truth: Lessons From Building What People Actually Need
NOV 3, 2025
<p>The Interview That Sparked This Essay</p><p><a target="_blank" href="https://www.linkedin.com/in/joecorkery/">Joe Corkery</a> and I worked together at Google years ago, and he has since gone on to build a venture-backed company tackling a real and systemic problem in healthcare communication. </p><p>This essay is my attempt to synthesize that conversation. It is written for early and mid-career PMs in Silicon Valley who want to get sharper at product judgment, market discovery, customer validation, and knowing the difference between encouragement and signal. If you have ever shipped something, presented it to customers, and then heard polite nodding instead of movement and urgency, this is for you.</p><p>Joe’s Unusual Career Arc</p><p>Joe’s background is not typical for a founder. He is a software engineer. And a physician. And someone who has led business development in the pharmaceutical industry. That multidisciplinary profile allowed him to see something that many insiders miss: healthcare is full of problems that everyone acknowledges, yet very few organizations are structurally capable of solving.</p><p>When Joe joined Google Cloud in 2014, he helped start the healthcare and life sciences product org. Yet the timing was difficult. As he put it:</p><p><em>“The world wasn’t ready or Google wasn’t ready to do healthcare.”</em> </p><p>So instead of building healthcare products right away, he spent two years working on security, compliance, and privacy. That detour will matter later, because it set the foundation for everything he is now doing at Jaide.</p><p>Years later, he left Google to build a healthcare company focused initially on guided healthcare search, particularly for women’s health. The idea resonated emotionally. Every customer interview validated the need. Investors said it was important. 
Healthcare organizations nodded enthusiastically.</p><p>And yet, there was no traction.</p><p>This created a familiar and emotionally challenging founder dilemma:</p><p>* When everyone is encouraging you</p><p>* But no one will pay you or adopt early</p><p>* How do you know if you are early, unlucky, or wrong?</p><p>This is the question at the heart of product strategy.</p><p>False Positives: Why Encouragement Is Not Feedback</p><p>If you have worked as a PM or founder for more than a few weeks, you have encountered positive feedback that turned out to be meaningless. People love your idea. Executives praise your clarity. Customers tell you they would definitely use it. Friends offer supportive high-fives.</p><p>But then nothing moves.</p><p>As Joe put it:</p><p><em>“Everyone wanted to be supportive. But that makes it hard to know whether you’re actually on the right path.”</em> </p><p>This is not because people are dishonest. It is because people are kind, polite, and socially conditioned to encourage enthusiasm. In Silicon Valley especially, we celebrate ambition. We praise risk-taking. We cheer for the founder-in-the-garage mythology. If someone tells you that your idea is flawed, they fear they are crushing your passion.</p><p>So even when we explicitly ask for brutal honesty, people soften their answers.</p><p>This is the false positive trap.</p><p>And if you misread encouragement as traction, you can waste months or even years.</p><p>The Small Framing Change That Changes Everything</p><p>Joe eventually realized that the problem was not the idea itself. 
The problem was how he was asking for feedback.</p><p>When you present your idea as <em>the</em> idea, people naturally react supportively:</p><p>* “That’s really interesting.”</p><p>* “I could see that being useful.”</p><p>* “This is definitely needed.”</p><p>But when you instead present <em>two competing ideas</em> and ask someone to help you choose, you change the psychology of the conversation entirely.</p><p>Joe explained it this way:</p><p><em>“When we said, ‘We are building this. What do you think?’ people wanted to be encouraging. But when we asked, ‘We are choosing between these two products. Which one should we build?’ it gave them permission to actually critique.”</em> </p><p>This shift is subtle, but powerful. Suddenly:</p><p>* People contrast.</p><p>* Their reasoning surfaces.</p><p>* Their hesitation becomes visible.</p><p>* Their priorities emerge with clarity.</p><p>By asking someone to choose between two ideas, you activate their decision-making brain instead of their supportive brain.</p><p>It is no different from usability testing. If you show someone a screen and ask what they think, they are polite. If you give them a task and ask them to complete it, their actual friction appears immediately.</p><p>In product discovery, <em>friction is truth</em>.</p><p>How This Applies to PMs, Not Just Founders</p><p>You may be thinking: this is interesting for entrepreneurs, but I work inside a company. 
I have stakeholders, OKRs, a roadmap, and a backlog that already feels too full.</p><p>This technique is actually more relevant for PMs <em>inside companies</em> than for founders.</p><p>Inside organizations, political encouragement is even more pervasive:</p><p>* Leaders say they want innovation, but are risk-averse.</p><p>* Cross-functional partners smile in meetings, but quietly maintain objections.</p><p>* Engineers nod when you present the roadmap, but may not believe in it.</p><p>* Customers say they like your idea, but do not prioritize adoption.</p><p>One of the most powerful tools you can use as a PM is framing your product decisions as <em>explicit choices</em>, rather than proposals seeking validation. For example:</p><p><strong>Instead of saying:</strong> “We are planning to build a new onboarding flow. Here is the design. Thoughts?”</p><p><strong>Say:</strong> “We are deciding between optimizing retention or acquisition next quarter. If we choose retention, the main lever is onboarding friction. Here are two possible approaches. Which outcome matters more to the business right now?”</p><p>In the second framing:</p><p>* The <em>business goal</em> is visible.</p><p>* The <em>tradeoff</em> is unavoidable.</p><p>* The <em>decision owner</em> is clear.</p><p>* The <em>conversation becomes real</em>.</p><p>This is how PMs build credibility and influence: not through slides or persuasion, but through framing decisions clearly.</p><p>Jaide’s Pivot: From Health Search to AI Translation</p><p>The result of Joe’s reframed feedback approach was unambiguous.</p><p>Across dozens of conversations with healthcare executives and hospital leaders, one pattern emerged consistently:</p><p>Translation was the urgent, budget-backed, economically meaningful problem.</p><p>As Joe put it, after talking to more than 40 healthcare decision-makers:</p><p><em>“Every single person told us to build the translation product. Not mostly. Not many. 
Every single one.”</em> </p><p>This kind of clarity is rare in product strategy. When you get it, you do not ignore it. You move.</p><p>Jaide Health shifted its core focus to solving a very real, very measurable, and very painful problem in healthcare: the language gap affecting millions of patients.</p><p>More than 25 million patients in the United States do not speak English well enough to communicate with clinicians. This leads to measurable harm:</p><p>* Longer hospital stays</p><p>* Increased readmission rates</p><p>* Higher medical error rates</p><p>* Lower comprehension of discharge instructions</p><p>The status quo for translation relies on human interpreters who are expensive, limited, slow to schedule, and often unavailable after hours or in rare languages. Many clinicians, due to lack of resources, simply use Google Translate privately on their phones. They know this is not secure or compliant, but they feel like they have no better option.</p><p>So Jaide built a platform that integrates compliance, healthcare-specific terminology, workflow embedding, custom glossaries, discharge summaries, and real-time accessibility.</p><p>This is not simply “healthcare plus GPT”. It is targeted, workflow-integrated, risk-aware operational excellence.</p><p>Product managers should study this pattern closely.</p><p>The winning strategy was not inventing a new problem. 
It was solving a painful problem that everyone already agreed mattered.</p><p>The Core PM Lesson: Focus on Problems With Urgent Budgets Behind Them</p><p>A question I often ask PMs I coach:</p><p><strong>Who loses sleep if this problem is not solved?</strong></p><p>If the answer is:</p><p>* “Not sure”</p><p>* “Eventually the business will feel it”</p><p>* “It would improve the experience”</p><p>* “It could move a KPI if adoption increases”</p><p>Then you do not have a real problem yet.</p><p>Real product opportunities have:</p><p>* <strong>A user who is blocked from achieving something meaningful</strong></p><p>* <strong>A measurable cost or consequence of inaction</strong></p><p>* <strong>An internal champion with authority to push change</strong></p><p>* <strong>An adjacent workflow that your product can attach to immediately</strong></p><p>* <strong>A budget owner who is willing to pay now, not later</strong></p><p>Healthcare translation checks every box. That is why Joe now has institutional adoption and a business with meaningful traction behind it.</p><p>Why PMs Struggle With This in Practice</p><p>If the lesson seems obvious, why do so many PMs fall into the encouragement trap?</p><p>The reason is emotional more than analytical.</p><p>It is uncomfortable to confront the possibility that your idea, feature, roadmap, strategy, or deck is not compelling enough yet. It is easier to seek validation than truth.</p><p>In my first startup, we kept our product in closed beta for months longer than we should have. We told ourselves we were refining the UX, improving onboarding, solidifying architecture. The real reason, which I only admitted years later, was that I was afraid the product was not good enough. I delayed reality to protect my ego.</p><p>In product work, speed of invalidation is as important as speed of iteration.</p><p>If something is not working, you need to know as quickly as possible. The faster you learn, the more shots you get. 
The best PMs do not fall in love with their solutions. They fall in love with the moments of clarity that allow them to change direction quickly.</p><p>Actionable Advice for Early and Mid Career PMs</p><p>Below are specific behaviors and habits you can put into practice immediately.</p><p>1. <strong>Always test product concepts as choices, not presentations</strong></p><p>Instead of asking: “What do you think of this idea?”</p><p>Ask: “We are deciding between these two approaches. Which one is more important for you right now and why?”</p><p>This forces prioritization, not politeness.</p><p>2. <strong>Never ship a feature without observing real usage inside the workflow</strong></p><p>A feature that exists but is not used does not exist.</p><p>Sit next to users. Watch screen behavior. Listen to their muttering. Ask where they hesitate. And most importantly, observe what they do after they close your product.</p><p>That is where the real friction lives.</p><p>3. <strong>Always ask: What is the cost of not solving this?</strong></p><p>If there is no real cost of inaction, the feature will not drive adoption.</p><p>Impact must be felt, not imagined.</p><p>4. <strong>Look for users with strong emotional urgency, not polite agreement</strong></p><p>When someone says: “This would be helpful.”</p><p>That is death.</p><p>When someone says: “I need this and I need it now.”</p><p>That is life.</p><p>Find urgency. Design around urgency. Ignore politeness.</p><p>5. 
<strong>Know the business model of your customer better than they do</strong></p><p>This is where many PMs plateau.</p><p>If you want to be taken seriously by executives, you must understand:</p><p>* How your customer makes money</p><p>* What costs they must manage</p><p>* Which levers influence financial outcomes</p><p>When PMs learn to speak in revenue, cost, and risk instead of features, priorities, and backlog, their influence changes instantly.</p><p>The Broader Strategic Question: What Happens When Foundational Models Improve?</p><p>During our conversation, I asked Joe whether the rapid improvement of GPT-like translation will eventually make specialized healthcare translation unnecessary.</p><p>His answer was pragmatic:</p><p><em>“Our goal is to ride the wave. The best technology alone does not win. The integrated solution that solves the real problem wins.”</em> </p><p>This is another crucial product lesson:</p><p>* Foundational models are table stakes.</p><p>* Differentiation comes from workflow integration, specialization, compliance, and trust.</p><p>* Adoption is driven by reducing operational friction.</p><p>In other words:</p><p><strong>In AI-first product strategy, the model is the engine. The workflow is the vehicle. The customer problem is the road.</strong></p><p>The Future of Product Work: Judgment Over Output</p><p>The world is changing. Tools are accelerating. Capabilities are compounding. 
But the core skill of product leadership remains the same:</p><p><strong>Can you tell the difference between signal and noise, urgency and politeness, truth and encouragement?</strong></p><p>That is judgment.</p><p>Product management will increasingly become less about writing PRDs or pushing execution and more about identifying the real problem worth solving, framing tradeoffs clearly, and navigating ambiguity with confidence and clarity.</p><p>The PMs who will thrive in the coming decade are those who learn how to ask better questions.</p><p>Closing</p><p>This conversation with Joe reminded me that most of the time, product failure is not the result of a bad idea. It is the result of insufficient clarity. The clarity does not come from thinking harder. It comes from testing real choices, with real users, in real workflows, and asking questions that force truth rather than encouragement.</p><p>If this resonates and you want help sharpening your product judgment, improving your influence with executives, developing clarity in your roadmap, or navigating career transitions, I work 1:1 with a small number of PMs, founders, and product executives.</p><p>You can learn more at <a target="_blank" href="http://tomleungcoaching.com"><strong>tomleungcoaching.com</strong></a>.</p><p>OK. Enough pontificating. Let’s ship greatness.</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://firesidepm.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">firesidepm.substack.com</a>
39 MIN
Atlas Gets a C+: Lessons from ChatGPT’s Browser That’s Brilliant, Broken, and Bursting with Potential
OCT 24, 2025
<p>I didn’t plan to make a video today. I’d just wrapped a client call, remembered that OpenAI had released <strong>Atlas</strong>, and decided to record a quick unboxing for my <em>Fireside PM</em> community.</p><p>I’d heard mixed things—some people raving about it, others underwhelmed—but I made a deliberate choice not to read any reviews beforehand. I wanted to go in blind, the way an actual user would.</p><p>Within 30 minutes, I had my verdict: <strong>Atlas earns a C+.</strong></p><p>It’s ambitious, it’s fast, and it hints at a radical new way to experience the web. But it also stumbles in ways that remind you just how fragile early AI products can be—especially when ambition outpaces usability.</p><p>This post isn’t a teardown or a fan letter. It’s a field report from someone who’s built and shipped dozens of products, from scrappy startups to billion-user platforms. My goal here is simple: unpack what Atlas gets wrong, acknowledge what it gets right, and pull out lessons every PM and product team can use.</p><p>The Unboxing Experience</p><p>When I first launched Atlas, I got the usual macOS security warning. I’m not docking points for that—this is an MVP, and once it hits the Mac App Store, those prompts will fade into the background.</p><p>There <em>was</em> an onboarding window outlining the main features, but I barely glanced at it. I was eager to jump in and see the product in action. That’s not a unique flaw—it’s how most real users behave. We skip the instructions and go straight to testing the limits.</p><p>That’s why the best onboarding happens <em>in motion</em>, not before use. 
There were some suggested prompts, which I ignored, but I would’ve loved contextual <em>fly-outs</em> or light tooltips appearing as I explored beyond the first 30 seconds:</p><p>* “Try asking Atlas to summarize this page.”</p><p>* “Highlight text to discuss it.”</p><p>* “Atlas can compare this to other sources—want to see how?”</p><p>Small, progressive cues like these are what turn exploration into mastery.</p><p>The initial onboarding screen wasn’t wrong—it was just misplaced. It taught before I cared. And that’s a universal PM lesson: <strong>meet users where their curiosity is, not where your product tour is.</strong></p><p>When Atlas Stumbled</p><p>Atlas’s biggest issue isn’t accuracy or latency—it’s <strong>identity.</strong></p><p>It doesn’t yet know what it wants to be. On one hand, it acts like a browser with ChatGPT built in. On the other, it markets itself as an intelligent agent that can browse <em>for</em> you. Right now, it does neither convincingly.</p><p>When I tried simple commands like “Summarize this page” or “Open the next link and tell me what it says,” the experience broke down. Sometimes it responded correctly; other times, it ignored the context entirely.</p><p>The deeper issue isn’t technical—it’s architectural. Atlas hasn’t yet resolved the question of <strong>who’s driving.</strong> Is the user steering and Atlas assisting, or is Atlas steering and the user supervising?</p><p>That uncertainty creates friction. It’s like co-piloting with someone who keeps grabbing the wheel mid-turn.</p><p>Then there’s the missing piece that could make Atlas truly special: <strong>action loops.</strong></p><p>The UI makes it feel like Atlas <em>should</em> be able to take action—click, save, organize—but it rarely does. 
You can ask it to summarize, but you can’t yet say “add this to my notes” or “book this flight.” Those are the natural next steps in the agentic journey, and until they arrive, Atlas feels like a chat interface masquerading as a browser.</p><p>This isn’t a criticism of the vision—it’s a question of sequencing. The team is building for the agentic future before the product earns the right to claim that mantle. Until it can <em>act</em>, Atlas is mostly a neat wrapper around ChatGPT that doesn’t justify replacing Chrome, Safari, or Edge.</p><p>Where Atlas Shines</p><p>Despite the friction, there were moments where I saw real promise.</p><p>When Atlas got it right, it was magical. I’d open a 3,000-word article, ask for a summary, and seconds later have a coherent, tone-aware digest. Having that capability integrated directly into the browsing experience—no copy-paste, no tab-switching—is an elegant idea.</p><p>You can tell the team understands restraint. The UI is clean and minimal, the chat panel is thoughtfully integrated, and the speed is impressive. It feels engineered by people who care about quality.</p><p>The challenge is that all of this could, in theory, exist as a plugin. The browser leap feels premature. Building a full browser is one of the hardest product decisions a company can make—it’s expensive, high-friction, and carries a huge switching cost for users.</p><p>The most generous interpretation is that OpenAI went full browser to enable <strong>agentic workflows</strong>—where Atlas doesn’t just summarize, but acts on behalf of the user. That would justify the architecture. But until that capability arrives, the browser feels like infrastructure waiting for a reason to exist.</p><p>Atlas today is a scaffolding for the future, not a product for the present.</p><p>Lessons for Product Managers</p><p>Even so, Atlas offers a rich set of takeaways for PMs building ambitious products.</p><p>1. 
Don’t Confuse Vision with MVP</p><p>You earn the right to ship big ideas by nailing the small ones. Atlas’s long-term vision is compelling, but the MVP doesn’t yet prove why it needed to exist. Start with one unforgettable use case before scaling breadth.</p><p>2. Earn Every Switch Cost</p><p>Changing browsers is one of the highest-friction user behaviors in software. Unless your product delivers something 10x better, start as an extension, not a replacement.</p><p>3. Design for Real Behavior, Not Ideal Behavior</p><p>Most users skip onboarding. Expect it. Plan for it. Guide them in context instead of relying on their patience.</p><p>4. Choose a Metaphor and Commit</p><p>Atlas tries to be both browser and assistant. Pick one. If you’re an assistant, drive. If you’re a browser, stay out of the way. Users shouldn’t have to guess who’s in control.</p><p>5. Autonomy Without Agency Frustrates Users</p><p>It’s worse for an AI to <em>understand</em> what you want but refuse to act than to not understand at all. Until Atlas can take meaningful action, it’s not an agent—it’s a spectator.</p><p>6. Sequence Ambition Behind Value</p><p>The product is building for a world that doesn’t exist yet. Ambition is great, but the order of operations matters. Earn adoption today while building for tomorrow.</p><p>Advice for the Atlas Team</p><p>If I were advising the Atlas PM and design teams directly, I’d focus on five things:</p><p>* <strong>Clarify the core identity.</strong> Decide if you’re an AI browser <em>with</em> ChatGPT or a ChatGPT agent that <em>uses</em> a browser. Everything else flows from that choice.</p><p>* <strong>Earn the right to replace Chrome.</strong> Give users one undeniably magical use case that justifies the switch—research synthesis, comparison mode, or task execution.</p><p>* <strong>Fix the metaphor collision.</strong> Make it obvious who’s in control: human or AI. Even a “manual vs. 
autopilot” toggle would add clarity.</p><p>* <strong>Build action loops.</strong> Move from summarization to completion. The browser of the future won’t just explain—it will execute.</p><p>* <strong>Sequence ambition.</strong> Agentic work is the destination, but the current version needs to win users on everyday value first.</p><p>None of this is out of reach. The bones are good. What’s missing is coherence.</p><p>Closing Reflection</p><p>Atlas is a fascinating case study in what happens when world-class technology meets premature positioning. It’s not bad—it’s <em>unfinished.</em></p><p>A C+ isn’t an insult. It’s a reminder that potential and product-market fit are two different things. Atlas is the kind of product that might, in a few releases, feel indispensable. But right now, it’s a prototype wearing the clothes of a platform.</p><p>For every PM watching this unfold, the lesson is universal: <strong>don’t get seduced by your own roadmap.</strong> Ambition must be earned, one user journey at a time.</p><p>That’s how trust is built—and in AI, trust is everything.</p><p>If you or your team are wrestling with similar challenges—whether it’s clarifying your product vision, sequencing your roadmap, or improving PM leadership—I offer both <strong>1:1 executive and career coaching</strong> at <a target="_blank" href="https://tomleungcoaching.com"><strong>tomleungcoaching.com</strong></a> and <strong>expert product management consulting</strong> and fractional CPO services through my firm, <a target="_blank" href="https://paloaltofoundry.com"><strong>Palo Alto Foundry</strong></a>.</p><p><strong>OK. Enough pontificating. Let’s ship greatness.</strong></p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://firesidepm.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">firesidepm.substack.com</a>
29 MIN
From Cashmere Sweaters to Billion-Dollar Lessons: What PMs Can Learn from Jason Stoffer's Analysis of Quince
OCT 2, 2025
<p>Introduction</p><p>One of the great joys of hosting my Fireside PM podcast is the opportunity to reconnect with people I’ve known for years and go deep into the mechanics of business building. Recently, I sat down with Jason Stoffer, partner at Maveron Capital, a venture firm with a laser focus on consumer companies. Jason and I go way back to my Seattle days, so this was both a reunion and an education. Our conversation turned into a masterclass on scaling consumer businesses, the art of finding moats, and the brutal realities of marketplaces.</p><p>But beyond the case studies, what stood out were the actionable insights PMs can apply right now. If you’re an early or mid-career product manager in Silicon Valley, there are playbooks here you can borrow—not in theory, but in practice.</p><p>Jason summed up his approach to analyzing companies like this: <em>“So many founders can get caught in the moment that sometimes it’s best when we’re looking at a new investment to talk about if things go right, what can happen. What would an S-1 or public filing look like? What would the company look like at a big M&A event? And then you work backwards.”</em> That mindset—begin with the end in mind—is as powerful for a product manager shipping features as it is for a VC evaluating billion-dollar bets.</p><p>In this post, I’ll share:</p><p>* The key lessons from Jason’s breakdown of Quince and StubHub</p><p>* How these lessons apply directly to your PM career</p><p>* Tactical moves you can make to future-proof your trajectory</p><p>* Reflections on what surprised me most in this conversation</p><p>And along the way, I’ll highlight specific frameworks and examples you can put into action this week.</p><p>Part 1: Quince and the Power of Supply Chain Innovation</p><p>When Jason first explained Quince’s model, I’ll admit I was skeptical. On its face, it sounds like yet another DTC apparel play. Sell cheap cashmere sweaters online? Compete with incumbents like Theory and Away? 
It didn’t sound differentiated.</p><p>Jason disagreed. <em>“Most people know Shein, and Shein was kind of working direct with factories. Quince’s innovation was asking, what do factories in Asia have during certain times of the year? They have excess capacity. Those are the same factories who are making a Theory shirt or an Away bag. Quince went to those factories and said, hey, make product for us, you hold the inventory, we’ll guarantee we’ll sell it.”</em></p><p>That’s not a design tweak—it’s a supply chain disruption. Costco built an empire on this principle. TJX did the same. Walmart before them. If you can structurally rewire how goods get to consumers, you’ve got the foundation for a massive business.</p><p><strong>Lesson for PMs:</strong> Sometimes the real innovation isn’t visible in the interface. It’s hidden in the plumbing. As PMs, we often obsess over UI polish, onboarding flows, or feature prioritization. But step back and ask: what’s the equivalent of supply chain disruption in your domain? It might be a new data pipeline, a pricing model, or even a workflow that cuts out three layers of manual steps for your users. Those invisible shifts can unlock outsized value.</p><p>Jason gave the example of Quince’s $50 cashmere sweater. <em>“Anyone in retail knows that if you’re selling at a 12% gross margin and it’s apparel with returns, you’re making no money on that. What is it? It’s an alternative method of customer acquisition. You hook them with the sweater and sell them everything else.”</em> In other words, they turned a P&L liability into a marketing hack.</p><p><strong>Actionable move for PMs:</strong> Identify your “$50 sweater.” What’s the feature you can offer that might look unprofitable or inconvenient in isolation, but serves as an on-ramp to deeper engagement? Maybe it’s a generous free tier in SaaS, or an intentionally unscalable white-glove onboarding process. 
Don’t dismiss those just because they don’t scale on day one.</p><p>Part 2: Moats, Marketing, and Hero SKUs</p><p>Jason emphasized that great retailers pair supply chain execution with marketing innovation. Costco has rotisserie chickens and $2 hot dogs. Quince has $50 cashmere sweaters. These “hero SKUs” create shareable moments and lasting brand associations.</p><p><em>“You’re pairing supply chain innovation with marketing innovation, and it’s super effective,”</em> Jason explained.</p><p><strong>Lesson for PMs:</strong> Don’t just think about your feature set—think about your hero feature. What’s the one thing that makes users say, “You have to try this product”? Too often, PM roadmaps are a laundry list of incremental improvements. Instead, design at least one feature that can carry your brand in conversations, tweets, and TikToks. Think about Figma’s multiplayer cursors or Slack’s playful onboarding. These are features that double as marketing.</p><p>Part 3: StubHub and the Economics of Trust</p><p>After Quince, Jason shifted to a very different case study: StubHub. Here, the lesson wasn’t about supply chain but about moats built on trust, liquidity, and cash flow mechanics.</p><p><em>“Customers will pay for certainty even if they hate you,”</em> Jason said. Think about that. StubHub’s fees are infamous. Buyers grumble, sellers grumble. And yet, if you need a Taylor Swift ticket and want to be sure it’s legit, you go to StubHub. That reliability is the moat.</p><p><strong>Lesson for PMs:</strong> Trust is an underrated product feature. In consumer software, this might mean uptime and reliability. In enterprise SaaS, it might mean compliance and security certifications. In AI, it could mean interpretability and guardrails. Don’t underestimate how much people will endure friction if they can be sure you’ll deliver.</p><p>Jason also pointed out StubHub’s cash flow hack: <em>“StubHub gets money from buyers up front and then pays the sellers later. 
That’s a beautiful business model. If you create a cash flow cycle where you’re getting the money first and delivering later, you raise a lot less equity and get diluted less.”</em></p><p>This is a reminder that product decisions can have financial implications. As PMs, you may not directly set billing cycles, but you can influence monetization models, free trial design, or even refund policies—all of which affect working capital.</p><p><strong>Actionable move for PMs:</strong> Partner with finance. Ask them: what product levers could improve cash conversion cycles? Could prepayment discounts, annual billing, or usage-based pricing reduce working capital strain? Thinking beyond the feature spec makes you more valuable to your company—and accelerates your own career.</p><p>Part 4: Five Takeaways from StubHub </p><p>Jason listed five lessons from StubHub:</p><p>* <strong>Trust is a moat</strong> – Even if users complain, reliability keeps them loyal.</p><p>* <strong>Liquidity is a moat</strong> – Scale compounds, especially in marketplaces.</p><p>* <strong>Cash flow mechanics matter</strong> – Payment terms can determine survival.</p><p>* <strong>Tooling locks in supply</strong> – Seller-facing tools create stickiness.</p><p>* <strong>Scale itself compounds</strong> – Once you’re ahead, momentum carries you.</p><p>Part 5: What Surprised Me Most</p><p>As I listened back to this conversation, two surprises stood out.</p><p>First, the sheer size of value retail. Jason noted that TJX is worth $157 billion. Burlington, $22 billion. Costco, $418 billion. These aren’t sexy tech names, but they are empires. It made me rethink my assumptions about what “boring” industries can teach us.</p><p>Second, Jason’s humility about being wrong. <em>“Reddit might be one,”</em> he admitted when I asked about his biggest misses. <em>“I had no idea that LLMs would use their data in a way that would make it incredibly important. I was dead wrong. 
I said sit on the sidelines.”</em> That candor is refreshing—and a reminder that even seasoned investors get it wrong. The key is to keep learning.</p><p><strong>Lesson for PMs:</strong> Admit your misses. Write them down. Share them. Don’t hide them. Your credibility grows when you own your blind spots and show how you’ve adjusted.</p><p>Closing Thoughts</p><p>Talking with Jason felt like being back in business school—but with sharper edges. These aren’t abstract frameworks. They’re battle-tested strategies from companies that scaled to billions. As PMs, our job isn’t just to ship features. It’s to build businesses. That requires thinking about supply chains, trust, cash flow, and marketing moats.</p><p>If you found this helpful and want to go deeper, check out Jason’s Substack, <a target="_blank" href="https://ringingthebell.substack.com/"><em>Ringing the Bell</em></a>, where he publishes his case studies. And if you want to level up your own career trajectory, I offer 1:1 executive, career, and product coaching at tomleungcoaching.com.</p><p><strong>Shape the Future of PM</strong> And if you haven’t yet, I’d love your input on my <a target="_blank" href="https://docs.google.com/forms/d/e/1FAIpQLSfsT5l-R6JXIh3rGS1z1AIXZ7prakBf0fzju8HiIiJ32ZPT1w/viewform?usp=header"><strong>Future of Product Management survey</strong></a>. It only takes about 5 minutes, and by filling it out you’ll get early access to the results plus an invitation to a live readout with a panel of top product leaders. The survey explores how AI, team structures, and skill sets are reshaping the PM role for 2026 and beyond. OK. Let’s ship greatness.</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://firesidepm.substack.com?utm_medium=podcast&#38;utm_campaign=CTA_1">firesidepm.substack.com</a>