Fireside Product Management

Tom Leung

Product Management podcast where 20-year PM veteran Tom Leung interviews VPs, CPOs, and CEOs who rose up from product to talk about their careers, the art and science of product management, and advice for other PMs. Watch video on YouTube: firesidepm.co. Learn more about host Tom Leung at http://tomleungcoaching.com and firesidepm.substack.com.

  1. Why Your Next PM Job Depends More on Culture Than Compensation

    JAN 5

    Why Your Next PM Job Depends More on Culture Than Compensation

    I met Albino Sanchez in the bleachers at a high school JV football game. While our sons battled it out on the field for Palo Alto High School, we found ourselves deep in conversation about something far removed from touchdowns and tackles: why some product leaders thrive while others crash and burn in seemingly similar companies.

Albino doesn’t fit the typical Silicon Valley mold. Born and raised in Mexico City, he spent his early career as a strategy consultant helping large companies implement frameworks like Balanced Scorecard and OKRs. But unlike most consultants who move on to the next engagement, Albino couldn’t stop thinking about his former clients. Some organizations flourished with these frameworks. Others abandoned them within months. The strategic tools were identical. The execution was completely different. What he discovered would fundamentally change how I think about my own career moves, and it should change how you think about yours too.

The Pattern That Changes Everything

After years of looking back at his consulting clients, Albino noticed something remarkable: “Those organizations that were really thriving with these frameworks and really growing, they had a special type of leader. And that leader was usually a people-centered leader, a leader that was humble, that was a servant leader, and that this leader cared about their people, listened to them, and really wanted collaboration.” This wasn’t just about nice leadership. It was about creating what he calls “the atmosphere for people to thrive.” The insight hit him hard enough that he completely pivoted his career. He became an executive coach, spending the last 15 years working with leaders to shape healthier, more productive cultures. He moved his family from Mexico City to Palo Alto four years ago and recently founded Aha! Impact, a company focused on helping organizations achieve the right culture so both the business and employees can thrive. 
But here’s what matters for you as a PM: Albino’s journey revealed something most of us learn the hard way. Culture doesn’t just influence whether a strategy succeeds. Culture IS the strategy.

Why “Culture Eats Strategy for Breakfast” Isn’t Just a Poster on the Wall

You’ve probably seen this quote attributed to Peter Drucker plastered on every startup’s office wall. But do you actually believe it? Albino puts it this way: “We need to have the right environment so people can thrive and then implement and then be successful in business.” Without that environment, even the most brilliant product strategy becomes a document that sits in a Google Drive folder, gathering digital dust.

The Culture Paradox: Why Google, Amazon, Meta, and Microsoft All Win Differently

During our conversation, I pushed Albino on something that had been bothering me. If culture is so critical, how do companies with wildly different cultures all succeed? Amazon’s frugality and bias for action look nothing like Google’s innovative freedom and psychological safety. Microsoft’s collaborative enterprise focus differs dramatically from Meta’s move-fast-and-break-things mentality. His answer surprised me. While different cultures can succeed, Albino sees clear patterns in what works today: “Innovation is one of them. We need to have nowadays with so many changes with AI, technology, globalization, communications. We need to be innovative. We need to be adaptive. We need to embrace change as something that’s part of our day to day.” The successful organizations aren’t choosing between being people-centered OR innovative OR efficiency-driven. They’re becoming all three simultaneously. The old archetypes (pick your culture and stick with it) no longer apply in our rapidly evolving landscape.

But here’s the critical insight for PMs: You need to understand which cultural attributes matter most to you personally. Because while multiple cultures can succeed, not every culture will allow YOU to succeed. 
The Real Reason You’re Miserable at Work

Albino shared something that hits close to home for many experienced PMs: “People join organizations because of the company and they leave the organization most likely because of the boss.” This tracks with every conversation I’ve had as an executive coach. The PMs who come to me aren’t struggling with their OKRs or roadmaps. They’re struggling with leadership dynamics, unclear values, and cultural misalignment.

Think about your own career. When you’ve been most energized, most productive, most creative, was it because of the company mission statement? Or was it because you had a leader who created space for you to do your best work? When you’ve been most miserable, was it really about the compensation or the commute? Or was it about a leader who micromanaged, who didn’t value collaboration, who created an atmosphere of fear rather than trust? Culture doesn’t just make work more pleasant. It fundamentally determines whether you can bring your best self to the job.

The Leadership Styles That Shape Product Cultures

Here’s where Albino’s work gets really practical. He identifies four primary leadership archetypes that shape organizational culture, and understanding these can help you decode any company you’re considering:

1. The Controlling Leader
This leader centralizes decision-making, micromanages execution, and views team members as resources rather than collaborators. They might get short-term results, but they create cultures where PMs become order-takers rather than strategic partners. Innovation dies because risk-taking gets punished.

2. The Competitive Leader
Everything is a zero-sum game. Teams compete internally for resources, recognition, and rewards. This can drive individual performance but often at the expense of collaboration. For PMs, this means product launches succeed but platform thinking fails. You win your battle but lose the war.

3. The Collaborative Leader
This is Albino’s people-centered leader. 
They invest in relationships, foster psychological safety, and view success as collective rather than individual. In product organizations, this looks like cross-functional partnerships that actually work, user research that influences decisions, and retrospectives that drive real improvement.

4. The Creative Leader
These leaders embrace experimentation, tolerate failure, and push for innovation. They create cultures where PMs can propose bold ideas without fear. But without enough structure, these cultures can become chaotic.

The best leaders, and the best cultures, combine elements of all four, calibrated to the organization’s specific needs. As a PM evaluating a new role, you need to assess not just the stated values but the actual leadership style you’ll experience day-to-day.

The Questions You’re Not Asking in Interviews

Most PMs treat interviews as one-way evaluations. The company assesses you; you try to impress them. Albino argues this is backwards. “This is a two-way assessment,” he told me. “You are also interviewing them.” I know what you’re thinking: “Tom, that’s easy to say when you have options. When you’re desperate for a job, you can’t afford to be picky.” I get it. But here’s the truth Albino helped me see: accepting a role at a company with cultural misalignment doesn’t solve your job search problem. It delays your job search problem by six months while making you miserable. Your objective isn’t to get as many offers as possible. Your objective is to get offers from places where you’ll thrive. So what questions should you actually ask?

On Work-Life Integration: “How do you manage team collaboration across different locations and time zones?” These aren’t just logistics questions. They reveal whether the company trusts employees or requires surveillance. They show whether leadership believes productivity comes from presence or output. 
On Decision-Making: “Tell me about a recent product decision where you had significant disagreement among stakeholders. How did you resolve it?” This behavioral question (turned around on the company) reveals their true decision-making process. Do they rely on data, authority, consensus, or customer feedback? Do they value PM input or just expect execution?

On Failure and Learning: “Describe a recent product launch that didn’t meet expectations. What happened, and how did the team respond?” The answer tells you everything about psychological safety. Do they blame individuals or examine systems? Do they learn from failures or hide them?

On Growth and Development: “How do PMs typically grow in their careers here? Can you share specific examples of PMs who’ve advanced and what enabled their growth?” This reveals whether the culture actually invests in development or just talks about it in the handbook.

But here’s Albino’s most important advice: “It’s very important that you are authentic, you are yourself. Don’t try to make an act there. It’s very common to do that just to cover the expectations of the potential employer. But you know what? Try to get rid of that fear and try to be yourself.” This is counterintuitive in a competitive job market. Every instinct tells you to mold yourself to what they want. But cultural misalignment has costs. Stress. Burnout. Short tenure. Another job search in six months. Better to be yourself, assess fit honestly, and find a place where you can actually thrive.

How AI Is Changing Culture Assessment

Here’s where Albino’s work gets really interesting for those of us in tech. He’s building an AI-powered tool to help companies assess cultural fit during hiring. Traditional culture fit assessment is notoriously unreliable. It often means “do I want to get a beer with this person,” which perpetuates homogeneity and bias. Or it gets delegated to a single interviewer who may not accurately represent the actual culture. 
Albino’s approach is diffe

    43 min
  2. I Tested 5 AI Tools to Write a PRD—Here's the Winner

    12/15/2025

    I Tested 5 AI Tools to Write a PRD—Here's the Winner

    TLDR: It was Claude :-)

When I set out to compare ChatGPT, Claude, Gemini, Grok, and ChatPRD for writing Product Requirement Documents, I figured they’d all be roughly equivalent. Maybe some subtle variations in tone or structure, but nothing earth-shattering. They’re all built on similar transformer architectures, trained on massive datasets, and marketed as capable of handling complex business writing. What I discovered over 45 minutes of hands-on testing revealed not just which tools are better for PRD creation, but why they’re better, and more importantly, how you should actually be using AI to accelerate your product work without sacrificing quality or strategic thinking.

If you’re an early or mid-career PM in Silicon Valley, this matters to you. Because here’s the uncomfortable truth: your peers are already using AI to write PRDs, analyze features, and generate documentation. The question isn’t whether to use these tools. The question is whether you’re using the right ones most effectively. So let me walk you through exactly what I did, what I learned, and what you should do differently.

The Setup: A Real-World Test Case

Here’s how I structured the experiment. As I said at the beginning of my recording, “We are back in the Fireside PM podcast and I did that review of the ChatGPT browser and people seemed to like it and then I asked, uh, in a poll, I think it was a LinkedIn poll maybe, what should my next PM product review be? And, people asked for ChatPRD.” So I had my marching orders from the audience. But I wanted to make this more comprehensive than just testing ChatPRD in isolation. I opened up five tabs: ChatGPT, Claude, Gemini, Grok, and ChatPRD.

For the test case, I chose something realistic and relevant: an AI-powered tutor for high school students. Think Khanmigo or similar edtech platforms. This gave me a concrete product scenario that’s complex enough to stress-test these tools but straightforward enough that I could iterate quickly. 
But here’s the critical part that too many PMs get wrong when they start using AI for product work: I didn’t just throw a single sentence at these tools and expect magic.

The “Back of the Napkin” Approach: Why You Still Need to Think

“I presume everybody agrees that you should have some formulated thinking before you dump it into the chatbot for your PRD,” I noted early in my experiment. “I suppose in the future maybe you could just do, like, a one-sentence prompt and come out with the perfect PRD because it would just know everything about you and your company in the context, but for now we’re gonna do this more, a little old-school AI approach where we’re gonna do some original human thinking.” This is crucial. I see so many PMs, especially those newer to the field, treat AI like a magic oracle. They type in “Write me a PRD for a social feature” and then wonder why the output is generic, unfocused, and useless. Your job as a PM isn’t to become obsolete. It’s to become more effective. And that means doing the strategic thinking work that AI cannot do for you.

So I started in Google Docs with what I call a “back of the napkin” PRD structure. Here’s what I included:

Why: The strategic rationale. In this case: “Want to complement our existing edtech business with a personalized AI tutor, uh, want to maintain position industry, and grow through innovation. on mission for learners.”

Target User: Who are we building for? “High school students interested in improving their grades and fundamentals. Fundamental knowledge topics. Specifically science and math. Students who are not in the top ten percent, nor in the bottom ten percent.” This is key: I got specific. Not just “students,” but students in the middle 80%. Not just “any subject,” but science and math. This specificity is what separates useful AI output from garbage.

Problem to Solve: What’s broken? “Students want better grades. Students are impatient. 
Students currently use AI just for finding the answers and less to, uh, understand concepts and practice using them.”

Key Elements: The feature set and approach.

Success Metrics: How we’d measure success.

Now, was this a perfectly polished PRD outline? Hell no. As you can see from my transcript, I was literally thinking out loud, making typos, restructuring on the fly. But that’s exactly the point. I put in maybe 10-15 minutes of human strategic thinking. That’s all it took to create a foundation that would dramatically improve what came out of the AI tools.

Round One: Generating the Full PRD

With my back-of-the-napkin outline ready, I copied it into each tool with a simple prompt asking them to expand it into a more complete PRD.

ChatGPT: The Reliable Generalist

ChatGPT gave me something that was... fine. Competent. Professional. But also deeply uninspiring. The document it produced checked all the boxes. It had the sections you’d expect. The writing was clear. But when I read it, I couldn’t shake the feeling that I was reading something that could have been written for literally any product in any company. It felt like “an average of everything out there,” as I noted in my evaluation.

Here’s what ChatGPT did well: It understood the basic structure of a PRD. It generated appropriate sections. The grammar and formatting were clean. If you needed to hand something in by EOD and had literally no time for refinement, ChatGPT would save you from complete embarrassment. But here’s what it lacked: Depth. Nuance. Strategic thinking that felt connected to real product decisions. When it described the target user, it used phrases that could apply to any edtech product. When it outlined success metrics, they were the obvious ones (engagement, retention, test scores) without any interesting thinking about leading indicators or proxy metrics. The problem with generic output isn’t that it’s wrong, it’s that it’s invisible. 
When you’re trying to get buy-in from leadership or alignment from engineering, you need your PRD to feel specific, considered, and connected to your company’s actual strategy. ChatGPT’s output felt like it was written by someone who’d read a lot of PRDs but never actually shipped a product. One specific example: When I asked for success metrics, ChatGPT gave me “Student engagement rate, Time spent on platform, Test score improvement.” These aren’t wrong, but they’re lazy. They don’t show any thinking about what specifically matters for an AI tutor versus any other educational product. Compare that to Claude’s output, which got more specific about things like “concept mastery rate” and “question-to-understanding ratio.”

Actionable Insight: Use ChatGPT when you need fast, serviceable documentation that doesn’t need to be exceptional. Think: internal updates, status reports, routine communications. Don’t rely on it for strategic documents where differentiation matters. If you do use ChatGPT for important documents, treat its output as a starting point that needs significant human refinement to add strategic depth and company-specific context.

Gemini: Better Than Expected

Google’s Gemini actually impressed me more than I anticipated. The structure was solid, and it had a nice balance of detail without being overwhelming. What Gemini got right: The writing had a nice flow to it. The document felt organized and logical. It did a better job than ChatGPT at providing specific examples and thinking through edge cases. For instance, when describing the target user, it went beyond demographics to consider behavioral characteristics and motivations. Gemini also showed some interesting strategic thinking. It considered competitive positioning more thoughtfully than ChatGPT and proposed some differentiation angles that weren’t in my original outline. Good AI tools should add insight, not just regurgitate your input with better formatting. 
But here’s where it fell short: the visual elements. When I asked for mockups, Gemini produced images that looked more like stock photos than actual product designs. They weren’t terrible, but they weren’t compelling either. They had that AI-generated sheen that makes it obvious they came from an image model rather than a designer’s brain. For a PRD that you’re going to use internally with a team that already understands the context, Gemini’s output would work well. The text quality is strong enough, and if you’re in the Google ecosystem (Docs, Sheets, Meet, etc.), the integration is seamless. You can paste Gemini’s output directly into Google Docs and continue iterating there. But if you need to create something compelling enough to win over skeptics or secure budget, Gemini falls just short. It’s good, but not great. It’s the solid B+ student: reliably competent but rarely exceptional.

Actionable Insight: Gemini is a strong choice if you’re working in the Google ecosystem and need good integration with Docs, Sheets, and other Google Workspace tools. The quality is sufficient for most internal documentation needs. It’s particularly good if you’re working with cross-functional partners who are already in Google Workspace. You can share and collaborate on AI-generated drafts without friction. But don’t expect visual mockups that will wow anyone, and plan to add your own strategic polish for high-stakes documents.

Grok: Not Ready for Prime Time

Let’s just say my expectations were low, and Grok still managed to underdeliver. The PRD felt thin, generic, and lacked the depth you need for real product work. “I don’t have high expectations for Grok, unfortunately,” I said before testing it. Spoiler alert: my low expectations were validated.

Actionable Insight: Skip Grok for product documentation work right now. Maybe it’ll improve, but as of my testing, it’s simply not competitive with the other options. It felt like 1-2 years behind the others.

ChatPRD: The Speci

    52 min
  3. The Future of Product Management in the Age of AI: Lessons From a Five Leader Panel

    12/08/2025

    The Future of Product Management in the Age of AI: Lessons From a Five Leader Panel

    Every few years, the world of product management goes through a phase shift. When I started at Microsoft in the early 2000s, we shipped Office in boxes. Product cycles were long, engineering was expensive, and user research moved at the speed of snail mail. Fast forward a decade and the cloud era reset the speed at which we build, measure, and learn. Then mobile reshaped everything we thought we knew about attention, engagement, and distribution. Now we are standing at the edge of another shift. Not a small shift, but a tectonic one. Artificial intelligence is rewriting the rules of product creation, product discovery, product expectations, and product careers.

To help make sense of this moment, I hosted a panel of world class product leaders on the Fireside PM podcast:

• Rami Abu-Zahra, Amazon product leader across Kindle, Books, and Prime Video
• Todd Beaupre, Product Director at YouTube leading Home and Recommendations
• Joe Corkery, CEO and cofounder of Jaide Health
• Tom Leung (me), Partner at Palo Alto Foundry
• Lauren Nagel, VP Product at Mezmo
• David Nydegger, Chief Product Officer at Oviva

These are leaders running massive consumer platforms, high stakes health tech, and fast moving developer tools. The conversation was rich, honest, and filled with specific examples. This post summarizes the discussion, adds my own reflections, and offers a practical guide for early and mid career PMs who want to stay relevant in a world where AI is redefining what great product management looks like.

Table of Contents

* What AI Cannot Do and Why PM Judgment Still Matters
* The New AI Literacy: What PMs Must Know by 2026
* Why Building AI Products Speeds Up Some Cycles and Slows Down Others
* Whether the PM, Eng, UX Trifecta Still Stands
* The Biggest Risks AI Introduces Into Product Development
* Actionable Advice for Early and Mid Career PMs
* My Takeaways and What Really Matters Going Forward
* Closing Thoughts and Coaching Practice

1. 
What AI Cannot Do and Why PM Judgment Still Matters

We opened the panel with a foundational question. As AI becomes more capable every quarter, what is left for humans to do? Where do PMs still add irreplaceable value? It is the question every PM secretly wonders. Todd put it simply: “At the end of the day, you have to make some judgment calls. We are not going to turn that over anytime soon.” This theme came up again and again. AI is phenomenal at synthesizing, drafting, exploring, and narrowing. But it does not have conviction. It does not have lived experience. It does not feel user pain. It does not carry responsibility. Joe from Jaide Health captured it perfectly when he said: “AI cannot feel the pain your users have. It can help meet their goals, but it will not get you that deep understanding.”

There is still no replacement for sitting with a frustrated healthcare customer who cannot get their clinical data into your system, or a creator on YouTube who feels the algorithm is punishing their art, or a devops engineer staring at an RCA output that feels 20 percent off. Every PM knows this feeling: the moment when all signals point one way, but your gut tells you the data is incomplete or misleading. This is the craft that AI does not have.

Why judgment becomes even more important in an AI world

David, who runs product at a regulated health company, said something incredibly important: “Knowing what great looks like becomes more essential, not less. The PMs that thrive in AI are the ones with great product sense.” This is counterintuitive for many. But when the operational work becomes automated, the differentiation shifts toward taste, intuition, sequencing, and prioritization. Lauren asked the million dollar question: “How are we going to train junior PMs if AI is doing the legwork? Who teaches them how to think?” This is a profound point. 
If AI closes the gap between junior and senior PMs in execution tasks, the difference will emerge almost entirely in judgment. Knowing how to probe user problems. Knowing when a feature is good enough. Knowing which tradeoffs matter. Knowing which flaw is fatal and which is cosmetic. AI is incredible at writing a PRD. AI is terrible at knowing whether the PRD is any good. Which means the future PM becomes more strategic, more intuitive, more customer obsessed, and more willing to make thoughtful bets under uncertainty.

2. The New AI Literacy: What PMs Must Know by 2026

I asked the panel what AI literacy actually means for PMs. Not the hype. Not the buzzwords. The real work. Instead of giving gimmicky answers, the discussion converged on a clear set of skills that PMs must master.

Skill 1: Understanding context engineering

David laid this out clearly: “Knowing what LLMs are good at and what they are not good at, and knowing how to give them the right context, has become a foundational PM skill.” Most PMs think prompt engineering is about clever phrasing. In reality, the future is about context engineering. Feeding models the right data. Choosing the right constraints. Deciding what to ignore. Curating inputs that shape outputs in reliable ways. Context engineering is to AI product development what Figma was to collaborative design. If you cannot do it, you are not going to be effective.

Skill 2: Evals, evals, evals

Rami said something that resonated with the entire panel: “Last year was all about prompts. This year is all about evals.” He is right.

• How do you build a golden dataset?
• How do you evaluate accuracy?
• How do you detect drift?
• How do you measure hallucination rates?
• How do you combine UX evals with model evals?
• How do you decide what good looks like?
• How do you define safe versus unsafe boundaries?

AI evaluation is now a core PM responsibility. Not exclusively. 
But PMs must understand what engineers are testing for, what failure modes exist, and how to design test sets that reflect the real world. Lauren said her PMs write evals side by side with engineering. That is where the world is going.

Skill 3: Knowing when to trust AI output and when to override it

Todd noted: “It is one thing to get an answer that sounds good. It is another thing to know if it is actually good.” This is the heart of the role. AI can produce strategic recommendations that look polished, structured, and wise. But the real question is whether they are grounded in reality, aligned with your constraints, and consistent with your product vision. A PM without the ability to tell real insight from confident nonsense will be replaced by someone who can.

Skill 4: Understanding the physics of model changes

This one surprised many people, but it was a recurring point. Rami noted: “When you upgrade a model, the outputs can be totally different. The evals start failing. The experience shifts.” PMs must understand:

• Models get deprecated
• Models drift
• Model updates can break well tuned prompts
• API pricing has real COGS implications
• Latency varies
• Context windows vary
• Some tasks need agents, some need RAG, some need a small finetuned model

This is product work now. The PM of 2026 must know these constraints as well as a PM of the cloud era understood database limits or API rate limits.

Skill 5: How to construct AI powered prototypes in hours, not weeks

It now takes one afternoon to build something meaningful. Zero code required. Prompt, test, refine. Whether you use Replit, Cursor, Vercel, or sandboxed agents, the speed is shocking. But this makes taste and problem selection even more important. The future PM must be able to quickly validate whether a concept is worth building beyond the demo stage.

3. 
Why Building AI Products Speeds Up Some Cycles and Slows Down Others

This part of the conversation was fascinating because people expected AI to accelerate everything. The panel had a very different view.

Fast: Prototyping and concept validation

Lauren described how her teams can build working versions of an AI powered Root Cause Analysis feature in days, test it with customers, and get directional feedback immediately. “You can think bigger because the cost of trying things is much lower,” she said. For founders, early PMs, and anyone validating hypotheses, this is liberating. You can test ten ideas in a week. That used to take a quarter.

Slow: Productionizing AI features

The surprising part is that shipping the V1 of an AI feature is slower than most expect. Joe noted: “You can get prototypes instantly. But turning that into a real product that works reliably is still hard.” Why? Because:

• You need evals.
• You need monitoring.
• You need guardrails.
• You need safety reviews.
• You need deterministic parts of the workflow.
• You need to manage COGS.
• You need to design fallbacks.
• You need to handle unpredictable inputs.
• You need to think about hallucination risk.
• You need new UI surfaces for non deterministic outputs.

Lauren said bluntly: “Vibe coding is fast. Moving that vibe code to production is still a four month process.” This should be printed on a poster in every AI startup office.

Very Slow: Iterating on AI powered features

Another counterintuitive point. Many teams ship a great V1 but struggle to improve it significantly afterward. David said their nutrition AI feature launched well but: “We struggled really hard to make it better. Each iteration was easy to try but difficult to improve in a meaningful way.” Why is iteration so difficult? Because model improvements may not translate directly into UX improvements. Users need consistency. Drift creates churn. Small changes in context or prompts can cause large changes in behavior. 
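The eval ideas the panel keeps returning to (golden datasets, accuracy, drift across model versions) have a simple shape even when the details are hard. Here is a minimal, illustrative Python sketch of that shape, with stub functions standing in for real LLM calls; every name and value here is a hypothetical example, not any panelist's actual stack:

```python
# Minimal eval-harness sketch: score a model against a golden dataset
# and flag drift between two model versions. The "models" are stubs;
# in practice each would be a call to an LLM API.

GOLDEN_SET = [  # hypothetical golden dataset: (input, expected answer)
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
    ("Boiling point of water in C?", "100"),
]

def model_v1(prompt: str) -> str:
    # Stand-in for a real model call.
    answers = {"What is 2 + 2?": "4", "Capital of France?": "Paris",
               "Boiling point of water in C?": "100"}
    return answers.get(prompt, "I don't know")

def model_v2(prompt: str) -> str:
    # A "model upgrade" whose behavior silently shifted on one case.
    answers = {"What is 2 + 2?": "4", "Capital of France?": "Paris",
               "Boiling point of water in C?": "212"}  # regressed: answers in F
    return answers.get(prompt, "I don't know")

def accuracy(model) -> float:
    # Fraction of golden-set cases where the model matches exactly.
    hits = sum(1 for q, expected in GOLDEN_SET if model(q).strip() == expected)
    return hits / len(GOLDEN_SET)

def drift(model_a, model_b) -> list[str]:
    # Inputs where two model versions disagree: candidates for human review.
    return [q for q, _ in GOLDEN_SET if model_a(q) != model_b(q)]

if __name__ == "__main__":
    print(f"v1 accuracy: {accuracy(model_v1):.2f}")  # 1.00
    print(f"v2 accuracy: {accuracy(model_v2):.2f}")  # 0.67
    print("drifted cases:", drift(model_v1, model_v2))
```

In a real pipeline the exact-match check would usually be replaced by rubric scoring or an LLM judge, but the shape (a golden set, an accuracy metric, a version-to-version diff) is the part the panel argues PMs now need to understand and help design.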
Teams are learning a hard truth: AI powered features do not behave like typical deterministic product flows. They require new iteration muscles that most orgs do not yet have.

4. The PM, Eng, UX Trifecta in the AI Era

I asked whether the classic PM, Eng, UX triad is still the right model. The audience was expecting di

    1h 23m
  4. The Difference Between Encouragement and Truth: Lessons From Building What People Actually Need

    11/03/2025

    The Difference Between Encouragement and Truth: Lessons From Building What People Actually Need

    The Interview That Sparked This Essay

Joe Corkery and I worked together at Google years ago, and he has since gone on to build a venture-backed company tackling a real and systemic problem in healthcare communication. This essay is my attempt to synthesize that conversation. It is written for early and mid career PMs in Silicon Valley who want to get sharper at product judgment, market discovery, customer validation, and knowing the difference between encouragement and signal. If you feel like you have ever shipped something, presented it to customers, and then heard polite nodding instead of movement and urgency, this is for you.

Joe’s Unusual Career Arc

Joe’s background is not typical for a founder. He is a software engineer. And a physician. And someone who has led business development in the pharmaceutical industry. That multidisciplinary profile allowed him to see something that many insiders miss: healthcare is full of problems that everyone acknowledges, yet very few organizations are structurally capable of solving. When Joe joined Google Cloud in 2014, he helped start the healthcare and life sciences product org. Yet the timing was difficult. As he put it: “The world wasn’t ready or Google wasn’t ready to do healthcare.” So instead of building healthcare products right away, he spent two years working on security, compliance, and privacy. That detour will matter later, because it set the foundation for everything he is now doing at Jaide. Years later, he left Google to build a healthcare company focused initially on guided healthcare search, particularly for women’s health. The idea resonated emotionally. Every customer interview validated the need. Investors said it was important. Healthcare organizations nodded enthusiastically. And yet, there was no traction. 
    This created a familiar and emotionally challenging founder dilemma:

    * When everyone is encouraging you
    * But no one will pay you or adopt early
    * How do you know if you are early, unlucky, or wrong?

    This is the question at the heart of product strategy.

    False Positives: Why Encouragement Is Not Feedback

    If you have worked as a PM or founder for more than a few weeks, you have encountered positive feedback that turned out to be meaningless. People love your idea. Executives praise your clarity. Customers tell you they would definitely use it. Friends offer supportive high-fives. But then nothing moves. As Joe put it: “Everyone wanted to be supportive. But that makes it hard to know whether you’re actually on the right path.” This is not because people are dishonest. It is because people are kind, polite, and socially conditioned to encourage enthusiasm. In Silicon Valley especially, we celebrate ambition. We praise risk-taking. We cheer for the founder-in-the-garage mythology. If someone tells you that your idea is flawed, they fear they are crushing your passion. So even when we explicitly ask for brutal honesty, people soften their answers. This is the false positive trap. And if you misread encouragement as traction, you can waste months or even years.

    The Small Framing Change That Changes Everything

    Joe eventually realized that the problem was not the idea itself. The problem was how he was asking for feedback. When you present your idea as the idea, people naturally react supportively:

    * “That’s really interesting.”
    * “I could see that being useful.”
    * “This is definitely needed.”

    But when you instead present two competing ideas and ask someone to help you choose, you change the psychology of the conversation entirely. Joe explained it this way: “When we said, ‘We are building this. What do you think?’ people wanted to be encouraging. But when we asked, ‘We are choosing between these two products. 
    Which one should we build?’ it gave them permission to actually critique.” This shift is subtle, but powerful. Suddenly:

    * People contrast.
    * Their reasoning surfaces.
    * Their hesitation becomes visible.
    * Their priorities emerge with clarity.

    By asking someone to choose between two ideas, you activate their decision-making brain instead of their supportive brain. It is no different from usability testing. If you show someone a screen and ask what they think, they are polite. If you give them a task and ask them to complete it, their actual friction appears immediately. In product discovery, friction is truth.

    How This Applies to PMs, Not Just Founders

    You may be thinking: this is interesting for entrepreneurs, but I work inside a company. I have stakeholders, OKRs, a roadmap, and a backlog that already feels too full. This technique is actually more relevant for PMs inside companies than for founders. Inside organizations, political encouragement is even more pervasive:

    * Leaders say they want innovation, but are risk averse.
    * Cross-functional partners smile in meetings, but quietly maintain objections.
    * Engineers nod when you present the roadmap, but may not believe in it.
    * Customers say they like your idea, but do not prioritize adoption.

    One of the most powerful tools you can use as a PM is framing your product decisions as explicit choices, rather than proposals seeking validation. For example, instead of saying: “We are planning to build a new onboarding flow. Here is the design. Thoughts?” Say: “We are deciding between optimizing retention or acquisition next quarter. If we choose retention, the main lever is onboarding friction. Here are two possible approaches. Which outcome matters more to the business right now?” In the second framing:

    * The business goal is visible.
    * The tradeoff is unavoidable.
    * The decision owner is clear.
    * The conversation becomes real. 
    This is how PMs build credibility and influence: not through slides or persuasion, but through framing decisions clearly.

    Jaide’s Pivot: From Health Search to AI Translation

    The result of Joe’s reframed feedback approach was unambiguous. Across dozens of conversations with healthcare executives and hospital leaders, one pattern emerged consistently: translation was the urgent, budget-backed, economically meaningful problem. As Joe put it, after talking to more than 40 healthcare decision-makers: “Every single person told us to build the translation product. Not mostly. Not many. Every single one.” This kind of clarity is rare in product strategy. When you get it, you do not ignore it. You move. Jaide Health shifted its core focus to solving a very real, very measurable, and very painful problem in healthcare: the language gap affecting millions of patients. More than 25 million patients in the United States do not speak English well enough to communicate with clinicians. This leads to measurable harm:

    * Longer hospital stays
    * Increased readmission rates
    * Higher medical error rates
    * Lower comprehension of discharge instructions

    The status quo for translation relies on human interpreters who are expensive, limited, slow to schedule, and often unavailable after hours or in rare languages. Many clinicians, due to lack of resources, simply use Google Translate privately on their phones. They know this is not secure or compliant, but they feel like they have no better option. So Jaide built a platform that integrates compliance, healthcare-specific terminology, workflow embedding, custom glossaries, discharge summaries, and real-time accessibility. This is not simply “healthcare plus GPT”. It is targeted, workflow-integrated, risk-aware operational excellence. Product managers should study this pattern closely. The winning strategy was not inventing a new problem. It was solving a painful problem that everyone already agreed mattered. 
    The Core PM Lesson: Focus on Problems With Urgent Budgets Behind Them

    A question I often ask PMs I coach: Who loses sleep if this problem is not solved? If the answer is:

    * “Not sure”
    * “Eventually the business will feel it”
    * “It would improve the experience”
    * “It could move a KPI if adoption increases”

    Then you do not have a real problem yet. Real product opportunities have:

    * A user who is blocked from achieving something meaningful
    * A measurable cost or consequence of inaction
    * An internal champion with authority to push change
    * An adjacent workflow that your product can attach to immediately
    * A budget owner who is willing to pay now, not later

    Healthcare translation checks every box. That is why Joe now has institutional adoption and a business with meaningful traction behind it.

    Why PMs Struggle With This in Practice

    If the lesson seems obvious, why do so many PMs fall into the encouragement trap? The reason is emotional more than analytical. It is uncomfortable to confront the possibility that your idea, feature, roadmap, strategy, or deck is not compelling enough yet. It is easier to seek validation than truth. In my first startup, we kept our product in closed beta for months longer than we should have. We told ourselves we were refining the UX, improving onboarding, solidifying architecture. The real reason, which I only admitted years later, was that I was afraid the product was not good enough. I delayed reality to protect my ego. In product work, speed of invalidation is as important as speed of iteration. If something is not working, you need to know as quickly as possible. The faster you learn, the more shots you get. The best PMs do not fall in love with their solutions. They fall in love with the moments of clarity that allow them to change direction quickly.

    Actionable Advice for Early- and Mid-Career PMs

    Below are specific behaviors and habits you can put into practice immediately.

    1. Always test product concepts as choices, not presentations. Instead of asking: “What do you think of this idea?” Ask: “We are deciding between these two approaches. Which one is more important for you right now and why?” This forces prioritization, not politeness.

    2. Never ship a feature without observing real usage inside the workflow.

    40 min
  5. Atlas Gets a C+: Lessons from ChatGPT’s Browser That’s Brilliant, Broken, and Bursting with Potential

    10/24/2025

    Atlas Gets a C+: Lessons from ChatGPT’s Browser That’s Brilliant, Broken, and Bursting with Potential

    I didn’t plan to make a video today. I’d just wrapped a client call, remembered that OpenAI had released Atlas, and decided to record a quick unboxing for my Fireside PM community. I’d heard mixed things—some people raving about it, others underwhelmed—but I made a deliberate choice not to read any reviews beforehand. I wanted to go in blind, the way an actual user would. Within 30 minutes, I had my verdict: Atlas earns a C+. It’s ambitious, it’s fast, and it hints at a radical new way to experience the web. But it also stumbles in ways that remind you just how fragile early AI products can be—especially when ambition outpaces usability. This post isn’t a teardown or a fan letter. It’s a field report from someone who’s built and shipped dozens of products, from scrappy startups to billion-user platforms. My goal here is simple: unpack what Atlas gets wrong, acknowledge what it gets right, and pull out lessons every PM and product team can use.

    The Unboxing Experience

    When I first launched Atlas, I got the usual macOS security warning. I’m not docking points for that—this is an MVP, and once it hits the Mac App Store, those prompts will fade into the background. There was an onboarding window outlining the main features, but I barely glanced at it. I was eager to jump in and see the product in action. That’s not a unique flaw—it’s how most real users behave. We skip the instructions and go straight to testing the limits. That’s why the best onboarding happens in motion, not before use. There were some suggested prompts, which I ignored, but I would’ve loved contextual fly-outs or light tooltips appearing as I explored past the first 30 seconds:

    * “Try asking Atlas to summarize this page.”
    * “Highlight text to discuss it.”
    * “Atlas can compare this to other sources—want to see how?”

    Small, progressive cues like these are what turn exploration into mastery. The initial onboarding screen wasn’t wrong—it was just misplaced. It taught before I cared. 
    And that’s a universal PM lesson: meet users where their curiosity is, not where your product tour is.

    When Atlas Stumbled

    Atlas’s biggest issue isn’t accuracy or latency—it’s identity. It doesn’t yet know what it wants to be. On one hand, it acts like a browser with ChatGPT built in. On the other, it markets itself as an intelligent agent that can browse for you. Right now, it does neither convincingly. When I tried simple commands like “Summarize this page” or “Open the next link and tell me what it says,” the experience broke down. Sometimes it responded correctly; other times, it ignored the context entirely. The deeper issue isn’t technical—it’s architectural. Atlas hasn’t yet resolved the question of who’s driving. Is the user steering and Atlas assisting, or is Atlas steering and the user supervising? That uncertainty creates friction. It’s like co-piloting with someone who keeps grabbing the wheel mid-turn. Then there’s the missing piece that could make Atlas truly special: action loops. The UI makes it feel like Atlas should be able to take action—click, save, organize—but it rarely does. You can ask it to summarize, but you can’t yet say “add this to my notes” or “book this flight.” Those are the natural next steps in the agentic journey, and until they arrive, Atlas feels like a chat interface masquerading as a browser. This isn’t a criticism of the vision—it’s a question of sequencing. The team is building for the agentic future before the product earns the right to claim that mantle. Until it can act, Atlas is mostly a neat wrapper around ChatGPT that doesn’t justify replacing Chrome, Safari, or Edge.

    Where Atlas Shines

    Despite the friction, there were moments where I saw real promise. When Atlas got it right, it was magical. I’d open a 3,000-word article, ask for a summary, and seconds later have a coherent, tone-aware digest. Having that capability integrated directly into the browsing experience—no copy-paste, no tab-switching—is an elegant idea. 
    You can tell the team understands restraint. The UI is clean and minimal, the chat panel is thoughtfully integrated, and the speed is impressive. It feels engineered by people who care about quality. The challenge is that all of this could, in theory, exist as a plugin. The browser leap feels premature. Building a full browser is one of the hardest product decisions a company can make—it’s expensive, high-friction, and carries a huge switching cost for users. The most generous interpretation is that OpenAI went full browser to enable agentic workflows—where Atlas doesn’t just summarize, but acts on behalf of the user. That would justify the architecture. But until that capability arrives, the browser feels like infrastructure waiting for a reason to exist. Atlas today is scaffolding for the future, not a product for the present.

    Lessons for Product Managers

    Even so, Atlas offers a rich set of takeaways for PMs building ambitious products.

    1. Don’t Confuse Vision with MVP. You earn the right to ship big ideas by nailing the small ones. Atlas’s long-term vision is compelling, but the MVP doesn’t yet prove why it needed to exist. Start with one unforgettable use case before scaling breadth.

    2. Earn Every Switch Cost. Changing browsers is one of the highest-friction user behaviors in software. Unless your product delivers something 10x better, start as an extension, not a replacement.

    3. Design for Real Behavior, Not Ideal Behavior. Most users skip onboarding. Expect it. Plan for it. Guide them in context instead of relying on their patience.

    4. Choose a Metaphor and Commit. Atlas tries to be both browser and assistant. Pick one. If you’re an assistant, drive. If you’re a browser, stay out of the way. Users shouldn’t have to guess who’s in control.

    5. Autonomy Without Agency Frustrates Users. It’s worse for an AI to understand what you want but refuse to act than to not understand at all. Until Atlas can take meaningful action, it’s not an agent—it’s a spectator.

    6. Sequence Ambition Behind Value. The product is building for a world that doesn’t exist yet. Ambition is great, but the order of operations matters. Earn adoption today while building for tomorrow.

    Advice for the Atlas Team

    If I were advising the Atlas PM and design teams directly, I’d focus on five things:

    * Clarify the core identity. Decide if you’re an AI browser with ChatGPT or a ChatGPT agent that uses a browser. Everything else flows from that choice.
    * Earn the right to replace Chrome. Give users one undeniably magical use case that justifies the switch—research synthesis, comparison mode, or task execution.
    * Fix the metaphor collision. Make it obvious who’s in control: human or AI. Even a “manual vs. autopilot” toggle would add clarity.
    * Build action loops. Move from summarization to completion. The browser of the future won’t just explain—it will execute.
    * Sequence ambition. Agentic work is the destination, but the current version needs to win users on everyday value first.

    None of this is out of reach. The bones are good. What’s missing is coherence.

    Closing Reflection

    Atlas is a fascinating case study in what happens when world-class technology meets premature positioning. It’s not bad—it’s unfinished. A C+ isn’t an insult. It’s a reminder that potential and product-market fit are two different things. Atlas is the kind of product that might, in a few releases, feel indispensable. But right now, it’s a prototype wearing the clothes of a platform. For every PM watching this unfold, the lesson is universal: don’t get seduced by your own roadmap. Ambition must be earned, one user journey at a time. That’s how trust is built—and in AI, trust is everything. 
If you or your team are wrestling with similar challenges—whether it’s clarifying your product vision, sequencing your roadmap, or improving PM leadership—I offer both 1:1 executive and career coaching at tomleungcoaching.com and expert product management consulting and fractional CPO services through my firm, Palo Alto Foundry. OK. Enough pontificating. Let’s ship greatness. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com

    29 min
  6. From Cashmere Sweaters to Billion-Dollar Lessons: What PMs Can Learn from Jason Stoffer's Analysis of Quince

    10/02/2025

    From Cashmere Sweaters to Billion-Dollar Lessons: What PMs Can Learn from Jason Stoffer's Analysis of Quince

    Introduction

    One of the great joys of hosting my Fireside PM podcast is the opportunity to reconnect with people I’ve known for years and go deep into the mechanics of business building. Recently, I sat down with Jason Stoffer, partner at Maveron Capital, a venture firm with a laser focus on consumer companies. Jason and I go way back to my Seattle days, so this was both a reunion and an education. Our conversation turned into a masterclass on scaling consumer businesses, the art of finding moats, and the brutal realities of marketplaces. But beyond the case studies, what stood out were the actionable insights PMs can apply right now. If you’re an early or mid-career product manager in Silicon Valley, there are playbooks here you can borrow—not in theory, but in practice. Jason summed up his approach to analyzing companies like this: “So many founders can get caught in the moment that sometimes it’s best when we’re looking at a new investment to talk about if things go right, what can happen. What would an S-1 or public filing look like? What would the company look like at a big M&A event? And then you work backwards.” That mindset—begin with the end in mind—is as powerful for a product manager shipping features as it is for a VC evaluating billion-dollar bets. In this post, I’ll share:

    * The key lessons from Jason’s breakdown of Quince and StubHub
    * How these lessons apply directly to your PM career
    * Tactical moves you can make to future-proof your trajectory
    * Reflections on what surprised me most in this conversation

    And along the way, I’ll highlight specific frameworks and examples you can put into action this week.

    Part 1: Quince and the Power of Supply Chain Innovation

    When Jason first explained Quince’s model, I’ll admit I was skeptical. On its face, it sounds like yet another DTC apparel play. Sell cheap cashmere sweaters online? Compete with incumbents like Theory and Away? It didn’t sound differentiated. Jason disagreed. 
    “Most people know Shein, and Shein was kind of working direct with factories. Quince’s innovation was asking, what do factories in Asia have during certain times of the year? They have excess capacity. Those are the same factories who are making a Theory shirt or an Away bag. Quince went to those factories and said, hey, make product for us, you hold the inventory, we’ll guarantee we’ll sell it.” That’s not a design tweak—it’s a supply chain disruption. Costco built an empire on this principle. TJX did the same. Walmart before them. If you can structurally rewire how goods get to consumers, you’ve got the foundation for a massive business.

    Lesson for PMs: Sometimes the real innovation isn’t visible in the interface. It’s hidden in the plumbing. As PMs, we often obsess over UI polish, onboarding flows, or feature prioritization. But step back and ask: what’s the equivalent of supply chain disruption in your domain? It might be a new data pipeline, a pricing model, or even a workflow that cuts out three layers of manual steps for your users. Those invisible shifts can unlock outsized value.

    Jason gave the example of Quince’s $50 cashmere sweater. “Anyone in retail knows that if you’re selling at a 12% gross margin and it’s apparel with returns, you’re making no money on that. What is it? It’s an alternative method of customer acquisition. You hook them with the sweater and sell them everything else.” In other words, they turned a P&L liability into a marketing hack.

    Actionable move for PMs: Identify your “$50 sweater.” What’s the feature you can offer that might look unprofitable or inconvenient in isolation, but serves as an on-ramp to deeper engagement? Maybe it’s a generous free tier in SaaS, or an intentionally unscalable white-glove onboarding process. Don’t dismiss those just because they don’t scale on day one.

    Part 2: Moats, Marketing, and Hero SKUs

    Jason emphasized that great retailers pair supply chain execution with marketing innovation. 
    Costco has rotisserie chickens and $2 hot dogs. Quince has $50 cashmere sweaters. These “hero SKUs” create shareable moments and lasting brand associations. “You’re pairing supply chain innovation with marketing innovation, and it’s super effective,” Jason explained.

    Lesson for PMs: Don’t just think about your feature set—think about your hero feature. What’s the one thing that makes users say, “You have to try this product”? Too often, PM roadmaps are a laundry list of incremental improvements. Instead, design at least one feature that can carry your brand in conversations, tweets, and TikToks. Think about Figma’s multiplayer cursors or Slack’s playful onboarding. These are features that double as marketing.

    Part 3: StubHub and the Economics of Trust

    After Quince, Jason shifted to a very different case study: StubHub. Here, the lesson wasn’t about supply chain but about moats built on trust, liquidity, and cash flow mechanics. “Customers will pay for certainty even if they hate you,” Jason said. Think about that. StubHub’s fees are infamous. Buyers grumble, sellers grumble. And yet, if you need a Taylor Swift ticket and want to be sure it’s legit, you go to StubHub. That reliability is the moat.

    Lesson for PMs: Trust is an underrated product feature. In consumer software, this might mean uptime and reliability. In enterprise SaaS, it might mean compliance and security certifications. In AI, it could mean interpretability and guardrails. Don’t underestimate how much people will endure friction if they can be sure you’ll deliver.

    Jason also pointed out StubHub’s cash flow hack: “StubHub gets money from buyers up front and then pays the sellers later. That’s a beautiful business model. If you create a cash flow cycle where you’re getting the money first and delivering later, you raise a lot less equity and get diluted less.” This is a reminder that product decisions can have financial implications. 
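The cash-flow cycle Jason describes can be made concrete with a few lines of arithmetic. This is a hedged sketch, not StubHub's actual economics: the daily volume and payment terms below are invented purely to illustrate why collecting from buyers before paying sellers shrinks the capital a business must raise.

```python
# Hypothetical illustration of working capital under different payment terms.
# All figures are invented for the example.

def float_days(collect_day: int, pay_day: int) -> int:
    """Days between receiving the buyer's cash and paying the supplier/seller."""
    return pay_day - collect_day

def working_capital_needed(daily_volume: float, collect_day: int, pay_day: int) -> float:
    """Capital the business must fund itself.

    A negative result means customer cash (float) is funding operations;
    a positive result means the business must raise capital to bridge the gap.
    """
    return -daily_volume * float_days(collect_day, pay_day)

# Marketplace-style terms (assumed): collect on day 0, pay sellers on day 5.
marketplace = working_capital_needed(daily_volume=100_000, collect_day=0, pay_day=5)

# Traditional retailer terms (assumed): pay suppliers 30 days before collecting.
retailer = working_capital_needed(daily_volume=100_000, collect_day=30, pay_day=0)

print(marketplace)  # -500000: half a million of customer float funds operations
print(retailer)     # 3000000: the business must finance this gap itself
```

Same daily volume, opposite financing picture: the design of who pays first is itself a product decision.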
    As PMs, you may not directly set billing cycles, but you can influence monetization models, free trial design, or even refund policies—all of which affect working capital.

    Actionable move for PMs: Partner with finance. Ask them: what product levers could improve cash conversion cycles? Could prepayment discounts, annual billing, or usage-based pricing reduce working capital strain? Thinking beyond the feature spec makes you more valuable to your company—and accelerates your own career.

    Part 4: Five Takeaways from StubHub

    Jason listed five lessons from StubHub:

    * Trust is a moat – Even if users complain, reliability keeps them loyal.
    * Liquidity is a moat – Scale compounds, especially in marketplaces.
    * Cash flow mechanics matter – Payment terms can determine survival.
    * Tooling locks in supply – Seller-facing tools create stickiness.
    * Scale itself compounds – Once you’re ahead, momentum carries you.

    Part 5: What Surprised Me Most

    As I listened back to this conversation, two surprises stood out. First, the sheer size of value retail. Jason noted that TJX is worth $157 billion. Burlington, $22 billion. Costco, $418 billion. These aren’t sexy tech names, but they are empires. It made me rethink my assumptions about what “boring” industries can teach us. Second, Jason’s humility about being wrong. “Reddit might be one,” he admitted when I asked about his biggest misses. “I had no idea that LLMs would use their data in a way that would make it incredibly important. I was dead wrong. I said sit on the sidelines.” That candor is refreshing—and a reminder that even seasoned investors get it wrong. The key is to keep learning.

    Lesson for PMs: Admit your misses. Write them down. Share them. Don’t hide them. Your credibility grows when you own your blind spots and show how you’ve adjusted.

    Closing Thoughts

    Talking with Jason felt like being back in business school—but with sharper edges. These aren’t abstract frameworks. 
    They’re battle-tested strategies from companies that scaled to billions. As PMs, our job isn’t just to ship features. It’s to build businesses. That requires thinking about supply chains, trust, cash flow, and marketing moats. If you found this helpful and want to go deeper, check out Jason’s Substack, Ringing the Bell, where he publishes his case studies. And if you want to level up your own career trajectory, I offer 1:1 executive, career, and product coaching at tomleungcoaching.com.

    Shape the Future of PM

    And if you haven’t yet, I’d love your input on my Future of Product Management survey. It only takes about 5 minutes, and by filling it out you’ll get early access to the results plus an invitation to a live readout with a panel of top product leaders. The survey explores how AI, team structures, and skill sets are reshaping the PM role for 2026 and beyond. OK. Let’s ship greatness. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com

    39 min
  7. Learning Faster Than the Market

    09/25/2025

    Learning Faster Than the Market

    When I sit down with product leaders who’ve spent decades shaping how Silicon Valley builds products, I’m always struck by how their career arcs echo the very lessons they now teach. Michael Margolis is no exception. Michael started his career as an anthropologist, stumbled into educational software in the late 90s, helped scale Gmail during its formative years, and eventually became one of the first design researchers at Google Ventures (GV). For fifteen years, he sat at the intersection of startups and product discovery, helping founders learn faster, save years of wasted effort, and—sometimes—kill their darlings before they drained all the fuel. In our conversation, Michael didn’t just share war stories. He laid out a concrete, repeatable framework for product teams—whether you’re a PM at a FAANG company or a fresh hire at a Series A startup—on how to cut through noise, get to the truth, and accelerate learning cycles. This post is my attempt to capture those lessons. If you’re an early- to mid-career PM in Silicon Valley trying to sharpen your craft, this is for you.

    From Anthropology to Gmail: The Value of Unorthodox Beginnings

    Michael’s path to Google wasn’t a linear “go to Stanford CS, join a startup, IPO” narrative. Instead, he started in anthropology and educational software, producing floppy-disk learning titles at The Learning Company and Electronic Arts. That detour turned out to be foundational. “Studying anthropology was my introduction to usability and ethnography,” Michael told me. “It gave me a lens to look at people’s behaviors not just as data points but as cultural patterns.” For PMs, the lesson is clear: don’t discount the odd chapters of your own career. That sales job, that nonprofit internship, or that side hustle in teaching can become your secret weapon later. Michael carried those anthropology muscles into Gmail, where understanding human behavior at scale was just as critical as writing code. 
    Actionable Advice for PMs:

    * Audit your own “non-linear” career experiences. What hidden skills—interviewing, pattern-recognition, narrative-building—could you bring into product work?
    * When hiring, don’t filter only for straight-line resumes. The best PMs often bring unexpected perspectives.

    The Google Years: Scaling Research at Hyper-speed

    Michael joined Gmail in 2006, when it was still young but maturing fast. He quickly noticed how different the rhythm was compared to the slow, expensive ethnographic studies he had done for consulting clients like Walmart.com. “At Walmart,” he explained, “I had to compress these big, long expensive projects into something faster. Gmail demanded that same speed, but at enormous scale.” At Google, the prime “clients” for his research were often designers. The questions he answered were things like: How do we attract Outlook users? How do we make the interface intuitive enough for mass adoption? This difference matters for PMs: in big companies, research questions often start downstream—how to refine, polish, or optimize. In startups, questions live upstream: What should we build at all? Knowing where you sit in that spectrum changes the kind of research (and product bets) you should prioritize.

    Jumping to Google Ventures: Bringing UXR Into VC

    In 2010, Michael made a bold move: leaving the mothership to become one of the very first design researchers embedded inside a venture capital firm. GV was trying to differentiate itself by not just writing checks but also offering operational help—design, hiring, PR. “I got lucky,” he recalled. “GV had already hired Braden Kowitz as their design partner, and Braden said, ‘I need a researcher.’ That was my break.” Working with founders was a shock. They didn’t act like Google PMs. “It was like they were playing by a different set of rules. They’d say, ‘Here’s where we’re going. You can help me, or get out of my way.’” That forced Michael to reinvent how he showed value. 
    Instead of writing reports that might sit unread, he had to deliver insights in real time, in ways founders couldn’t ignore.

    The Watch Party Method: Stop Writing Reports

    Here’s where the gold nuggets come in. Michael realized traditional reports weren’t cutting it. Instead, he invented what he calls “watch parties.” “I don’t do the research study unless the whole team watches,” he said. “I compress it into a day—five interviews with bullseye customers, the whole team in a virtual backroom. By the end, they’ve seen it all, they’re debriefing themselves, and alignment happens automatically. I haven’t written a report in years.” Think about that. No 30-page decks. No long hand-offs. Just visceral, shared observation.

    Actionable Advice for PMs:

    * Next time you run a user test, insist that at least your core team attends live. Skip the sanitized recap slides.
    * At the end of a session, have the team summarize their top three takeaways. When they say it, it sticks.

    Bullseye Customers: Getting Uncomfortably Specific

    One of Michael’s most powerful contributions is the bullseye customer exercise. “A bullseye customer,” he explained, “is the very specific subset of your target market who is most likely to adopt your product first. The key is to define not just inclusion criteria but also exclusion criteria.” Founders (and PMs) often resist narrowing. They want to believe their TAM is huge. But Michael’s method forces rigor. He described grilling teams until they admit things like: Actually, if this person doesn’t work from home, they probably won’t care. Or if they’ve never paid for a premium tool, they won’t convert.

    Example: Imagine you’re building a new coffee subscription. Your bullseye might be: remote tech workers in San Francisco, ages 25-35, who already spend $50+ per month on specialty coffee, and who like experimenting with new roasters. 
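The bullseye exercise above amounts to screening candidates against explicit inclusion and exclusion rules. Here is a hedged sketch of that idea, assuming the coffee-subscription criteria from the example; every field, rule, and candidate below is invented for illustration, not part of Michael's actual method.

```python
# Hypothetical bullseye-customer screen: inclusion AND exclusion criteria
# encoded as explicit predicates, then applied to candidate interviewees.

CANDIDATES = [
    {"name": "Ana",  "remote": True,  "coffee_spend": 60, "tries_new_roasters": True},
    {"name": "Ben",  "remote": False, "coffee_spend": 80, "tries_new_roasters": True},
    {"name": "Cleo", "remote": True,  "coffee_spend": 20, "tries_new_roasters": True},
    {"name": "Dev",  "remote": True,  "coffee_spend": 75, "tries_new_roasters": False},
]

# Inclusion criteria: who is most likely to adopt first.
INCLUDE = [
    lambda c: c["remote"],              # works from home
    lambda c: c["coffee_spend"] >= 50,  # already pays for specialty coffee
]

# Exclusion criteria: who will not convert, even if they look close.
EXCLUDE = [
    lambda c: not c["tries_new_roasters"],  # never experiments with roasters
]

def is_bullseye(candidate: dict) -> bool:
    """True only if every inclusion rule passes and no exclusion rule fires."""
    return (all(rule(candidate) for rule in INCLUDE)
            and not any(rule(candidate) for rule in EXCLUDE))

recruits = [c["name"] for c in CANDIDATES if is_bullseye(c)]
print(recruits)  # only Ana survives both screens
```

The point of writing it this way is that the exclusion list does real work: Dev passes every inclusion rule and is still screened out, which is exactly the rigor the exercise demands.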
    If your product doesn’t delight them, it won’t magically resonate with “all coffee drinkers.”

    Actionable Advice for PMs:

    * Write down both inclusion and exclusion criteria for your bullseye.
    * Add triggers: life events that make adoption more likely (e.g., new job, new diagnosis, move to a new city).
    * Recruit five people who fit it exactly. If they’re lukewarm, rethink your product.

    Why Five Interviews Is Enough

    Michael swears by the number five. “After three interviews, you’re not sure if it’s a pattern,” he said. “By five, you hit data saturation. Everyone sees the signal. Any more and the team is begging you to stop so they can make changes.” For PMs under pressure, this is liberating. You don’t need 100 customer calls. You need five of the right customers, observed by the right team members, in a compressed timeframe.

    Multiple Prototypes: Don’t Ask Customers to Imagine

    Another Margolis rule: never show just one prototype. “If you show one, the team gets too attached, and the customer can only react. With three, I can say: compare and contrast. What do you love? What do you hate? I collect the Lego pieces and assemble the next iteration.” Sometimes those prototypes aren’t even original mockups—they’re competitor landing pages. As Michael joked: “Have you tested your competitor’s prototypes? No? Then you’ve left something out.”

    Actionable Advice for PMs:

    * When exploring value props, mock up three different landing pages. Don’t ask “Which do you prefer?” Instead ask: “Which elements matter most, and why?”
    * Treat mild praise as a “no.” Only visceral excitement counts as signal.

    Founders, Stubbornness, and the Henry Ford Trap

    I pressed Michael on what happens when founders dismiss customer feedback by invoking Henry Ford’s famous line about “faster horses.” He smiled. “The beauty of bullseye customers is it forces accountability. If you told me these people are your dream users, and they shrug, then you can’t hand-wave it away. 
Either change your customer definition or your product.” This is a crucial lesson for PMs who work with visionary leaders. Conviction is necessary, but unchecked conviction can sink a product. Anchoring on bullseye customers creates a shared contract that keeps both egos and hypotheses grounded. Bright Spots > Exit Interviews When teams ask him to interview churned customers, Michael often refuses. “There are a bazillion reasons people don’t use something,” he said. “It’s inefficient. Instead, I go find the bright spots—the power users who love it. I want to know why they’re on fire, and then go find more people like them.” This “bright spot” focus helps PMs avoid premature pivots. Instead of chasing every no, double down on the yeses until you understand the common thread. Case Study: Refrigerated Medications and Zipline To illustrate, Michael shared a project with Zipline, the drone-delivery company. They wanted to deliver specialty medications. The core question: was speed or timing more important? Through interviews, the bright spot insight emerged: refrigeration was the killer constraint. Patients didn’t care about “fastest possible” delivery in the abstract. They cared about not leaving refrigerated drugs on their porch. That nuance completely changed the product and infrastructure design. For PMs, the takeaway is that sometimes the decisive factor isn’t the flashy benefit you advertise (“we’re the fastest!”) but a practical detail you only uncover through careful listening. AI and the Future of Research We couldn’t avoid the AI question. Has it changed his process? “I worry about how AI is creating distance between teams and customers,” Michael admitted. “If my bot talks to your bot and spits out a report, you miss the nuance. The power of research is in the stories, the details, the visceral reactions.” That said, he does use AI for quick prototype copywriting and summaries. But he insists on live team observation for the real work.