The Experimentation Edge

GrowthBook

How do product teams decide what to build and what not to? Each episode explores how you can use experimentation, A/B testing, and evidence-based decision-making to ship better features, reduce risk, and drive measurable business impact. Hosted by Ashley Stirrup, CMO at GrowthBook, the show goes beyond theory to unpack real decisions, real experiments, and real outcomes so you can learn how modern product organizations turn hypotheses into results.

Episodes

  1. What UPS Learned Generating Half a Billion From 80+ Apps and One Experimentation Team

    13H AGO

    What UPS Learned Generating Half a Billion From 80+ Apps and One Experimentation Team

    Summary Dave Massey walked into UPS in 2016 and immediately got pulled into a meeting about A/B testing tools. By the end of the day, he owned the platform—and the problem: UPS hadn't run a single meaningful experiment. Three years later, senior leadership gave him a hard number to hit. Prove UX could move revenue, or the pilot dies. His first test—removing navigation from the checkout flow—delivered $35 million in incremental revenue. Senior leaders didn't believe it. They made him defend the results upside down and sideways. When the dust settled, the data held. Today, Massey's team has driven over half a billion dollars in incremental revenue by treating UPS.com like the e-commerce business it actually is. Massey's approach is simple: test everything, especially what senior leaders think will work. His team, Journey Experience and Design Innovation (nicknamed J.E.D.I.), has built a reputation for saying no with data, not opinion. When a business unit demanded required recipient emails to capture customer data, J.E.D.I. ran the test in 24 hours and killed it. Conversion tanked. Two years later, the international team asked for the same feature—but framed it as a customs solution. That test passed. Same feature, different reason, different outcome. That's the edge Massey's team delivers: rigorous hypothesis design, a UX research team embedded in the experimentation workflow, and zero tolerance for untested ideas.
    Timestamps
    03:09 Dave's first day at UPS: inheriting an A/B testing tool with no program
    05:59 Senior leadership's ultimatum: prove UX ROI or kill the pilot
    08:38 First test result: $35M from removing navigation in checkout
    09:48 Defending the numbers: how Massey's team survived scrutiny
    11:07 Why a data-driven engineering culture made experimentation inevitable
    16:12 Team size: 80 people supporting almost 80 customer-facing applications
    19:08 The 24-hour test: when required email fields killed conversion
    22:28 Why Massey embeds UX research inside the experimentation team
    24:41 AI at UPS: treating it as a tool, not a replacement

    Takeaways
    - Massey's first test removed navigation from UPS's shipping checkout flow and delivered $35 million in incremental revenue—proving e-commerce best practices apply even when customers think "this is just a tool, not e-commerce."
    - J.E.D.I.'s win rate stays high because UX research and experimentation teams operate under the same leader, giving the program both behavioral metrics and voice-of-customer insight before tests ever launch.
    - When senior leaders push ideas, Massey's team tests them instead of arguing—then delivers results that either validate the idea or identify three better alternatives the data actually supports.
    - The same feature (required recipient email) failed for customer data capture but passed for international customs—proof that framing and customer benefit matter more than the feature itself.
    - UPS runs everything centrally now, but the real win is that demand for testing has decentralized—business units across the company now come to J.E.D.I. asking to test their ideas.

    Connect with the guest
    LinkedIn: https://www.linkedin.com/in/masseycreates/
    Learn more about UPS: https://www.ups.com

    Sponsor
    GrowthBook helps you ship features with confidence by bringing experimentation and feature flagging into one open-source platform. No more guessing whether that new checkout flow actually moved the needle, waiting weeks for data team bandwidth, or flying blind on rollouts. GrowthBook gives you a single place to run A/B tests, manage feature flags, and analyze results against your existing data warehouse. With powerful stats built in, it takes the complexity out of experimentation, helps you catch regressions before they hit every user, and makes it easy to test ideas that keep your product improving and your metrics moving in the right direction. See a demo at https://www.growthbook.io/
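    Feature-flag platforms like the one described assign users to variants deterministically, so the same visitor always sees the same experience without any stored state. A minimal sketch of that bucketing idea in plain Python (this is an illustration of the general technique, not GrowthBook's actual SDK; the function and experiment names are hypothetical):

```python
import hashlib

def variant(user_id: str, experiment: str, weights=(0.5, 0.5)) -> int:
    """Deterministically bucket a user into a variant.

    Hashing user_id plus the experiment key gives every user a stable
    position in [0, 1); cumulative weight boundaries map that position
    to a variant index, so assignment needs no stored state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    position = int.from_bytes(digest[:8], "big") / 2**64
    cumulative = 0.0
    for i, weight in enumerate(weights):
        cumulative += weight
        if position < cumulative:
            return i
    return len(weights) - 1  # guard against float rounding

# The same user always lands in the same bucket for a given experiment:
assert variant("user-42", "checkout-nav") == variant("user-42", "checkout-nav")

counts = [0, 0]
for u in range(10_000):
    counts[variant(f"user-{u}", "checkout-nav")] += 1
print(counts)  # roughly a 50/50 split
```

    Salting the hash with the experiment key keeps assignments independent across experiments, so a user's bucket in one test doesn't correlate with their bucket in another.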

    23 min
  2. How Experimentation Led to Annual Growth at Fanatics

    4D AGO

    How Experimentation Led to Annual Growth at Fanatics

    Summary Most e-commerce companies test a handful of features each month. Fanatics runs nearly 100 experiments monthly and delivers a big portion of the company's total annual growth through experimentation alone. Medha Umarji, VP of Growth and Experimentation at the multi-billion dollar sports merchandising retailer, explains how she built a program that scales from 10 tests per month to 100—and maintains enough rigor to spot false positives before they become costly decisions. The difference isn't tooling or headcount. It's culture. When your CEO reads Excel spreadsheets for fun and actively wants data to prove him wrong, you stop debating whether to test and start debating how to test smarter. Medha shares the frameworks Fanatics uses to balance speed with rigor: a "do no harm" track for brand plays that won't show up in conversion metrics, a small-sample framework for teams that can't hit statistical significance thresholds, and an experimentation Wiki that feeds a continuous iteration flywheel. One surprising test on ad removal initially showed 95% statistical significance—until they replicated it and found the result was a false positive. The lesson: even at scale, you need to double-click on causality. 
    Timestamps
    03:09 How Fanatics scaled from 10 to 100 experiments per month over 10 years
    05:25 Why some leadership teams embrace experimentation and others resist it
    07:06 How experimentation consistently delivers a big portion of Fanatics' annual growth
    08:20 What happens when your CEO consumes Excel spreadsheets and questions everything
    10:35 How top-down humility shapes an entire company's testing culture
    12:10 The ad removal test that looked like a 95% win—then failed replication
    15:55 How Fanatics built an experimentation Wiki that powers their growth engine
    22:45 The "do no harm" framework for features that don't measure cleanly in A/B tests
    25:20 Why lowering barriers to adoption matters more than statistical perfection early on
    26:27 Your odds of winning at experimentation are worse than roulette

    Takeaways
    - Replication catches false positives: A 95% confidence level still means 1 in 20 results are noise—if a critical test outcome can't be explained through micro-metrics, run it again before committing resources.
    - Top-down buy-in shifts the conversation from "why test?" to "how do we test?": When leadership treats data as the tiebreaker, teams stop defending opinions and start building better experiments.
    - Frameworks like "do no harm" and "small sample" expand who can test: Not every initiative needs 30,000 orders to ship value—lower the barrier for teams that can't hit statistical thresholds while protecting core KPIs.
    - Documenting experiments in a centralized Wiki creates a growth flywheel: Fanatics' Wiki feeds their roadmap with iterations on already-built features, reducing tech dependency and accelerating velocity.
    - Micro-metrics establish causality beyond top-line KPIs: If revenue moves but scroll depth, cart adds, and product views don't follow the same pattern, question the result before declaring a win.

    Connect with the guest
    LinkedIn: https://www.linkedin.com/in/medhaumarji/
    Learn more about Fanatics: https://www.fanatics.com/
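    The replication point is easy to demonstrate: run enough A/A tests (where both arms are identical by construction) and roughly 5% of them will still clear p < 0.05. A quick stdlib-only simulation, not from the episode; sample sizes and rates are illustrative:

```python
import math
import random

def two_proportion_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= z) under the null

random.seed(7)
n, base_rate, runs = 2_000, 0.05, 500
false_positives = 0
for _ in range(runs):
    # A/A test: both arms share the SAME 5% conversion rate, so every
    # "significant" result here is noise by construction.
    a = sum(random.random() < base_rate for _ in range(n))
    b = sum(random.random() < base_rate for _ in range(n))
    if two_proportion_p(a, n, b, n) < 0.05:
        false_positives += 1

print(f"{false_positives / runs:.1%} of no-effect tests hit p < 0.05")
```

    At 100 experiments a month, a 5% false-positive rate means about five spurious "wins" monthly, which is exactly why Fanatics replicates surprising results before committing resources.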

    29 min
  3. Inside Chess.com's Plan to Run 1,000 Experiments in a Single Year

    APR 21

    Inside Chess.com's Plan to Run 1,000 Experiments in a Single Year

    Summary Chess.com ran its first A/B test in 2023. Two years later, the team is on track to run 1,000 experiments in a single year—and they've already shipped 195 in Q1.  In this episode, Ashley Stirrup sits down with Nafis Shaikh, Director of Product Management at Chess.com, to get inside the experimentation engine powering one of the world's most beloved gaming products.  Nafis brings experience from Zynga and Prodigy and a refreshingly honest take on what changes when a product built on passion suddenly has to serve a 10-million-DAU user base that spans absolute beginners to rated FIDE players. He and Ashley get into why one-size-fits-all doesn't actually fit anyone, how to measure an AI coach when you can't tell whether users have their volume on, and a game review experiment that completely upended the team's assumptions about how players want to learn.  Nafis also shares practical advice for product managers trying to introduce experimentation culture to organizations that have never done it before—starting with a simple pre/post test rather than a fancy platform. If you lead product, care about experimentation maturity, or just want to hear how a classic product is scaling its learning loop, this one's worth your time. 
    Timestamps
    [00:35] – Chess.com's experimentation origin story and the 1,000-test goal
    [05:01] – Designing for a user base that spans beginners to FIDE-rated players
    [07:30] – The four metrics dimensions Nafis uses to evaluate tests
    [12:03] – How do you A/B test an AI coach when you can't tell who's listening?
    [15:49] – Embracing humility and the shift away from "we know what works"
    [20:50] – The game review test that surprised everyone: 80% of users review wins
    [24:06] – Advice for PMs introducing experimentation at a new company
    [29:15] – The onboarding debate and personalization from session zero

    Takeaways
    - Scale test volume to learning speed, not just shipping speed
    - Build hypotheses around user psychology, not just KPI movement
    - Accept that being wrong is the point—experimentation only works when leadership embraces humility
    - Start simple if you're new to experimentation; a clean pre/post comparison beats a fancy platform you don't use
    - Reposition features around how users actually feel, not how you assume they should feel
    - Design onboarding around the shortest path to value, not the longest path to personalization

    Guest LinkedIn: https://www.linkedin.com/in/nafis-shaikh-20161916/
    Company website: https://www.chess.com

    32 min
  4. Scaling Experimentation: Ancestry’s VP on AI Storytelling, Paywalls, and Decision Quality

    APR 14

    Scaling Experimentation: Ancestry’s VP on AI Storytelling, Paywalls, and Decision Quality

    Summary What happens when A/B testing stops being a tool and becomes your operating system? Suresh Teckchandani, VP of Product & Technology at Ancestry (formerly PayPal and eBay), shares how the team scaled experimentation from isolated tests to capability-building that drives roadmap and revenue. He details the “growth metering” and paywall experiments that unlocked a 5.3% lift in key engagement and improved conversions—then became platform features. Suresh explains Ancestry’s centralized experimentation platform with self-serve access for PMs and engineers, why “obvious” UX changes can be the riskiest, and how removing friction actually hurt engagement by 20–25% due to user mental models. He also breaks down a major growth lever: AI-powered storytelling that turns raw records into narratives, delivering 30%+ CTR lift and a 5x increase in story views. You’ll learn how Ancestry balances input vs. output metrics, when not to test, and why the best leaders optimize for decision quality over win counts—with clean baselines, right audiences, adequate sample sizes, and true statistical significance.

    Timestamps
    [00:45] – Ancestry’s experimentation maturity: metering, paywalls, and the 5.3% lift that unlocked capabilities
    [03:48] – From isolated tests to a capability mindset: experimentation as an operating system
    [05:34] – Balancing wins with learning: zooming out for subscription engagement and NPS
    [08:49] – Operating model: centralized platform, self-serve dashboards, and baseline resets
    [11:59] – Counterintuitive UX lesson: removing friction backfired (–20–25% CTR); respect mental models
    [15:10] – AI storytelling as a growth lever: record comparisons into narratives, 30%+ CTR and 5x views
    [18:27] – Input vs. output metrics: when to roll back and how to link short- and long-term outcomes
    [30:00] – Parting advice: test what changes CX, avoid vanity testing, and optimize decision quality

    Takeaways
    - Build capabilities, not just tests—use experiments to unlock platform features (e.g., metering, paywalls).
    - Democratize experimentation with a centralized platform and self-serve tooling; reset baselines regularly.
    - Test “obvious” UX changes; preserve helpful friction and align with user mental models.
    - Turn data into narratives with AI to deepen engagement and increase discovery.
    - Define input and output metrics; ship only what improves core outcomes (retention, sign-ups), and roll back fast if not.
    - Optimize for decision quality: right audience, sufficient sample sizes, clean baselines, and true statistical significance.

    Sponsor
    GrowthBook helps you ship features with confidence by bringing experimentation and feature flagging into one open-source platform. See a demo at https://www.growthbook.io/

    25 min
  5. From $1M to $35M ARR: Fyxer’s Growth Engineering Playbook—PLG Loops, AI, and 1,000 Experiments

    APR 2

    From $1M to $35M ARR: Fyxer’s Growth Engineering Playbook—PLG Loops, AI, and 1,000 Experiments

    Summary How do you drive hypergrowth without guessing? Kameron Tanseli, Head of Growth Engineering at Fyxer—an AI assistant for your email—breaks down the experimentation playbook that helped the company scale from $1M to $35M ARR, with sights set on $100–$150M. Kameron explains how startups should think about A/B testing differently: de-risk big bets, not just button colors. He shares a risk-based approach to when to run rigorous tests vs. ship-and-measure, why a 25% win rate is a sign you’re testing ambitiously, and how PLG features should be shipped first, then rapidly iterated to drive usage. You’ll hear how Fyxer uses AI to speed the entire lifecycle—Claude, Cursor desktop cloud agents, GrowthBook, and BigQuery—plus how a Slack-first changelog and an internal “AI data scientist” democratize insights. Kameron also details turning everyday product usage into growth loops, personalizing signup paths, and measuring success by movement in global ARR, not just local metrics. He closes with candid advice for new growth engineers: expect to struggle early, be T-shaped, and adopt your customer’s language.

    Timestamps
    [00:34] – Startup A/B testing mindset: de-risking big bets with only a 25% win rate
    [02:45] – When to A/B test vs. ship: risk appetite, funnel stage, and non-inferiority tests
    [04:43] – 360 experiments with 4 people: scaling to 1,000 using AI and Cursor cloud agents
    [08:22] – Separating feature impact from momentum: PLG and trial model moves ARR
    [10:29] – Ship PLG features, then iterate to drive usage; measuring DAU and revenue impact
    [11:40] – Habit loops to growth loops: turning product features into PLG (scheduling case study)
    [16:47] – Building an experimentation culture: founder buy-in, Slack changelog, shared data
    [26:50] – The modern growth stack: Claude, Cursor, GrowthBook, BigQuery, and DOT in Slack

    Takeaways
    - Prioritize by risk: run rigorous A/B tests where you have volume; use before/after or non-inferiority for low-risk in-product changes.
    - Test big levers—not just UI: pricing models, usage limits, onboarding pathways—and judge success by ARR movement, not micro-metrics.
    - Ship first, then optimize: launch PLG features and immediately run experiments to increase adoption; track daily active usage per feature.
    - Build growth loops from habits: design shareable artifacts and personalized signup paths; drive users back to your domain to capture value.
    - Scale experimentation with AI: use Cursor desktop/cloud agents for parallel builds and visual QA; orchestrate docs/analysis via Claude; automate cleanups and reporting.
    - Make experimentation company-wide: centralize data (BigQuery), broadcast wins/losses in Slack via GrowthBook, and auto-correlate metric dips to releases.
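    The non-inferiority tests mentioned above flip the usual question: instead of asking "is the new variant better?", you ask "is it not worse than control by more than some margin?", which is often all a low-risk change needs to ship. A stdlib-only sketch of that check on conversion rates (an illustration of the general technique, not Fyxer's implementation; the margin and counts are made up):

```python
import math

def non_inferior(conv_c, n_c, conv_t, n_t, margin=0.01, alpha=0.05):
    """One-sided non-inferiority check on two conversion rates.

    H0: the treatment converts worse than control by at least `margin`.
    Rejecting H0 means the treatment is "not meaningfully worse",
    even if it is not strictly better.
    """
    p_c, p_t = conv_c / n_c, conv_t / n_t
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    z = (p_t - p_c + margin) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided: P(Z >= z)
    return p_value < alpha

# Treatment converts slightly below control but within a 1-point margin:
print(non_inferior(520, 10_000, 505, 10_000, margin=0.01))  # → True
# A real 1.2-point drop fails the same check:
print(non_inferior(520, 10_000, 400, 10_000, margin=0.01))  # → False
```

    The margin is a product decision, not a statistical one: it encodes how much degradation you are willing to trade for shipping speed.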

    31 min
  6. Five Pixels That Cost LinkedIn a Million Dollars a Month: What Makram Mansour Learned as an Experimentation Leader at LinkedIn

    APR 2

    Five Pixels That Cost LinkedIn a Million Dollars a Month: What Makram Mansour Learned as an Experimentation Leader at LinkedIn

    Summary How do you build a culture where nothing ships without evidence—and leaders actually act on the data? Makram Mansour, Head of Marketplace at ID.me and former experimentation leader at LinkedIn and Intuit, shares the systems, mindsets, and guardrails behind “experimenting everywhere.” At LinkedIn, he helped support 10,000+ annual experiments with 2,000 weekly platform users, and he explains the hard-earned lessons (like a 5px UI tweak causing a million-dollar ad loss) that led to a “test before release” mandate. At Intuit, he operationalized “fail forward,” partnering with HR to rewrite OKRs so teams are rewarded for learning, not just launching. Makram breaks down why to shift from MVP to MVT (minimum viable test), how to surface leap-of-faith assumptions with PRFAQs and “unit of one” prototypes, and where AI now unlocks faster, safer front-end testing. He also details critical guardrails—cost visibility for AI infrastructure, ethical and inclusion metrics, and the people-process-technology triad—plus practical ways to remove bottlenecks via a center of excellence. If you’re starting from scratch or scaling your program, you’ll learn how to personalize responsibly at the top of the funnel, define your North Star and signposts, and stack early wins while building influence across the org. 
    Timestamps
    [00:45] – Makram’s path: running experimentation at LinkedIn and Intuit, and why nothing ships without an A/B test
    [02:15] – Costly lessons: 5px banner change, algorithm tweaks, and the case for rigorous guardrails
    [06:40] – Leadership discipline: killing features (voice meetups, LinkedIn Stories) and changing OKRs to reward learning
    [11:05] – People, process, technology: top-down and bottom-up tracks, and embedding “fail forward”
    [13:40] – From MVP to MVT: validating leap-of-faith assumptions, PRFAQ, and rapid “unit of one” prototypes
    [15:55] – Bottlenecks and unlocks: engineering/data science capacity, centers of excellence, and AI for fast front-end tests
    [22:45] – Personalization at the top of funnel: avoid waste, design reviews, and right-size testing before building
    [25:45] – Guardrail metrics that matter: AI infra costs, ethics/compliance, and fairness-by-design
    [29:45] – ID.me now: zero-to-one builds, vision-to-values, North Star and leading indicators
    [33:30] – How to start at a new org: crawl-walk-run, small wins, relationships, and over-communication

    Takeaways
    - Shift from MVP to MVT: list leap-of-faith assumptions and design minimum viable tests before you build.
    - Institutionalize learning: align OKRs with “fail forward,” and be willing to kill low-performing features quickly.
    - Build the triad: pair an easy-to-use platform with training, top-down sponsorship, and clear launch processes.
    - Add real guardrails: track AI infrastructure costs, ethics/compliance, and inclusion metrics alongside growth KPIs.
    - Unblock teams: create a center of excellence for data science and enable rapid variants with AI-powered tooling.
    - Start small and visible: rack up quick wins, over-communicate progress, and grow influence through relationships.

    38 min
  7. Shipping Faster, Safely: Truist’s SVP on AI, Developer Experience, and Human-in-the-Loop Banking

    APR 2

    Shipping Faster, Safely: Truist’s SVP on AI, Developer Experience, and Human-in-the-Loop Banking

    How do you boost developer velocity in a highly regulated industry—without sacrificing safety or customer trust? Charles Williams, Senior Vice President and Software Engineering Director at Truist (formed from the BB&T and SunTrust merger), shares how his team elevates developer experience to ship faster and more reliably. Charles breaks down shifting quality “left” with automation, measuring success with both DORA metrics and developer sentiment, and why human-in-the-loop is non-negotiable for AI in finance. He details Truist’s governance model—steering committees, enterprise architecture, and clear guardrails—to avoid tool sprawl while building a purpose-built AI ecosystem: Microsoft Copilot for productivity, GitLab’s AI-enabled DevSecOps platform for engineering, and separate consumer-facing capabilities. Expect practical insights on starting with low-risk, high-yield use cases (unit tests, docs, security triage), tracking AI utilization, and upskilling teams in prompt engineering so developers can “manage” AI agents effectively. Charles also explores the path to personalized experiences balanced with privacy, why branches should be enhanced—not reduced—by AI, and the cultural skills leaders need now: empathy, neurodiversity awareness, and change management. He closes with where AI is driving ROI first—developer onboarding and pipeline productivity—with code quality gains following close behind. 
    Timestamps
    [00:02] – Truist overview and Charles’s mandate: improving developer experience at scale
    [00:56] – AI as a strategic priority; shifting quality left with automation to remove bottlenecks
    [02:19] – Measuring success: DORA metrics plus sentiment—eliminating toil to drive happiness
    [04:33] – Human-in-the-loop AI for high-stakes finance; customer and internal use cases
    [07:25] – How Truist evaluates tools: personas, pain points, and starting with tests, docs, security
    [08:35] – The stack: Microsoft Copilot, GitLab’s AI gateway approach, and tracking utilization
    [10:36] – New skills and culture: prompt engineering, “managing” AI agents, and strong governance
    [20:45] – What’s next: personalization vs privacy, fintech agility + bank stability, and where AI pays off now

    Takeaways
    - Shift quality left with automated checks so developers catch issues early without human gatekeeping.
    - Measure DORA metrics and developer sentiment; remove mundane toil to increase speed and satisfaction.
    - Keep humans in the loop for AI-assisted coding and customer answers—trust but verify in regulated contexts.
    - Build an AI ecosystem with clear purposes (productivity, engineering, consumer) and a steering committee to avoid duplication.
    - Start with low-risk, high-yield AI use cases—unit tests, documentation, and security triage—to build confidence and momentum.
    - Upskill teams in prompt engineering and AI oversight so developers can effectively direct and review AI “agents.”

    28 min
  8. From Chatbots to Open‑World Agents: Microsoft’s Marco Casalaina on Evals, Go‑Live Metrics, and Copilot Velocity

    MAR 17

    From Chatbots to Open‑World Agents: Microsoft’s Marco Casalaina on Evals, Go‑Live Metrics, and Copilot Velocity

    AI is moving so fast that what “good” looked like a few months ago is already outdated. So how do you measure value, ship safely, and scale what works? Marco Casalaina, VP of Products, Core AI and AI Futurist at Microsoft, joins to unpack how his team builds and evaluates next‑gen AI—at hyperspeed. Marco leads the AI Futures team and previously led Azure OpenAI, Azure Cognitive Services, Responsible AI, and AI Studio; before Microsoft, he ran Salesforce Einstein. He explains why enterprise value is best measured by go‑lives and real usage, how Microsoft’s Foundry equips developers with agent‑specific evals (tool call accuracy, task adherence), and why old metrics like “accepted completions” don’t fit modern dev loops. We dig into model routing now productized across model families, orchestration frameworks and the Copilot SDK, shared memory experiments, and the rise of self‑verifying agents that iterate to defined thresholds. Expect concrete examples—from rewriting docs for coding agents to Ralph loops with browser testing—and practical advice for leaders: major in evals, set acceptable error rates by use case, and get hands‑on with the tools daily.

    Timestamps
    [00:45] – Guest intro and Microsoft’s enterprise AI focus
    [02:07] – Measuring value: go‑lives, telemetry thresholds, and token volume
    [03:43] – From chatbots to agents: Foundry evals (tool calls, task adherence) and A/B testing in Microsoft 365 Copilot
    [06:17] – When “good” changes monthly: model routing productized across model families
    [07:23] – Orchestration and Copilot SDK: agents that create their own tools; OpenClaw and shared memory experiments
    [11:45] – Engagement redefined: coding agents read your docs; writing for agents vs. humans
    [14:55] – New dev loops: why accepted completions died; Ralph loops and self‑verifying builds
    [17:06] – Evals in practice and guardrails: thresholds, non‑determinism, and out‑of‑domain tests; how to keep up without burning out

    Takeaways
    - Measure value by go‑lives and real usage (token volume), not time in portals or playgrounds.
    - Evolve evals for agents: track tool call accuracy/success and task completion/adherence; A/B test models and strategies.
    - Productize adaptability with model routing to match tasks to the right model family as capabilities shift.
    - Build self‑verification into workflows: pair agents with automated testing (e.g., browser runners) and iterate to thresholds, not perfection.
    - Write for agents as readers: tighten documentation, ship vetted code samples, and monitor bot traffic patterns.
    - Guardrail open‑world agents: add out‑of‑domain evals and explicit capability limits; set acceptable error rates based on the stakes of your use case.
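    To make "tool call accuracy" concrete: an agent eval of this kind compares the tool calls an agent actually made against the calls a reference trace expects. The episode names the metric but not an implementation, so this is a deliberately toy scoring rule with hypothetical tool names, not Foundry's actual eval:

```python
def tool_call_accuracy(expected, actual):
    """Fraction of expected tool calls the agent actually made
    (order-insensitive; a deliberately simple scoring rule)."""
    if not expected:
        return 1.0
    hits = sum(1 for call in expected if call in actual)
    return hits / len(expected)

# Hypothetical trace: the agent searched correctly but never booked.
expected = [("search_flights", "SFO->JFK"), ("book_flight", "UA123")]
actual = [("search_flights", "SFO->JFK"), ("search_hotels", "JFK")]
print(tool_call_accuracy(expected, actual))  # → 0.5
```

    Real agent evals layer more on top: argument matching with tolerances, penalties for spurious calls, and task-adherence judgments over the whole trajectory rather than per call.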

    28 min
  9. Experimentation at Scale: Upwork’s VP of Engineering on Blast Radius, CPQI, and AI-Driven Ops

    MAR 11

    Experimentation at Scale: Upwork’s VP of Engineering on Blast Radius, CPQI, and AI-Driven Ops

    Summary What do you test rigorously—and what do you ship fast and fix forward—when every change could impact millions? Vinoj Kumar, Vice President of Engineering at Upwork, leads at the intersection of infrastructure and product, where feedback loops are longer and the blast radius is wider. He shares a pragmatic framework for experimentation—blast radius x reversibility—that sets testing rigor, plus how he measures success in product terms: faster search, resilient marketplace trust, and developer velocity. Vinoj explains why “high engagement” can mask low-quality experiences, how his team instrumented an internal NL chatbot with turns-to-success and downstream signals (like fewer JIRA tickets), and how a composite metric—cost per quality inference (CPQI)—aligns finance, engineering, and data science by uniting cloud costs, performance, and model accuracy. He details where AI is already paying off (build pipelines, incident detection, testing), how to monitor model drift post-launch, and why some wins on paper must be killed in production to protect trust—like a high-hit-rate caching project that surfaced stale profile data. Expect concrete practices: shadow traffic, slow canaries, synthetic staging that mirrors reality, feature flags, LLMs-as-judges, and the mindset to tie infrastructure to business outcomes.

    Timestamps
    [00:45] – Guest intro: Infrastructure meets product—and why experimentation looks different
    [01:36] – Deciding what to test: blast radius x reversibility; canaries, shadow traffic, ship-and-monitor
    [03:09] – Defining “good”: internal dev metrics vs. marketplace outcomes—and when engagement lies
    [06:23] – Case study: “Talk to Data” chatbot—thumbs, turns-to-success, and reduced JIRA tickets
    [09:45] – CPQI: a composite metric for cost, performance, and model quality that breaks silos
    [16:55] – AI in engineering: build-time gains, MTTR/MTTD, agentic testing, and drift monitoring
    [24:06] – The caching miss: 92% hit rate, stale data, trust risks—and what to do instead
    [29:12] – Career advice: balance stability with bold experiments; always link infra to business value

    Takeaways
    - Decide testing rigor with blast radius x reversibility; reserve heavy testing for irreversible, high-impact systems.
    - Measure quality by efficiency and success ratio—not raw clicks or query counts.
    - Instrument NL tools with “turns to success” and track downstream impact (e.g., fewer ad hoc data tickets).
    - Build composite metrics (e.g., CPQI) to align finance, engineering, and data science around shared outcomes.
    - Use AI to accelerate builds, detect incidents sooner, and evaluate models; watch MTTR and MTTD.
    - Treat ML features as living systems: feature-flag rollouts, realistic staging, drift monitoring, and LLM-as-judge evaluations—and be willing to kill “wins” that erode trust.
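    The show notes describe what CPQI unites (cloud cost, performance, model accuracy) but not its exact formula, so here is one plausible minimal composition, purely illustrative: divide spend by the count of inferences that passed a quality bar, rather than by all inferences. All names and numbers below are made up for the sketch:

```python
from dataclasses import dataclass

@dataclass
class InferenceWindow:
    cloud_cost_usd: float  # total cloud spend over the window
    inferences: int        # model calls served
    quality_pass: int      # calls that met the quality bar (e.g., an LLM-as-judge)

def cpqi(w: InferenceWindow) -> float:
    """Cost per *quality* inference: spend divided by the count of
    inferences that actually passed quality checks. Cost cuts that
    degrade quality push this number up, so no silo can game it alone."""
    if w.quality_pass == 0:
        return float("inf")
    return w.cloud_cost_usd / w.quality_pass

before = InferenceWindow(cloud_cost_usd=12_000, inferences=1_000_000, quality_pass=920_000)
after = InferenceWindow(cloud_cost_usd=9_000, inferences=1_000_000, quality_pass=600_000)
print(cpqi(before) < cpqi(after))  # cheaper run, but worse cost per quality inference → True
```

    The point of a composite like this is alignment: finance sees cost, engineering sees throughput, and data science sees accuracy, all moving one shared number.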

    31 min
  10. From Spreadsheets to AI: How Moxie Pest Control Boosted Conversions 5% with Data and Call Coaching

    MAR 5

    From Spreadsheets to AI: How Moxie Pest Control Boosted Conversions 5% with Data and Call Coaching

    Think pest control isn’t a digital business? Think again. Raj Mehta, Vice President of Product and Technology at Moxie Pest Control, outlines how he turned a spreadsheet-run operation into a data-driven engine across 9,000+ daily calls. Raj shares how consolidating fragmented systems into a data lake unlocked automation—from shrinking lead routing from 20–25 minutes to under 30 seconds—to deploying AI-powered call intelligence that scores every sales and retention conversation against a playbook. He breaks down a practical roadmap for traditional businesses: build MVPs, pilot in one branch with a trained feedback team, iterate fast, then scale. You’ll hear how he positioned AI as a growth amplifier (not a job cutter), the difference between deterministic automation and LLM use cases, and the measurable impact: a 5% lift in conversion that compounds in a recurring-revenue model. Plus, Raj’s concise advice for leaders bringing AI into operations without breaking trust or momentum.

    Timestamps
    [00:45] – Guest intro: Raj Mehta, Moxie’s tech transformation, and 9,000+ daily calls
    [01:20] – Starting point: 90% of ops in spreadsheets; why a data lake became the foundation
    [02:45] – Automating time-to-lead: 25 minutes to 30 seconds and a 5% conversion lift
    [04:27] – Roadmap design: MVPs, single-branch pilots, and scaling what works
    [06:05] – Culture building: framing AI as growth and upskilling, not headcount cuts
    [07:34] – Two lanes of automation: deterministic scripts vs. LLM-driven workflows
    [08:45] – Call intelligence: scoring every sales/retention call and coaching at scale
    [14:05] – Impact and advice: recurring revenue compounding and Raj’s playbook for getting started

    Takeaways
    - Build a single source of truth (data lake) to power automation and AI reliably.
    - Cut time-to-lead with workflow automation and track the downstream impact on conversion.
    - Pilot in one branch with a trained “feedback team,” iterate, then roll out—don’t scale too soon.
    - Position AI as a growth multiplier; retain and upskill top performers to shape the culture.
    - Separate deterministic automation from LLM use cases; do deep discovery with frontline teams.
    - Use AI call intelligence to score every call against your playbook, surface coaching themes, and save manager time.

    17 min
  11. Stop Running Experiments, Start Earning Them

    FEB 24

    Stop Running Experiments, Start Earning Them

    If 80% of A/B tests fail, how do you de-risk decisions that touch pricing, product, and brand? Aleksandra (Aleks) Bass, Chief Product & Technology Officer at Typeform, shares how her team “earns the right to A/B test” with medical-grade rigor, moving from literature reviews and user tests to simulated trials before exposing changes to customers. She details Typeform’s repositioning from “forms” to an AI engagement platform, and the pricing and packaging bet behind it: a 15% drop in new-business count offset by a 32% increase in ASP and a 25% lift in annual attach. Aleks unpacks how Typeform AI acts as a co-pilot that doubled activation and boosted one-day conversion, plus the design shifts (CTA altitude and onboarding) that increased adoption. She also reveals why they moved video features down-tier and what a head-to-head test showed: video interviewers generated 14x more words and 10x fewer skipped questions with comparable completion time. Finally, Aleks breaks down the cultural side—eliminating “anti-knowledge,” standardizing experiment design, and creating a cross-functional review that prevents false learnings—along with how her data engineering team evaluates LLMs for quality, latency, and trust.

    Timestamps
    [00:45] – Rethinking experimentation: “earn the right to A/B test” with staged rigor
    [03:31] – Pricing and packaging shift: from forms to flows, ASP up 32%, annual attach up 25%
    [05:34] – Typeform AI as a co-pilot: doubling activation and lifting one-day conversion
    [07:27] – Adoption lessons: elevating AI CTAs and reducing friction to use
    [10:02] – Behind the scenes: model selection, quality bars, and why MVP can backfire in AI
    [12:40] – Moving video down-tier: demand signals, cannibalization checks, and net gains
    [14:52] – Video vs. standard forms: 14x more words, 10x fewer skips, similar completion time
    [20:22] – Building an experimentation culture: process resistance, “anti-knowledge,” and cross-functional review
    [30:16] – Leader playbook: visibility, empathy, and incentives for rigorous testing

    Takeaways
    - Implement a staged experimentation funnel—discovery, simulation, then customer A/B—to reduce risk.
    - Use pricing experiments to trade volume for revenue quality; pair higher monthly prices with stronger annual discounts to grow annual attach.
    - Treat AI as an activation lever: elevate AI-first CTAs and streamline onboarding to boost adoption.
    - Add video interviewer options to increase response richness (14x more words) while keeping completion rates steady.
    - Enforce experiment hygiene: change one variable at a time, randomize at the right unit (account vs. user), and run long enough to detect the effect size.
    - Purge “anti-knowledge” by standardizing design, instituting cross-functional reviews, and only codifying learnings supported by repeatable data.

    32 min
