Summary

How do you build a culture where nothing ships without evidence, and where leaders actually act on the data? Makram Mansour, Head of Marketplace at ID.me and former experimentation leader at LinkedIn and Intuit, shares the systems, mindsets, and guardrails behind “experimenting everywhere.” At LinkedIn, he helped support 10,000+ annual experiments with 2,000 weekly platform users, and he explains the hard-earned lessons (like a 5px UI tweak causing a million-dollar ad loss) that led to a “test before release” mandate. At Intuit, he operationalized “fail forward,” partnering with HR to rewrite OKRs so teams are rewarded for learning, not just launching.

Makram breaks down why to shift from MVP to MVT (minimum viable test), how to surface leap-of-faith assumptions with PRFAQs and “unit of one” prototypes, and where AI now unlocks faster, safer front-end testing. He also details critical guardrails (cost visibility for AI infrastructure, ethical and inclusion metrics, and the people-process-technology triad) plus practical ways to remove bottlenecks via a center of excellence. Whether you’re starting from scratch or scaling an existing program, you’ll learn how to personalize responsibly at the top of the funnel, define your North Star and signposts, and stack early wins while building influence across the org.

Timestamps

[00:45] – Makram’s path: running experimentation at LinkedIn and Intuit, and why nothing ships without an A/B test
[02:15] – Costly lessons: a 5px banner change, algorithm tweaks, and the case for rigorous guardrails
[06:40] – Leadership discipline: killing features (voice meetups, LinkedIn Stories) and changing OKRs to reward learning
[11:05] – People, process, technology: top-down and bottom-up tracks, and embedding “fail forward”
[13:40] – From MVP to MVT: validating leap-of-faith assumptions, PRFAQs, and rapid “unit of one” prototypes
[15:55] – Bottlenecks and unlocks: engineering/data science capacity, centers of excellence, and AI for fast front-end tests
[22:45] – Personalization at the top of the funnel: avoiding waste, design reviews, and right-sizing tests before building
[25:45] – Guardrail metrics that matter: AI infra costs, ethics/compliance, and fairness by design
[29:45] – ID.me now: zero-to-one builds, vision-to-values, North Star and leading indicators
[33:30] – How to start at a new org: crawl-walk-run, small wins, relationships, and over-communication

Takeaways

- Shift from MVP to MVT: list leap-of-faith assumptions and design minimum viable tests before you build.
- Institutionalize learning: align OKRs with “fail forward,” and be willing to kill low-performing features quickly.
- Build the triad: pair an easy-to-use platform with training, top-down sponsorship, and clear launch processes.
- Add real guardrails: track AI infrastructure costs, ethics/compliance, and inclusion metrics alongside growth KPIs.
- Unblock teams: create a center of excellence for data science and enable rapid variants with AI-powered tooling.
- Start small and visible: rack up quick wins, over-communicate progress, and grow influence through relationships.