The MAD Podcast with Matt Turck

Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, leading AI & data investor and Partner at FirstMark Capital.

  1. August 21

    How to Build a Beloved AI Product - Granola CEO Chris Pedregal

    Granola is the rare AI startup that slipped into one of tech’s most crowded niches — meeting notes — and still managed to become the product founders and VCs rave about. In this episode, MAD Podcast host Matt Turck sits down with Granola co-founder & CEO Chris Pedregal to unpack how a two-person team in London turned a simple “second brain” idea into Silicon Valley’s favorite AI tool. Chris recounts a year in stealth onboarding users one by one, the 50% feature cut that unlocked simplicity, and why they refused to deploy a meeting bot or store audio even when investors said they were crazy. We go deep on the craft of building a beloved AI product: choosing meetings (not email) as the data wedge, designing calendar-triggered habit loops, and obsessing over privacy so users trust the tool enough to outsource memory. Chris opens the hood on Granola’s tech stack — real-time ASR from Deepgram & AssemblyAI, echo cancellation on-device, and dynamic routing across OpenAI, Anthropic and Google models — and explains why transcription, not LLM tokens, is the biggest cost driver today. He also reveals how internal eval tooling lets the team swap models overnight without breaking the “Granola voice.” Looking ahead, Chris shares a roadmap that moves beyond notes toward a true “tool for thought”: cross-meeting insights in seconds, dynamic documents that update themselves, and eventually an AI coach that flags blind spots in your work. Whether you’re an engineer, designer, or founder figuring out your own AI strategy, this conversation is a masterclass in nailing product-market fit, trimming complexity, and future-proofing for the rapid advances still to come. Hit play, like, and subscribe if you’re ready to learn how to build AI products people can’t live without.
Granola Website - https://www.granola.ai X/Twitter - https://x.com/meetgranola Chris Pedregal LinkedIn - https://www.linkedin.com/in/pedregal X/Twitter - https://x.com/cjpedregal FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Introduction: The Granola Story (01:41) Building a "Life-Changing" Product (04:31) The "Second Brain" Vision (06:28) Augmentation Philosophy (Engelbart), Tools That Shape Us (09:02) Late to a Crowded Market: Why it Worked (13:43) Two Product Founders, Zero ML PhDs (16:01) London vs. SF: Building Outside the Valley (19:51) One Year in Stealth: Learning Before Launch (22:40) "Building For Us" & Finding First Users (25:41) Key Design Choices: No Meeting Bot, No Stored Audio (29:24) Simplicity is Hard: Cutting 50% of Features (32:54) Intuition vs. Data in Making Product Decisions (36:25) Continuous User Conversations: 4–6 Calls/Week (38:06) Prioritizing the Future: Build for Tomorrow's Workflows (40:17) Tech Stack Tour: Model Routing & Evals (42:29) Context Windows, Costs & Inference Economics (45:03) Audio Stack: Transcription, Noise Cancellation & Diarization Limits (48:27) Guardrails & Citations: Building Trust in AI (50:00) Growth Loops Without Virality Hacks (54:54) Enterprise Compliance, Data Footprint & Liability Risk (57:07) Retention & Habit Formation: The "500 Millisecond Window" (58:43) Competing with OpenAI and Legacy Suites (01:01:27) The Future: Deep Research Across Meetings & Roadmap (01:04:41) Granola as Career Coach?

    1 hr 8 min
  2. August 7

    Anthropic's Surprise Hit: How Claude Code Became an AI Coding Powerhouse

    What happens when an internal hack turns into a $400 million AI rocket ship? In this episode, Matt Turck sits down with Boris Cherny, the creator of Claude Code at Anthropic, to unpack the wild story behind the fastest-growing AI coding tool on the planet. Boris reveals how Claude Code started as a personal productivity tool, only to become Anthropic’s secret weapon — now used by nearly every engineer at the company and rapidly spreading across the industry. You’ll hear how Claude Code’s “agentic” approach lets AI not just suggest code, but actually plan, edit, debug, and even manage entire projects — sometimes with a whole fleet of subagents working in parallel. We go deep on why Claude Code runs in the terminal (and why that’s a feature, not a bug), how its CLAUDE.md memory files let teams build a living, shareable knowledge base, and why safety and human-in-the-loop controls are baked into every action. Boris shares real stories of onboarding times dropping from weeks to days, and how even non-coders are hacking Claude Code for everything from note-taking to business metrics. Anthropic Website - https://www.anthropic.com X/Twitter - https://x.com/AnthropicAI Boris Cherny LinkedIn - https://www.linkedin.com/in/bcherny X/Twitter - https://x.com/bcherny FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:15) Did You Expect Claude Code’s Success? (04:22) How Claude Code Works and Origins (08:05) Command Line vs IDE: Why Start Claude Code in the Terminal? (11:31) The Evolution of Programming: From Punch Cards to Agents (13:20) Product Follows Model: Simple Interfaces and Fast Evolution (15:17) Who Is Claude Code For? (Engineers, Designers, PMs & More) (17:46) What Can Claude Code Actually Do? (Actions & Capabilities) (21:14) Agentic Actions, Subagents, and Workflows (25:30) Claude Code’s Awareness, Memory, and Knowledge Sharing (33:28) Model Context Protocol (MCP) and Customization (35:30) Safety, Human Oversight, and Enterprise Considerations (38:10) UX/UI: Making Claude Code Useful and Enjoyable (40:44) Pricing for Power Users and Subscription Models (43:36) Real-World Use Cases: Debugging, Testing, and More (46:44) How Does Claude Code Transform Onboarding? (49:36) The Future of Coding: Agents, Teams, and Collaboration (54:11) The AI Coding Wars: Competition & Ecosystem (57:27) The Future of Coding as a Profession (58:41) What’s Next for Claude Code

    1 hr
  3. July 17

    Ex‑DeepMind Researcher Misha Laskin on Enterprise Super‑Intelligence | Reflection AI

    What if your company had a digital brain that never forgot, always knew the answer, and could instantly tap the knowledge of your best engineers, even after they left? Superintelligence can feel like a hand-wavy pipe dream — yet, as Misha Laskin argues, it becomes a tractable engineering problem once you scope it to the enterprise level. Former DeepMind researcher Laskin is betting on an oracle-like AI that grasps every repo, Jira ticket and hallway aside as deeply as your principal engineer — and he’s building it at Reflection AI. In this wide-ranging conversation, Misha explains why coding is the fastest on-ramp to superintelligence, how “organizational” beats “general” when real work is on the line, and why today’s retrieval-augmented generation (RAG) feels like “exploring a jungle with a flashlight.” He walks us through Asimov, Reflection’s newly unveiled code-research agent that fuses long-context search, team-wide memory and multi-agent planning so developers spend less time spelunking for context and more time shipping. We also rewind his unlikely journey — from physics prodigy in a Manhattan-Project desert town, to Berkeley’s AI crucible, to leading RLHF for Google Gemini — before he left big-lab comfort to chase a sharper vision of enterprise superintelligence. Along the way: the four breakthroughs that unlocked modern AI, why capital efficiency still matters in the GPU arms race, and how small teams can lure top talent away from nine-figure offers. If you’re curious about the next phase of AI agents, the future of developer tooling, or the gritty realities of scaling a frontier-level startup — this episode is your blueprint.
Reflection AI Website - https://reflection.ai LinkedIn - https://www.linkedin.com/company/reflectionai Misha Laskin LinkedIn - https://www.linkedin.com/in/mishalaskin X/Twitter - https://x.com/mishalaskin FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:42) Reflection AI: Company Origins and Mission (04:14) Making Superintelligence Concrete (06:04) Superintelligence vs. AGI: Why the Goalposts Moved (07:55) Organizational Superintelligence as an Oracle (12:05) Coding as the Shortcut: Hands, Legs & Brain for AI (16:00) Building the Context Engine (20:55) Capturing Tribal Knowledge in Organizations (26:31) Introducing Asimov: A Deep Code Research Agent (28:44) Team-Wide Memory: Preserving Institutional Knowledge (33:07) Multi-Agent Design for Deep Code Understanding (34:48) Data Retrieval and Integration in Asimov (38:13) Enterprise-Ready: VPC and On-Prem Deployments (39:41) Reinforcement Learning in Asimov's Development (41:04) Misha's Journey: From Physics to AI (42:06) Growing Up in a Science-Driven Desert Town (53:03) Building General Agents at DeepMind (56:57) Founding Reflection AI After DeepMind (58:54) Product-Driven Superintelligence: Why It Matters (01:02:22) The State of Autonomous Coding Agents (01:04:26) What's Next for Reflection AI

    1 hr 6 min
  4. July 10

    The Rise of Agentic Commerce — Emily Glassberg Sands (Stripe)

    Agentic commerce is no longer science fiction — it’s arriving in your browser, your development IDE, and soon, your bank statement. In this episode of The MAD Podcast, Matt Turck sits down with Emily Glassberg Sands, Stripe’s Head of Information, to explore how autonomous “buying bots” and the Model Context Protocol (MCP) are reshaping the very mechanics of online transactions. Emily explains why intent, not clicks, will become the primary interface for shopping and how Stripe’s rails are adapting for tokens, one-time virtual cards, and real-time risk scoring that can tell good bots from bad ones in milliseconds. We also go deep into Stripe's strategic AI choices. Drawing on $1.4 trillion in annual payment flow — 1.3 percent of global GDP — Stripe decided to train its own payments foundation model, turning tens of billions of historical charges into embeddings that boost fraud-catch recall from 59 percent to 97 percent. Emily walks us through the tech: why they chose a BERT encoder over GPT-style decoders, how three MLEs in a “research bubble” birthed the model, and what it takes to run it in production with five-nines reliability and tight latency budgets. We zoom out to Stripe’s unique vantage point on the broader AI economy. Their data shows the top AI startups hitting $30 million in ARR three times faster than the fastest SaaS companies did a decade ago, with more than half of that revenue already coming from overseas markets. Emily unpacks the new billing playbook — usage-based pricing today, outcome-based pricing tomorrow — and explains why tiny teams of 20–30 people can now build global, vertically focused AI businesses almost overnight. Stripe Website - https://stripe.com X/Twitter - https://x.com/stripe
Emily Glassberg Sands LinkedIn - https://www.linkedin.com/in/egsands X/Twitter - https://x.com/emilygsands FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:45) How Big Is Stripe? Latest Stats Revealed (04:06) What Does “Head of Information” at Stripe Actually Do? (05:43) From Harvard to Stripe: Emily’s Unusual Journey (08:54) Why Stripe Built Its Own Foundation Model (13:19) Cracking the Code: How Stripe Handles Complex Payment Data (16:25) Foundation Model vs. Traditional ML: What’s Winning? (20:09) Inside Stripe’s Foundation Model: How It Was Built (24:35) How Stripe Makes AI Decisions Transparent (28:38) Where Stripe Uses AI (And Where It Doesn’t) (34:10) How Stripe’s AI Drives Revenue for Businesses (41:22) Real-Time Fraud Detection: Stripe’s Secret Sauce (42:51) The Future of Shopping: AI Agents & Agentic Commerce (46:20) How Agentic Commerce Is Changing Stripe (49:36) Stripe’s Vision for a World of AI-Powered Buyers (55:46) What Is MCP? Stripe’s Take on Agent-to-Agent Protocols (59:31) Stripe’s Data on AI Startups Monetizing 3× Faster (01:03:03) How AI Companies Go Global — From Day One (01:07:48) The New Rules: Billing & Pricing for AI Startups (01:10:57) How Stripe Builds AI Literacy Across the Company (01:14:05) Roadmap: Risk-as-a-Service, Order Intent, and Beyond

    1 hr 15 min
  5. July 3

    AI Engineering Revolution: Winners, Chaos & What’s Next | FirstMark

    Welcome to a special FirstMark Deep Dive edition of the MAD Podcast. In this episode, Matt Turck and David Waltcher unpack the explosive impact of generative AI on engineering — hands-down the biggest shift the field has seen in decades. You’ll get a front-row seat to the real numbers and stories behind the AI code revolution, including how companies like Cursor hit a $500M valuation in record time, and why GitHub Copilot now serves 15 million developers. Matt and David break down the six trends that shaped the last 20 years of developer tools, and reveal why coding is the #1 use case for generative AI (hint: it’s all about public data, structure, and ROI). You’ll hear how AI is making engineering teams 30-50% faster, but also why this speed is breaking traditional DevOps, overwhelming QA, and turning top engineers into full-time code reviewers. We get specific: 82% of engineers are already using AI to write code, but this surge is creating new security vulnerabilities, reliability issues, and a total rethink of team roles. You’ll learn why code review and prompt engineering are now the most valuable skills, and why computer science grads are suddenly facing some of the highest unemployment rates. We also draw wild historical parallels—from the Gutenberg Press to the Ford assembly line—to show how every productivity boom creates new problems and entire industries to solve them. Plus: what CTOs need to know about hiring, governance, and architecture in the AI era, and why being “AI native” can make a startup more credible than a 10-year-old giant. 
Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck David Waltcher LinkedIn - https://www.linkedin.com/in/davidwaltcher X/Twitter - https://x.com/davidwaltcher FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap (00:00) Intro & episode setup (01:50) The 6 waves that led to GenAI engineering (04:30) Why coding is such fertile ground for Generative AI (08:25) Break-out dev-tool winners: Cursor, Copilot, Replit, V0 (11:25) Early stats: Teams Are Shipping Code Faster with AI (13:32) Copilots vs Autonomous Agents: The Current Reality (14:14) Lessons from History: Every Tech Boom Creates New Problems (21:53) FirstMark Survey: The Headaches AI Is Creating for Developers (22:53) What’s Now Breaking: Security, CI/CD flakes, QA Overload (29:16) The New CTO Playbook to Adapt to the AI Revolution (33:23) What Happens to Engineering Orgs if Everyone is a Coder? (40:19) Founder opportunities & the dev-tool halo effect (44:24) The Built-in Credibility of AI-Native Startups (46:16) The Irony of Dev Tools As Biggest Winners in the AI Gold Rush (47:43) What’s Next for AI and Engineering?

    50 min
  6. June 26

    Guillermo Rauch: Why Software Development Will Never Be the Same

    In this episode, Vercel CEO Guillermo Rauch goes deep on how V0, their text-to-app platform, has already generated over 100 million applications and doubled Vercel’s user base in under a year. Guillermo reveals how a tiny SWAT team inside Vercel built V0 from scratch, why “vibe coding” is making software creation accessible to everyone (not just engineers), and how the AI Cloud is automating DevOps, making cloud infrastructure self-healing, and letting companies expose their data to AI agents in just five lines of code. You’ll hear why “every company will have to rethink itself as a token factory,” how Vercel’s Next.js went from a conference joke to powering Walmart, Nike, and Midjourney, and why the next billion app creators might not write a single line of code. Guillermo breaks down the difference between vibe coding and agentic engineering, shares wild stories of users building apps from napkin sketches, and explains how Vercel is infusing “taste” and best practices directly into their AI models. We also dig into the business side: how Vercel’s AI-powered products are driving explosive growth, why retention and margins are strong, and how the company is adapting to a new wave of non-technical users. Plus: the future of MCP servers, the security challenges of agent-to-agent communication, and why prompting and AI literacy are now must-have skills. Vercel Website - https://vercel.com X/Twitter - https://x.com/vercel Guillermo Rauch LinkedIn - https://www.linkedin.com/in/rauchg X/Twitter - https://x.com/rauchg FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (02:08) What Is V0 and Why Did It Take Off So Fast? (04:10) How Did a Tiny Team Build V0 So Quickly? (07:51) V0 vs Other AI Coding Tools (10:35) What is Vibe Coding? (17:05) Is V0 Just Frontend? Moving Toward Full Stack and Integrations (19:40) What Skills Make a Great Vibe Coder? (23:35) Vibe Coding as the GUI for AI: The Future of Interfaces (29:46) Developer Love = Agent Love (33:41) Having Taste as a Developer (39:10) MCP Servers: The New Protocol for AI-to-AI Communication (43:11) Security, Observability, and the Risks of the Agentic Web (45:25) Are Enterprises Ready for the Agentic Future? (49:42) Closing the Feedback Loop: Customer Service and Product Evolution (56:06) The Vercel AI Cloud: From Pixels to Tokens (01:10:14) How Vercel Adapts to the ICP Change (01:13:47) Retention, Margins, and the Business of AI Products (01:16:51) The Secret Behind Vercel’s Growth Last Year (01:24:15) The Importance of Online Presence (01:30:49) Everything, Everywhere, All at Once: Being CEO 101 (01:34:59) Guillermo’s Advice to His Younger Self

    1 hr 46 min
  7. June 20

    Inside Canva’s $3B ARR AI Design Rocketship — CTO Brendan Humphreys on Magic Studio & Canva Code

    Canva just announced $3 billion in ARR, 230 million monthly active users, and 24 million paying subscribers—including 95% of the Fortune 500. Even more impressive? They’ve been profitable for seven years while growing at 40–50% per year. In this episode, Canva’s Head of Engineering, Brendan Humphreys, reveals how he went from employee #12 to leading 2,300 engineers across continents, and why Canva’s “pragmatic excellence” lets them ship AI features at breakneck speed—like launching Canva Code to 100 million users in just three months. Brendan shares the story of Canva’s AI journey: building an in-house ML team back in 2017, acquiring visual AI startups like Kaleido and Leonardo AI, and why they use a hybrid of OpenAI, Anthropic, Google, and their own foundation models. He explains how Canva’s App Store gives niche AI startups instant access to millions, and why their $200M Creator Fund is designed to reward contributors in the AI era. You’ll also hear how AI tools like Copilot are making Canva’s senior engineers 30% more productive, why “vibe coding” isn’t ready for prime time, and the unique challenges of onboarding junior engineers in an AI-driven world. We also dig into Canva’s approach to technical debt, scaling from 12 to 5,000 employees, and why empathy is a core engineering skill at Canva. 
Canva Website - https://www.canva.com X/Twitter - https://x.com/canva Brendan Humphreys LinkedIn - https://www.linkedin.com/in/brendanhumphreys X/Twitter - https://x.com/brendanh FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:14) Canva’s Mind-Blowing Growth and Profitable Journey (03:41) Why Brendan Left Atlassian to Join a Tiny Startup (06:17) What Being a Founder Taught Brendan About Leadership (07:24) Growing with Canva: From 12 Employees to 2,300 Engineers (10:02) How Canva Runs a Global Team from Sydney to Europe (13:16) Is AI a Threat or a Superpower for Canva? (15:22) The Real Story Behind Canva’s AI and Machine Learning Team (17:23) How Canva Ships New AI Features So Fast (19:19) A Tour of Canva’s Latest AI-Powered Products (21:03) From Design Tool to All-in-One Productivity Platform (26:21) Keeping Up the Pace: How Canva Moves So Quickly (30:22) The Future: AI Agents, Copilots, and Smarter Workflows (33:14) How AI Tools Are Changing the Way Engineers Work (35:47) Rethinking Hiring and Training in the Age of AI (37:01) Why Empathy Matters in Engineering at Canva (39:41) Building vs. Buying: How Canva Chooses Its AI Tech (41:23) Lessons Learned: Technical Debt and Scaling Pains (51:18) Shipping Fast Without Breaking Things (53:08) What’s Next: AI Video, New Features, and Big Ambitions

    57 min
  8. June 12

    GitHub CEO: The AI Coding Gold Rush, Vibe Coding & Cursor

    AI coding is in full-blown gold-rush mode, and GitHub sits at the epicenter. In this episode, GitHub CEO Thomas Dohmke tells Matt Turck how a $7.5B acquisition in 2018 became a $2B ARR rocket ship, and reveals how Copilot was born from a secret AI strategy years before anyone else saw the opportunity. We dig into the dizzying pace of AI innovation: why developer tools are suddenly the fastest-growing startups in history, how GitHub’s multi-model approach (OpenAI, Anthropic Claude 4, Gemini 2.5, and even local LLMs) gives you more choice and speed, and why fine-tuning models might be overrated. Thomas explains how Copilot keeps you in the “magic flow state,” and how even middle schoolers are using it to hack Minecraft. The conversation then zooms out to the competitive battlefield: Cursor’s $10B valuation, Mistral’s new code model, and a wave of AI-native IDE forks vying for developer mind-share. We discuss why 2025’s “coding agents” could soon handle 90% of the world’s code, the survival of SaaS, and why the future of coding is about managing agents, not just writing code.
GitHub Website - https://github.com/ X/Twitter - https://x.com/github Thomas Dohmke LinkedIn - https://www.linkedin.com/in/ashtom X/Twitter - https://twitter.com/ashtom FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:50) Why AI Coding Is Ground Zero for Generative AI (02:40) The $7.5B GitHub Acquisition: Microsoft’s Strategic Play (06:21) GitHub’s Role in the Azure Cloud Ecosystem (10:25) How GitHub Copilot Beat Everyone to Market (16:09) Copilot & VS Code Explained for Non-Developers (21:02) GitHub Models: Multi-Model Choice and What It Means (25:31) The Reality of Fine-Tuning AI Models for Enterprise (29:13) The Dizzying Pace and Political Economy of AI Coding Tools (36:58) Competing and Partnering: Microsoft’s Unique AI Strategy (41:29) Does Microsoft Limit Copilot’s AI-Native Potential? (46:44) The Bull and Bear Case for AI-Native IDEs Like Cursor (52:09) Agent Mode: The Next Step for AI-Powered Coding (01:00:10) How AI Coding Will Change SaaS and Developer Skills

    1 hr 5 min
5 out of 5
23 Ratings

About

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, leading AI & data investor and Partner at FirstMark Capital.
