The MAD Podcast with Matt Turck

Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, & Data landscape, hosted by leading AI & data investor and Partner at FirstMark Capital, Matt Turck.

  1. 7 HRS AGO

    Intelligence Isn’t Enough: Why Energy & Compute Decide the AGI Race – Eiso Kant

    Frontier AI is colliding with real-world infrastructure. Eiso Kant (Co-CEO & Co-Founder, Poolside) joins the MAD Podcast to unpack Project Horizon, a multi-gigawatt West Texas build, and why frontier labs must own energy, compute, and intelligence to compete. We map token economics, cloud-style margins, and the staged 250 MW rollout using 2.5 MW modular skids. Then we get operational: the CoreWeave anchor partnership, environmental choices (SCR, renewables + gas + batteries), community impact, and how Poolside plans to bring capacity online quickly without renting away margin, plus the enterprise motion (defense to Fortune 500) powered by forward-deployed research engineers. Finally, we go deep on training. Eiso lays out RL2L (Reinforcement Learning to Learn), aimed at reverse-engineering the web’s thoughts and actions, why intelligence may commoditize, what that means for agents, and how coding served as a proxy for long-horizon reasoning before expanding to broader knowledge work.

    Poolside Website - https://poolside.ai X/Twitter - https://x.com/poolsideai Eiso Kant LinkedIn - https://www.linkedin.com/in/eisokant/ X/Twitter - https://x.com/eisokant FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) Blog - https://www.mattturck.com LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck

    (00:00) Cold open – “Intelligence becomes a commodity” (00:23) Host intro – Project Horizon & RL2L (01:19) Why Poolside exists amid frontier labs (04:38) Project Horizon: building one of the largest US data center campuses (07:20) Why own infra: scale, cost, and avoiding “cosplay” (10:06) Economics deep dive: $8B for 250 MW, capex/opex, margins (16:47) CoreWeave partnership: anchor tenant + flexible scaling (18:24) Hiring the right tail: building a physical infra org (30:31) RL today → agentic RL and long-horizon tasks (37:23) RL2L revealed: reverse-engineering the web’s thoughts & actions (39:32) Continuous learning and the “hot stove” limitation (43:30) Agents debate: thin wrappers, differentiation, and model collapse (49:10) “Is AI plateauing?”—chip cycles, scale limits, and new axes (53:49) Why software was the proxy; expanding to enterprise knowledge work (55:17) Model status: Malibu → Laguna (small/medium/large) (57:31) Poolside’s commercial reality today: defense; Fortune 500; FDRE (1:02:43) Global team, avoiding the echo chamber (1:04:34) Next 12–18 months: frontier models + infra scale (1:05:52) Closing
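    As a rough illustration of the numbers quoted in the episode ($8B of capex for the 250 MW stage, built from 2.5 MW modular skids), the back-of-envelope math works out as follows — this is derived arithmetic, not a figure from the conversation itself:

```python
# Back-of-envelope math for the Project Horizon figures quoted in the episode:
# $8B capex for a 250 MW stage, built out of 2.5 MW modular skids.
capex_usd = 8e9   # $8B for the 250 MW stage (from the episode)
stage_mw = 250    # staged rollout size, in megawatts
skid_mw = 2.5     # capacity of one modular skid, in megawatts

cost_per_mw = capex_usd / stage_mw  # implied cost per megawatt
num_skids = stage_mw / skid_mw      # skids needed for one stage

print(f"${cost_per_mw / 1e6:.0f}M per MW, {num_skids:.0f} skids per stage")
# prints "$32M per MW, 100 skids per stage"
```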

    1 hr 6 min
  2. OCT 30

    State of AI 2025 with Nathan Benaich: Power Deals, Reasoning Breakthroughs, Real Revenue

    Power is the new bottleneck, reasoning got real, and the business finally caught up. In this wide-ranging conversation, I sit down with Nathan Benaich, Founder and General Partner at Air Street Capital, to discuss the newly published 2025 State of AI report—what’s actually working, what’s hype, and where the next edge will come from. We start at the physical layer: energy procurement, PPAs, off-grid builds, and why water and grid constraints are turning power—not GPUs—into the decisive moat. From there, we move into capability: reasoning models acting as AI co-scientists in verifiable domains, and the “chain-of-action” shift in robotics that’s taking us from polished demos to dependable deployments. Along the way, we examine the market reality—who’s making real revenue, how margins actually behave once tokens and inference meet pricing, and what all of this means for builders and investors. We also zoom out to the ecosystem: NVIDIA’s position vs. custom silicon, China’s split stack, and the rise of sovereign AI (and the “sovereignty washing” that comes with it). The policy and security picture gets a hard look too—regulation’s vibe shift, data-rights realpolitik, and what agents and MCP mean for cyber risk and adoption. Nathan closes with where he’s placing bets (bio, defense, robotics, voice) and three predictions for the next 12 months.

    Nathan Benaich Blog - https://www.nathanbenaich.com X/Twitter - https://x.com/nathanbenaich Source: State of AI Report 2025 (9/10/2025) Air Street Capital Website - https://www.airstreet.com X/Twitter - https://x.com/airstreet Matt Turck (Managing Director) Blog - https://www.mattturck.com LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap

    (0:00) – Cold Open: “Gargantuan money, real reasoning” (0:40) – Intro: State of AI 2025 with Nathan Benaich (02:06) – Reasoning got real: from chain-of-thought to verified math wins (04:11) – AI co-scientist: hypotheses, wet-lab validation, fewer “dumb stochastic parrots” (04:44) – Chain-of-action robotics: plan → act you can audit (05:13) – Humanoids vs. warehouse reality: where robots actually stick first (06:32) – The business caught up: who’s making real revenue now (08:26) – Adoption & spend: Ramp stats, retention, and the shadow-AI gap (11:00) – Margins debate: tokens, pricing, and the thin-wrapper trap (14:02) – Bubble or boom? Wall Street vs. SF vibes (and circular deals) (19:54) – Power is the bottleneck: $50B/GW capex and the new moat (21:02) – PPAs, gas turbines, and off-grid builds: the procurement game (23:54) – Water, grids, and NIMBY: sustainability gets political (25:08) – NVIDIA’s moat: 90% of papers, Broadcom/AMD, and custom silicon (28:47) – China split-stack: Huawei, Cambricon, and export zigzags (30:30) – Sovereign AI or “sovereignty washing”? Open source as leverage (40:40) – Regulation & safety: from Bletchley to “AI Action”—the vibe shift (44:06) – Safety budgets vs. lab spend; models that game evals (44:46) – Data rights realpolitik: $1.5B signals the new training cost (47:04) – Cyber risk in the agent era: MCP, malware LMs, state actors (50:19) – Agents that convert: search → commerce and the demo flywheel (54:18) – VC lens: where Nathan is investing (bio, defense, robotics, voice) (68:29) – Predictions: power politics, AI neutrality, end-to-end discoveries (1:02:13) – Wrap: what to watch next & where to find the report (stateof.ai)

    1 hr 3 min
  3. OCT 23

    Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)

    Are we failing to understand the exponential, again? My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today’s AI models can spark alien insights in code, math, and science—including Julian’s timeline for when AI could produce Nobel-level breakthroughs. We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart’s law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think “Golden Gate Claude”), and how safety & alignment actually surface in Anthropic’s launch process. Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.

    Julian Schrittwieser Blog - https://www.julian.ac X/Twitter - https://x.com/mononofu Viral post: Failing to understand the exponential, again (9/27/2025) Anthropic Website - https://www.anthropic.com X/Twitter - https://x.com/anthropicai Matt Turck (Managing Director) Blog - https://www.mattturck.com LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap

    (00:00) Cold open — “We’re not seeing any slowdown.” (00:32) Intro — who Julian is & what we cover (01:09) The “exponential” from inside frontier labs (04:46) 2026–2027: agents that work a full day; expert-level breadth (08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value (10:26) Move 37 — what actually happened and why it mattered (13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel? (16:25) Discontinuity vs smooth progress (and warning signs) (19:08) Does pre-training + RL get us there? (AGI debates aside) (20:55) Sutton’s “RL from scratch”? Julian’s take (23:03) Julian’s path: Google → DeepMind → Anthropic (26:45) AlphaGo (learn + search) in plain English (30:16) AlphaGo Zero (no human data) (31:00) AlphaZero (one algorithm: Go, chess, shogi) (31:46) MuZero (planning with a learned world model) (33:23) Lessons for today’s agents: search + learning at scale (34:57) Do LLMs already have implicit world models? (39:02) Why RL on LLMs took time (stability, feedback loops) (41:43) Compute & scaling for RL — what we see so far (42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards (44:36) RL training data & the “flywheel” (and why quality matters) (48:02) RL & Agents 101 — why RL unlocks robustness (50:51) Should builders use RL-as-a-service? Or just tools + prompts? (52:18) What’s missing for dependable agents (capability vs engineering) (53:51) Evals & Goodhart — internal vs external benchmarks (57:35) Mechanistic interpretability & “Golden Gate Claude” (1:00:03) Safety & alignment at Anthropic — how it shows up in practice (1:03:48) Jobs: human–AI complementarity (comparative advantage) (1:06:33) Inequality, policy, and the case for 10× productivity → abundance (1:09:24) Closing thoughts

    1 hr 10 min
  4. OCT 16

    How GPT-5 Thinks — OpenAI VP of Research Jerry Tworek

    What does it really mean when GPT-5 “thinks”? In this conversation, OpenAI’s VP of Research Jerry Tworek explains how modern reasoning models work in practice—why pretraining and reinforcement learning (RL/RLHF) are both essential, what that on-screen “thinking” actually does, and when extra test-time compute helps (or doesn’t). We trace the evolution from o1 (a tech demo good at puzzles) to o3 (the tool-use shift) to GPT-5 (Jerry calls it “o3.1-ish”), and talk through verifiers, reward design, and the real trade-offs behind “auto” reasoning modes. We also go inside OpenAI: how research is organized, why collaboration is unusually transparent, and how the company ships fast without losing rigor. Jerry shares the backstory on competitive-programming results like ICPC, what they signal (and what they don’t), and where agents and tool use are genuinely useful today. Finally, we zoom out: could pretraining + RL be the path to AGI? This is the MAD Podcast—AI for the 99%. If you’re curious about how these systems actually work (without needing a PhD), this episode is your map to the current AI frontier.

    OpenAI Website - https://openai.com X/Twitter - https://x.com/OpenAI Jerry Tworek LinkedIn - https://www.linkedin.com/in/jerry-tworek-b5b9aa56 X/Twitter - https://x.com/millionint FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck

    (00:00) Intro (01:01) What Reasoning Actually Means in AI (02:32) Chain of Thought: Models Thinking in Words (05:25) How Models Decide Thinking Time (07:24) Evolution from o1 to o3 to GPT-5 (11:00) Before OpenAI: Growing up in Poland, Dropping out of School, Trading (20:32) Working on Robotics and Rubik's Cube Solving (23:02) A Day in the Life: Talking to Researchers (24:06) How Research Priorities Are Determined (26:53) Collaboration vs IP Protection at OpenAI (29:32) Shipping Fast While Doing Deep Research (31:52) Using OpenAI's Own Tools Daily (32:43) Pre-Training Plus RL: The Modern AI Stack (35:10) Reinforcement Learning 101: Training Dogs (40:17) The Evolution of Deep Reinforcement Learning (42:09) When GPT-4 Seemed Underwhelming at First (45:39) How RLHF Made GPT-4 Actually Useful (48:02) Unsupervised vs Supervised Learning (49:59) GRPO and How DeepSeek Accelerated US Research (53:05) What It Takes to Scale Reinforcement Learning (55:36) Agentic AI and Long-Horizon Thinking (59:19) Alignment as an RL Problem (1:01:11) Winning ICPC World Finals Without Specific Training (1:05:53) Applying RL Beyond Math and Coding (1:09:15) The Path from Here to AGI (1:12:23) Pure RL vs Language Models

    1 hr 16 min
  5. OCT 2

    Sonnet 4.5 & the AI Plateau Myth — Sholto Douglas (Anthropic)

    Sholto Douglas, a top AI researcher at Anthropic, discusses the breakthroughs behind Claude Sonnet 4.5—the world's leading coding model—and why we might be just 2–3 years from AI matching human-level performance on most computer-facing tasks. You'll discover why RL on language models suddenly started working in 2024, how agents maintain coherency across 30-hour coding sessions through self-correction and memory systems, and why the "bitter lesson" of scale keeps proving clever priors wrong. Sholto shares his path from top-50 world fencer to Google's Gemini team to Anthropic, explaining why great blog posts sometimes matter more than PhDs in AI research. He discusses the culture at big AI labs and why Anthropic is laser-focused on coding (it's the fastest path to both economic impact and AI-assisted AI research). Sholto also discusses how the training pipeline is still "held together by duct tape" with massive room to improve, and why every benchmark created shows continuous rapid progress with no plateau in sight. Bold predictions: individuals will soon manage teams of AI agents working 24/7, robotics is about to experience coding-level breakthroughs, and policymakers should urgently track AI progress on real economic tasks. A clear-eyed look at where AI stands today and where it's headed in the next few years.

    Anthropic Website - https://www.anthropic.com Twitter - https://x.com/AnthropicAI Sholto Douglas LinkedIn - https://www.linkedin.com/in/sholto Twitter - https://x.com/_sholtodouglas FIRSTMARK Website - https://firstmark.com Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ Twitter - https://twitter.com/mattturck

    (00:00) Intro (01:09) The Rapid Pace of AI Releases at Anthropic (02:49) Understanding Opus, Sonnet, and Haiku Model Tiers (04:14) From Australian Fencer to AI Researcher (12:01) The YouTube Effect: Mastery Through Observation (16:16) Breaking Into AI Research Without Traditional Signals (18:29) Google, Gemini, and Building Inference Stacks (23:05) Why Anthropic? Culture and Mission Differences (25:08) What Is "Taste" in AI Research? (31:46) Sonnet 4.5: Best Coding Model in the World (36:40) From 7 Hours to 30 Hours: The Long-Context Breakthrough (38:41) How AI Agents Self-Correct and Maintain Coherency (43:13) The Role of Memory in Extended Coding Sessions (46:28) Breakthroughs Behind the Performance Jump (47:42) Pre-Training vs. RL: Textbooks vs. Worked Problems (52:11) Test-Time Compute: The New Scaling Axis (55:55) Why RL Finally Started Working on LLMs in 2024 (59:38) Defining AGI: Better Than Humans at Computer Tasks (01:02:05) Are We Hitting a Plateau? Evidence Says No (01:03:41) The GDP Eval: Measuring AI Across Economic Sectors (01:05:47) Preparing for 10-100x Individual Leverage & Robotics

    1 hr 10 min
  6. SEP 11

    Goodbye Excel? AI Agents for Self-Driving Finance – Pigment CEO

    The most successful enterprises are about to become autonomous — and Eléonore Crespo, Co-CEO of Pigment, is building the nervous system that makes it possible. In this conversation, Eléonore reveals how her $400 million AI platform is already running supply chains for Coca-Cola, powering finance for the hottest newly public companies like Figma and Klarna, and processing thousands of financial scenarios for Uber and Snowflake faster and more accurately than any human team ever could. Eléonore predicts Excel will outlive most AI companies (but maybe only as a user interface, not a calculation engine), explains why she deliberately chose to build from Paris instead of Silicon Valley, and shares her contrarian take on why the AI revolution will create more CFOs, not fewer. You'll discover why Pigment's three-agent system (Analyst, Modeler, Planner) avoids the hallucination problems plaguing other AI companies, how they achieved human-level accuracy in financial analysis, and the accelerating timeline for fully autonomous enterprise planning.

    Pigment Website - https://www.pigment.com Twitter - https://x.com/gopigment Eléonore Crespo LinkedIn - linkedin.com/in/eleonorecrespo FIRSTMARK Website - https://firstmark.com Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ Twitter - https://twitter.com/mattturck

    (00:00) Intro (01:22) Building Pigment: 500 Employees, $400M Raised, 60% US Revenue (03:20) From Quantum Physics to Google to Index Ventures (06:56) Why Being a VC Was the Perfect Founder Training Ground (11:35) The Impatience Factor: What Makes Great Founders (13:27) Hiring for AI Fluency in the Modern Enterprise (14:54) Pigment's Internal AI Strategy: Committees and Guardrails (17:30) The Three AI Agents: Analyst, Modeler, and Planner (22:15) Why Three Agents Instead of One: Technical Architecture (24:10) Agent Coordination: How the Supervisor Agent Works (24:46) Real Example: Budget Variance Analysis Across 50 Products (27:15) The Human-in-the-Loop Approach: Recommendations Not Actions (27:36) Solving Hallucination: Why Structured Data Changes Everything (30:08) Behind the Scenes: Verification Agents and Audit Trails (31:57) Beyond Accuracy: Enabling the Impossible at Scale (36:21) Will AI Finally Kill Excel? Eléonore's Contrarian Take (38:23) The Vision: Fully Autonomous Enterprise Planning (40:55) Real-Time Supply Chain Adaptation: The Ukraine Example (42:20) Multi-LLM Strategy: OpenAI, Anthropic, and Partner Integration (44:32) Token Economics: Why Pigment Isn't Token-Intensive (48:30) Customer Adoption: Excitement vs. Change Management Challenges (50:51) Top-Down AI Demand vs. Bottom-Up Implementation Reality (53:08) The Reskilling Challenge: Everyone Becomes a Mini CFO (57:38) Building a Global Company from Europe During COVID (01:00:02) Managing a US Executive Team from Paris (01:01:14) SI Partner Strategy: Why Boutique Firms Come Before Deloitte (01:03:28) The $100 Billion Vision: Beyond Performance Management (01:05:08) Success Metrics: Innovation Over Revenue

    1 hr 6 min
  7. SEP 4

    AI Video’s Wild Year – Runway CEO on What’s Next

    2025 has been a breakthrough year for AI video. In this episode of the MAD Podcast, Matt Turck sits down with Cristóbal Valenzuela, CEO & Co-Founder of Runway, to explore how AI is reshaping the future of filmmaking, advertising, and storytelling — faster, cheaper, and in ways that were unimaginable even a year ago. Cris and Matt discuss:

    * How AI went from memes and spaghetti clips to IMAX film festivals.
    * Why Gen-4 and Aleph are game-changing models for professionals.
    * How Hollywood, advertisers, and creators are adopting AI video at scale.
    * The future of storytelling: what happens to human taste, craft, and creativity when anyone can conjure movies on demand?
    * Runway’s journey from 2018 skeptics to today’s cutting-edge research lab.

    If you want to understand the future of filmmaking, media, and creativity in the AI age, this is the episode.

    Runway Website - https://runwayml.com X/Twitter - https://x.com/runwayml Cristóbal Valenzuela LinkedIn - https://www.linkedin.com/in/cvalenzuelab X/Twitter - https://x.com/c_valenzuelab FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck

    (00:00) Intro – AI Video's Wild Year (01:48) Runway's AI Film Festival Goes from Chinatown to IMAX (04:02) Hollywood's Shift: From Ignoring AI to Adopting It at Scale (06:38) How Runway Saves VFX Artists' Weekends of Work (07:31) Inside Gen-4 and Aleph: Why These Models Are Game-Changers (08:21) From Editing Tools to a "New Kind of Camera" (10:00) Beyond Film: Gaming, Architecture, E-Commerce & Robotics Use Cases (10:55) Why Advertising Is Adopting AI Video Faster Than Anyone Else (11:38) How Creatives Adapt When Iteration Becomes Real-Time (14:12) What Makes Someone Great at AI Video (Hint: No Preconceptions) (15:28) The Early Days: Building Runway Before Generative AI Was "Real" (20:27) Finding Early Product-Market Fit (21:51) Balancing Research and Product Inside Runway (24:23) Comparing Aleph vs. Gen-4, and the Future of Generalist Models (30:36) New Input Modalities: Editing with Video + Annotations, Not Just Text (33:46) Managing Expectations: Twitter Demos vs. Real Creative Work (47:09) The Future: Real-Time AI Video and Fully Explorable 3D Worlds (52:02) Runway's Business Model: From Indie Creators to Disney & Lionsgate (57:26) Competing with the Big Labs (Sora, Google, etc.) (59:58) Hyper-Personalized Content? Why It May Not Replace Film (01:01:13) Advice to Founders: Treat Your Company Like a Model — Always Learning (01:03:06) The Next 5 Years of Runway: Changing Creativity Forever

    1 hr 5 min
  8. AUG 21

    How to Build a Beloved AI Product - Granola CEO Chris Pedregal

    Granola is the rare AI startup that slipped into one of tech’s most crowded niches — meeting notes — and still managed to become the product founders and VCs rave about. In this episode, MAD Podcast host Matt Turck sits down with Granola co-founder & CEO Chris Pedregal to unpack how a two-person team in London turned a simple “second brain” idea into Silicon Valley’s favorite AI tool. Chris recounts a year in stealth onboarding users one by one, the 50% feature cut that unlocked simplicity, and why they refused to deploy a meeting bot or store audio even when investors said they were crazy. We go deep on the craft of building a beloved AI product: choosing meetings (not email) as the data wedge, designing calendar-triggered habit loops, and obsessing over privacy so users trust the tool enough to outsource memory. Chris opens the hood on Granola’s tech stack — real-time ASR from Deepgram & Assembly, echo cancellation on-device, and dynamic routing across OpenAI, Anthropic, and Google models — and explains why transcription, not LLM tokens, is the biggest cost driver today. He also reveals how internal eval tooling lets the team swap models overnight without breaking the “Granola voice.” Looking ahead, Chris shares a roadmap that moves beyond notes toward a true “tool for thought”: cross-meeting insights in seconds, dynamic documents that update themselves, and eventually an AI coach that flags blind spots in your work. Whether you’re an engineer, designer, or founder figuring out your own AI strategy, this conversation is a masterclass in nailing product-market fit, trimming complexity, and future-proofing for the rapid advances still to come. Hit play, like, and subscribe if you’re ready to learn how to build AI products people can’t live without.

    Granola Website - https://www.granola.ai X/Twitter - https://x.com/meetgranola Chris Pedregal LinkedIn - https://www.linkedin.com/in/pedregal X/Twitter - https://x.com/cjpedregal FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck

    (00:00) Introduction: The Granola Story (01:41) Building a "Life-Changing" Product (04:31) The "Second Brain" Vision (06:28) Augmentation Philosophy (Engelbart), Tools That Shape Us (09:02) Late to a Crowded Market: Why it Worked (13:43) Two Product Founders, Zero ML PhDs (16:01) London vs. SF: Building Outside the Valley (19:51) One Year in Stealth: Learning Before Launch (22:40) "Building For Us" & Finding First Users (25:41) Key Design Choices: No Meeting Bot, No Stored Audio (29:24) Simplicity is Hard: Cutting 50% of Features (32:54) Intuition vs. Data in Making Product Decisions (36:25) Continuous User Conversations: 4–6 Calls/Week (38:06) Prioritizing the Future: Build for Tomorrow's Workflows (40:17) Tech Stack Tour: Model Routing & Evals (42:29) Context Windows, Costs & Inference Economics (45:03) Audio Stack: Transcription, Noise Cancellation & Diarization Limits (48:27) Guardrails & Citations: Building Trust in AI (50:00) Growth Loops Without Virality Hacks (54:54) Enterprise Compliance, Data Footprint & Liability Risk (57:07) Retention & Habit Formation: The "500 Millisecond Window" (58:43) Competing with OpenAI and Legacy Suites (01:01:27) The Future: Deep Research Across Meetings & Roadmap (01:04:41) Granola as Career Coach?

    1 hr 8 min
Rated 5 out of 5 (24 ratings)

About this podcast

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, & Data landscape, hosted by leading AI & data investor and Partner at FirstMark Capital, Matt Turck.
