The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, & Data landscape, hosted by Matt Turck, a leading AI & data investor and Partner at FirstMark Capital.

  1. 5 DAYS AGO

    GitHub CEO: The AI Coding Gold Rush, Vibe Coding & Cursor

    AI coding is in full-blown gold-rush mode, and GitHub sits at the epicenter. In this episode, GitHub CEO Thomas Dohmke tells Matt Turck how a $7.5B acquisition in 2018 became a $2B ARR rocket ship, and reveals how Copilot was born from a secret AI strategy years before anyone else saw the opportunity. We dig into the dizzying pace of AI innovation: why developer tools are suddenly the fastest-growing startups in history, how GitHub’s multi-model approach (OpenAI, Anthropic Claude 4, Gemini 2.5, and even local LLMs) gives you more choice and speed, and why fine-tuning models might be overrated. Thomas explains how Copilot keeps you in the “magic flow state,” and how even middle schoolers are using it to hack Minecraft. The conversation then zooms out to the competitive battlefield: Cursor’s $10B valuation, Mistral’s new code model, and a wave of AI-native IDE forks vying for developer mind-share. We discuss why 2025’s “coding agents” could soon handle 90% of the world’s code, what that means for the survival of SaaS, and why the future of coding is about managing agents, not just writing code. GitHub Website - https://github.com/ X/Twitter - https://x.com/github Thomas Dohmke LinkedIn - https://www.linkedin.com/in/ashtom X/Twitter - https://twitter.com/ashtom FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:50) Why AI Coding Is Ground Zero for Generative AI (02:40) The $7.5B GitHub Acquisition: Microsoft’s Strategic Play (06:21) GitHub’s Role in the Azure Cloud Ecosystem (10:25) How GitHub Copilot Beat Everyone to Market (16:09) Copilot & VS Code Explained for Non-Developers (21:02) GitHub Models: Multi-Model Choice and What It Means (25:31) The Reality of Fine-Tuning AI Models for Enterprise (29:13) The Dizzying Pace and Political Economy of AI Coding Tools (36:58) Competing and Partnering: Microsoft’s Unique AI Strategy (41:29) Does Microsoft Limit Copilot’s AI-Native Potential? (46:44) The Bull and Bear Case for AI-Native IDEs Like Cursor (52:09) Agent Mode: The Next Step for AI-Powered Coding (01:00:10) How AI Coding Will Change SaaS and Developer Skills

    1h 5m
  2. 5 JUN

    Inside the Paper That Changed AI Forever - Cohere CEO Aidan Gomez on 2025 Agents

    What really happened inside Google Brain when the “Attention is All You Need” paper was born? In this episode, Aidan Gomez — one of the eight co-authors of the Transformers paper and now CEO of Cohere — reveals the behind-the-scenes story of how a cold email and a lucky administrative mistake landed him at the center of the AI revolution. Aidan shares how a group of researchers, given total academic freedom, accidentally stumbled into one of the most important breakthroughs in AI history — and why the architecture they created still powers everything from ChatGPT to Google Search today. We dig into why synthetic data is now the secret sauce behind the world’s best AI models, and how Cohere is using it to build enterprise AI that’s more secure, private, and customizable than anything else on the market. Aidan explains why he’s not interested in “building God” or chasing AGI hype, and why he believes the real impact of AI will be in making work more productive, not replacing humans. You’ll also get a candid look at the realities of building an AI company for the enterprise: from deploying models on-prem and air-gapped for banks and telecoms, to the surprising demand for multimodal and multilingual AI in Japan and Korea, to the practical challenges of helping customers identify and execute on hundreds of use cases. Cohere Website - https://cohere.com X/Twitter - https://x.com/cohere Aidan Gomez LinkedIn - https://ca.linkedin.com/in/aidangomez X/Twitter - https://x.com/aidangomez FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (02:00) The Story Behind the Transformers Paper (03:09) How a Cold Email Landed Aidan at Google Brain (10:39) The Initial Reception to the Transformers Breakthrough (11:13) Google’s Response to the Transformer Architecture (12:16) The Staying Power of Transformers in AI (13:55) Emerging Alternatives to Transformer Architectures (15:45) The Significance of Reasoning in Modern AI (18:09) The Untapped Potential of Reasoning Models (24:04) Aidan’s Path After the Transformers Paper and the Founding of Cohere (25:16) Choosing Enterprise AI Over AGI Labs (26:55) Aidan’s Perspective on AGI and Superintelligence (28:37) The Trajectory Toward Human-Level AI (30:58) Transitioning from Researcher to CEO (33:27) Cohere’s Product and Platform Architecture (37:16) The Role of Synthetic Data in AI (39:32) Custom vs. General AI Models at Cohere (42:23) The AYA Models and Cohere Labs Explained (44:11) Enterprise Demand for Multimodal AI (49:20) On-Prem vs. Cloud (50:31) Cohere’s North Platform (54:25) How Enterprises Identify and Implement AI Use Cases (57:49) The Competitive Edge of Early AI Adoption (01:00:08) Aidan’s Concerns About AI and Society (01:01:30) Cohere’s Vision for Success in the Next 3–5 Years

    1h 2m
  3. 29 MAY

    AI That Ends Busy Work — Hebbia CEO on “Agent Employees”

    What if the smartest people in finance and law never had to do “stupid tasks” again? In this episode, we sit down with George Sivulka, founder of Hebbia, the AI company quietly powering 50% of the world’s largest asset managers and some of the fastest-growing law firms. George reveals how Hebbia’s Matrix platform is automating the equivalent of 50,000 years of human reading — every year — and why the future of work is hybrid teams of humans and AI “agent employees.” You’ll get the inside story on how Hebbia went from a stealth project at Stanford to a multinational company trusted by the Department of Defense, and why their spreadsheet-inspired interface is leaving chatbots in the dust. George breaks down the technical secrets behind Hebbia’s ISD architecture (and why they killed RAG), how they process billions of pages with near-zero hallucinations, and what it really takes to sell AI into the world’s most regulated industries. We also dive into the future of organizational design, why generalization beats specialization in AI, and how “prompting is the new management skill.” Plus: the real story behind AI hallucinations, the myth of job loss, and why naiveté might be the ultimate founder superpower. Hebbia Website - https://www.hebbia.com Twitter - https://x.com/HebbiaAI George Sivulka LinkedIn - https://www.linkedin.com/in/sivulka Twitter - https://x.com/gsivulka FIRSTMARK Website - https://firstmark.com Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ Twitter - https://twitter.com/mattturck (00:00) Intro (01:46) What is Hebbia (02:49) Evolving Hebbia’s mission (04:45) The founding story and Stanford's inspiration (09:45) The rise of agent employees and AI in organizations (12:36) The future of AI-powered work (15:17) AI research trends (19:49) Inside Matrix: Hebbia’s flagship AI platform (24:02) Why Hebbia isn’t just another chatbot (28:27) Moving beyond RAG: Hebbia’s unique architecture (34:10) Tackling hallucinations in high-stakes AI (35:59) Research culture and avoiding industry groupthink (39:40) Innovating go-to-market and enterprise sales (41:57) Real-world value: Cost savings and new revenue (43:49) How AI is changing junior roles (45:55) Leadership and perspective as a young founder (47:16) Hebbia’s roadmap: Success in the next 3 years

    48 min
  4. 22 MAY

    AI Eats the World: Benedict Evans on What Really Matters Now

    What if the “AI revolution” is actually… stuck in the messy middle? In this episode, Benedict Evans returns to tackle the big question we left hanging a year ago: Is AI a true paradigm shift, or just another tech platform shift like mobile or cloud? One year later, the answer is more complicated — and more revealing — than anyone expected. Benedict pulls back the curtain on why, despite all the hype and model upgrades, the core LLMs are starting to look like commodities. We dig into the real battlegrounds: distribution, brand, and the race to build sticky applications. Why is ChatGPT still topping the App Store charts while Perplexity and Claude barely register outside Silicon Valley? Why did OpenAI just hire a CEO of Applications, and what does that signal about the future of AI products? We go deep on the “probabilistic” nature of LLMs, why error rates are still the elephant in the room, the future of consumer AI (is there a killer app beyond chatbots and image generators?), the impact of generative content on e-commerce and advertising, and whether “AI agents” are the next big thing — or just another overhyped demo. And, we ask: What happened to AI doomerism? Why did the existential risk debate suddenly vanish, and what risks should we actually care about? Benedict Evans LinkedIn - https://www.linkedin.com/in/benedictevans Threads - https://www.threads.net/@benedictevans FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:47) Is AI a Platform Shift or a Paradigm Shift? (07:21) Error Rates and Trust in AI (15:07) Adapting to AI’s Capabilities (19:18) Generational Shifts in AI Usage (22:10) The Commoditization of AI Models (27:02) Are Brand and Distribution the Real Moats in AI? (29:38) OpenAI: Research Lab or Application Company? (33:26) Big Tech’s AI Strategies: Apple, Google, Meta, AWS (39:00) AI and Search: Is ChatGPT a Search Engine? (42:41) Consumer AI Apps: Where’s the Breakout? (45:51) The Need for a GUI for AI (48:38) Generative AI in Social and Content (51:02) The Business Model of AI: Ads, Memory, and Moats (55:26) Enterprise AI: SaaS, Pilots, and Adoption (01:00:08) The Future of AI in Business (01:05:11) Infinite Content, Infinite SKUs: AI and E-commerce (01:09:42) Doomerism, Risks, and the Future of AI

    1h 15m
  5. 15 MAY

    Jeremy Howard on Building 5,000 AI Products with 14 People (Answer AI Deep-Dive)

    What happens when you try to build the “General Electric of AI” with just 14 people? In this episode, Jeremy Howard reveals the radical inside story of Answer AI — a new kind of AI R&D lab that’s not chasing AGI, but instead aims to ship thousands of real-world products, all while staying tiny, open, and mission-driven. Jeremy shares how open-source models like DeepSeek and Qwen are quietly outpacing closed-source giants, and why the best new AI is coming out of China. You’ll hear the surprising truth about the so-called “DeepSeek moment,” why efficiency and cost are the real battlegrounds in AI, and how Answer AI’s “dialogue engineering” approach is already changing lives—sometimes literally. We go deep on the tools and systems powering Answer AI’s insane product velocity, including Solve It (the platform that’s helped users land jobs and launch startups), Shell Sage (AI in your terminal), and FastHTML (a new way to build web apps in pure Python). Jeremy also opens up about his unconventional path from philosophy major and computer game enthusiast to world-class AI scientist, and why he believes the future belongs to small, nimble teams who build for societal benefit, not just profit. Fast.ai Website - https://www.fast.ai X/Twitter - https://twitter.com/fastdotai Answer.ai Website - https://www.answer.ai/ X/Twitter - https://x.com/answerdotai Jeremy Howard LinkedIn - https://linkedin.com/in/howardjeremy X/Twitter - https://x.com/jeremyphoward FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:39) Highlights and takeaways from ICLR Singapore (02:39) Current state of open-source AI (03:45) Thoughts on Microsoft Phi and open source moves (05:41) Responding to OpenAI’s open source announcements (06:29) The real impact of the DeepSeek ‘moment’ (09:02) Progress and promise in test-time compute (10:53) Where we really stand on AGI and ASI (15:05) Jeremy’s journey from philosophy to AI (20:07) Becoming a Kaggle champion and starting Fast.ai (23:04) Answer.ai mission and unique vision (28:15) Answer.ai’s business model and early monetization (29:33) How a small team at Answer.ai ships so fast (30:25) Why the Devin AI agent isn't that great (33:10) The future of autonomous agents in AI development (34:43) Dialogue Engineering and Solve It (43:54) How Answer.ai decides which projects to build (49:47) Future of Answer.ai: staying small while scaling impact

    55 min
  6. 8 MAY

    Why Influx Rebuilt Its Database for the IoT and Robotics Explosion

    InfluxDB just dropped its biggest update ever — InfluxDB 3.0 — and in this episode, we go deep with the team behind the world’s most popular open-source time series database. You’ll hear the inside story of how InfluxDB grew from 3,000 users in 2015 to over 1.3 million today, and why the company decided to rewrite its entire architecture from scratch in Rust, ditching Go and moving to object storage on S3. We break down the real technical challenges that forced this radical shift: the “cardinality problem” that choked performance, the pain of tightly coupled compute and storage, and why their custom query language (Flux) failed to catch on, leading to a humbling embrace of SQL as the industry standard. You’ll learn how InfluxDB is positioning itself in a world dominated by Databricks and Snowflake, and the hard lessons learned about monetization when 1.3 million users only yield 2,600 paying customers. InfluxData Website - https://www.influxdata.com X/Twitter - https://twitter.com/InfluxDB Evan Kaplan LinkedIn - https://www.linkedin.com/in/kaplanevan X/Twitter - https://x.com/evankaplan FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck Foursquare: Website - https://foursquare.com X/Twitter - https://x.com/Foursquare IG - instagram.com/foursquare (00:00) Intro (02:22) The InfluxDB origin story and why time series matters (06:59) The cardinality crisis and why Influx rebuilt in Rust (09:26) Why SQL won (and Flux lost) (16:34) Why InfluxData bets on FDAP (22:51) IoT, Tesla Powerwalls, and real-time control systems (27:54) Competing with Databricks, Snowflake, and the “lakehouse” world (31:50) Open Source lessons, monetization, & what’s next

    36 min
  7. 1 MAY

    Dashboards Are Dead: Sigma’s BI Revolution for Trillion-Row Data

    Sigma Computing recently hit $100M in ARR — planning on doubling revenue again this year — and in this episode, CEO Mike Palmer reveals exactly how they did it by throwing out the old BI playbook. We open with the provocative claim that “the world did not need another BI tool,” and dig into why the last 20 years of business intelligence have been “boring.” He explains how Sigma’s spreadsheet-like interface lets anyone analyze billions of rows in seconds directly on top of Snowflake and Databricks, with no SQL required and no data extractions. Mike shares the inside story of Sigma’s journey: why they shut down their original product to rebuild from scratch, how Sutter Hill Ventures’ unique incubation model shaped the company, what it took to go from $2M to $100M ARR in just three years and raise a $200M round — even as the growth-stage VC market dried up. We get into the technical details behind Sigma’s architecture: no caching, no federated queries, and real-time, Google Sheets-style collaboration at massive scale—features that have convinced giants like JP Morgan and ExxonMobil to ditch legacy dashboards for good. We also tackle the future of BI and the modern data stack: why 99.99% of enterprise data is never touched, what’s about to happen as the stack consolidates, and why Mike thinks “text-to-SQL” AI is a “terrible idea.” This episode is full of spicy takes: Mike shares his thoughts on how Google missed the zeitgeist, the reality behind Microsoft Fabric, when engineering hubris leads to failure, and more. Sigma Website - https://www.sigmacomputing.com X/Twitter - https://x.com/sigmacomputing Mike Palmer LinkedIn - https://www.linkedin.com/in/mike-palmer-51a154 FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck Foursquare: Website - https://foursquare.com X/Twitter - https://x.com/Foursquare IG - instagram.com/foursquare (00:00) Intro (01:46) Why traditional BI is boring (04:15) What is business intelligence? (06:03) Classic BI roles and frustrations (07:09) Sigma’s origin story: Sutter Hill & the Snowflake echo (09:02) The spreadsheet problem: why nothing changed since 1985 (14:04) Rebooting the product during lockdown (16:14) Building a spreadsheet UX on top of Snowflake/Databricks (18:55) No caching, no federation: Sigma’s architectural choices (20:28) Spreadsheet interface at scale (21:32) Collaboration and real-time data workflows (24:15) Semantic layers, data governance & trillion-row performance (25:57) The modern data stack: fragmentation and consolidation (28:38) Democratizing data (29:36) Will hyperscalers own the data stack? (34:12) AI, natural language, and the limits of text-to-SQL

    42 min
  8. 24 APR

    Glean’s Breakthrough: CEO Arvind Jain on Scaling AI Agents & Search

    A week after OpenAI’s o3/o4-mini volleyed with Google’s Gemini 2.5 Flash, I sat down with Arvind Jain — ex-Google search luminary, Rubrik co-founder, and now CEO of Glean — just as his company released its agentic reasoning platform and swirled with rumors of a new round at a $7 billion valuation. We open on that whirlwind: why the model race is accelerating, why enterprises still gravitate to closed models, and when open-source variants might finally take over. Arvind argues that LLMs should “fade into the background,” leaving application builders to pick the right engine for each task. From there, we trace Glean’s three-act arc—enterprise search powered by transformers (2019), retrieval-augmented chat the moment ChatGPT hit, and now agents that have already logged 50 million real actions inside Glean’s enterprise customers. Arvind lifts the hood on permission-aware ranking, tool-use orchestration, and the routing layer that swaps Gemini for GPT on the fly. Along the way, he answers the hard questions: Do agents really double efficiency? Where’s the moat when every startup promises the same? Why are humans still in the review loop, and for how long? The conversation crescendos with a vision of work where every employee is flanked by a team of proactive AI coworkers—all drawing from a horizontal knowledge layer that knows the firm’s language better than any newcomer. If you want to know what’s actually working with AI in the enterprise, how to build agents that deliver ROI, and what the next era of work will look like, this episode is packed with specifics, technical insights, and bold predictions from one of the sharpest minds in the space. Glean Website - https://www.glean.com X/Twitter - https://x.com/gleanai Arvind Jain LinkedIn - https://www.linkedin.com/in/jain-arvind X/Twitter - https://x.com/jainarvind FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro & Glean’s $7B valuation rumor (02:01) The AI model explosion: open vs. closed in the enterprise (06:19) Why enterprises choose open source AI (and when) (10:33) The agent era: what are AI agents and why now? (12:41) Automating business processes: real-world agent use cases (16:46) Are we there yet? The reality of AI agents in 2025 (19:24) Glean’s origin story: reinventing enterprise search (26:38) Glean agents: from apps to agentic platforms (31:22) Horizontal vs. vertical: Glean’s strategic platform choice (34:14) How Glean’s enterprise search works (39:34) Staying LLM-agnostic: integrating new AI models (42:11) The architecture of Glean agents: tool use and beyond (43:50) Data flywheels and personalization in Glean (47:06) Moats, competition, and the future of work with AI agents

    52 min

