Unsupervised Learning with Jacob Effron

by Redpoint Ventures

We probe the sharpest minds in AI in search of the truth about what’s real today, what will be real in the future, and what it all means for businesses and the world. If you’re a builder, researcher, or investor navigating the AI world, this podcast will help you deconstruct and understand the most important breakthroughs and see a clearer picture of reality. Follow this show and consider enabling notifications to stay up to date on our latest episodes. Unsupervised Learning is a podcast by Redpoint Ventures, an early-stage venture capital fund that has invested in companies like Snowflake, Stripe, and Mistral. Hosted by Redpoint investor Jacob Effron alongside Patrick Chase, Jordan Segall, and Erica Brescia.

  1. 1 DAY AGO

    Ep 86: Yann LeCun on Leaving Meta, Breaking The LLM Paradigm, & Why Hinton is Wrong

    Yann LeCun, Turing Award winner and former Chief AI Scientist at Meta, joins Jacob Effron. The conversation centers on Yann's contrarian thesis that LLMs are a dead end on the path to human-level intelligence, despite being useful products — because they can't predict the consequences of their actions, can't plan, and fundamentally can't model the messy, high-dimensional real world. He unpacks his alternative architecture, JEPA (Joint Embedding Predictive Architecture), which learns abstract representations rather than generating pixel-level predictions, and explains why this approach is essential for robotics, industrial applications, and any system that needs to operate beyond the substrate of language. Yann also reveals the real story behind his departure from Meta (he had zero technical influence on Llama, contrary to the public narrative), the genesis of his Tapestry project for sovereign open-source AI, why he believes LLMs are intrinsically unsafe, where he diverges from his fellow Turing laureates Hinton and Bengio, and why he predicts the industry will recognize the paradigm shift by early 2027. Throughout, he offers candid reflections on the tension between research and product at major labs, and why he intentionally headquartered AMI Labs in Paris with zero Silicon Valley VC money.

    (0:00) Introduction
    (01:45) Why LLMs Aren't the Path to Intelligence
    (07:51) AMI and World Models
    (12:07) The JEPA Architecture Explained
    (15:55) Problems with Robotics Models Today
    (20:37) Silicon Valley Herd Behavior
    (28:18) Tapestry: Sovereign AI for the Rest of the World
    (35:49) OpenAI Is the Next Sun Microsystems
    (40:51) Why Yann's Views Diverged from Hinton & Bengio
    (44:32) LLMs Are Intrinsically Unsafe
    (58:00) Why Yann Left Meta
    (1:00:26) Reflections on FAIR
    (1:12:11) Advice for PhD Students

    LeWorldModel Paper: https://arxiv.org/abs/2603.19312

    With your host:
    @jacobeffron - Partner at Redpoint

    1 hr 22 min
  2. APR 23

    Ep 85: Has AI Infra Stabilized, FM Vibe Shift, & What's Next for Coding Agents

    This episode is a wide-ranging conversation between Jacob and Swyx (Shawn Wang), an AI engineer, podcaster, and now operator at Cognition, who sits at a uniquely informed intersection of builder, investor, and community organizer in the AI world. The two cover the current state of the AI engineering zeitgeist: from the stabilization of agent infrastructure and the surprising stickiness of Claude Code, to the competitive dynamics of the AI coding wars, the rise of open models, the threat to traditional SaaS, and the frontier questions around world models, memory, and what it actually means for AI to "understand" something. The episode is grounded in practitioner-level candor, with Swyx offering real takes from running AIE conferences, working inside Cognition, and thinking deeply about what the next wave of AI-native software development looks like.

    (0:00) Intro
    (1:17) What the Top AI Engineers Are Thinking About
    (2:13) Has AI Infra Finally Stabilized?
    (6:39) When Does Doing RL In-House Make Sense?
    (11:26) Why Selling Dev Tools to Agents is Different
    (17:18) AI Coding Wars
    (29:04) Consumer AI Plateau
    (30:22) Codex vs Claude Code
    (44:52) Future of Open Models

    With your co-hosts:
    @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall - Partner at Redpoint

    55 min
  3. APR 9

    Ep 84: OpenAI’s Chief Scientist on Continual Learning Hype, RL Beyond Code, & Future Alignment Directions

    Jakub Pachocki, OpenAI's Chief Scientist, sits down with Jacob to cover the full arc of where AI research stands today and where it's headed. The conversation spans the explosive growth of coding agents and what it signals about near-term AI capability, the use of math and physics benchmarks as proxies for general intelligence, how reinforcement learning is being extended beyond easily verified domains toward longer-horizon tasks, and what it means to run a research organization at the precise moment the models themselves are starting to accelerate the research. Jakub shares a candid take on the competitive landscape, why chain-of-thought monitoring is one of the most promising tools in the alignment toolkit, and — with unusual directness — why the concentration of power enabled by highly automated AI organizations is a societal problem that doesn't yet have an obvious solution.

    (0:00) Intro
    (1:53) Research Intern Capability Timelines
    (4:59) Math Breakthroughs
    (7:59) RL Beyond Verifiable Tasks
    (12:32) RL vs In-Context
    (19:01) Allocating Compute Internally
    (28:18) AI for Science
    (31:40) Pattern Matching
    (33:23) Solving the Hardest Math Problems
    (37:40) Chain of Thought Monitoring
    (44:33) Generalization and Value Alignment in Models
    (47:57) Inside OpenAI
    (51:55) Quickfire

    With your co-hosts:
    @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall - Partner at Redpoint

    59 min
  4. MAR 11

    Ep 82: Behind Legora's $550M Raise, Model Competition, Doubling Revenue Every Quarter, & US Expansion

    Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company's $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside. Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug. On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora's depth. The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora's compounding advantage.   
    [0:00] Intro
    [1:16] Legora's Series D Story
    [3:24] Why You Need Low Ego to Build in AI
    [5:58] From 60% to 100% Accuracy in One Summer
    [7:04] Law Firm Economics Shift
    [14:09] Pricing Seats Vs Outcomes
    [18:31] Why Foundation Models Entering Legal Helps Legora
    [30:10] Convincing a 75-Year-Old Partner to Go All In
    [33:02] Hiring Legal Engineers
    [34:32] Running an AI-Native Company
    [35:57] The Opus 4.5 Christmas Breakthrough
    [40:02] Building With Customers
    [44:01] All In On US Expansion
    [51:22] Stockholm Startup DNA

    With your co-hosts:
    @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall - Partner at Redpoint

    54 min
  5. JAN 29

    Ep 81: Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL

    This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning — the ability for models to update themselves based on failure and work through problems autonomously — is necessary for AGI, as current models hit walls and become "hopeless" when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years), while warning about the societal implications of widespread work automation that we're not adequately preparing for.

    (0:00) Intro
    (1:26) Scaling Paradigms in AI
    (3:36) Challenges in Reinforcement Learning
    (11:48) AGI Timelines
    (18:36) Converging Labs
    (25:05) Jerry’s Departure from OpenAI
    (31:18) Pivotal Decisions in OpenAI’s Journey
    (35:06) Balancing Research and Product Development
    (38:42) The Future of AI Coding
    (41:33) Specialization vs. Generalization in AI
    (48:47) Hiring and Building Research Teams
    (55:21) Quickfire

    With your co-hosts:
    @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall - Partner at Redpoint

    1 hr 3 min
  6. DEC 18, 2025 ·  BONUS

    AI Vibe Check: The Actual Bottleneck In Research, SSI’s Mystique, & Spicy 2026 Predictions

    Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities. They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically — accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas. The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.

    (0:00) Intro
    (1:51) Reflections on NeurIPS Conference
    (5:14) Are AI Models Plateauing?
    (11:12) Reinforcement Learning and Enterprise Adoption
    (16:16) Future Research Vectors in AI
    (28:40) The Role of Neo Labs
    (39:35) The Myth of the Great Man Theory in Science
    (41:47) OpenAI's Code Red and Market Position
    (47:19) Disney and OpenAI's Strategic Partnership
    (51:28) Meta's Super Intelligence Team Challenges
    (54:33) US-China AI Chip Dynamics
    (1:00:54) Amazon's Nova Forge and Enterprise AI
    (1:03:38) End of Year Reflections and Predictions

    With your co-hosts:
    @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall - Partner at Redpoint

    1 hr 18 min
  7. DEC 15, 2025

    Ep 80: CEO of Surge AI Edwin Chen on Why Frontier Labs Are Diverging, RL Environments & Developing Model Taste

    Edwin Chen is the founder and CEO of Surge AI, the data infrastructure company behind nearly every major frontier model. Surge works with OpenAI, Anthropic, Meta, and Google, providing the high-quality data and evaluation infrastructure that powers their models.

    Edwin reveals why optimizing for popular benchmarks like LMArena is "basically optimizing for clickbait," how one frontier lab's models regressed for 6-12 months without anyone knowing, and why the industry's approach to measurement is fundamentally broken. Jacob and Edwin discuss what actually makes elite AI evaluators, why "there's never going to be a one size fits all solution" for AI models, and how frontier labs are taking surprisingly divergent paths to AGI.

    (0:00) Intro
    (0:56) The Pitfalls of Optimizing for LMArena
    (4:34) Issues with Data Quality and Measurement
    (9:44) The Importance of Human Evaluations
    (13:40) The Rise of RL Environments
    (17:21) Challenges and Lessons in Model Training
    (19:59) Silicon Valley's Pivot Culture
    (23:06) Technology-Driven Approach
    (24:18) Quality Beyond Credentials
    (27:51) Impact of Scale Acquisition
    (28:35) Hiring for Research Culture
    (30:48) Divergence in AI Training Paradigms
    (34:16) Future of AI Models
    (39:32) Multimodal AI and Quality
    (43:44) Quickfire

    With your co-hosts:
    @jacobeffron - Partner at Redpoint, Former PM Flatiron Health
    @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
    @ericabrescia - Former COO GitHub, Founder Bitnami (acq’d by VMware)
    @jordan_segall - Partner at Redpoint

    48 min

Ratings and Reviews

5
out of 5
3 ratings

