The Information Bottleneck

Ravid Shwartz-Ziv & Allen Roush

Two AI researchers, Ravid Shwartz-Ziv and Allen Roush, discuss the latest trends, news, and research in Generative AI, LLMs, GPUs, and Cloud Systems.

  1. EP20: Yann LeCun

    DEC 15

    EP20: Yann LeCun

    Yann LeCun – Why LLMs Will Never Get Us to AGI

    "The path to superintelligence - just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL - I think is complete bullshit. It's just never going to work." After 12 years at Meta, Turing Award winner Yann LeCun is betting his legacy on a radically different vision of AI. In this conversation, he explains why Silicon Valley's obsession with scaling language models is a dead end, why the hardest problem in AI is reaching dog-level intelligence (not human-level), and why his new company AMI is building world models that predict in abstract representation space rather than generating pixels. A toy sketch of that predict-in-representation-space idea follows these notes.

    Timestamps:
    (00:00:14) – Intro and welcome
    (00:01:12) – AMI: Why start a company now?
    (00:04:46) – Will AMI do research in the open?
    (00:06:44) – World models vs LLMs
    (00:09:44) – History of self-supervised learning
    (00:16:55) – Siamese networks and contrastive learning
    (00:25:14) – JEPA and learning in representation space
    (00:30:14) – Abstraction hierarchies in physics and AI
    (00:34:01) – World models as abstract simulators
    (00:38:14) – Object permanence and learning basic physics
    (00:40:35) – Game AI: Why NetHack is still impossible
    (00:44:22) – Moravec's Paradox and chess
    (00:55:14) – AI safety by construction, not fine-tuning
    (01:02:52) – Constrained generation techniques
    (01:04:20) – Meta's reorganization and FAIR's future
    (01:07:31) – SSI, Physical Intelligence, and Wayve
    (01:10:14) – Silicon Valley's "LLM-pilled" monoculture
    (01:15:56) – China vs US: The open source paradox
    (01:18:14) – Why start a company at 65?
    (01:25:14) – The AGI hype cycle has happened 6 times before
    (01:33:18) – Family and personal background
    (01:36:13) – Career advice: Learn things with a long shelf life
    (01:40:14) – Neuroscience and machine learning connections
    (01:48:17) – Continual learning: Is catastrophic forgetting solved?

    Music: "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. Changes: trimmed.
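    To make "predict in abstract representation space" concrete, here is a minimal JEPA-style training step (our own illustration, not AMI's or Meta's code; every module name and size below is invented). A context encoder and a predictor are trained so the predicted latent matches what a slowly updated target encoder assigns to the target view, so the loss never touches pixels:

```python
# Minimal JEPA-style sketch (illustrative only; not AMI's or Meta's code).
# Idea: predict the *representation* of a target view from a context view,
# so the training loss lives in embedding space, never in pixel space.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim_in=784, dim_z=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                 nn.Linear(256, dim_z))
    def forward(self, x):
        return self.net(x)

context_enc = Encoder()
predictor = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
target_enc = copy.deepcopy(context_enc)         # EMA copy; receives no gradients
for p in target_enc.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam([*context_enc.parameters(), *predictor.parameters()], lr=1e-3)

def train_step(x_context, x_target, ema=0.996):
    z_pred = predictor(context_enc(x_context))   # predicted latent of target view
    with torch.no_grad():
        z_tgt = target_enc(x_target)             # actual latent of target view
    loss = F.mse_loss(z_pred, z_tgt)             # loss in representation space
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                        # EMA update deters latent collapse
        for p_t, p_c in zip(target_enc.parameters(), context_enc.parameters()):
            p_t.mul_(ema).add_(p_c, alpha=1 - ema)
    return loss.item()

x = torch.randn(32, 784)                         # toy batch: two views of one sample
print(train_step(x + 0.1 * torch.randn_like(x), x))
```

    The EMA target encoder is one common guard against the collapsed solution where every input maps to the same latent; the Siamese-networks segment of the episode discusses alternatives.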

    1h 50m
  2. EP19: AI in Finance and Symbolic AI with Atlas Wang

    DEC 10

    EP19: AI in Finance and Symbolic AI with Atlas Wang

    Atlas Wang (UT Austin faculty, XTX Research Director) joins us to explore two fascinating frontiers: the foundations of symbolic AI and the practical challenges of building AI systems for quantitative finance. On the symbolic AI side, Atlas shares his recent work proving that neural networks can learn symbolic equations through gradient descent, a surprising result given that gradient descent is continuous while symbolic structures are discrete. We talk about why neural nets learn clean, compositional mathematical structures at all, which mathematical tools are involved, and the broader implications for understanding reasoning in AI systems. The conversation then turns to neuro-symbolic approaches in practice: agents that discover rules through continued learning, propose them symbolically, verify them against domain-specific checkers, and refine their understanding (a toy version of this loop is sketched after these notes). On the finance side, Atlas pulls back the curtain on what AI research looks like at a high-frequency trading firm. The core problem sounds simple (predict future prices from past data), but the challenge is extreme: markets are dominated by noise, predictions hover near zero correlation, and success means eking out tiny margins across astronomical numbers of trades. He explains why synthetic data techniques that work elsewhere don't translate easily, and why XTX is building time series foundation models rather than adapting language models. We also discuss the convergence of hiring between frontier AI labs and quantitative finance, and why this is an exceptional moment for ML researchers to consider the finance industry.

    Links:
    Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning - arxiv.org/abs/2506.21797
    Atlas website - https://www.vita-group.space/

    Guest: Atlas Wang (UT Austin / XTX)
    Hosts: Ravid Shwartz-Ziv & Allen Roush
    Music: "Kid Kodi" — Blue Dot Sessions. Source: Free Music Archive. Licensed CC BY-NC 4.0.
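    To picture the propose-verify-refine loop described above, here is a toy sketch; everything in it, from the rule format to the checker, is a hypothetical stand-in rather than the system from the paper:

```python
# Toy propose-verify-refine loop (hypothetical stand-ins, not the paper's system).
# An agent proposes symbolic rules about observed data, a domain checker
# verifies them on held-out cases, and rejected proposals are refined away.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    predicate: Callable[[List[int]], bool]  # symbolic rule as executable predicate

def propose(round_num):
    """Stand-in for a learned proposer emitting candidate rules."""
    candidates = [
        Rule("all_positive", lambda xs: all(x > 0 for x in xs)),
        Rule("monotone_nondecreasing",
             lambda xs: all(a <= b for a, b in zip(xs, xs[1:]))),
        Rule("sum_is_even", lambda xs: sum(xs) % 2 == 0),
    ]
    return candidates[round_num:]  # crude "refinement": drop earlier rejects

def verify(rule, held_out_cases):
    """Stand-in for a domain-specific checker with its own hidden cases."""
    return all(rule.predicate(case) for case in held_out_cases)

observed = [1, 2, 2, 5]
held_out = [[1, 3, 4], [2, 2, 7]]

accepted = []
for round_num in range(3):
    accepted = [r.name for r in propose(round_num)
                if r.predicate(observed) and verify(r, held_out)]
    if accepted:
        break

print("accepted rules:", accepted)  # ['all_positive', 'monotone_nondecreasing']
```

    In the real setting the proposer is a learned model and the checker a domain verifier; the toy only shows the control flow of discover, propose, verify, refine.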

    1h 11m
  3. EP18: AI Robotics

    DEC 1

    EP18: AI Robotics

    In this episode, we hosted Judah Goldfeder, a PhD candidate at Columbia University and student researcher at Google, to discuss robotics, reproducibility in ML, and smart buildings. Key topics covered:

    Robotics challenges: We discussed why robotics remains harder than many expected, compared to LLMs. The real world is unpredictable and unforgiving, and mistakes have physical consequences. Sim-to-real transfer remains a major bottleneck because simulators are tedious to configure accurately for each robot and environment. Unlike text, robotics lacks foundation models, partly due to limited clean, annotated datasets and the difficulty of collecting diverse real-world data.

    Reproducibility crisis: We discussed how self-reported benchmarks can lead to p-hacking and irreproducible results. Centralized evaluation systems (such as Kaggle or ImageNet challenges), where researchers submit algorithms for testing on hidden test sets, seem to drive faster progress (a minimal sketch of this protocol appears after these notes).

    Smart buildings: Judah's work at Google focuses on using ML to optimize HVAC systems, potentially reducing energy costs and carbon emissions significantly. The challenge is that every building is different, which makes simulation configuration extremely labor-intensive. Generative AI could help by automating the conversion of floor plans or images into accurate building simulations.

    Links: Judah website - https://judahgoldfeder.com/
    Music: "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. Changes: trimmed.
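    As a tiny illustration of the centralized-evaluation protocol mentioned above (Kaggle-style; the names, data, and scoring rule here are invented), the key point is that the labels live only on the evaluator's side:

```python
# Minimal sketch of centralized evaluation on a hidden test set
# (Kaggle/ImageNet-challenge style; names and data are invented).

HIDDEN_LABELS = [1, 0, 1, 1, 0]  # held by the evaluator, never published

def score_submission(predictions):
    """Server-side scoring: participants see the score, never the labels."""
    assert len(predictions) == len(HIDDEN_LABELS)
    correct = sum(p == y for p, y in zip(predictions, HIDDEN_LABELS))
    return correct / len(HIDDEN_LABELS)

# A participant submits predictions; they cannot tune against labels
# they never see, which is what curbs the p-hacking discussed above.
submission = [1, 0, 1, 0, 0]
print(f"leaderboard accuracy: {score_submission(submission):.2f}")  # 0.80
```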

    1h 45m
  4. EP17: RL with Will Brown

    NOV 24

    EP17: RL with Will Brown

    In this episode, we talk with Will Brown, a research lead at Prime Intellect, about his journey into reinforcement learning (RL) and multi-agent systems, exploring their theoretical foundations and practical applications. We discuss the importance of RL in the current LLM pipeline and the challenges it faces, along with applying agentic workflows to real-world applications and the ongoing evolution of AI development.

    Chapters:
    00:00 Introduction to Reinforcement Learning and Will's Journey
    03:10 Theoretical Foundations of Multi-Agent Systems
    06:09 Transitioning from Theory to Practical Applications
    09:01 The Role of Game Theory in AI
    11:55 Exploring the Complexity of Games and AI
    14:56 Optimization Techniques in Reinforcement Learning
    17:58 The Evolution of RL in LLMs
    21:04 Challenges and Opportunities in RL for LLMs
    23:56 Key Components for Successful RL Implementation
    27:00 Future Directions in Reinforcement Learning
    36:29 Exploring Agentic Reinforcement Learning Paradigms
    38:45 The Role of Intermediate Results in RL
    41:16 Multi-Agent Systems: Challenges and Opportunities
    45:08 Distributed Environments and Decentralized RL
    49:31 Prompt Optimization Techniques in RL
    52:25 Statistical Rigor in Evaluations
    55:49 Future Directions in Reinforcement Learning
    59:50 Task-Specific Models vs. General Models
    01:02:04 Insights on Random Verifiers and Learning Dynamics
    01:04:39 Real-World Applications of RL and Evaluation Challenges
    01:05:58 Prime RL Framework: Goals and Trade-offs
    01:10:38 Open Source vs. Closed Source Models
    01:13:08 Continuous Learning and Knowledge Improvement

    Music: "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. Changes: trimmed.

    1h 6m
  5. EP16: AI News and Papers

    NOV 17

    EP16: AI News and Papers

    In this episode, we discuss various topics in AI, including the challenges of the conference review process, the capabilities of Kimi K2 Thinking, the advancements in TPU technology, the significance of real-world data in robotics, and recent innovations in AI research. We also talk about the cool "Chain-of-Thought Hijacking" paper, how to use simple ideas to scale RL, and the implications of the Kosmos project, which aims to enable autonomous scientific discovery through AI.

    Papers and links:
    Chain-of-Thought Hijacking - https://arxiv.org/pdf/2510.26418
    Kosmos: An AI Scientist for Autonomous Discovery - https://t.co/9pCr6AUXAe
    JustRL: Scaling a 1.5B LLM with a Simple RL Recipe - https://relieved-cafe-fe1.notion.site/JustRL-Scaling-a-1-5B-LLM-with-a-Simple-RL-Recipe-24f6198b0b6b80e48e74f519bfdaf0a8

    Chapters:
    00:00 Navigating the Peer Review Process
    04:17 Kimi K2 Thinking: A New Era in AI
    12:27 The Future of Tool Calls in AI
    17:12 Exploring Google's New TPUs
    22:04 The Importance of Real-World Data in Robotics
    28:10 World Models: The Next Frontier in AI
    31:36 Nvidia's Dominance in AI Partnerships
    32:08 Exploring Recent AI Research Papers
    37:46 Chain of Thought Hijacking: A New Threat
    43:05 Simplifying Reinforcement Learning Training
    54:03 Kosmos: AI for Autonomous Scientific Discovery

    Music: "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. Changes: trimmed.

    59 min
  6. EP14: AI News and Papers

    NOV 10

    EP14: AI News and Papers

    In this episode, we talked about AI news and recent papers. We explored the complexities of using AI models in healthcare (the Nature Medicine paper on GPT-5's fragile intelligence in medical contexts). We discussed the delicate balance between leveraging LLMs as powerful research tools and the risks of over-reliance, touching on issues such as hallucinations, medical disagreements among practitioners, and the need for better education on responsible AI use in healthcare. We also talked about Stanford's "Cartridges" paper, which presents an innovative approach to long-context language models. The paper tackles the expensive computational costs of billion-token context windows by compressing KV caches through a clever "self-study" method using synthetic question-answer pairs and context distillation (a loose sketch of the recipe appears after these notes). We discussed the implications for personalization, composability, and making long-context models more practical. Additionally, we explored the "Continuous Autoregressive Language Models" paper and touched on insights from the Smol Training Playbook.

    Papers discussed:
    The fragile intelligence of GPT-5 in medicine: https://www.nature.com/articles/s41591-025-04008-8
    Cartridges: Lightweight and general-purpose long context representations via self-study: https://arxiv.org/abs/2506.06266
    Continuous Autoregressive Language Models: https://arxiv.org/abs/2510.27688
    The Smol Training Playbook: https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook

    Music: "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. Changes: trimmed.

    This is an experimental format for us, just news and papers without a guest interview. Let us know what you think!
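    To give a feel for the "self-study" recipe summarized above, here is a loose sketch based only on that summary, not on the paper's code; the generation and logits functions are hypothetical placeholders:

```python
# Loose sketch of the Cartridges "self-study" idea (placeholders throughout;
# not the paper's code). A teacher that sees the full long context answers
# synthetic questions, and a small trainable cache is distilled to match it.
import torch
import torch.nn.functional as F

def generate_synthetic_qa(corpus, n):
    """Placeholder: have a model write questions about the corpus."""
    return [f"Synthetic question {i} about the corpus?" for i in range(n)]

def teacher_logits(corpus, question):
    """Placeholder: teacher answers with the full document in its window."""
    return torch.randn(10, 32000)          # (answer_tokens, vocab)

def student_logits(cartridge, question):
    """Placeholder: student conditions only on the small cache, not the corpus."""
    return cartridge.mean() + torch.randn(10, 32000)

corpus = "a document far too long to keep in the KV cache at serving time"
cartridge = torch.zeros(64, 1024, requires_grad=True)  # compact KV-like cache
opt = torch.optim.Adam([cartridge], lr=1e-3)

for q in generate_synthetic_qa(corpus, n=20):
    with torch.no_grad():
        t = F.log_softmax(teacher_logits(corpus, q), dim=-1)
    s = F.log_softmax(student_logits(cartridge, q), dim=-1)
    # Context distillation: match the teacher's answer-token distributions
    loss = F.kl_div(s, t, log_target=True, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
# At serving time, load `cartridge` instead of re-processing the whole corpus.
```

    The shape to take away: the expensive full-context pass happens offline during self-study, and inference then pays only for the compact cache.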

    57 min
  7. EP13: Recurrent-Depth Models and Latent Reasoning with Jonas Geiping

    NOV 7

    EP13: Recurrent-Depth Models and Latent Reasoning with Jonas Geiping

    In this episode, we host Jonas Geiping from the ELLIS Institute & Max Planck Institute for Intelligent Systems, Tübingen AI Center, Germany. We talked about his research on recurrent-depth models and latent reasoning in large language models (LLMs): what these models can and can't do, the challenges and likely next breakthroughs in the field, world models, and the future of developing better models. We also talked about safety and interpretability, and the role of scaling laws in AI development. A minimal sketch of the recurrent-depth idea appears after these notes.

    Chapters:
    00:00 Introduction and Guest Introduction
    01:03 Peer Review in Preprint Servers
    06:57 New Developments in Coding Models
    09:34 Open Source Models in Europe
    11:00 Dynamic Layers in LLMs
    26:05 Training Playbook Insights
    30:05 Recurrent Depth Models and Reasoning Tasks
    43:59 Exploring Recursive Reasoning Models
    46:46 The Role of World Models in AI
    48:41 Innovations in AI Training and Simulation
    50:39 The Promise of Recurrent Depth Models
    52:34 Navigating the Future of AI Algorithms
    54:44 The Bitter Lesson of AI Development
    59:11 Advising the Next Generation of Researchers
    01:06:42 Safety and Interpretability in AI Models
    01:10:46 Scaling Laws and Their Implications
    01:16:19 The Role of PhDs in AI Research

    Links and papers:
    Jonas' website - https://jonasgeiping.github.io/
    Scaling up test-time compute with latent reasoning: A recurrent depth approach - https://arxiv.org/abs/2502.05171
    The Smol Training Playbook: The Secrets to Building World-Class LLMs - https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook
    VaultGemma: A Differentially Private Gemma Model - https://arxiv.org/abs/2510.15001

    Music: "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0. Changes: trimmed.
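    A minimal sketch of the recurrent-depth idea from the paper linked above (sizes and modules are illustrative, not the paper's actual architecture code): one weight-tied core block is iterated in latent space, and the iteration count, hence the test-time compute, can be chosen per input:

```python
# Minimal recurrent-depth sketch (illustrative; not the paper's code).
# A shared core block is iterated r times in latent space between a
# prelude (embedding) and a coda (readout); raising r at inference spends
# more compute on an input without adding any parameters.
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.prelude = nn.Embedding(vocab, dim)       # tokens -> latent
        self.core = nn.Sequential(                    # shared, weight-tied block
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.coda = nn.Linear(dim, vocab)             # latent -> logits

    def forward(self, tokens, r=4):
        e = self.prelude(tokens)
        s = torch.randn_like(e)                       # random initial latent state
        for _ in range(r):                            # latent "reasoning" steps
            s = self.core(torch.cat([s, e], dim=-1))  # re-inject input each step
        return self.coda(s)

model = RecurrentDepthLM()
tokens = torch.randint(0, 1000, (2, 16))
cheap = model(tokens, r=2)   # little test-time compute
deep = model(tokens, r=32)   # same weights, many more latent iterations
print(cheap.shape, deep.shape)  # torch.Size([2, 16, 1000]) twice
```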

    1h 21m
