The Information Bottleneck

Ravid Shwartz-Ziv & Allen Roush

Two AI researchers, Ravid Shwartz-Ziv and Allen Roush, discuss the latest trends, news, and research in generative AI, LLMs, GPUs, and cloud systems.

  1. EP27: Medical Foundation Models - with Tanishq Abraham (Sophont.AI)

    4H AGO

    Tanishq Abraham, CEO and co-founder of Sophont.ai, joins us to talk about building foundation models specifically for medicine. Sophont is trying to be something like an OpenAI or Anthropic for healthcare: training models across pathology, neuroimaging, and clinical text, with the goal of eventually fusing them into one multimodal system. The surprising part: their pathology model, trained on 12,000 public slides, performs on par with models trained on millions of private ones. Data quality beats data quantity. We talk about what actually excites Tanishq, which is not replacing doctors but finding things doctors can't see: AI predicting gene mutations from a tissue slide, or cardiovascular risk from an eye scan. We also talk about regulation, and why the picture is less scary than people assume. Text-based clinical decision support can ship without FDA approval, pharma partnerships offer near-term impact, and the five-to-ten-year timeline people fear is really about drug discovery, not all of medical AI.

    Takeaways:
    The real promise of medical AI is finding hidden signals in existing data, not just automating doctors
    Small, curated public datasets can rival massive private ones
    Multimodal fusion is the goal, but you need strong individual encoders first
    AI research itself might get automated sooner than biology or chemistry
    FDA regulation has more flexibility than most people think

    Timeline: (00:12) Introduction and guest welcome (02:32) Anthropic's ad about ChatGPT ads (07:26) xAI merging into SpaceX (13:32) Vibe coding one year later (17:00) Claude Code and agentic workflows (21:52) Can AI automate AI research? (26:57) What is medical AI (31:06) Sophont as a frontier medical AI lab (33:52) Public vs. private data - 12K slides vs. millions (36:43) Domain expertise vs. scaling (41:54) Cancer, diabetes, and personal stakes (47:52) Classification vs. prediction in medicine (50:36) When doctors disagree (54:43) Quackery and AI (57:15) Uncertainty in medical AI (1:03:11) Will AI replace doctors? (1:07:24) Self-supervised learning on sleep data (1:10:10) Aligning modalities (1:13:17) FDA regulation (1:22:28) Closing

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 26m
  2. EP26: Measuring Intelligence in the Wild - Arena and the Future of AI Evaluation

    5D AGO

    Anastasios Angelopoulos, Co-Founder and CEO of Arena AI (formerly LMArena), joins us to talk about why static benchmarks are failing, how human preference data actually works under the hood, and what it takes to be the "gold standard" of AI evaluation. Anastasios sits at a fascinating intersection: a theoretical statistician running the platform that every major lab watches when it releases a model. We talk about the messiness of AI-generated code slop (yes, he hides Claude's commits too), then dig into the statistical machinery that powers Arena's leaderboards and why getting evaluation right is harder than most people think. We explore why style control is both necessary and philosophically tricky: you can regress away markdown headers and response length, but separating style from substance is a genuinely unsolved causal inference problem (a toy Bradley-Terry sketch with style covariates appears after the episode list). We also get into why users are surprisingly good judges of model quality, how Arena serves as a pre-release testing ground for labs shipping stealth models under codenames, and whether the fragmentation of the AI market (Anthropic going enterprise, OpenAI going consumer, everyone going multimodal) is actually a feature rather than a bug. Plus, we discuss the role of rigorous statistics in the age of "just run it again," why structured decoding can hurt model performance, and what Arena's 2026 roadmap looks like.

    Timeline: (00:12) Introduction and Anastasios's Background (00:55) What Arena Does and Why Static Benchmarks Aren't Enough (02:26) Coverage of Use Cases - Is There Enough? (04:22) Style Control and the Bradley-Terry Methodology (08:35) Can You Actually Separate Style from Substance? (10:24) Measuring Slop - And the Anti-Slop Paper Plug (11:52) Can Users Judge Factual Correctness? (13:31) Tool Use and Agentic Evaluation on Arena (14:14) Intermediate Feedback Signals Beyond Final Preference (15:30) Tool Calling Accuracy and Code Arena (17:42) AI-Generated Code Slop and Hiding Claude's Commits (19:49) Do We Need Separate Code Streams for Humans and LLMs? (20:01) RL Flywheels and Arena's Preference Data (21:16) Focus as a Startup - Being the Evaluation Company (22:16) Structured vs. Unconstrained Generation (25:00) The Role of Rigorous Statistics in the Age of AI (29:23) LLM Sampling Parameters and Evaluation Complexity (30:56) Model Versioning and the Frequentist Approach to Fairness (32:12) Quantization and Its Effects on Model Quality (33:10) Pre-Release Testing and Stealth Models (34:23) Transparency - What to Share with the Public vs. Labs (36:27) When Winning Models Don't Get Released (36:59) Why Users Keep Coming Back to Arena (38:19) Market Fragmentation and Arena's Future Value (39:37) Custom Evaluation Frameworks for Specific Users (40:03) Arena's 2026 Roadmap - Science, Methodology, and New Paradigms (42:15) The Economics of Free Inference (43:13) Hiring and Closing Thoughts

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    45 min
  3. EP25: Personalization, Data, and the Chaos of Fine-Tuning with Fred Sala (UW-Madison / Snorkel AI)

    FEB 17

    Fred Sala, Assistant Professor at UW-Madison and Chief Scientist at Snorkel AI, joins us to talk about why personalization might be the next frontier for LLMs, why data still matters more than architecture, and how weak supervision refuses to die. Fred sits at a rare intersection: building the theory of data-centric AI in academia while shipping it to enterprise clients at Snorkel. We talk about the chaos of OpenClaw (the personal AI assistant that's getting people hacked the old-fashioned way, via open ports), then focus on one of the most important questions: how do you make a model truly yours? We dig into why prompting your preferences doesn't scale, why even LoRA might be too expensive for per-user personalization, and why activation steering methods like REFT could be the sweet spot (a toy activation steering sketch appears after the episode list). We also explore self-distillation for continual learning, the unsolved problem of building realistic personas for evaluation, and Fred's take on the data vs. architecture debate (spoiler: data is still undervalued). Plus, we discuss why the internet's "Ouroboros effect" might not doom pre-training as much as people fear, and what happens when models become smarter than the humans who generate their training data.

    Takeaways:
    Personalization requires ultra-efficient methods - even one LoRA per user is probably too expensive; activation steering is the promising middle ground
    The "pink elephant problem" makes prompt-based personalization fundamentally limited - telling a model what not to do often makes it do it more
    Self-distillation can enable on-policy continual learning without expensive RL reward functions, dramatically reducing catastrophic forgetting
    Data is still undervalued relative to architecture and compute, especially high-quality post-training data, which is actually improving, not getting worse
    Weak supervision principles are alive and well inside modern LLM data pipelines, even if people don't call it that anymore

    Timeline: (00:13) Introduction and Fred's Background (00:39) OpenClaw - The Personal AI Assistant Taking Over Macs (03:43) Agent Security Risks and the Privacy Problem (05:13) Claude Code, Permissions, and Living Dangerously (07:47) AI Social Media and Agents Talking to Each Other (08:56) AI Persuasion and Competitive Debate (09:51) Self-Distillation for Continual Learning (12:43) What Does Continual Learning Actually Mean? (14:12) Updating Weights on the Fly - A Grand Challenge (15:09) The Personalization Problem - Motivation and Use Cases (17:41) The Pink Elephant Problem with Prompt-Based Personalization (19:58) Taxonomy of Personalization - Preferences vs. Tone vs. Style (21:31) Activation Steering, REFT, and Parameter-Efficient Fine-Tuning (27:00) Evaluating Personalization - Benchmarks and Personas (31:14) Unlearning and Un-Personalization (31:51) Cultural Alignment as Group-Level Personalization (41:00) Can LLM Personas Replace Surveys and Polling? (44:32) Is Continued Pre-Training Still Relevant? (46:28) Data vs. Architecture - What Matters More? (52:25) Multi-Epoch Training - Is It Over? (54:53) What Makes Good Data? Matching Real-World Usage (59:23) Decomposing Uncertainty for Better Data Selection (1:01:52) Mapping Human Difficulty to Model Difficulty (1:04:49) Scaling Small Ideas - From Academic Proof to Frontier Models (1:12:01) What Happens When Models Surpass Human Training Data? (1:15:24) Closing Thoughts

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 16m
  4. EP24: Can AI Learn to Think About Money? - with Bayan Bruss (Capital One)

    FEB 8

    Bayan Bruss, VP of Applied AI at Capital One, joins us to talk about building AI systems that can make autonomous financial decisions, and why money might be the hardest problem in machine learning. Bayan leads Capital One's AI Foundations team, which is working toward a destination most people don't associate with banking: getting AI systems to perceive financial ecosystems, form beliefs about the future, and take actions based on those beliefs. It's a framework that sounds simple until you realize you're asking a model to predict whether someone will pay back a loan over 30 years while the world changes around them. We get into why LLMs are a bad fit for ingesting 5,000 credit card transactions, why synthetic data works surprisingly well for time series, and the tension between end-to-end learning and regulatory requirements that demand you know exactly what your model learned. We also discuss reasoning in language vs. in latent space: if you wouldn't trust a self-driving car that translated images to words before deciding to turn, should you trust a financial system that does all its reasoning in token space?

    Takeaways:
    Money is a behavioral science problem - AI in finance requires understanding people, not just numbers
    Foundation models pre-trained on web text don't outperform purpose-built models for financial tasks; you're better off building a standalone encoder for financial data
    Synthetic data works surprisingly well for time series, possibly because real-world time series lives on a simpler manifold than we assume
    Explainability in ML is fundamentally unsatisfying because people want causality from non-causal models
    Financial AI needs world models that can imagine alternative futures, not just fit historical data

    Timeline: (00:24) Introduction and Bayan's Background (00:42) Claude Code, Vibe Coding - Hype or AGI? (05:59) The Future of Software Engineering and Abstraction (11:20) Abstraction Layers and Karpathy's Take (13:54) Hamming, Kuhn, and Scientific Revolutions in AI (19:24) Stack Overflow's Decline and Proof of Humanity (23:07) Why We Still Trust Humans Over LLMs (30:45) Deep Dive: AI in Banking and Consumer Finance (34:17) Are Markets Efficient? Behavioral Economics vs. Classical Views (37:14) The Components of a Financial Decision: Perception, Belief, Action (42:15) Protected Variables, Proxy Features, and Fairness in Lending (45:05) Explainability: Roller Skating on Marbles (47:55) Sparse Autoencoders, Interpretability, and Turtles All the Way Down (51:57) Foundation Models for Finance - Web Text vs. Purpose-Built (53:09) Time Series, Synthetic Data, and TabPFN (59:44) Feeding Tabular Data to VLMs - Graphs Beat Raw Numbers (1:03:35) Reasoning in Language vs. Latent Space (1:08:24) Is Language the Optimal Representation? Chinese Compression and Information Density (1:13:37) Personalization and Predicting Human Behavior (1:21:36) World Models, Uncertainty, and Professional Worrying (1:24:07) Prediction Markets and Insider Betting (1:26:33) Can LLMs Predict Stocks? (1:29:11) Multi-Agent Systems for Financial Decisions

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 32m
  5. EP23: Building Open Source AI Frameworks: David Mezzetti on TxtAI and Local-First AI

    FEB 1

    David Mezzetti, creator of txtai, joins us to talk about building open source AI frameworks as a solo developer, and why local-first AI still matters in the age of API-everything. David's path, from running a 50-person IT company through an acquisition to building one of the most well-regarded AI orchestration libraries, shows how constraints can breed better design. txtai started during COVID, when he was doing coronavirus literature research and realized semantic search could transform how we find information. We get into the evolution of the AI framework landscape, from the early days of vector embeddings to RAG to LLM orchestration. David was initially stubborn about not supporting OpenAI's API, wanting to keep everything local. He admits that probably cost him some early traction compared to LangChain, but it also shaped txtai's philosophy: you shouldn't need permission to build with AI. We also talk about small models and some genuinely practical insights: a 20-million-parameter model running on CPU might be all you need. On the future of coding with AI, David has come around on "vibe coding" and notes that well-documented frameworks with lots of examples are perfectly positioned for this new world.

    Takeaways:
    Local-first AI gives you control, reproducibility, and often better performance for your domain
    Small models (even 20M parameters) can solve real problems on CPU
    Good documentation and examples make your framework AI-coding friendly
    Open source should mean actually contributing, not just publishing code
    Solo developers can compete by staying focused and being willing to evolve

    Timeline: (00:14) Introduction and David's Background (07:44) txtai History and Evolution (12:04) Framework Landscape: LangChain, LlamaIndex, Haystack (15:16) Can AI Re-implement Frameworks? (24:14) API Specs: OpenAI vs Anthropic (26:46) Running an Open Source Consulting Business (32:51) Origin Story: COVID, Kaggle, and Medical Literature (43:08) Open Source Philosophy and Giving Back (47:16) Ethics of Local AI and Developer Freedom (01:06:44) Human in the Loop and AI-Generated Code (01:09:31) The Future of Work and Automation

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 15m
  6. EP22: Data Curation for LLMs with Cody Blakeney (Datology AI)

    JAN 20

    Cody Blakeney from Datology AI joins us to talk about data curation: the unglamorous but critical work of figuring out what to actually train models on. Cody's path from writing CUDA kernels to spending his days staring at weird internet text tells you something important: data quality can account for half or more of a model's final performance, on par with major architectural breakthroughs. We get into the differences between pre-training, mid-training, and post-training data. Mid-training in particular has become a key technique for squeezing value out of rare, high-quality datasets. Cody's team stumbled onto it while solving a practical problem: how do you figure out whether a 5-billion-token dataset is actually useful when you can't afford hundreds of experimental runs? We also talk about data filtering and some genuinely surprising findings: the documents that make the best training data are often short and dense with information. Those nicely written blog posts with personal anecdotes? It turns out models don't learn as well from them (a toy quality-filter sketch appears after the episode list). On synthetic data, Cody thinks pre-training is still in its early days, where most techniques are variations on a few core ideas, but there's huge potential. He's excited about connecting RL failures back to mid-training: when models fail at tasks, use that signal to generate targeted training data.

    Takeaways:
    Data work is high-leverage but underappreciated
    Mid-training helps extract signal from small, valuable datasets
    Good filters favor dense, factual text over polished prose
    Synthetic data for pre-training works surprisingly well, but remains primitive
    Optimal data mixtures depend on model scale; smaller models need more aggressive distribution shifts

    Timeline: (00:12) Introduction to Data Curation in LLMs (05:14) The Importance of Data Quality (10:15) Pre-training vs Post-training Data (15:22) Strategies for Effective Data Utilization (20:15) Benchmarking and Model Evaluation (28:28) Maximizing Perplexity and Coherence (30:27) Measuring Quality in Data (32:56) The Role of Filters in Data Selection (34:19) Understanding High-Quality Data (39:15) Mid-Training and Its Importance (46:51) Future of Data Sources (48:13) Synthetic Data's Role in Pre-Training (53:10) Creating Effective Synthetic Data (57:39) The Debate on Pure Synthetic Data (01:00:25) Navigating AI Training and Legal Challenges (01:02:34) The Controversy of AI in the Art Community (01:05:29) Exploring Synthetic Data and Its Efficiency (01:11:21) The Future of Domain-Specific vs. General Models (01:22:06) Bias in Pre-trained Models and Data Selection (01:28:27) The Potential of Synthetic Data Over Human Data

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 26m
  7. EP21: Privacy in the Age of Agents with Niloofar Mireshghallah

    JAN 7

    Guest: Niloofar Mireshghallah (Incoming Assistant Professor at CMU, Member of Technical Staff at Humans and AI). In this episode, we dive into AI privacy, frontier model capabilities, and why academia still matters. We kick off by discussing GPT-5.2 and whether models rely more on parametric knowledge or on context. Niloofar shares how reasoning models actually defer to context, even accepting obviously false information to "roll with it." On privacy, Niloofar challenges conventional wisdom: memorization isn't the problem anymore. The real threats are aggregation attacks (finding someone's pet name in HTML metadata), inference attacks (models are expert geoguessers), and input-output leakage in agentic workflows. We also explore linguistic colonialism in AI, or how models fail for non-English languages, sometimes inventing cultural traditions. The episode wraps with a call for researchers to tackle problems industry ignores: AI for science, education tools that preserve the struggle of learning, and privacy-preserving collaboration between small local models and large commercial ones.

    Timeline: [0:00] Intro [1:03] GPT-5.2 first impressions and skepticism about the data cutoff claims [4:17] Parametric vs. context memory - when do models trust training vs. the prompt? [9:28] The messy problem of memory, weights, and online learning [16:12] Tool use changes model behavior in unexpected ways [17:15] OpenAI's "Advances in Sciences" paper and human-AI collaboration [24:17] Why deep research is getting less useful [28:17] Pre-training vs. post-training - which matters more? [30:35] Non-English languages and AI failures [33:23] Hilarious Farsi bugs: "I'll get back to you in a few days" and invented traditions [37:56] Linguistic colonialism - ChatGPT changed how we write [41:20] Why memorization isn't the real privacy threat [47:14] The three actual privacy problems: inference, aggregation, input-output leakage [54:33] Deep research stalking experiment - finding a cat's name in HTML [1:01:13] Privacy solutions for agentic systems [1:03:23] What Niloofar's excited about: AI for scientists, small models, niche problems [1:08:31] AI for education without killing the learning process [1:09:15] Closing: underrated life advice on health and sustainable habits

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 12m
  8. EP20: Yann LeCun

    12/15/2025

    Yann LeCun - Why LLMs Will Never Get Us to AGI. "The path to superintelligence - just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL - I think is complete b******t. It's just never going to work." After 12 years at Meta, Turing Award winner Yann LeCun is betting his legacy on a radically different vision of AI. In this conversation, he explains why Silicon Valley's obsession with scaling language models is a dead end, why the hardest problem in AI is reaching dog-level intelligence (not human-level), and why his new company AMI is building world models that predict in abstract representation space rather than generating pixels.

    Timestamps: (00:00:14) Intro and welcome (00:01:12) AMI: Why start a company now? (00:04:46) Will AMI do research in the open? (00:06:44) World models vs LLMs (00:09:44) History of self-supervised learning (00:16:55) Siamese networks and contrastive learning (00:25:14) JEPA and learning in representation space (00:30:14) Abstraction hierarchies in physics and AI (00:34:01) World models as abstract simulators (00:38:14) Object permanence and learning basic physics (00:40:35) Game AI: Why NetHack is still impossible (00:44:22) Moravec's Paradox and chess (00:55:14) AI safety by construction, not fine-tuning (01:02:52) Constrained generation techniques (01:04:20) Meta's reorganization and FAIR's future (01:07:31) SSI, Physical Intelligence, and Wayve (01:10:14) Silicon Valley's "LLM-pilled" monoculture (01:15:56) China vs US: The open source paradox (01:18:14) Why start a company at 65? (01:25:14) The AGI hype cycle has happened 6 times before (01:33:18) Family and personal background (01:36:13) Career advice: Learn things with a long shelf life (01:40:14) Neuroscience and machine learning connections (01:48:17) Continual learning: Is catastrophic forgetting solved?

    Music: "Kid Kodi" and "Palms Down" by Blue Dot Sessions, via Free Music Archive (CC BY-NC 4.0).

    1h 50m
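
A toy illustration of the EP26 discussion of Arena-style leaderboards: a Bradley-Terry model fit as logistic regression over pairwise votes, with response length and markdown-header counts added as covariates so part of the stylistic advantage is regressed away. The battle data, feature choices, and use of scikit-learn are assumptions made for this sketch, not Arena's actual pipeline.

```python
# Sketch: Bradley-Terry leaderboard with crude style control.
# All data and feature choices below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

models = ["model-a", "model-b", "model-c"]
idx = {m: i for i, m in enumerate(models)}

# Each battle: (left model, right model, left length, right length,
#               left markdown headers, right headers, did the left model win?)
battles = [
    ("model-a", "model-b", 820, 410, 3, 0, 1),
    ("model-b", "model-c", 300, 950, 0, 5, 0),
    ("model-a", "model-c", 600, 640, 2, 2, 1),
    ("model-c", "model-b", 700, 380, 4, 1, 1),
]

X, y = [], []
for left, right, len_l, len_r, hdr_l, hdr_r, left_won in battles:
    row = np.zeros(len(models) + 2)
    row[idx[left]], row[idx[right]] = 1.0, -1.0      # Bradley-Terry difference encoding
    row[-2] = np.log(len_l) - np.log(len_r)          # style covariate: response length
    row[-1] = hdr_l - hdr_r                          # style covariate: markdown headers
    X.append(row)
    y.append(left_won)

# Logistic regression on the differences recovers Bradley-Terry strengths;
# the two extra coefficients absorb (some of) the stylistic advantage.
fit = LogisticRegression(fit_intercept=False, C=1e6).fit(np.array(X), np.array(y))
strengths = fit.coef_[0][: len(models)]
elo_like = 400 * strengths / np.log(10) + 1000       # rescale to an Elo-style axis
for m in sorted(models, key=lambda m: -elo_like[idx[m]]):
    print(f"{m}: {elo_like[idx[m]]:.0f}")
```

With four battles this is obviously toy-sized; the point is only that "style control" here means adding style features to the same regression, which is why separating style from substance remains a causal inference question rather than a solved one.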
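A minimal sketch of the activation steering idea from EP25: derive a steering vector from contrastive prompts and add it to one layer's hidden states at inference time. The model (GPT-2), layer index, and scale are arbitrary choices for illustration; this is the generic add-a-vector recipe, not REFT or any specific method Fred endorses.

```python
# Sketch: activation steering as a cheaper-than-LoRA personalization knob.
# GPT-2, LAYER, and SCALE are arbitrary assumptions for this example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER, SCALE = 6, 4.0

def hidden_at_layer(text):
    ids = tok(text, return_tensors="pt")
    out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]            # last-token state after LAYER blocks

# Steering vector: mean difference between "preferred" and "neutral" phrasings.
pairs = [("Answer briefly and bluntly.", "Answer."),
         ("Keep it short. No fluff.", "Respond to the question.")]
steer = torch.stack([hidden_at_layer(a) - hidden_at_layer(b) for a, b in pairs]).mean(0)

def add_steering(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * steer                   # push activations toward the preference
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tok("Explain what a transformer is.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40, do_sample=False)[0]))
handle.remove()
```

The appeal for personalization is that the only per-user state is one vector per preference, rather than a LoRA adapter or a fine-tuned copy of the weights.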
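A toy version of the EP22 point that short, information-dense documents make better pre-training data than long, chatty ones: score documents with a crude lexical-density proxy and a length cap, and keep only those above threshold. The stopword list, thresholds, and density measure are invented for illustration and are not Datology's filters.

```python
# Sketch: a "dense and factual beats long and chatty" pre-training filter.
# Thresholds, stopwords, and the density proxy are made up for this example.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "i", "my", "was", "so", "about", "maybe"}

def lexical_density(text: str) -> float:
    """Fraction of unique content-bearing tokens; a crude stand-in for information density."""
    tokens = re.findall(r"[a-zA-Z0-9]+", text.lower())
    if not tokens:
        return 0.0
    content = [t for t in tokens if t not in STOPWORDS]
    return len(set(content)) / len(tokens)

def keep(doc: str, max_tokens: int = 2000, min_density: float = 0.45) -> bool:
    # Favor short documents whose tokens mostly carry distinct information.
    return len(doc.split()) <= max_tokens and lexical_density(doc) >= min_density

docs = [
    "The boiling point of water at sea level is 100 C; it drops roughly 1 C per 285 m of altitude.",
    "So anyway, last weekend I was thinking about maybe writing about my trip and my feelings about it...",
]
for d in docs:
    print(keep(d), "|", d[:60])
```

Real filters are usually learned classifiers rather than hand-written rules, but the intuition matches the episode: the factual sentence passes while the anecdotal one falls below the density threshold.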

Ratings & Reviews

5 out of 5 (4 Ratings)

About

Two AI researchers, Ravid Shwartz-Ziv and Allen Roush, discuss the latest trends, news, and research in generative AI, LLMs, GPUs, and cloud systems.
