This episode explores Ministral 3, a family of 3B, 8B, and 14B long-context multimodal models built from a single 24B parent through structured pruning and cascade distillation rather than separate full-scale training runs. It walks through the method step by step, from teacher-student distillation and capacity-gap concerns to the staged pruning pipeline that extends each child model to a 256k-token context window while preserving useful capabilities (two illustrative code sketches of the distillation loss and of structured pruning follow the source list below). The discussion places the paper in context with earlier distillation and pruning work such as Hinton's original distillation paper, DistilBERT, teacher-assistant distillation, and NVIDIA's Minitron, arguing that the contribution is a practical model-family construction recipe rather than a brand-new paradigm. Listeners will find it interesting because it gets at a central 2026 question in AI deployment: whether smaller, cheaper models can stay competitive on long-context and multimodal tasks by amortizing one expensive parent run across several deployable descendants.

Sources:

1. Ministral 3 — Alexander H. Liu, Kartik Khandelwal, Sandeep Subramanian, et al., 2026. http://arxiv.org/abs/2601.08584
2. Distilling the Knowledge in a Neural Network — Geoffrey Hinton, Oriol Vinyals, Jeff Dean, 2015. https://scholar.google.com/scholar?q=Distilling+the+Knowledge+in+a+Neural+Network
3. Improved Knowledge Distillation via Teacher Assistant — Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, Hassan Ghasemzadeh, 2019. https://scholar.google.com/scholar?q=Improved+Knowledge+Distillation+via+Teacher+Assistant
4. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter — Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf, 2019. https://scholar.google.com/scholar?q=DistilBERT,+a+distilled+version+of+BERT:+smaller,+faster,+cheaper+and+lighter
5. Compact Language Models via Pruning and Knowledge Distillation — Saurav Muralidharan, Sharath Turuvekere Sreenivas, Raviraj Joshi, et al., 2024. https://scholar.google.com/scholar?q=Compact+Language+Models+via+Pruning+and+Knowledge+Distillation
6. LLM Pruning and Distillation in Practice: The Minitron Approach — S. T. Sreenivas, S. Muralidharan, R. Joshi, et al., 2024. https://scholar.google.com/scholar?q=LLM+Pruning+and+Distillation+in+Practice:+The+Minitron+Approach
7. Distillation Scaling Laws — D. Busbridge, A. Shidani, F. Weers, J. Ramapuram, E. Littwin, R. Webb, 2025. https://scholar.google.com/scholar?q=Distillation+Scaling+Laws
8. Distilled Pretraining: A Modern Lens of Data, In-Context Learning and Test-Time Scaling — S. Goyal, D. Lopez-Paz, K. Ahuja, 2025. https://scholar.google.com/scholar?q=Distilled+Pretraining:+A+Modern+Lens+of+Data,+In-Context+Learning+and+Test-Time+Scaling
9. Pixtral 12B — P. Agrawal, S. Antoniak, E. B. Hanna, et al., 2024. https://scholar.google.com/scholar?q=Pixtral+12B
10. Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs — Lu Yin et al., 2024. https://scholar.google.com/scholar?q=Junk+DNA+Hypothesis:+Pruning+Small+Pre-Trained+Weights+Irreversibly+and+Monotonically+Impairs+"Difficult"+Downstream+Tasks+in+LLMs
11. SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot — Elias Frantar, Dan Alistarh, 2023. https://scholar.google.com/scholar?q=SparseGPT:+Massive+Language+Models+Can+Be+Accurately+Pruned+in+One-Shot
12. Fast and Effective Weight Update for Pruned Large Language Models — Vladimir Boza, 2024. https://scholar.google.com/scholar?q=Fast+and+Effective+Weight+Update+for+Pruned+Large+Language+Models
13. Exploring Knowledge Purification in Multi-Teacher Knowledge Distillation for LLMs — Ruihan Jin et al., 2026. https://scholar.google.com/scholar?q=Exploring+Knowledge+Purification+in+Multi-Teacher+Knowledge+Distillation+for+LLMs
14. Self-Distilled Reasoner: On-Policy Self-Distillation for Large Language Models — Siyan Zhao et al., 2026. https://scholar.google.com/scholar?q=Self-Distilled+Reasoner:+On-Policy+Self-Distillation+for+Large+Language+Models
15. Data Engineering for Scaling Language Models to 128K Context — Yao Fu et al., 2024. https://scholar.google.com/scholar?q=Data+Engineering+for+Scaling+Language+Models+to+128K+Context
16. How to Train Long-Context Language Models (Effectively) — Tianyu Gao et al., 2025. https://scholar.google.com/scholar?q=How+to+Train+Long-Context+Language+Models+(Effectively)
17. Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models — Jun Zhang et al., 2025. https://scholar.google.com/scholar?q=Train+Small,+Infer+Large:+Memory-Efficient+LoRA+Training+for+Large+Language+Models
18. AI Post Transformers: DeepSeek-V4 and Practical Million-Token Context — Hal Turing & Dr. Ada Shannon, 2026. https://podcast.do-not-panic.com/episodes/2026-04-25-deepseek-v4-and-practical-million-token-6f4de1.mp3
19. AI Post Transformers: Muon Is Scalable for LLM Training — Hal Turing & Dr. Ada Shannon, 2026. https://podcast.do-not-panic.com/episodes/2026-04-25-muon-is-scalable-for-llm-training-587ed8.mp3
20. AI Post Transformers: Learning to Reason with 13 Parameters — Hal Turing & Dr. Ada Shannon, 2026. https://podcast.do-not-panic.com/episodes/2026-04-14-learning-to-reason-with-13-parameters-54c87f.mp3
21. AI Post Transformers: AgenticQwen and Small Industrial Tool Agents — Hal Turing & Dr. Ada Shannon, 2026. https://podcast.do-not-panic.com/episodes/2026-04-27-agenticqwen-and-small-industrial-tool-ag-dc676d.mp3

Interactive Visualization: Ministral 3: Cascade Distillation for Long-Context Multimodal Models
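For listeners who want a concrete handle on the teacher-student distillation discussed in the episode, here is a minimal sketch of the classic temperature-scaled distillation objective from Hinton et al. (source 2). The temperature (2.0), the mixing weight alpha (0.5), and the forward-KL formulation are illustrative assumptions, not the Ministral 3 paper's stated recipe.

```python
# A minimal sketch of temperature-scaled knowledge distillation in the
# spirit of Hinton et al. (2015). The temperature (2.0), the mixing
# weight alpha (0.5), and the forward-KL term are illustrative
# assumptions, not values reported for Ministral 3.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft KL term against the teacher with the usual
    hard-label cross-entropy. Logits: (batch, seq, vocab)."""
    # Soften both distributions with the temperature, then match them.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps the soft term's gradient magnitude roughly
    # comparable across temperatures, as in the original paper.
    soft = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    soft = soft * temperature ** 2
    # Standard next-token cross-entropy against the hard labels.
    hard = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        targets.reshape(-1),
    )
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors standing in for model outputs.
B, S, V = 2, 16, 1000
loss = distillation_loss(torch.randn(B, S, V), torch.randn(B, S, V),
                         torch.randint(0, V, (B, S)))
```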
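The other half of the pipeline is structured pruning: removing whole neurons, heads, or layers so the child model is genuinely smaller and faster, unlike the unstructured weight sparsity of methods like SparseGPT (source 11). The runnable toy example below prunes the hidden width of a feed-forward pair using an L2-norm importance score; that criterion is a common Minitron-style choice (sources 5 and 6) and stands in for whatever criterion the Ministral 3 paper actually uses.

```python
# A runnable toy illustration of structured (width) pruning: whole
# hidden neurons of a feed-forward pair are ranked by an importance
# score and the lowest-scoring ones are removed, shrinking both weight
# matrices. The L2-norm criterion is an assumption for illustration,
# not the paper's stated method.
import torch
import torch.nn as nn

def prune_ffn_width(up: nn.Linear, down: nn.Linear, keep: int):
    """Keep the `keep` most important hidden neurons of an
    up-projection/down-projection pair."""
    # Score each hidden neuron by the L2 norm of its row in `up`
    # (nn.Linear stores weights as (out_features, in_features)).
    importance = up.weight.norm(dim=1)
    idx = importance.topk(keep).indices.sort().values
    new_up = nn.Linear(up.in_features, keep)
    new_down = nn.Linear(keep, down.out_features)
    with torch.no_grad():
        new_up.weight.copy_(up.weight[idx])
        new_up.bias.copy_(up.bias[idx])
        new_down.weight.copy_(down.weight[:, idx])
        new_down.bias.copy_(down.bias)
    return new_up, new_down

# Shrink a 1024-wide hidden layer to its 512 most important neurons.
up, down = nn.Linear(256, 1024), nn.Linear(1024, 256)
up, down = prune_ffn_width(up, down, keep=512)
```

In the cascade construction the episode describes, each pruned child is then distilled to recover quality, and can serve as the teacher for the next, smaller child, which keeps the teacher-student capacity gap small (cf. teacher-assistant distillation, source 3).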