Causal Bandits Podcast

Alex Molak

Causal Bandits Podcast with Alex Molak is here to help you learn about causality, causal AI, and causal machine learning through the genius of others. The podcast focuses on causality from a number of different perspectives, finding common ground between academia and industry, philosophy, theory and practice, and between different schools of thought and traditions. Your host, Alex Molak, is a machine learning engineer, best-selling author, and educator who decided to travel the world to record conversations with the most interesting minds in causality and share them with you. Enjoy and stay causal! Keywords: Causal AI, Causal Machine Learning, Causality, Causal Inference, Causal Discovery, Machine Learning, AI, Artificial Intelligence

  1. Do Heterogeneous Treatment Effects Exist? | Stephen Senn X Richard Hahn S2E9 | CausalBanditsPodcast

    JAN 30

    Do Heterogeneous Treatment Effects Exist? | Stephen Senn X Richard Hahn S2E9 | CausalBanditsPodcast

    Send us a text Do Heterogeneous Treatment Effects Exist? For the last 50 years, we've designed cars to be safe... For the 50th-percentile male. Well, that's actually not 100% correct. According to Stanford's report, we introduced "female" crash test dummies in the 1960s, but... They were just scaled-down versions of male dummies and... Represented the 5th percentile of females in terms of body size and mass (aka the smallest 5% of women in the general population). These dummies also did not take into account female-typical injury tolerance, biomechanics, spinal alignment, and more. But... Does it matter for actual safety? In the episode, we cover: - Do heterogeneous treatment effects (different effects in different contexts) exist? - If so, can we actually detect them? - Is it more ethical to look for heterogeneous treatment effects or rather look at global averages? Video version available on YouTube:  https://youtu.be/V801RQTBpp4 Recorded on Nov 12, 2025 in Malaga, Spain. ------------------------------------------------------------------------------------------------------ About Richard Professor Richard Hahn, PhD, is a professor of statistics at Arizona State University (ASU). He develops novel statistical methods for analyzing data arising from the social sciences, including psychology, economics, education, and business. His current focus revolves around causal inference using regression tree models, as well as foundational issues in Bayesian statistics. Connect with Richard: - Richard on LinkedIn: https://www.linkedin.com/in/richard-hahn-a1096050/ About Stephen Stephen Senn, PhD, is a statistician and consultant who specializes in drug development clinical trials. He is a former Group Head at Ciba-Geigy and has taught at the University of Glasgow and University College London (UCL). He is the author of "Statistical Issues in Drug Development," "Crossover Trials in Clinical Research," and "Dicing with Death." 
Connect with Stephen: - Stephen on LinkedIn: Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 8m
  2. Causal Inference & the "Bayesian-Frequentist War" | Richard Hahn S2E8 | CausalBanditsPodcast.com

    12/27/2025

    Causal Inference & the "Bayesian-Frequentist War" | Richard Hahn S2E8 | CausalBanditsPodcast.com

    Send us a text *What can we learn about causal inference from the “war” between Bayesians and frequentists?* In the episode, we cover: - What can we learn from the “war” between Bayesians and frequentists? - Why do Bayesian Additive Regression Trees (BART) “just work”? - Do heterogeneous treatment effects exist? - Is RCT generalization a heterogeneity problem? In the episode, we accidentally coined a new term: “feature-level selection bias.” ------------------------------------------------------------------------------------------------------ Video version available on YouTube:  https://youtu.be/-hRS8eU3Tow Recorded in Arizona, US. ------------------------------------------------------------------------------------------------------ *About The Guest* Professor Richard Hahn, PhD, is a professor of statistics at Arizona State University (ASU). He develops novel statistical methods for analyzing data arising from the social sciences, including psychology, economics, education, and business. His current focus revolves around causal inference using regression tree models, as well as foundational issues in Bayesian statistics. Connect with Richard: - Richard on LinkedIn: https://www.linkedin.com/in/richard-hahn-a1096050/ - Richard's web page: https://methodologymatters.substack.com/about *About The Host* Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality (https://amzn.to/3QhsRz4 ). 
Connect with Alex: - Alex on the Internet: https://bit.ly/aleksander-molak *Links* Repo - https://stochtree.ai Papers - Hahn et al (2020) - "Bayesian Regression Tree Models for Causal Inference" (https://projecteuclid.org/journals/bayesian-analysis/volume-15/issue-3/Bayesian-Regression-Tree-Models-for-Causal-Inference--Regularization-Confounding/10.1214/19-BA1195.full) - Yeager, ..., Dweck et al (2019) - "A national experiment reveals where a growth mindset improves achievement" (https://www.nature.com/articles/s41586-019-1466-y) - Herren, Hahn, et al (20 Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 24m
  3. The Causal Gap: Truly Responsible AI Needs to Understand the Consequences | Zhijing Jin S2E7

    10/30/2025

    The Causal Gap: Truly Responsible AI Needs to Understand the Consequences | Zhijing Jin S2E7

    Send us a text The Causal Gap: Truly Responsible AI Needs to Understand the Consequences Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning. In this episode, we discuss: - Zhijing's new work on the "causal scientist" - What's missing in responsible AI - Why ethics matter for agentic systems - Is causality a necessary element of moral reasoning? ------------------------------------------------------------------------------------------------------ Video version available on YouTube:  https://youtu.be/Frb6eTW2ywk Recorded on Aug 18, 2025 in Tübingen, Germany. ------------------------------------------------------------------------------------------------------ About The Guest Zhijing Jin is a research scientist at the Max Planck Institute for Intelligent Systems and an incoming Assistant Professor at the University of Toronto. Her work focuses on causality, natural language, and ethics, in particular in the context of large language models and multi-agent systems. Her work has received multiple awards, including a NeurIPS best paper award, and has been featured in CHIP Magazine, WIRED, and MIT News. She grew up in Shanghai and is currently preparing to open her new research lab at the University of Toronto. Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 3m
  4. Create Your Causal Inference Roadmap. Causal Inference, TMLE & Sensitivity | Mark van der Laan S2E6 | CausalBanditsPodcast.com

    09/22/2025

    Create Your Causal Inference Roadmap. Causal Inference, TMLE & Sensitivity | Mark van der Laan S2E6 | CausalBanditsPodcast.com

    Send us a text Create Your Causal Inference Roadmap. Causal Inference, TMLE & Sensitivity If you're into causal inference and machine learning, you've probably heard about double machine learning (DML). DML is one of the most popular frameworks leveraging machine learning algorithms for causal inference, while offering good statistical properties. Yet... There's another framework that also leverages machine learning for causal inference and was created years earlier. Welcome to the world of targeted maximum likelihood estimation (TMLE). Today's guest, Prof. Mark van der Laan (UC Berkeley), is the godfather of TMLE. In the episode, we discuss: - Similarities and differences between DML and TMLE - How to build a causal roadmap for your project - How Mark uses math to solve real-world problems - Why uncertainty quantification is so important ------------------------------------------------------------------------------------------------------ Video version available on YouTube: https://youtu.be/qr5JolEAuJU Recorded on Sep 16, 2025 in Berkeley, California, US. ------------------------------------------------------------------------------------------------------ *About The Guest* Mark van der Laan is a Professor in Biostatistics and Statistics at UC Berkeley. He's the godfather of Targeted Maximum Likelihood Estimation (TMLE), a semiparametric framework that uses machine learning to estimate causal effects or other statistical parameters from observational data, and of its new incarnation, Targeted Machine Learning. *About The Host* Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality (https://amzn.to/3QhsRz4 ). Connect with Alex: - Alex on the Internet: https://bit.ly/aleksander-molak *Links* Libraries - Deep LTMLE (Python): https://github.com/shirakawatoru/dltmle Papers - Dang, ..., van der Laan et al. 
(2023) - "A Causal Roadmap for Gen Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 30m
  5. Causal Inference, Human Behavior, Science Crisis & The Power of Causal Graphs | Julia Rohrer S2E5 | CausalBanditsPodcast.com

    06/04/2025

    Causal Inference, Human Behavior, Science Crisis & The Power of Causal Graphs | Julia Rohrer S2E5 | CausalBanditsPodcast.com

    Send us a text *Causal Inference From Human Behavior, Reproducibility Crisis & The Power of Causal Graphs* Is Jonathan Haidt right that social media causes the mental health crisis in young people? If so, how can we be sure? Can other disciplines learn something from the reproducibility crisis in psychology, and what is multiverse analysis? Join us for a conversation on causal inference from human behavior, the reproducibility crisis in the sciences, and the power of causal graphs! ------------------------------------------------------------------------------------------------------ Audio version available on YouTube: https://youtu.be/YQetmI-y5gM Recorded on May 16, 2025, in Leipzig, Germany. ------------------------------------------------------------------------------------------------------ *About The Guest* Julia Rohrer, PhD, is a researcher and personality psychologist at the University of Leipzig. She's interested in the effects of birth order, age patterns in personality, human well-being, and causal inference. Her work has been published in top journals, including Nature Human Behaviour. She has been an active advocate for increased research transparency, and she continues this mission as a senior editor of Psychological Science. Julia frequently gives talks about good practices in science and causal inference. You can read Julia's blog at https://www.the100.ci/ *Links* Papers - Rohrer, J. (2024) "Causal inference for psychologists who think that causal inference is not for them" (https://compass.onlinelibrary.wiley.com/doi/10.1111/spc3.12948) - Bailey, D., ..., Rohrer, J. et al (2024) "Causal inference on human behaviour" (https://www.nature.com/articles/s41562-024-01939-z.epdf) - Rohrer, J. 
et al (2024) "The Effects of Satisfaction with Different Domains of Life on General Life Satisfaction Vary Between Individuals (But We Cannot Tell You Why)" (https://doi.org/10.1525/collabra.121238) - Rohrer et al (2017) "Probing Birth-Order Effects Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 22m
  6. MSFT Scientist: Agents, Causal AI & Future of DoWhy | Amit Sharma S2E4 | CausalBanditsPodcast.com

    04/14/2025

    MSFT Scientist: Agents, Causal AI & Future of DoWhy | Amit Sharma S2E4 | CausalBanditsPodcast.com

    Send us a text *Agents, Causal AI & The Future of DoWhy* The idea of agentic systems taking over more complex human tasks is compelling. New "production-grade" frameworks for building agentic systems keep popping up, suggesting that we're close to achieving full automation of these challenging multi-step tasks. But is the underlying agentic technology itself ready for production? And if not, can LLM-based systems help us make better decisions? Recent developments in the DoWhy/PyWhy ecosystem might bring some answers. Will they—combined with new methods for validating causal models now available in DoWhy—impact the way we build and interact with causal models in industry? ------------------------------------------------------------------------------------------------------ Video version available on YouTube:  https://youtu.be/8yWKQqNFrmY Recorded on Mar 12, 2025 in Bengaluru, India. ------------------------------------------------------------------------------------------------------ *About The Guest* Amit Sharma is a Principal Researcher at Microsoft Research and one of the original creators of the open-source Python library DoWhy, considered the "scikit-learn of causal inference." He holds a PhD in Computer Science from Cornell University. His research focuses on causality and its intersection with LLM-based and agentic systems. Amit deeply cares about the social impact of machine learning systems and sees causality as one of the main drivers of more useful and robust systems. Connect with Amit: - Amit on LinkedIn: https://www.linkedin.com/in/amitshar/ - Amit on BlueSky: - Amit's web page: http://amitsharma.in/ *About The Host* Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality (https://amzn.to/3QhsRz4 ). 
Connect with Alex: - Alex on the Internet: https://bit.ly/aleksander-molak Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 10m
  7. Causal Secrets of N=1 Experiments | Eric Daza S2E3 | CausalBanditsPodcast.com

    03/31/2025

    Causal Secrets of N=1 Experiments | Eric Daza S2E3 | CausalBanditsPodcast.com

    Send us a text 📽️ FREE Online Course on Causality 📕 Causal Inference & Discovery in Python Causal Secrets of N=1 Experiments Join me for a one-of-a-kind conversation on the opportunities and challenges of n-of-1 trials, Eric's causal journey, his path into statistics, his love of sci-fi, and how single-subject experiments could reshape personalized medicine. Video version available here About The Guest Dr. Eric J. Daza is a biostatistician and health data scientist with over 22 years of experience (Cornell, UNC Chapel Hill, Stanford). He works at Boehringer Ingelheim. Eric is the creator of Stats-of-1, a health innovation newsletter & podcast on n-of-1 trials, single-case designs, switchback experiments, and personal AI for digital health/medicine. All views and opinions expressed by Dr. Eric J. Daza represent no one but himself; they do not represent the views and opinions of his employer. Connect with Eric: - Eric on LinkedIn - Eric on BlueSky - Eric's web page About The Host Connect with Alex: - Alex on the Internet 👉🏼 Consulting and Causal AI Training For Your Team: hello causalpython.io Episode Links Papers - Daza (2018) - "Causal Analysis of Self-tracked Time Series Data Using a Counterfactual Framework for N-of-1 Trials" - Matias, Daza et al (2022) - "What possibly affects nighttime heart rate? Conclusions from N-of-1 observational data" Books - Asimov, I. (1991) - "Foundation" Apps - StudyU Webpages - Stats-of-1 Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    1h 1m
  8. From Quantum Physics to Causal AI at Spotify | Ciarán Gilligan-Lee S2E2 | CausalBanditsPodcast.com

    01/29/2025

    From Quantum Physics to Causal AI at Spotify | Ciarán Gilligan-Lee S2E2 | CausalBanditsPodcast.com

    Send us a text From Quantum Causal Models to Causal AI at Spotify Ciarán loved Lego. Fascinated by the endless possibilities offered by the blocks, he once asked his parents what he could do as an adult to keep building with them. The answer: engineering. As he delved deeper into engineering, Ciarán noticed that its rules relied on a deeper structure. This realization inspired him to pursue quantum physics, which eventually brought him face-to-face with fundamental questions about causality. Today, Ciarán blends his deep understanding of physics and quantum causal models with applied work at Spotify, solving complex problems in innovative ways. Recently, while collaborating with one of his students, he stumbled upon an interesting new question: could we learn something about the early history of the universe by applying causal inference methods in astrophysics? Could we? Hear it from Ciarán himself. Join us for this one-of-a-kind conversation! ------------------------------------------------------------------------------------------------------ Video version and episode links available on YouTube Recorded on Nov 6, 2024 in Dublin, Ireland. ------------------------------------------------------------------------------------------------------ About The Guest Ciarán Gilligan-Lee is Head of the Causal Inference Research Lab at Spotify and Honorary Associate Professor at University College London. He got interested in causality during his studies in quantum physics. This interest led him to study quantum causal models. He has published in Nature Machine Intelligence, Nature Quantum Information, Physical Review Letters, New Journal of Physics, and more. In his free time, he writes for New Scientist and helps his students apply causal methods in new fields (e.g., astrophysics). 
Connect with Ciarán: - Ciarán on LinkedIn: https://www.linkedin.com/in/ciaran-gilligan-lee/ - Ciarán's web page: https://www.ciarangilliganlee.com/ About The Host Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur Support the show Causal Bandits Podcast Causal AI || Causal Machine Learning || Causal Inference & Discovery Web: https://causalbanditspodcast.com Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/ Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

    52 min
4.8 out of 5 (8 Ratings)
