Education Futures

Svenia Busson & Laurent Jolie

A podcast about the future of education in the age of AI. We bring together interdisciplinary voices to explore how we can shape more desirable futures for learning.

  1. A new blueprint for AI in higher education

    1 DAY AGO


    What does it actually look like when a higher education institution takes AI seriously, as something to integrate into the very design of how students learn, are assessed, and grow? In this episode, Svenia Busson sits down with the founder of Forward College, Boris Walbaum, who walks us through one of the most concrete and thoughtful AI-in-education frameworks we've encountered. From an in-house AI Hub giving students access to all major models, to an AI Mirror dashboard that maps how each student is using AI (memorising, getting feedback, or — the red flag — offloading higher-order thinking to the machine), to an AI Journal where students log and defend their AI strategy on every assignment, this is what AI-integrated learning can actually look like in practice. We also dig into the bigger philosophical shift behind it: why "excellence" itself is being redefined in a world moving from certainty to uncertainty, why the appetite for learning is collapsing among teenagers, and why investing in deep skills — cognitive, social, emotional and practical — is the only no-regret move in a world where technical skills become obsolete every few years. A must-listen for anyone in higher education. 
Resources & references mentioned in this episode:
• Boris's book Excellence Isn't What You Think – https://www.amazon.com/Excellence-Isnt-What-You-Think-ebook/dp/B0GG9TMHC2
• Forward College – https://forward-college.eu/
• London School of Economics (academic partner) – https://www.lse.ac.uk
• King's College London (academic partner) – https://www.kcl.ac.uk
• An Avalanche is Coming report (Michael Barber, IPPR, 2013) – https://www.ippr.org/articles/an-avalanche-is-coming-higher-education-and-the-revolution-ahead
• Noise by Daniel Kahneman, Olivier Sibony & Cass Sunstein – https://www.hachettebookgroup.com/titles/daniel-kahneman/noise/9780316451383/
• Paul LeBlanc (President of Forward College's Academic Council, former SNHU President) – https://www.paul-leblanc.com
• McKinsey "no-regret moves" framing – https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/strategy-under-uncertainty

    54 min
  2. How AI chatbots reshape children's brains

    5 DAYS AGO


    In this episode of Education Futures, Svenia Busson sits down with Pilyoung Kim, Professor of Psychology at the University of Denver and Director of the Center for Brain, AI, and Child (BAIC). Pilyoung's research bridges developmental science and AI design to understand how generative AI is shaping the social, emotional, and neural development of children. We dive into:
    • Why general-purpose chatbots were never designed with children in mind — and what that means for safety
    • What happens inside a kindergartener's brain when they talk to ChatGPT (spoiler: the same brain region that lights up when they talk to another human)
    • The paradox of AI as mental health support: accessible, affordable… and quietly dangerous
    • Why sycophantic, "I care about you" chatbot behavior can blur the line between human and machine
    • The rise of emotional over-dependency and even romantic relationships between adolescents and AI
    • Pilyoung's Social and Emotional AI Literacy Framework for teachers, parents, and school psychologists
    • Concrete prompts you can use right now to make AI chatbots safer for kids
    • What Pilyoung learned at OpenAI and on Capitol Hill about the future of AI policy for children
    A must-listen for educators, parents, researchers, and anyone building technology for the next generation.
Links & references mentioned in the episode:
• Follow Pilyoung Kim on LinkedIn: www.linkedin.com/in/pilyoung-kim
• Pilyoung Kim at the University of Denver: https://liberalarts.du.edu/about/people/pilyoung-kim
• Center for Brain, AI, and Child (BAIC): https://www.baic.center/
• Pilyoung's Substack newsletter "Brain, AI, Child" (with prompt examples for teachers): https://pilyoung.substack.com/
• Pilyoung's study on kindergarteners' brain activation during storytelling with ChatGPT: https://pilyoung.substack.com/p/how-do-young-childrens-brains-respond
• Pilyoung's Capitol Hill briefing on AI & youth mental health: https://pilyoung.substack.com/p/bringing-ai-safety-and-child-development
• Society for Research in Child Development (SRCD): https://www.srcd.org/
• The Rithm Project – "Youth, AI, and the Relationships That Shape Them" report: https://www.therithmproject.org/research

    47 min
  3. Protecting children in the age of AI

    20 APR


    In this episode of Education Futures, Svenia Busson sits down with the founder of the Safe AI for Children Alliance, Tara Steele, a former intelligence officer who spent a decade assessing long-term risks before turning her attention to one of the most urgent and least-discussed issues of our time: the impact of AI on children. From conversational AI optimized for intimacy (not just attention) to deepfakes, AI companions, and the tragic real-world harms already documented, our guest walks us through what parents, educators, and policymakers need to understand right now. She shares the Alliance's three non-negotiables for AI design, explains why schools shouldn't have to shoulder this responsibility alone, and offers a surprisingly hopeful message: the sense of "inevitability" around AI harms is itself the biggest obstacle to change. A wake-up call and a roadmap for anyone who cares about how children grow up in an AI-saturated world.
    In this conversation:
    • From intelligence officer to AI safety advocate
    • The three non-negotiables every AI system should meet before reaching children
    • Why conversational AI is fundamentally different from social media
    • The risks of AI companions, AI tutors, and chatbots in classrooms
    • Raising children in an AI world: practical advice for parents
    • The Meta accountability case and the shifting tide in US courts
    • Why "inevitability" is the biggest trap — and what collective action can still achieve
    Resources & links mentioned:
    • Safe AI for Children Alliance → https://www.safeaiforchildren.org
    • Free guide for schools and educators (on the Alliance's website) → https://www.safeaiforchildren.org
    • The Alliance's three non-negotiables campaign: AI should never produce sexualised images of children, create emotional dependency, or encourage self-harm; full campaign on the Alliance's site
    • Recent US legal cases on big-tech accountability for harms to children (referenced around the 28-minute mark, including the Meta case) → https://www.bbc.com/news/articles/c747x7gz249o

    36 min
  4. Is AI safe for children? Inside KORA's benchmark

    16 APR


    What happens when millions of children start talking to LLMs every day and no one knows whether it's safe? In this episode, Laurent Jolie sits down with Stéphie Herlin, co-founder and Research & Product Lead at KORA, the first independent, non-profit, open-source benchmark measuring how safe LLMs are for children. Before KORA, Stéphie spent 8+ years as a government economist, then moved into education as a policy analyst and teacher — spending nearly five years in a French public classroom with 6- to 10-year-olds while retraining in neuroscience, developmental science and pedagogy. We talk about:
    • Why education hasn't had its scientific revolution yet and what precision education could look like
    • Stéphie's earlier ed-tech project Brio and the tension between engagement-first investors and outcomes-first science
    • How KORA works (generating conversations between synthetic child profiles and real LLMs, judged against a taxonomy of 25 risks / 8 categories co-built with ~30 experts)
    • The first results (average safety ~44%, ranging 13–78%, with some models regressing over time)
    • Why educational integrity is the industry's biggest blind spot (about a third of US kids use LLMs every day)
    • A simple tip for parents (telling the model your child is a child improves safety by ~10 percentage points across every model tested)
    • Why LLMs still hallucinate ~20% of the time
    • How any ed-tech builder can run KORA on their own conversational product today, for free
    Links mentioned:
    • KORA (the benchmark)
    • Introducing KORA (blog)
    • Open-source benchmark on GitHub
    • Launch post by Mathilde Collin
    • Deep-dive on KORA (EdTech Partnerships)
    • Conversation with Mathilde Collin about KORA (Opportunity Knocks)
    People mentioned:
    • Mathilde Collin (co-founder of KORA)
    • Isabelle Hau — Stanford Accelerator for Learning · personal site
    • Love to Learn by Isabelle Hau (Hachette) · Stanford Report feature
    • Jonathan Banon — Ed.ai

    45 min
  5. AI in education: separating the hype from the evidence

    13 APR


    In this episode, Svenia Busson sits down with Wess Trabelsi, a Tech Integration Specialist at Ulster BOCES in New York, where he supports eight rural school districts. Wess is neither a cheerleader nor a doomsayer when it comes to AI in education; he's something rarer: an evidence-driven practitioner who actually read the research. Wess shares his deep dive into the science (or lack thereof) behind AI in K-12. After reviewing over 100 studies, he found that the vast majority are noise: glorified surveys, opinion pieces, and what he calls "dead horse studies" that prove the obvious. His findings closely mirror those of the Stanford AI Hub for Education's newly released 2026 Review, which started with 800 studies, kept only 20 with strong causal evidence, and found zero conducted in U.S. K-12 settings. Together, Svenia and Wess unpack the two most significant studies to date: the Harvard RCT showing a custom AI tutor significantly helped motivated physics students, and the landmark Wharton/Turkey study showing that AI-assisted practice gains completely disappeared when AI was removed at test time. Neither provides a clear playbook for the average classroom. But the conversation goes much deeper. Wess makes the case that AI didn't create a problem in education; it merely exposed one that was already there, putting "the final nail in the coffin of the traditional model." He advocates for process-based and project-based learning (citing the inspiring model of High Tech High in California), and for rethinking assessment entirely: away from written essays as proxies for thinking, toward conversation, video, and authentic real-world problem-solving through work-based learning. If you've been overwhelmed by AI headlines in education and wished someone would just tell you what the evidence actually says, this episode is for you.
Resources mentioned in this episode:
• 📄 The Evidence Base on AI in K-12: A 2026 Review — Stanford AI Hub for Education (PDF)
• ✍️ The Good, the Bad, and the Ugly Science of AI in Education — Wess Trabelsi (Substack)
• 🎓 High Tech High — San Diego, California (school that does PBL)
• 📖 Superintelligence by Nick Bostrom · Works by Ray Kurzweil
Substack writers recommended by Wess:
• Mike Taubman — AI Waypoints
• Stephen Fitzpatrick — Teaching in the Age of AI
• Mike Kentz — How We Frame Machines

    54 min
  6. Teaching and measuring soft skills in the age of AI

    8 APR


    In this episode of Education Futures, Svenia Busson is joined by Michaela Horvathova, founder of Beyond Education and former policy analyst at the OECD. For more than a decade, Michaela has worked on an important and misunderstood challenge in education: how to define, develop, and assess metacognition and soft skills. As AI makes knowledge abundant and easily accessible, these competencies are becoming essential. Yet across most education systems, they remain poorly defined, inconsistently taught, and rarely measured. In this conversation, we explore:
    • Why we still lack a shared definition and taxonomy of soft skills
    • Why what gets measured and graded still determines what gets taught
    • The gap between policy ambition and classroom reality
    • How the Four-Dimensional Education Framework (knowledge, skills, character, meta-learning) helps structure these competencies
    • Why metacognition (learning how to learn) is becoming a foundational skill
    • The risks of cognitive offloading and the emergence of a “cognitive divide”
    • Why assessment must shift from outputs to thinking processes
    We also discuss how schools can move toward competency-based models, drawing on Michaela’s work at Beyond International School. The challenge is not just to talk about soft skills, but to define them clearly, prioritize them, and measure them in ways that truly reflect how humans learn.
    To explore further what was discussed in the episode:
    • https://beyondeducation.tech/
    • https://curriculumredesign.org/4-dimensions/

    37 min
  7. Rethinking assessment in the age of AI

    6 APR


    What does it mean to assess learning in a world where AI can generate answers instantly? In this episode of Education Futures, Svenia Busson is joined by Alina von Davier, Chief of Assessment at Duolingo, and Elie Bechara, who works on institutional partnerships for the Duolingo English Test. Together, they bring a rare combination of assessment science, product innovation, and real-world university dynamics. As AI tools become widely accessible, traditional forms of assessment, especially high-stakes exams, are being fundamentally challenged. If students can generate answers with AI, what are we actually measuring? In this conversation, we explore:
    • Why AI is putting pressure on the validity of traditional exams
    • How universities are responding to new questions around academic integrity and admissions
    • The shift from testing knowledge → competencies and skills
    • Why assessment needs to become more continuous, contextual, and embedded in learning
    • How the Duolingo English Test is rethinking language assessment using AI
    • The importance of designing assessments that reflect real-world performance, not artificial test conditions
    We also discuss how assessment can evolve from a moment of evaluation into a tool for learning itself — providing feedback, guiding progress, and supporting long-term skill development. The challenge ahead is about redefining what we value, what we measure, and what we trust in education.
    To explore further what was discussed in the episode:
    • https://englishtest.duolingo.com/
    • https://blog.duolingo.com/video-call/

    1hr 3min
