The Edtech Podcast

Professor Rose Luckin

The mission of The Edtech Podcast is to improve the dialogue between ‘ed’ and ‘tech’ through storytelling, for better innovation and impact. It is hosted by Rose Luckin, Professor of Learner-Centred Design at UCL and Founder and CEO of EDUCATE Ventures Research, which uses AI to measure the unmeasurable in education. The Edtech Podcast audience consists of education leaders from around the world, plus startups, learning and development specialists, blue-chip companies, investors, government and media. The Edtech Podcast is downloaded 2,000+ times each week across 145 countries, with the UK, US and Australia the top three downloading countries. Podcast series have included Future Tech for Education, Education 4.0, The Voctech Podcast, Learning Continued, Evidence-Based EdTech, and the upcoming AI in Ed: Our Data-Driven Future series on AI. Send your questions and comments to @PodcastEdtech, @knowldgillusion, theedtechpodcast@gmail.com or hello@educateventures.com, visit https://theedtechpodcast.com/, or leave a voicemail for the show at https://www.speakpipe.com/theedtechpodcast

  1. 1D AGO

    #286 - 'Learn Fast, Act More Slowly' to Leverage AI

    We've all seen the headlines: AI is revolutionising everything from how students learn to how teachers teach. The promise of personalised learning paths, automated grading and AI teaching assistants has created a gold-rush mentality in education technology. But in our rush to adopt these powerful new tools, are we moving too fast? Today we'll explore why, when it comes to AI in education, we need to learn fast but act more slowly and thoughtfully. We'll look at both the tremendous opportunities and the serious risks that AI tools present for students and educators. We'll examine where AI can truly add value in education versus where human teachers remain irreplaceable. And most importantly, we'll discuss why comprehensive AI literacy and training is absolutely crucial - not just for educators, but for everyone involved in shaping young minds. Drawing on insights from leading experts on the frontlines of AI in education, we'll provide a framework for thinking about how to implement AI tools responsibly and effectively. Whether you're a teacher, administrator, policymaker or parent, this episode will give you practical guidance for navigating the AI revolution in education.

    Talking points and questions may include:
    - Opportunities and risks of the tools: adaptive or personalised learning paths, automated marking and feedback, content generation, analytics and teaching assistants; but also inaccuracy and lack of transparency, data risks, biases, ethics and safeguarding, and, as with social media, unintended lasting consequences
    - Where AI is best placed: EdTech and tools in the classroom, the augmentation and elevation of human intelligence, or simply learning about AI, what it can do and why (is knowledge = power enough?)
    - Why understanding and training must be emphasised, and why everyone needs such training: without it there can be safeguarding disasters and insufficient skills training. Many AI tool providers offer free training in their own tools, but this is consumerised, inadequate and can be ethically questionable. Do we want successive generations producing only AI tools that are exploitative and use our data and our IP without consent, or do we want to help people with technology so that the partnership benefits them most?

    Guests:
    - Rt. Hon. the Lord Knight of Weymouth, Jim Knight
    - Rob Robson, ASCL Trust Leadership Consultant

    45 min
  2. 12/20/2024

    #283 - A Teacher's Perspective - How to Approach AI as an Institution (part 1)

    AI integration in UK schools varies, with some embracing it for tasks like grading and personalised learning while others avoid it in certain subjects. However, there is no risk-free AI. As these technologies spread through education, proactive strategies are crucial, not reactive ones. Key concerns include AI providing misleading or biased information, generating explicit content without consent, and the impact on genuine learning if it is over-relied upon for content generation. Robust safeguarding measures addressing these risks are essential as AI permeates classrooms. Effectively preparing teachers is paramount for successful AI adoption. Comprehensive training is needed not just for educators but for leaders too, ensuring all grasp the opportunities and challenges. Only then can AI enhance learning while keeping a human-centric approach.

    Talking points and questions may include:
    - What is the extent of AI penetration in your schools, including teacher usage, classes avoiding it, student use, and any strategies or evaluation plans in place for reactive or proactive AI adoption?
    - No AI is risk-free, so concerns around impacts on learning, creativity, authorship, assessment, and whether students genuinely understand AI-generated content are critical issues
    - Safeguarding measures must address the risks of AI providing misleading, biased or explicit content without consent as these technologies proliferate in classrooms
    - Comprehensive AI training is needed for educators at all levels to ensure smooth technology transitions while maintaining human-centric learning approaches as new tools and understanding are required

    Guests:
    - Emma Darcy, Director of Technology for Learning, Denbigh High School
    - Sarah Buist, Head of Digital Strategy, Royal Grammar School Newcastle
    - Rose Luckin, Professor of Learner Centred Design, UCL, Founder & CEO, Educate Ventures Research

    54 min
  3. 11/07/2024

    #282 - Risk Assessments for AI Learning Tools, a conversation, Part 2

    In the second episode of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools, Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Dr Rajeshwari Iyer of sAInaptic and hear her perspective on risk as a developer and CEO. View our Risk Assessments here: https://www.educateventures.com/risk-assessments

    In the studio:
    - Rowland Wells, Creative Producer, EVR
    - Rajeshwari Iyer, CEO and Co-founder, sAInaptic

    Talking points and questions include:
    - Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read and free of technical jargon, but they are still analyses, aimed at people with a research/evidence mindset. Many people ignore such material: we know that even learning tool developers who publish research about their tools on their websites do not actually get it read by the public. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
    - What's the end goal? Are you trying to raise the sophistication of the conversation around evidence and risk? Many developers you critique might just think you're trying to make a name by pulling apart their tools. Surely the market will sort itself out?
    - What's the process involved in making judgements about a risk assessment? If we're trying to show the buyers of these tools, the digital leads in schools and colleges, what to look for, what's the first step? Can this be done quickly? Many who might benefit from AI tools may not have the time to exhaustively hunt out all the little details of a learning tool and interpret them themselves
    - Schools aren't testbeds for intellectual property or tech interventions. Why is it practitioners' responsibility to make these kinds of evaluations, even with the aid of these kinds of assessments? Why is the tech and AI sector not capable of regulating its own practices?
    - You've all worked with schools and learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you've seen teaching and learning enhanced by the safe and impactful use of AI

    26 min
  4. 11/07/2024

    #281 - Risk Assessments for AI Learning Tools, a conversation, Part 1

    In today's episode, we have the first part of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools. Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Educate Ventures Research team members about their experience managing risk as teachers and developers. What does a risk assessment look like, and whose responsibility is it to take its insights on board? Rose joins our discussion group towards the end of the episode, and in the second instalment of the conversation Rowland sits down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective, as a developer and CEO herself, on risk and on testing the features of a tool. View our Risk Assessments here: https://www.educateventures.com/risk-assessments

    In the studio:
    - Rowland Wells, Creative Producer, EVR
    - Dave Turnbull, Deputy Head of Educator AI Training, EVR
    - Ibrahim Bashir, Technical Projects Manager, EVR
    - Rose Luckin, CEO & Founder, EVR

    Talking points and questions include:
    - Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read and free of technical jargon, but they are still analyses, aimed at people with a research/evidence mindset. Many people ignore such material: we know that even learning tool developers who publish research about their tools on their websites do not actually get it read by the public. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
    - What's the end goal? Are you trying to raise the sophistication of the conversation around evidence and risk? Many developers you critique might just think you're trying to make a name by pulling apart their tools. Surely the market will sort itself out?
    - What's the process involved in making judgements about a risk assessment? If we're trying to show the buyers of these tools, the digital leads in schools and colleges, what to look for, what's the first step? Can this be done quickly? Many who might benefit from AI tools may not have the time to exhaustively hunt out all the little details of a learning tool and interpret them themselves
    - Schools aren't testbeds for intellectual property or tech interventions. Why is it practitioners' responsibility to make these kinds of evaluations, even with the aid of these kinds of assessments? Why is the tech and AI sector not capable of regulating its own practices?
    - You've all worked with schools and learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you've seen teaching and learning enhanced by the safe and impactful use of AI

    37 min
  5. 07/10/2024

    #280 - What are Student Expectations for AI in Education?

    In today's rapidly evolving educational landscape, Artificial Intelligence is emerging as a transformative force, offering both opportunities and challenges. As AI technologies continue to advance, it's crucial to examine their impact on student expectations, learning experiences and institutional strategies. One pressing question is: what do students truly want from AI in education? Are they reflecting on the value of their assessments and assignments when AI tools can potentially complete them? This raises the deeper question of what we mean by student success in higher education, and of the purpose of knowledge in an AI-driven economy. Professor Rose Luckin is joined by three wonderful guests in the studio to discuss what tools we need to support students and how we explore the potential and the limitations of AI for education.

    Guests:
    - Michael Larsen, CEO & Managing Director, Studiosity
    - Sally Wheeler, Professor, Vice-Chancellor, Birkbeck, University of London
    - Ant Bagshaw, Executive Director, Australian Technology Network of Universities

    Talking points and questions include:
    - Student expectations and perspectives on using AI for assessments and assignments, and the role of knowledge in an AI economy
    - The potential of AI to enhance learning through features like instant feedback, error correction, personalised support and learning analytics
    - How AI could facilitate peer support systems and student community, and the research on the value of this
    - The lack of robust digital/AI strategies at many institutions as a barrier to effective AI adoption
    - The evidence base for AI in education: the challenges of research being highly specific and contextual, and the relative value of in-house research versus general studies
    - Whether evidence of efficacy truly drives institutions' buying decisions for AI tools, or whether other factors and institutional challenges are stronger influences
    - How challenges facing the education sector can inhibit the capacity for innovative deployments like AI
    - The growing need for proven, supportive AI tools for students despite institutional constraints

    52 min
  6. 06/11/2024

    #279 - Can We Trust in AI for Education? (AI in Ed Miniseries)

    In the fifth and final episode of our miniseries on AI for education, host Professor Rose Luckin is joined by Timo Hannay, Founder of SchoolDash, and Lord David Puttnam, Independent Producer, Chair of Atticus Education, and former member of the UK parliament's House of Lords. This episode and our series have been generously sponsored by Nord Anglia Education. Today we look ahead to the near and far future of AI in education, and ask what might be on the horizon that we can't even predict, and what we can do as humans to proof ourselves against disruptions and innovations that, like the Covid pandemic and ChatGPT's meteoric rise, have rocked our education systems and demanded we do things differently.

    Guests:
    - Lord David Puttnam, Independent Producer, Chair, Atticus Education
    - Timo Hannay, Founder, SchoolDash

    Talking points and questions include:
    - Slow reaction to AI: despite generative AI's decade-long presence and EdTech's rise, the education sector's response to tools like ChatGPT has been surprisingly delayed. Why?
    - Learning from our AI response: can our current reaction to generative AI serve as a case study for adapting to future tech shifts? It's a test of our educational system's resilience
    - AI's double-edged sword: with ChatGPT's rapid rise, are EdTech companies risking harm by using AI without fully understanding it? Think of Facebook's data misuse in the Rohingya massacre
    - Equipping teachers for AI: who can educators trust for AI knowledge? We need frameworks to guide them, as AI literacy is now as crucial as internet literacy
    - Digital natives ≠ AI-ready: today's youth grew up online, but does that prepare them for sophisticated, accessible AI? Not necessarily

    47 min


4.8 out of 5 (24 ratings)
