The AI Lyceum

Samraj Matharu

The AI Lyceum Podcast explores the intersection of artificial intelligence, ethics, and society, bringing together voices from academia, industry, and policy. Hosted by Samraj Matharu - AI ethicist certified by OxEthica, University of Oxford, Visiting Lecturer at Durham University, and founder of The AI Lyceum - each episode unpacks the real-world impact of AI on business, governance, and human values.

Listeners can expect:
▸ Sharp discussions on AI ethics, regulation, and governance
▸ Insights from leading experts at OpenAI, University of Oxford, and more
▸ Practical perspectives on responsible AI adoption in business
▸ Philosophical reflections on how AI reshapes our world

🌐 Website: theailyceum.com
🔗 Linktree: linktr.ee/theailyceum

This isn’t just another tech podcast - it’s a space for critical thinking, practical wisdom, and future-facing ideas. Whether you’re a policymaker, business leader, or AI practitioner, The AI Lyceum Podcast equips you with the clarity to navigate AI responsibly.

*The views and opinions expressed in this podcast are those of the individual speakers alone. They do not represent the views of The AI Lyceum, its host, or any affiliated organisations. Each guest participates in a personal capacity. The AI Lyceum and its host accept no responsibility or liability for any loss, damage, or actions taken in reliance on the content of this podcast.

  1. #22 – Governing AI at Scale: Understanding Bias & Fairness [Dr. Chiara Gallese, Digital Ethics Prof]

    5 DAYS AGO

    'We should improve our critical thinking' — Chiara Gallese

    As AI systems move from experimentation to infrastructure, governance becomes the real test. In Episode 22 of The AI Lyceum, Samraj speaks with Dr Chiara Gallese — Philosophy PhD, Adjunct Professor of Digital Ethics at Collegio Internazionale Ca’ Foscari, Researcher at the Tilburg Institute for Law, Technology, and Society (TILT), and Academic Expert in the European Commission’s AI Transparency Code of Practice Working Groups. A lawyer and privacy consultant for multinationals and banks since 2015, she currently focuses her research on the legal aspects of artificial intelligence, the ethics of data use, and data protection. She is also a TEDx speaker.

    They explore what fairness means once AI systems are deployed at scale, where bias truly enters AI (data, model, or deployment), and how transparency obligations under the EU AI Act shape real institutional practice. Chiara explains the difference between stochastic and deterministic systems, why ignoring bias is not just unethical but poor engineering, and why governance must extend beyond frameworks into everyday use. The conversation also examines emotional attachment to generative systems, disclosure dilemmas, and why strengthening human judgment may be just as important as improving the models themselves.

    This is a conversation about responsibility, constitutional values, transparency, and governing intelligence in the real world.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    2:40 ➤ Does AI ethics improve business outcomes?
    8:55 ➤ Inside the EU AI Transparency Code of Practice
    15:30 ➤ Stochastic vs deterministic systems
    22:10 ➤ Where bias enters AI systems
    30:45 ➤ Emotional intelligence and attachment
    38:20 ➤ Disclosure, labelling, and stigma
    45:10 ➤ Critical thinking in the AI era
    50:30 ➤ Final reflections

    KEY QUESTIONS EXPLORED
    ➤ What does fairness mean in AI governance?
    ➤ Where does bias originate in AI systems?
    ➤ Can AI emotional intelligence be trusted?
    ➤ Should AI-generated content always be disclosed?
    ➤ Is governance about frameworks or lived practice?
    ➤ What must humans preserve as AI advances?

    Listen on:
    YouTube – https://www.youtube.com/@The.AI.Lyceum
    Spotify – https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple – https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon – https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website – https://theailyceum.com

    Hosted by Samraj Matharu — Certified AI Ethicist (Oxford) | Visiting Lecturer (Durham)

    #AI #AIAct #AIGovernance #DigitalEthics #Bias #Fairness #Transparency #ResponsibleAI #CriticalThinking

    52 min
  2. #21 – Understanding AI Ethics: Trust and Safety [Savneet Singh, AI Ethicist]

    6 FEB

    'The machine is there to support you — not to replace your judgment. You are the one in control' — Savneet Singh

    As AI becomes more human-like, trust becomes harder to define — and more critical to get right. In this episode, Samraj speaks with Savneet Singh, speaking not in her capacity as trust and safety lead at a top tech company but as Visiting Lecturer at Emory University, about what it really means to trust AI systems that increasingly sound, remember, and respond like humans.

    Savneet breaks down why trust in AI isn't about how human it feels — it's about predictability, transparency, and alignment with human values. They explore why AI must remain a co-pilot, not an autopilot, why labelling AI-generated content matters, and how misinformation spreads not through conspiracy, but through everyday digital behaviour. The conversation tackles 'AI psychosis', emotional attachment to non-conscious systems, the ethics of AI companions, and why accountability must sit with developers, deployers, and users — not the machine.

    This is a conversation about responsibility, boundaries, and keeping humans firmly in control as AI becomes more powerful.

    WHAT YOU'LL LEARN
    → What trust actually means in the context of AI
    → Why human-in-the-loop design is non-negotiable
    → The difference between misinformation and disinformation
    → Why AI companions risk emotional substitution
    → How "AI psychosis" emerges through prolonged interaction
    → Why labelling AI content builds trust
    → Where accountability must sit when AI goes wrong

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro
    1:50 ➤ What does "trust" really mean in AI?
    4:41 ➤ Transparency, guardrails, and human-in-the-loop
    7:22 ➤ Trust vs confidence
    9:33 ➤ AI-generated journalism and fabricated facts
    11:15 ➤ Misinformation, deception, and human responsibility
    14:49 ➤ AI psychosis and emotional attachment
    21:07 ➤ Losing clarity about the human–AI relationship
    24:27 ➤ Supportive tools vs emotional substitution
    28:49 ➤ Guardrails, free will, and ethics by design
    30:20 ➤ "Digital littering" and everyday ethics
    33:54 ➤ Why AI literacy matters for all ages
    36:26 ➤ One practical guardrail every business should use
    39:51 ➤ Trusting AI when you doubt your own judgment
    42:26 ➤ Teaching children how to use AI responsibly
    46:55 ➤ Why AI agents still feel risky
    52:22 ➤ The accountability gap
    54:00 ➤ Final message: humans must stay in control

    🔗 LISTEN, WATCH & CONNECT
    🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
    💻 Website: https://theailyceum.com
    ▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum
    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum

    ABOUT THE AI LYCEUM
    The AI Lyceum is a global community exploring AI, ethics, philosophy, and human potential — hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.

    #AI #philosophy #trust #google

    54 min
  3. #20 - AI's Biggest Blind Spot: Culture, Not Code [Dr. Nina Begus, Berkeley]

    30 JAN

    🎓 20% Off | Oxford AI Executive Programmes: https://oxsbs.link/ailyceum

    "Humanities scholars need to be at the table where AI is being built — ethics must be embedded from the start, not added as a band-aid afterward" — Dr. Nina Begus

    AI doesn't just process data — it processes human culture. In this episode, Samraj speaks with Dr. Nina Begus, Philosophy PhD from Harvard, UC Berkeley researcher, and author of Artificial Humanities, who argues that understanding AI requires more than engineering — it requires the humanities.

    Nina reveals how ancient myths like Pygmalion still shape how we design AI today, why language models inherit our cultural assumptions, and what happens when language gets stripped from human experience. They explore Ex Machina's warning about artificial companions, the rise of "mind crime" with Neuralink, and whether transformers are really the future.

    WHAT YOU'LL LEARN
    → How the Pygmalion myth influences AI design
    → Why Ex Machina matters for understanding AI relationships
    → What "mind crime" means in the age of Neuralink
    → The difference between trust and reliability in AI
    → Why interpretability unlocks creativity and control
    → Are transformers really it — or is there more ahead?

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro
    3:00 ➤ What humanities reveal about AI
    10:00 ➤ Academia meets Silicon Valley
    13:00 ➤ "Will I be replaced?" — the 2023 question
    17:00 ➤ Writers respond: First Encounters book
    21:00 ➤ The Pygmalion myth in modern tech
    24:00 ➤ Ex Machina & artificial companions
    28:00 ➤ Neuralink, neuroethics & mind crime
    33:00 ➤ Ethics from the start vs band-aid approach
    36:00 ➤ Getting the transformer paper day one
    42:00 ➤ Are transformers the future?
    45:00 ➤ Determinism vs creativity in AI
    48:00 ➤ The black box problem
    53:00 ➤ Tokenization: language without meaning
    58:00 ➤ Trust vs reliability in machines
    1:02:00 ➤ Would you trust a machine?

    🔗 LISTEN, WATCH & CONNECT
    🎓 Oxford Programme (20% Off): https://oxsbs.link/ailyceum
    🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
    💻 Website: https://theailyceum.com
    ▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum
    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum

    ABOUT THE AI LYCEUM
    The AI Lyceum explores AI, ethics, philosophy, and human potential — hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.

    #ai #humanities #ethics #culture #neuralink #exmachina #berkeley

    1h 1m
  4. #19 - AI Is a Question Machine: Teaching Thinking, Not Answers [Cassandra Sibilin, Philosophy Lecturer, CUNY]

    21 JAN

    AI is not an oracle — it is a question machine.

    In this episode, Samraj Matharu speaks with Cassandra Sibilin, Philosophy Lecturer at CUNY (City University of New York) and a leading practitioner in AI-assisted education. Cassandra works at the intersection of philosophy, pedagogy, and emerging AI tools, helping educators and students move beyond fear, hype, and automation toward deeper thinking.

    Rather than treating AI as an answer engine or oracle, she argues for a different framing: AI as a question machine — a tool that challenges assumptions, surfaces multiple perspectives, and sharpens human judgment. Together, they explore why philosophy has become essential AI literacy, how “flipping the interaction” with AI changes learning outcomes, and why critical thinking is not replaced by automation — but pressured into relevance by it.

    The conversation examines dialectic thinking, growth-mindset tutors, the risks of anthropomorphising AI, and how education must evolve toward dialogue, inquiry, and community. This is a long-form, reflective discussion about authority, knowledge, ethics, and what it really means to teach — and think — in the age of generative AI.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest welcome
    3:10 ➤ Why AI isn’t an oracle
    9:20 ➤ Philosophy as AI literacy
    15:40 ➤ Flipping the interaction: AI that asks questions back
    23:30 ➤ Dialectic thinking vs hype and panic
    31:10 ➤ Anthropomorphising AI: bug or feature?
    38:50 ➤ Ethics, growth mindset, and responsible AI tutors
    46:00 ➤ Education in 2050: AI tutors and human community
    53:30 ➤ Closing reflections & audience question

    🔗 LISTEN, WATCH & CONNECT
    🎓 Oxford Programme (20% Off): https://oxsbs.link/ailyceum
    🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
    💻 Website: https://theailyceum.com
    ▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum
    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum

    ABOUT THE AI LYCEUM
    The AI Lyceum™ is an independent global community exploring AI, ethics, philosophy, and human potential — hosted by Samraj Matharu, Certified AI Ethicist (University of Oxford) and Visiting Lecturer at Durham University. “1K+ members from OpenAI, DeepMind, Oxford, Google, and more.”

    56 min
  5. #18 – Thinking With AI: What Can’t Be Automated? [Peter Danenberg, Google DeepMind]

    13 JAN

    “The things we can say are limited by the things we can think.”

    In this episode, Samraj Matharu speaks with Peter Danenberg, Senior Software Engineer specialising in rapid LLM prototyping at Google DeepMind, based in Palo Alto, California. Peter works at the frontier where large language models move from research to real-world systems.

    Together, they explore what it really means to think with AI — not to outsource thinking to machines, but to use them as tools that challenge, pressure-test, and refine human judgment. The conversation goes beyond model performance into philosophy, ethics, and cognition. Peter reflects on why intelligence is not the same as thinking, how critical thinking emerges from moments of crisis, and why philosophy remains the underlying language of reasoning in an age of automation.

    They examine our instinct to anthropomorphise AI — questioning whether this is a flaw or an evolutionary feature — and discuss why ethics in LLM development has largely focused on harm reduction rather than human flourishing. The episode also introduces the idea of peirastic AI: systems designed not to reassure users, but to test and sharpen their thinking.

    This is a long-form, reflective conversation about judgment, responsibility, and the limits of automation — and what still belongs, fundamentally, to humans.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest welcome
    4:00 ➤ Peter’s role at DeepMind and rapid LLM prototyping
    9:30 ➤ What “thinking with AI” really means
    15:00 ➤ Intelligence vs thinking: where people get confused
    22:00 ➤ Philosophy as the language of thinking
    30:00 ➤ Critical thinking, crisis, and discernment
    38:00 ➤ Anthropomorphising AI: bug or feature?
    47:00 ➤ Ethics in LLMs and the limits of harm reduction
    56:00 ➤ Automation, judgment, and human responsibility
    1:05:00 ➤ Peirastic AI: systems that test us
    1:15:00 ➤ Interfaces, embodiment, and tactile thinking
    1:26:00 ➤ What can’t be automated
    1:36:00 ➤ Closing reflections and audience question

    🔗 LISTEN, WATCH & CONNECT
    🎓 Oxford Programme (20% Off): https://oxsbs.link/ailyceum
    🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
    💻 Website: https://theailyceum.com
    ▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum
    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum

    ABOUT THE AI LYCEUM
    The AI Lyceum is a global community exploring AI, ethics, creativity, and human potential — hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.

    #ai #genai #llm #google #deepmind #aiethics #ethics #philosophy #thinking #criticalthinking #automation #humanjudgment #agenticai #responsibleai #theailyceum

    1h 43m
  6. #17 – The Future of Law with AI: Making Sense of It All [Stephen Dnes, Lawyer]

    5 JAN

    “Law does not stop innovation — but poor regulation can quietly distort it.”

    In this episode, Samraj speaks with Stephen Dnes, media lawyer, partner at Dnes & Felver, and lecturer in law at Royal Holloway, University of London, about one of the most important questions facing AI today: how should law assign responsibility when AI systems act, decide, and transact autonomously?

    Stephen brings a rare transatlantic legal perspective, having worked across UK, EU, and US competition, data, and technology regulation. Together, they explore how existing legal doctrine struggles with agentic systems, why GDPR and the EU AI Act often collide rather than complement one another, and how concepts like liability, mens rea, hazard, and risk must evolve in an AI-mediated world.

    The conversation moves beyond surface-level AI debates into deeper legal, economic, and philosophical territory — including how agentic contracts change verification and accountability, why today’s AI systems are better at averaging than wisdom, and what trust really means as humans gradually leave the loop. Whether you work in law, policy, advertising, technology, or AI strategy, this episode offers a rare, clear-eyed view of how legal systems may adapt to the next era of automation.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    3:00 ➤ Defining data, information, and regulation
    8:00 ➤ Why law always lags innovation
    13:00 ➤ GDPR vs the EU AI Act: a structural tension
    18:00 ➤ Hazard vs risk and the limits of precaution
    24:00 ➤ Mens rea, strict liability, and AI systems
    32:00 ➤ Agentic contracts and responsibility chains
    41:00 ➤ Disintermediation and the future of advertising markets
    50:00 ➤ Trust, brands, and humans leaving the loop
    58:00 ➤ Artificial intelligence vs artificial wisdom
    1:05:00 ➤ Law, philosophy, and the role of human judgment
    1:09:00 ➤ Closing reflections & audience question

    🔑 KEY QUESTIONS ANSWERED
    ➤ How should responsibility be assigned when AI acts autonomously?
    ➤ Why do GDPR and the EU AI Act often pull in opposite directions?
    ➤ What is the legal difference between hazard and risk — and why does it matter?
    ➤ Can concepts like mens rea apply to AI systems at all?
    ➤ How do agentic contracts change verification and liability?
    ➤ Why do AI systems average well but struggle with wisdom?
    ➤ What does “trust” mean in an AI-mediated economy?

    🔗 SUBSCRIBE TO THE AI LYCEUM: https://www.youtube.com/@The.AI.Lyceum
    🎓 20% Off | Oxford AI Ethics Executive Programme: https://oxsbs.link/ailyceum
    🔗 Website: https://theailyceum.com
    🔗 Instagram: https://www.instagram.com/theailyceum
    🔗 LinkedIn: https://www.linkedin.com/company/108295902/
    🔗 LinkedIn Community: https://linktr.ee/theailyceum

    #ai #law #aigovernance #aiethics #regulation #agenticai #liability #trust #gdpr #euaiact #digitalmarkets #advertising #adtech #policy #philosophy #technology #theailyceum

    1h 11m
  7. #16 – Winning the Zero Click: AI and Marketing’s Future [Professor Jim Lecinski, Kellogg School of Management]

    11 DEC 2025

    🎓 20% Off | Oxford AI Ethics Executive Programme: https://oxsbs.link/ailyceum

    “AI raises the ceiling and raises the floor. It is not the end of marketing. It is the beginning of better marketing.” — Jim Lecinski

    In this episode, Samraj speaks with Jim Lecinski, Clinical Professor of Marketing at Northwestern University’s Kellogg School of Management and former Google VP. He created ZMOT (Zero Moment of Truth), the idea that people research and judge brands long before they reach a store or website. His new book, The AI Marketing Canvas, Second Edition: A Five-Step AI Plan for Marketers, offers a clear method for adopting AI in modern organisations. In the Zero-Click Era, where AI answers instead of sending users to links, this work becomes essential.

    They explore how AI-first search, LLMs, and agentic interfaces are changing discovery and consumer behaviour. Jim explains how models gather and refine information, why answers differ, and how brands can prepare for a world where clicks fall. He argues for clear use cases, real value creation, and avoiding hype. They also discuss ethics, bias, and how brands must balance financial, brand, and community value.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:00 ➤ Why AI is reshaping marketing
    3:00 ➤ ZMOT explained and Zero Click’s rise
    6:00 ➤ The 2x2 model for real AI use cases
    10:00 ➤ Efficiency traps
    13:00 ➤ How LLMs retrieve and refine information
    16:00 ➤ Variability, trust and personalisation
    20:00 ➤ Ethics, bias and community value
    25:00 ➤ Preparing for AI-first search
    30:00 ➤ Agentic assistants and commerce
    35:00 ➤ Wow moments with AI
    40:00 ➤ UI, usability and the next era
    45:00 ➤ Jim’s closing challenge
    50:00 ➤ Closing

    🔗 LISTEN, WATCH & CONNECT
    🎓 Oxford Programme (20% Off): https://oxsbs.link/ailyceum
    🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
    💻 Website: https://theailyceum.com
    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    ▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum
    📘 Jim’s Book: https://www.amazon.co.uk/Marketing-Canvas-Second-Five-Step-Marketers-ebook/dp/B0G59PLDBM/

    ABOUT THE AI LYCEUM
    The AI Lyceum is a global community exploring AI, ethics, creativity and human potential, hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.

    #ai #genai #llm #openai #google #nvidia #agent #agentic #marketing #advertising #media #commerce #ecommerce #zeroclick #zmot #jimlecinski #aimarketing #consumerjourney #search #business #entrepreneur #entrepreneurship #aiethics #ethics #philosophy #theailyceum

    50 min
  8. #15 – Is AI Listening to Us? Voice AI as the New Interface [Charles Cadbury | CEO, Say It Now]

    8 DEC 2025

    🎓 20% Off | Oxford AI Ethics Executive Programme: https://oxsbs.link/ailyceum

    🍫 The answer is yes — he is related to that Cadbury.

    “Conversation is the most natural interface there is.” — Charles Cadbury

    Charles Cadbury — CEO of Say It Now and Amazon Alexa Cup winner — explores why Voice AI is becoming the new interface for search. From early Alexa development to powering actionable audio ads for Marriott, Greggs, and HMRC, he explains how voice delivers better engagement and how agentic advertising will reshape the future. The conversation dives into context windows, ethics, mind-crime, multimodal devices, energy demands, assistant wars, and why voice may replace screens as the dominant interface by 2030.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:00 ➤ “Conversation is the most natural interface there is”
    2:50 ➤ Broadcast vs dialogue
    3:45 ➤ Why voice beats typing
    4:40 ➤ The early Alexa commerce moment
    5:40 ➤ How actionable audio ads work
    6:45 ➤ Smart speaker behaviours
    8:00 ➤ Building voice apps before mass adoption
    9:40 ➤ Booking flights by voice (2015)
    10:30 ➤ Marriott & in-room assistants
    11:25 ➤ Winning Amazon’s Alexa Cup
    12:40 ➤ Debunking the “always listening” myth
    13:50 ➤ Context windows in advertising
    15:20 ➤ Agentic journeys: audio ad → action
    16:20 ➤ Growing The AI Lyceum via voice search
    17:20 ➤ Voice-through-rate & 30–40% conversions
    20:00 ➤ Measuring voice in the zero-click era
    21:10 ➤ Branded experts for Alexa, Gemini & Siri
    22:20 ➤ Ethics, persuasion & the audience of one
    24:20 ➤ Data powering conversational optimisation
    25:45 ➤ Voice as the unlock for TV, radio & OOH
    27:50 ➤ From billboard to real-time booking
    29:00 ➤ Beyond the smartphone: new interfaces
    30:40 ➤ Smart glasses & multimodal projection
    31:30 ➤ Mind-crime: when AI gets “too good”
    32:30 ➤ Using AI for creative thought partnership
    33:30 ➤ Alexa upgrades & the assistant wars
    34:40 ➤ Power demand & AI’s energy challenge
    36:50 ➤ Space data centres & lunar solar ideas
    38:20 ➤ Why OpenAI won’t run ads mid-conversation
    40:10 ➤ Behaviour change campaigns: smoke alarms & HMRC
    42:00 ➤ Charles’ closing question
    43:00 ➤ Closing

    PLATFORMS
    🎓 20% Off | Oxford Programme: https://oxsbs.link/ailyceum
    All Links: https://linktr.ee/theailyceum
    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com

    #ai #artificialintelligence #aiethics #marketing #advertising #tech #genai #theailyceum #charlescadbury #voiceai #conversationalai #adtech #sayitnow #philosophy

    44 min
