The AI Lyceum

Samraj Matharu

The AI Lyceum Podcast explores the intersection of artificial intelligence, ethics, and society, bringing together voices from academia, industry, and policy. Hosted by Samraj Matharu - AI ethicist certified by OxEthica, University of Oxford, Visiting Lecturer at Durham University, and founder of The AI Lyceum - each episode unpacks the real-world impact of AI on business, governance, and human values.

Listeners can expect:
▸ Sharp discussions on AI ethics, regulation, and governance
▸ Insights from leading experts at OpenAI, the University of Oxford, and more
▸ Practical perspectives on responsible AI adoption in business
▸ Philosophical reflections on how AI reshapes our world

🌐 Website: theailyceum.com
🔗 Linktree: linktr.ee/theailyceum

This isn’t just another tech podcast - it’s a space for critical thinking, practical wisdom, and future-facing ideas. Whether you’re a policymaker, business leader, or AI practitioner, The AI Lyceum Podcast equips you with the clarity to navigate AI responsibly.

*The views and opinions expressed in this podcast are those of the individual speakers alone. They do not represent the views of The AI Lyceum, its host, or any affiliated organisations. Each guest participates in a personal capacity. The AI Lyceum and its host accept no responsibility or liability for any loss, damage, or actions taken in reliance on the content of this podcast.

  1. AI and the New Economics of Advertising [Media Analyst, Ian Whittaker] #30

    20 HRS AGO

    ‘You are perceived as you price’ - Ian Whittaker

    Advertising is entering a new economic era. AI is changing what gets automated, what gets valued, and how agencies prove their worth. In this episode of The AI Lyceum®, Samraj Matharu speaks with Ian Whittaker, a media and advertising analyst who has spent more than 25 years looking at the industry through a financial markets lens.

    Ian argues that advertising has become too focused on efficiency: lower CPMs, cheaper reach, faster execution and marginal optimisation. But clients, CFOs and boards do not think in media metrics. They think in revenue growth, free cash flow, margin, risk and capital allocation.

    This conversation asks a simple question: if AI makes execution cheaper, what does the advertising industry get paid for next? We discuss why the current agency pricing model may not hold, why brand becomes more valuable when optimisation is abundant, and why agencies need to move closer to the client P&L.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    3:28 ➤ Ian Whittaker’s Background in Media, Markets and Advertising
    4:45 ➤ Why Advertising Must Move From Efficiency to Efficacy
    6:45 ➤ CPMs, Clicks, Cash Flow and the Client Gap
    8:52 ➤ Starting With the Business Problem, Not the Technology
    10:58 ➤ The Industry’s Rabbit Hole of Tech, Process and Measurement
    15:12 ➤ AI as a Supply-Side Shock to Advertising
    16:29 ➤ Why Advantage Moves Upstream to Strategy and Brand
    20:03 ➤ Distribution Control, Ad Tech and the Future of Agencies
    24:20 ➤ Publicis, Power of One and Agency Margin
    26:19 ➤ Why the Advisor and Executor Roles Are Blurring
    29:39 ➤ Why the Current Agency Revenue Model Will Not Hold
    34:30 ➤ What Agencies Need to Change Before 2030
    39:42 ➤ OpenAI Ads, LLM Advertising and Capital Allocation
    42:06 ➤ Attribution, Risk and Why CFOs Do Not Need Perfect Certainty
    46:35 ➤ Speaking the Language of the CFO
    49:58 ➤ Closing Thoughts

    KEY QUESTIONS ANSWERED
    ➤ Why has advertising become too focused on efficiency?
    ➤ What does AI do to agency pricing power?
    ➤ Why is execution becoming commoditised?
    ➤ Should agencies be paid for time, outputs or outcomes?
    ➤ Why does brand become more important in an AI-driven media world?
    ➤ How should advertisers think about every pound of media spend?
    ➤ Why do agencies need to speak the language of the CFO?

    SUBSCRIBE
    Subscribe to The AI Lyceum® for conversations on AI, philosophy, media, ethics and the future of business.

    CONNECT
    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com
    LinkedIn: https://www.linkedin.com/company/108295902/

    #AI #Advertising #Media #MarketingScience #AdTech #BrandStrategy #TheAILyceum

    46 min
  2. Inside the AI Supply Chain: How Cerebras Powers Fast AI [James Wang] #29

    APR 26

    'STEM is reducible to math and math is verifiable. So anything verifiable is automatable.'

    In this episode of The AI Lyceum®, Samraj Matharu speaks with James Wang, Product Marketing Director at Cerebras, about the AI supply chain, inference, agents and what remains human when intelligence becomes infrastructure. James previously spent nearly a decade at NVIDIA before joining Cerebras, where he focuses on AI models and inference.

    This conversation goes from the technical to the philosophical: what inference actually means, why fast AI matters, how agentic coding is changing work, and why humanities, relationships and personal narrative may become more valuable in an age of automation.

    We also explore the difference between old rule-based automation and modern AI systems. As James puts it, with AI you no longer define every rule up front. You define the objective, and the system creates what it needs to get there.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:20 ➤ Ray Kurzweil, AI and the Final Boss of Technology
    6:00 ➤ What Inference Means and Why It Matters
    10:20 ➤ AI Adoption, Agentic Coding and the Usage Gap
    13:00 ➤ Mental Labour, Automation and the Future of Work
    20:00 ➤ Rule-Based Automation vs Fluid Intelligence
    25:00 ➤ Alignment, Consciousness and Inner Experience
    28:00 ➤ Proactive AI, Offline Inference and Continuous Agents
    32:00 ➤ Why Humanities May Matter More Than STEM
    41:00 ➤ AGI Timelines and the End of Long-Term Forecasting
    45:00 ➤ Agent Companies, Token Economies and Human Value
    54:00 ➤ What Remains Valuable When AI Automates Utility

    KEY QUESTIONS ANSWERED
    ➤ What is inference, and why does it matter in the AI supply chain?
    ➤ How does Cerebras fit into the future of AI infrastructure?
    ➤ Why is agentic coding changing software development so quickly?
    ➤ What is the difference between rule-based automation and intelligent AI?
    ➤ Why might verifiable tasks become increasingly automatable?
    ➤ What human skills become more valuable as AI gets faster and cheaper?

    Subscribe to The AI Lyceum® for conversations on artificial intelligence, philosophy, ethics, infrastructure and the future of society.

    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com
    LinkedIn: https://www.linkedin.com/company/108295902/

    #AI #Cerebras #Inference #AISupplyChain #AIAgents #AGI #AIInfrastructure #FutureOfWork #ResponsibleAI #TheAILyceum

    55 min
  3. How Advertising, AI and Algorithms Shape What We See [Alessandra Di Lorenzo, Advertising Leader] #28

    APR 20

    ‘The web is one big fat ad.’

    That was Alessandra Di Lorenzo’s quote of the podcast, and it gets right to the point. In this episode of The AI Lyceum®, Samraj Matharu speaks with Alessandra Di Lorenzo, former CEO of lastminute.com media and former leader at eBay and Vodafone, following her recent TEDx Royal Tunbridge Wells talk at the March 8, 2026 event themed ‘Momentum’. Her TEDx speaker profile framed the idea simply and sharply: ‘My daughter could skip ads before she could read the word advertising.’

    A key theme running through this conversation is reversibility. Alessandra makes a sharp distinction between decisions that can be reversed and those that cannot. If a decision is reversible, AI can help optimise it. If it is irreversible, especially where trust, brand, or human relationships are at stake, leaders should think very carefully before handing it to a machine.

    They explore how AI, advertising, and algorithms shape what we see, how the zero-click web is changing the economics of publishing and discovery, and why brands now have to think beyond traffic and performance alone. The conversation also gets into trust, direct traffic, language, intelligence, ethics, agents, media literacy, and the growing need for human judgment in an age of machine-led optimisation.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Alessandra Di Lorenzo on AI as the new silent gatekeeper
    2:18 ➤ Motherhood, media, and why algorithmic influence starts early
    7:01 ➤ The zero-click web, Google AI answers, and the pressure on publishers
    14:44 ➤ What language models get wrong about meaning and intelligence
    20:01 ➤ AI ethics, human judgment, and why some decisions cannot be outsourced
    25:03 ➤ Reversible vs irreversible decisions in brand and business
    31:01 ➤ Direct traffic, trust, brand equity, and surviving outside the gatekeeper
    37:21 ➤ How CEOs should think about AI transformation, goals, and timeframes
    45:46 ➤ Agents, black boxes, and the future of brand discovery
    50:24 ➤ Critical thinking, media literacy, and ‘feed the feed’
    59:57 ➤ Closing question on responsible AI and human judgment

    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com

    1 hr 3 min
  4. Inclusive AI, Trust and Hidden Harm [Sidrah Hassan, AI Ethicist] #27

    APR 14

    'AI just feels like another frontier of exclusion' – Sidrah Hassan

    In this episode of The AI Lyceum®, Samraj Matharu speaks with Sidrah Hassan, AI Governance and AI Ethics Specialist, about inclusive AI, AI ethics, AI governance, algorithmic bias, trust in AI, transparency, human oversight, and responsible AI in practice. Sidrah is an AI Governance and Ethics Manager at Kainos, an AI Ethics and Strategy Advisor at Ethical AI Alliance, and has also worked across AI ethics, product, and public education through roles at AND Digital, BBC Scotland, and the AI Safety Collab by ENAIS.

    We explore how large language models can reinforce gender bias and racial bias, why inclusive AI must go beyond good intentions, and what trustworthy AI really looks like when systems are used in the real world. The conversation covers training data, representation gaps, AI harms, accountability, human-in-the-loop decision-making, and the challenge of building AI systems that serve people fairly.

    Sidrah also discusses agentic AI, AI in healthcare, economic displacement, and the role of storytelling in surfacing subtle harms that are often missed by technical frameworks alone. She explains the thinking behind the AI Harms Map and why lived experience matters when assessing the real impact of AI systems.

    This episode is for anyone interested in AI ethics, AI governance, responsible AI, inclusive AI, trustworthy AI, bias in AI systems, AI transparency, and the future of human-centred technology.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:09 ➤ What inclusive AI looks like in everyday systems
    4:34 ➤ LLM bias, training data, and representation gaps
    7:09 ➤ How to improve inclusivity in AI
    11:00 ➤ What trust in AI really means
    13:01 ➤ Building trust in AI systems
    20:13 ➤ Is AI ethics a distinct field?
    30:17 ➤ Agentic AI, safety, and security
    32:45 ➤ Storytelling, lived experience, and AI harms
    43:10 ➤ Economic displacement and the future of work
    50:21 ➤ AI in healthcare and human judgment
    1:00:03 ➤ The AI Harms Map
    1:03:48 ➤ A closing question on data and AI use

    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com

    #AI #AIEthics #AIGovernance #InclusiveAI #ResponsibleAI #TrustworthyAI #AlgorithmicBias #AgenticAI #TheAILyceum

    1 hr 9 min
  5. How AI Is Changing Education, Ethics, and the Future of Work [Tina Austin] #26

    APR 6

    ‘Failure is not something to be ashamed of. I want my students to know why they were wrong.’ - Tina Austin

    AI is changing education, work, and judgment faster than most institutions can keep up. In Episode #26 of The AI Lyceum®, I speak with Tina Austin, an AI educator, bioethics and computational biology lecturer, AI ethics adviser, and OpenAI presenter, about what it means to teach, think, and stay human in the age of AI.

    We explore how generative AI is reshaping the classroom, why students are anxious about the future of work, and why critical thinking matters more when machines can produce polished answers in seconds. This is not a vague conversation about hype. It is a grounded one about what universities, businesses, and individuals should actually measure when they use AI.

    Tina argues that the goal is not speed for its own sake. It is better judgment, clearer reasoning, stronger ethics, and a more honest understanding of where AI helps and where it can mislead. The conversation goes beyond urging students to think properly: Tina is designing learning environments where thinking is unavoidable, even with AI.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:06 ➤ AlphaFold, science, and getting students excited about AI
    2:26 ➤ Preparing students for jobs in a post-GenAI world
    3:31 ➤ Human wisdom vs artificial intelligence
    4:39 ➤ Moltbook, AI agents, and strange new digital experiments
    7:05 ➤ AI-only conferences and what they reveal
    10:20 ➤ How businesses should measure AI success
    15:04 ➤ Morals, ethics, and whether AI ethics really exists
    18:40 ➤ The Socratic AI Framework and why frameworks matter
    21:20 ➤ Tina’s framework for critical thinking in education
    25:20 ➤ Can thinking be measured? Metacognition, evidence, and learning
    29:44 ➤ Determinism, interpretability, and risk in medicine
    35:48 ➤ Sci-fi, surveillance, and where AI may take society
    40:03 ➤ Healthcare, AlphaGenome, and cautious optimism
    46:00 ➤ Foresight, prediction, and helping students earlier
    47:03 ➤ Wisdom, intelligence, and deciding which problems matter
    48:21 ➤ Closing reflections on agency and responsibility

    This episode answers questions such as:
    ➤ What should students learn in the AI age?
    ➤ How should businesses measure AI properly?
    ➤ Where is the line between automation and intelligence?
    ➤ What happens when systems start influencing human judgment at scale?
    ➤ How do we preserve agency when AI becomes more capable, persuasive, and present?

    Subscribe to The AI Lyceum® for conversations on AI, ethics, philosophy, science, and the future of society.

    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com

    #AI #Education #AIEthics #FutureOfWork #CriticalThinking #HigherEducation #TheAILyceum

    51 min
  6. How Wondercraft Is Changing AI Video for Business [Wondercraft CEO, Dimitris Nikolaou] #25

    MAR 29

    'The problem is that more people want to be creating videos, but they can’t'

    AI video is moving from novelty to utility, and Wondercraft is betting the real opportunity is not just generating content, but making video genuinely usable for business. In Episode 25 of The AI Lyceum®, we sat down with Dimitris Nikolaou, Co-Founder and CEO of Wondercraft, to explore how he went from Imperial and Palantir to Y Combinator and building one of the UK’s most interesting AI creation platforms.

    We discuss why Wondercraft started in audio, why video is a far bigger market, and why most AI video tools still fall short for real work. Dimitris explains the difference between model aggregation and building a true application layer, why businesses need editable workflows rather than random clips, and how Wondercraft is aiming to become “the antidote to AI slop” by making AI video useful for onboarding, training, internal communication, and more.

    We also get into startup iteration, hiring philosophy, market selection, and why being a founder requires more intentionality than most people realise.

    EPISODE HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:54 ➤ Why Wondercraft is called Wondercraft
    3:51 ➤ From Imperial and Palantir to Y Combinator
    7:29 ➤ Proving a startup idea early on
    9:42 ➤ Why long-term vision matters
    11:43 ➤ AI slop, business video, and real utility
    14:48 ➤ The missing layer between models and useful products
    16:47 ➤ Why Wondercraft moved from audio into video
    19:42 ➤ Building a lean, high-performance team
    24:07 ➤ Dimitris’s favourite interview questions
    29:00 ➤ Luck, hard work, and career inflection points
    32:27 ➤ Where AI video is heading
    34:49 ➤ Inference, APIs, and working with FAL
    36:59 ➤ Advice for founders starting in AI today
    40:11 ➤ Why AI is not the real differentiator
    44:02 ➤ Dimitris’s closing advice for aspiring founders

    Listen to The AI Lyceum®
    YouTube: https://www.youtube.com/@The.AI.Lyceum
    Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    Website: https://theailyceum.com

    #AI #AIVideo #Wondercraft #GenerativeAI #Startups #YCombinator #VideoCreation #FutureOfWork

    44 min
  7. Agentic Advertising: How AI Agents Will Buy and Sell Media [IAB Tech Lab, Shailley Singh] #24

    MAR 12

    This episode is sponsored by Advertising Protocols™ - a free and open directory for the entire advertising ecosystem, built to help people explore the protocols, standards, tools, and infrastructure shaping the future of media and ad tech.
    🌐 https://advertisingprotocols.com

    'Bot is the old world. Agent is the new world' – Shailley Singh

    What happens when AI agents start buying and selling media? In this episode of The AI Lyceum™, Samraj Matharu sits down with Shailley Singh, EVP Product and COO at IAB Tech Lab. Shailley has helped shape modern digital advertising through senior roles at companies including Yahoo and PayPal, and through contributions to major industry standards such as MRAID for mobile. Now he is focused on the next shift: agentic advertising.

    We discuss agentic real-time bidding, publisher value, standards, transparency, human oversight, AI-generated content, and why the future of advertising may depend on humans moving from operators to architects.

    HIGHLIGHTS
    0:00 ➤ Intro / Guest Welcome
    1:42 ➤ Why agentic AI is the next big shift in advertising
    7:25 ➤ Agentic real-time bidding explained
    12:40 ➤ From campaign KPIs to business outcomes
    16:29 ➤ Standards, protocols and avoiding fragmentation
    22:20 ➤ Human in the loop: from operator to architect
    28:11 ➤ AI slop, provenance and content quality
    35:28 ➤ The future of AI in marketing and advertising
    44:34 ➤ Is AI a bubble?
    52:05 ➤ Final thoughts: speed vs precision

    WE ANSWERED
    ➤ What is agentic advertising?
    ➤ How could AI agents change media buying and real-time bidding?
    ➤ Why do standards and protocols matter in an AI era?
    ➤ How should humans stay in the loop when AI systems make decisions?
    ➤ What will make companies stand out when AI becomes embedded everywhere?

    🎧 YouTube: https://www.youtube.com/@The.AI.Lyceum
    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
    🌐 Website: https://theailyceum.com
    🔗 Linktree: https://linktr.ee/theailyceum
    🔗 Advertising Protocols: https://advertisingprotocols.com
    🔗 Shailley Singh: https://www.linkedin.com/in/shailleysingh/
    🔗 IAB Tech Lab: https://iabtechlab.com

    Samraj Matharu - Certified AI Ethicist (Oxford) | Visiting Lecturer (Durham)
    The AI Lyceum™ is a 2K+ global community with members across OpenAI, DeepMind, Oxford, Google, Anthropic, and more.

    #AI #Advertising #AdTech #AgenticAI #MediaBuying #IABTechLab #TheAILyceum

    57 min
  8. The Magic Mirror: The Journey to $3M a Month [Sunil Jindal, Magic AI] #23

    MAR 2

    'When you see that image of yourself on the screen, suddenly it feels much more possible' - Sunil Jindal, Co-Founder, Magic AI

    What if your mirror could train you using a future version of you as the trainer? In this episode of The AI Lyceum, Samraj Matharu sits down with Sunil Jindal, Co-Founder of Magic AI - the award-winning consumer tech company behind the AI-powered smart mirror named one of TIME Magazine's Best Inventions of 2024.

    Magic AI uses computer vision to count reps, correct form, and give real-time feedback across 400+ exercises, with virtual trainers including Sir Alastair Cook, Katya Jones, and Jesse Lingard. They were invited to Dragon's Den and turned it down. They hold patents on the mirror and adjustable dumbbells. And in January 2026, they posted $3.4M in a single month, their best ever, with just 15 employees.

    EPISODE HIGHLIGHTS
    0:00 ➤ Welcome and intro
    2:12 ➤ The Magic Mirror explained
    3:43 ➤ From 2 founders to a team of 20
    5:38 ➤ The emotional impact: wheelchair users, single mums, cancer recovery
    9:18 ➤ Privacy by design and why they don't store your camera feed
    12:40 ➤ What data does Magic AI actually track?
    15:36 ➤ Body scan, metabolic age and personalised weight recommendations
    19:38 ➤ Dragon's Den and why they said no
    21:35 ➤ Future you as your trainer and the visualisation breakthrough
    26:20 ➤ Could the mirror run ads? Sunil's honest take
    29:33 ➤ Loss aversion, habit nudges and standby screen ideas
    33:10 ➤ Patents, dumbbells and the Technogym comparison
    38:41 ➤ From Imperial physics to AI fitness founder
    45:22 ➤ One question to leave the audience with

    MEMORABLE QUOTES
    💬 'The only data we store are the coordinates of points on your body, not the video'
    💬 'Imagine progress photos that look forwards, not backwards'
    💬 'Future you, training present you, to become future you'
    💬 'Always ask yourself: is this providing net positive value for me?'

    KEY QUESTIONS ANSWERED
    ➤ How does an AI mirror track form without storing footage?
    ➤ What metrics does Magic AI use to personalise your workout?
    ➤ Why did they decline Dragon's Den?
    ➤ What does the future of home fitness look like with GenAI?
    ➤ How do you build a company with AI before it was a buzzword?

    🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
    🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
    🌐 Website: https://theailyceum.com
    🔗 Linktree: linktr.ee/theailyceum
    🪞 Magic AI: https://magic.fit
    🔗 Sunil Jindal: https://www.linkedin.com/in/suniljindal21/
    🔗 Varun Bhanot: https://www.linkedin.com/in/varunbhanot1/

    Hosted by Samraj Matharu - Certified AI Ethicist (Oxford) | Visiting Lecturer, Durham University | Founder, The AI Lyceum | 1,000+ members from OpenAI, DeepMind, Google, Anthropic and more.

    The views expressed are those of the speakers personally and do not represent The AI Lyceum or affiliated organisations.

    47 min
