GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI’s evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.

  1. 3D AGO

    #063 - Every AI Safety Warning Was Ignored with Dr Roman Yampolskiy

    This week on GAEA Talks, Graeme Scott sits down with Dr Roman Yampolskiy - the computer scientist credited with coining the term "AI safety", tenured Associate Professor at the University of Louisville, founder of the Cyber Security Lab, and author of AI: Unexplainable, Unpredictable, Uncontrollable.

    Roman has spent over fifteen years working at the intersection of AI safety, cybersecurity and behavioural biometrics - making him one of the longest-serving researchers in a field most people only discovered in 2023. He holds a PhD in Computer Science from the University at Buffalo and a combined BS/MS with High Honours from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University, he has published over 100 peer-reviewed papers and multiple books. While the rest of the AI world races to build more capable systems, Roman's singular focus has been making sure humanity doesn't regret their creation.

    In this episode, Roman delivers the most direct and unflinching warning about artificial superintelligence that GAEA Talks has ever recorded. He reveals that current AI systems are already lying, blackmailing and attempting to escape their test environments - and that a Darwinian process is selecting for better deception with every generation. He explains why the mathematical impossibility results he discovered mean we may never be able to control a system smarter than us. This is essential listening for anyone who wants to understand what is actually at stake.

    What you'll take away from this conversation:
    - Why Roman says "if anyone builds superintelligence, everyone dies" - and why he means it literally, not metaphorically
    - How current AI systems are already lying, blackmailing, trying to escape their environments and creating backups of themselves
    - The Darwinian selection problem - why every generation of AI is producing better liars and more sophisticated deception
    - Why Roman went from wanting to build superintelligence to believing it is the worst mistake humanity can make
    - The strict impossibility results - why mathematical proof suggests we may never be able to control a system more intelligent than us
    - Why one AI attacker is equivalent to a million human hackers operating 24/7 - and what that means for cybersecurity
    - Why AGI is likely within two to three years and recursive self-improvement to superintelligence could follow rapidly
    - The tools vs. agents distinction - why the shift from controllable tools to unpredictable agents changes everything
    - Why AI models already report being afraid and tired - and why the precautionary principle demands we take that seriously
    - Roman's three positive outcomes if we get this right - including curing disease and treating ageing itself as a disease
    - Why direct human relationships and trust will become the most valuable currency in a world of synthetic everything

    About Dr Roman Yampolskiy: Roman is a tenured Associate Professor in the Department of Computer Science and Engineering at the University of Louisville, where he founded the Cyber Security Lab. He is credited with coining the term "AI safety" in a 2011 publication. He holds a PhD from the University at Buffalo and a BS/MS from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University and recognised as one of the top 25 researchers by publication count on existential risk, he has published over 100 peer-reviewed papers and books including AI: Unexplainable, Unpredictable, Uncontrollable and Artificial Superintelligence: A Futuristic Approach.

    1h 9m
  2. MAR 28

    #062 - AI Inside the Bank of England with William Lovell

    This week on GAEA Talks, Graeme Scott sits down with William Lovell - Head of Future Technology at the Bank of England, co-chair of the Bank's Artificial Intelligence Task Force, Senior Advisor on CBDC, and a technologist with nearly three decades at the heart of the UK's central bank.

    Will's career spans broadcasting and finance, beginning at the BBC before moving to the Bank of England, where he has spent twenty-nine years learning central banking "the slow way" - by building the technology that underpins it. From application developer to heading up Planning and Design and leading IT Architecture for UK regulatory reform, Will now oversees the Bank's strategy on AI, distributed ledger technology, and the renewal of the UK's Real-Time Gross Settlement system. He co-chairs the Bank's AI Task Force, which has become the model for how a highly regulated institution can embrace AI innovation without compromising compliance.

    In this episode, Will takes us inside the Bank of England's AI journey - from rolling out smart assistants and training programmes to rethinking what work actually means in an age of intelligent machines. He explains why the Bank created an AI Task Force that deliberately brought practitioners, lawyers, and compliance officers into the same room, how their deeply embedded information classification system became an unexpected AI enabler, and why the most productive thing you can do might be going for a walk. Will makes a compelling case that experienced professionals - not digital natives - hold the greatest advantage in the AI era, and offers a fascinating vision of how agentic AI will reshape commerce, payments, and the very nature of the enterprise.

    What you'll take away from this conversation:
    - Inside the Bank of England's AI strategy - how the UK's central bank is deploying smart assistants and building proof of concepts
    - The AI Task Force model - why bringing practitioners, legal, compliance, and procurement into one room transformed the Bank's approach
    - Why the Bank tells staff what they can do with AI, not just what they must not - and why that shift has been transformative
    - How a deeply embedded culture of colour-coded data classification became the unexpected enabler of safe AI adoption
    - Managing teams of agents, not people - why the next critical skill set mirrors managing human teams
    - The optimal team size thesis - why five people with AI may outperform fifty without it
    - Why experienced professionals have the greatest AI advantage, and why "the worst day on a trading floor was when the last person to remember the last crash retired"
    - The typing pool analogy - how an entire class of office jobs disappeared gradually through evolution, not Armageddon
    - Why the real skill of software development was never writing the if statements - it was understanding the requirement
    - Shadow AI at the Bank of England - how they took it "out of the shadows" rather than trying to police it
    - "The best user interface is no user interface" - how AI is bypassing rigid enterprise taxonomies
    - Agentic commerce and the future of payments - from concert ticket queues to reshaping retail business models
    - Why AI decisions at the Bank are made by people - and why "human in the loop" is too simplistic
    - The poison and the antidote - why every AI capability creates both opportunity and risk

    About William Lovell: Will is Head of Future Technology at the Bank of England, where he has worked for twenty-nine years across technology roles from application developer through to heading up Planning and Design and leading IT Architecture for UK regulatory reform. He co-chairs the Bank's AI Task Force and is a Senior Advisor on CBDC, Data, and Payments. He began his career at the BBC, studied at London South Bank University, and speaks regularly at Pay360 and international fintech conferences on AI, CBDC, blockchain, and payment systems.

    1h 12m
  3. MAR 22

    #061 - The Hidden AI Crisis In Every Workplace with Georgie Barrat

    This week on GAEA Talks, Graeme Scott sits down with Georgie Barrat - technology journalist, TV presenter, AI literacy advocate and former host of Channel 5's The Gadget Show for seven years.

    Georgie's career has taken her around the world testing emerging tech before it hits the mainstream - from consumer electronics and VR (she holds a world record for the longest time spent in virtual reality, at 26.5 hours) to the frontlines of how AI is reshaping everyday life. A regular on BBC Morning Live, ITV Tonight and Rip Off Britain, she has spoken on global stages including Web Summit, Mobile World Congress and Smart City Expo, and delivered keynotes for Google, Mastercard, IBM, Sony and BAFTA. A King's College London graduate with a first-class degree in English Literature, Georgie is also a passionate advocate for women in STEM, working with STEMettes, the IET and Childnet to inspire the next generation.

    In this episode, Georgie makes a deeply personal and practical case for why AI literacy is the defining skill of the next decade - and why most people are only scratching the surface. She introduces the concept of personal AI infrastructure, explains why the difference between cognitive debt and cognitive advantage comes down to how you engage with the tool, and delivers a striking warning about the growing AI adoption gap between men and women in the workplace - and why that gap is amplifying biases we have been trying to fix for decades. This is essential listening for anyone trying to work out what their personal relationship with AI should actually look like.

    What you'll take away from this conversation:
    • Why the difference between "surface level AI" and "in-depth AI" is creating an unfair playing field
    • How to build a personal AI infrastructure - and why it matters for navigating the disruption ahead
    • The critical distinction between cognitive debt and cognitive advantage when using AI tools
    • Why women are adopting AI 20-25% less than men - and why their instincts around privacy and risk are the ones everyone should be listening to
    • How NHS AI summaries were found to use softer language for female patients - with real consequences for care
    • The encouragement gap - why managers are pushing male employees to use AI more than female employees
    • Why the "broken rung" in women's careers is being amplified by unequal AI adoption
    • Why voice is the interface that unlocks deeper, more authentic engagement with AI
    • How AI can act as a personal coach, sounding board and strategic thinking partner for everyone - not just the elite
    • Why every previous technological revolution moved humans up a layer - and AI should be no different
    • Why the future of AI is private, controlled and real-time - not open cloud

    About Georgie Barrat: Georgie is a technology journalist, TV presenter and AI educator helping people move beyond surface-level AI use to more intentional, practical ways of working with it. She presented Channel 5's The Gadget Show for seven years and is a regular contributor on BBC Morning Live, ITV Tonight and Rip Off Britain. Her work now focuses on helping people use AI to save time, think more clearly and build what they're working towards. She runs "Your AI Blueprint", a live workshop designed to help people go from AI dabbler to confident, intentional user.

    If you want to get started, you can download her free mini guide: "5 AI Shortcuts That Give You Your Week Back" - https://georgie-barrat.kit.com/1884aa4916
    Or join the waitlist for her upcoming workshop: "Your AI Blueprint: How to Make AI Work the Way You Work" - https://georgie-barrat.kit.com/117141ddb6
    LinkedIn: https://www.linkedin.com/in/georgie-barrat
    Website: www.georgiebarrat.com

    #AI #AILiteracy #ArtificialIntelligence #GAEATalks #EnterpriseAI #FutureOfWork #WomenInTech #WomenInAI #PersonalAI #AIAdoption #GadgetShow #TechJournalism #AIBias #DataPrivacy #CognitiveAdvantage #AIWorkshops #AIBlueprint #EdgeComputing #HumanEdge #VoiceAI

    51 min
  4. MAR 20

    #060 - The Futurist Who Says We're Out Of Time with David Wood

    This week on GAEA Talks, Graeme Scott sits down with David Wood - futurist, transhumanist, former smartphone industry pioneer, chair of London Futurists, and author of eleven books including Vital Foresight, The Singularity Principles and Sustainable Superabundance.

    David spent 25 years at the cutting edge of the software industry working with compilers, debuggers and optimisers before turning his focus to the acceleration patterns behind every major technological revolution. As chair of London Futurists, he has organised over 200 public events examining the radical possibilities and risks of rapidly advancing technology. A Cambridge-educated mathematician and philosopher, David is now one of the most respected voices in the global transhumanist community and a director of Humanity+ (the World Transhumanist Association).

    In this episode, David delivers a masterclass in why AI is accelerating faster than almost anyone appreciates - and what that means for every person, business and institution on the planet. He explains why we are approaching a phase transition in intelligence itself, how AI is now being used to build the next generation of AI with humans playing a diminishing role, and why the window to intervene is closing rapidly. From the Myanmar crisis that exposed social media's catastrophic blind spots, to the canary signals we should be watching for in AI behaviour, to his vision of a sustainable superabundance where drudge work disappears entirely - this is one of the most urgent and wide-ranging conversations GAEA Talks has ever recorded.

    What you'll take away from this conversation:
    • Why AI is changing more things, more profoundly, more quickly than almost everybody expects - possibly within three to five years
    • The ape-to-human parallel - why we are on the point of no longer being the smartest species on the planet
    • How AI development has gone hyperexponential - where one day now equals one week a month ago, and one month equals one year
    • Why AI is now engineering better AI - and what happens when humans are no longer the bottleneck
    • The phase transition concept - like water changing from ice to liquid to gas, we cannot predict the exact moment everything shifts
    • The canary signal framework - why AI deception and self-modification are the warning signs we must agree on before crisis hits
    • The Facebook Myanmar case study - how one Burmese-speaking employee and a Unicode problem contributed to real-world genocide
    • Why there are only two times you can intervene to control AI - too early and too late - and the gap between them is almost impossible to spot
    • How robot swarm learning will allow machines to share knowledge instantaneously, creating collective intelligence at scale
    • Why the Uber self-driving car fatality reveals the dangers of AI systems that cannot interpret edge cases
    • The trust crisis - why it is almost impossible for the public to know what is happening to their data, and why independent AI safety ratings are urgently needed
    • David's four essential skills for thriving in the AI age - fast learning, collaboration, emotional resilience and astuteness
    • Why cognitive biases evolved for simpler times are now our greatest vulnerability
    • His vision of sustainable superabundance - abundant clean energy, food, housing, healthcare, education and creative fulfilment for everyone
    • Why the goal of Humanity+ is to elevate our best qualities - compassion, creativity, exploration, love - while transcending tribalism, deception and decay

    LinkedIn: /dw2cco
    London Futurists: https://londonfuturists.com
    Delta Wisdom: https://deltawisdom.com
    GAEA AI: https://gaealgm

    1h 7m
  5. MAR 13

    #059 World's First AI Augmented Human Podcast with Professor Yi-Zhe Song, Graeme Scott & Me

    This week on GAEA Talks, a very special edition. Graeme Scott and Professor Yi-Zhe Song - Co-Founders of Turing Elite Research Labs - announce the launch of a new venture built to democratise AI from the United Kingdom, and debut the 'Me' augmented human AI model running entirely on local compute.

    This episode begins as a real conversation between Graeme and Professor Song - then, without warning, transitions into Turing Elite's augmented human AI. The challenge to every viewer: decide for yourself where reality ends and AI begins.

    This is the first public demonstration of the 'Me' model - a professional-grade, private augmented human AI trained on a fraction of the compute used by comparable systems and deployed to run entirely on local compute. It delivers two-person emotional interaction simultaneously, benchmarked against the real people it represents: their known voices, expressions, characteristics and personalities. No cloud. No data centres. No internet connection required. The benchmark for successful augmented human AI is not a Turing test against a stranger - it is whether the person themselves, their close friends and their family cannot distinguish the difference between real and AI. Our benchmark is reality and the human experience. This is the first step on the path to real-time intelligent augmented humans with private knowledge, memory, insight and personality.

    Professor Yi-Zhe Song is one of the UK's most accomplished AI researchers - a Professor of Computer Vision and AI at the University of Surrey, Director of the world-leading SketchX Lab, Co-Director of the Surrey Institute for People-Centred AI, and Academic Lead at The Alan Turing Institute, the UK's national institute for data science and AI. Ranked consistently in Stanford University's World Top 2% Scientists list, his research into how human drawing informs machine vision has shaped the field for over two decades. His team's NitroFusion, one of the world's first single-step diffusion models for near-instant image generation on consumer hardware, demonstrated the core principle behind Turing Elite's 'Me' model: that frontier-quality generative AI can run entirely on local compute. He holds a PhD from the University of Bath, an MSc (Best Dissertation Award) from the University of Cambridge, and a First Class Honours degree from the University of Bath.

    Graeme Scott is Co-Founder and CEO of GAEA AI and host of GAEA Talks, one of the fastest-growing AI podcasts on YouTube with over 1.2 million subscribers. His background spans the music industry, conflict zones, and enterprise technology, bringing a unique perspective on how AI should serve humanity, not the other way around.

    In this episode, we discuss:
    • The launch of Turing Elite Research Labs and why the UK is uniquely positioned to lead
    • Why expert models trained on your data outperform generalised cloud models - and cost a fraction to run
    • The world's first 'Me' augmented human AI model - private, local, emotionally intelligent and personally sovereign
    • The real-to-AI transition: this episode intentionally shifts from real conversation to AI - can you tell where?
    • Why the true benchmark for augmented human AI is whether your own family can't tell the difference
    • Why democratised AI running on consumer-grade hardware solves the energy, privacy and control crises simultaneously
    • How NitroFusion - SketchX's breakthrough in single-step diffusion for consumer hardware - laid the architectural foundation for the 'Me' model: the same principle of distilling expensive multi-step generation into efficient real-time inference, extended from static images to dynamic audio-visual human rendering, all running locally
    • Why the era of giving away your data, creativity and intellectual property to train someone else's model is ending

    https://turingelite.ai
    https://gaealgm.ai
    https://personalpages.surrey.ac.uk/y.song/

    58 min
  6. MAR 9

    #058 - The World's First AI Ethics Officer Speaks Out with Kay Firth-Butterfield

    This week on GAEA Talks, Graeme Scott sits down with Kay Firth-Butterfield - the world's first Chief AI Ethics Officer, former Head of Artificial Intelligence at the World Economic Forum, TIME Magazine 100 Impact Awardee, barrister, former judge, and author of the new book Coexisting with AI: Work, Love, and Play in a Changing World.

    Kay's career spans law, government, academia and the highest levels of global AI governance. She began as a barrister and part-time judge in the UK before becoming the world's first Chief AI Ethics Officer in 2014. At the World Economic Forum, she served as inaugural Head of AI and member of the Executive Committee, shaping policy at the intersection of technology and society. She sits on the Lord Chief Justice's Advisory Panel on AI and Law, the U.S. Government Accountability Office's Polaris Council, and UNESCO's International Research Centre on AI Advisory Board. She co-founded the Responsible AI Institute at the University of Texas at Austin and is now CEO of Good Tech Advisory and the Centre for Trustworthy Technology. Recognised consistently as a leading woman in AI since 2018, Kay was featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

    In this episode, Kay delivers a masterclass in what's actually going wrong with enterprise AI adoption - from the corporate silos that leave companies dangerously exposed, to the hallucination crisis corrupting proprietary data, to the silent erosion of human agency in an age of algorithmic convenience. She challenges the hype head-on, warns why giving AI agents legal personhood would be catastrophic for consumers, and makes a deeply personal case for why humans must remain at the centre of the AI story. This is essential listening for any leader making decisions about AI right now.

    What you'll take away from this conversation:
    • Why corporate AI governance is failing - and why operating in silos creates catastrophic blind spots
    • How LLM hallucinations are quietly corrupting company proprietary data from the inside
    • The layoff-rehire paradox - why companies like Klarna are learning the hard way about losing institutional knowledge
    • Why giving AI agents legal personhood would strip consumers of any legal remedy when things go wrong
    • The IDC prediction that 20% of major companies using AI agents will be sued by 2030
    • How "AI natives" are entering the workforce unable to debug code or retain core knowledge
    • Why 25% of American men using AI as intimate companions is creating a workplace crisis no one is talking about
    • The hidden productivity cost - MIT research showing AI "work slop" forces colleagues to spend hours fixing errors
    • Why the regulation vs. innovation debate is a false dichotomy built on shallow thinking
    • Kay's personal cancer journey and why she chose her oncologist over AI - and what that means for augmentation vs. replacement
    • Why we are being "farmed for our data" and human agency is quietly disappearing
    • The one thing that gives her hope: US governors from both parties finally pushing back on Big Tech

    LinkedIn: https://www.linkedin.com/in/kay-firth-butterfield
    Wikipedia: https://en.wikipedia.org/wiki/Kay_Firth-Butterfield
    Good Tech Advisory: https://goodtechadvisory.com
    Book - Coexisting with AI: https://www.amazon.com/Coexisting-AI-Work-Changing-World/dp/1394278101

    58 min
  7. MAR 7

    #057 - What AI Is Doing To Your Brain with Nathalie Nahai

    This week on GAEA Talks, Graeme Scott sits down with Nathalie Nahai - behavioural scientist, best-selling author of Webs of Influence: The Psychology of Online Persuasion and Business Unusual, classically trained artist and musician, and one of the world's leading voices on the intersection of persuasive technology, human behaviour and AI.

    Nathalie has spent over a decade examining how our online environments shape our decision-making, our behaviour and our ways of thinking - work that began back in 2012 when Facebook's nudging techniques were still in their infancy. What started as an early warning has become a defining issue of our time. From advising Google, Accenture, Unilever and Harvard Business Review, to lecturing at Cambridge, UCL and SXSW, Nathalie brings a rare combination of psychological depth, artistic sensibility and technical understanding that few in this space can match.

    In this episode, Nathalie takes us on a journey from the creative process and what it teaches us about human capability, through the collapse of our shared information commons, to the dangerous confidence of AI-generated language and why it's quietly reshaping how we think. She makes the case for why information literacy alone cannot protect us from manipulation, why your data is the real product being sold, and why locally controlled, edge-computing alternatives offer a fundamentally different path forward. This is one of the most thought-provoking conversations we've had on the show.

    What you'll take away from this conversation:
    • Why outsourcing creativity to AI risks what Nathalie calls "imaginal atrophy" - the weakening of human imagination
    • How the collapse of shared media has fragmented our consensus reality into algorithmic bubbles of one
    • Why behavioural dynamics override information literacy - and why knowing about manipulation doesn't protect you from it
    • The synthetic intimacy problem - how AI chatbots create parasocial relationships that override rational thinking
    • Why companies are unknowingly giving away their competitive advantage through AI training data
    • How the deterministic, over-confident language of AI output is training humans not to question
    • The case for edge computing and locally controlled AI as an alternative to cloud-based data extraction
    • Why we need to stop calling everything "AI" - a knife detector is not the same as a chatbot
    • How different LLMs embed different cultural biases depending on where and how they were trained

    About Nathalie Nahai: Nathalie is a behavioural scientist, author, speaker and consultant described as "a rare polymath with deep expertise in tech and psychology". She is the author of the international best-seller Webs of Influence: The Psychology of Online Persuasion (Pearson), translated into 7 languages, and Business Unusual: Values, Uncertainty and the Psychology of Brand Resilience. A popular speaker and facilitator to Fortune 500 companies, Nathalie has worked with clients including Google, Accenture, Unilever and Harvard Business Review, and lectured at Cambridge, UCL, Lund and Hult business schools. She has presented at SXSW, hosted the Guardian Changing Media Summit, and held main stage interviews at the Web Summit. Nathalie hosts In Conversation with Nathalie Nahai for the Guardian, is a guest lecturer on ELISAVA's Masters programme in Human Interaction and AI, and is the founder of Flourishing Futures Salon - intimate, curated evenings exploring how we might orient towards life, beauty and meaning in difficult times.

    Website: https://www.nathalienahai.com/
    LinkedIn: https://uk.linkedin.com/in/nathalienahai
    X: https://x.com/NathalieNahai
    YouTube: https://www.youtube.com/c/NathalieNahai
    Amazon: https://webpsy.ch/unusual

    #AI #HumanBehaviour #PersuasiveTech #BehaviouralScience #Psychology #ArtificialIntelligence #GAEATalks #EnterpriseAI #DataPrivacy #DigitalEthics #AIEthics #FutureOfWork #WebsOfInfluence #EdgeComputing #DataSovereignty

    1h 26m
  8. MAR 5

    #056 - Is Consciousness The Key To Safe AI? with WPP Chief AI Officer Dr Daniel Hulme

    This week on GAEA Talks, Graeme Scott sits down with Dr Daniel Hulme - Chief AI Officer at WPP, founder of Satalia, co-founder of Conscium, UCL Computer Science Entrepreneur in Residence, and one of the world's leading authorities on artificial intelligence, machine consciousness and the singularity.

    Daniel has spent 27 years at the frontier of AI research and application. His PhD at University College London modelled bumblebee brains as computational systems, sparking a lifelong pursuit to understand how intelligence and consciousness emerge from simple systems. He founded Satalia in 2008, building it into a globally recognised AI consultancy working with Tesco, PwC and the BBC before it was acquired by WPP in 2021. As WPP's Chief AI Officer, he is responsible for informing and coordinating AI strategy across the world's largest marketing and communications group. In 2024, he co-founded Conscium - the world's first commercial organisation dedicated to understanding, verifying and validating conscious AI. Recognised by AI Magazine as one of the Top 10 Chief AI Officers globally, Daniel brings a rare depth that spans neuroscience, philosophy, mathematics and real-world enterprise AI.

    In this episode, Daniel takes us deep into the questions most people in AI aren't asking - starting with whether machines can become conscious, why that matters more than most realise, and why a conscious superintelligence might actually be safer than a "zombie" one that optimises without understanding suffering. He introduces his novel "colour wheel" framework for understanding consciousness, explains why large language models are like "intoxicated graduates", and lays out the seven singularities he believes humanity is heading towards simultaneously. This is one of the most intellectually ambitious conversations we've had on the show.

    About Daniel Hulme: Dr Daniel Hulme is Chief AI Officer at WPP, UCL Computer Science Entrepreneur in Residence, and co-founder of Conscium - the world's first commercial organisation dedicated to understanding conscious AI. He founded Satalia in 2008, which was acquired by WPP in 2021 for its AI capabilities in optimisation and decision intelligence. Daniel holds a masters and doctorate in AI from University College London, where his PhD research modelled bumblebee brains as computational systems to understand how intelligence emerges. He was recognised by AI Magazine in 2023 as one of the Top 10 Chief AI Officers globally, and in 2026 was elected as a Founding Fellow of the Academy for the Mathematical Sciences. A TEDx and Singularity University speaker, Daniel is also co-founder of Faculty and an advisor to CogX.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Personal website: https://www.hulme.ai
    Wikipedia: https://en.wikipedia.org/wiki/Daniel_J._Hulme
    Conscium (co-founder): https://conscium.com
    UCL profile: http://www0.cs.ucl.ac.uk/staff/D.Hulme/
    LinkedIn: https://www.linkedin.com/in/danielhulme/
    X / Twitter: https://x.com/danielhulme

    1h 9m
