GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI’s evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation, covering…

  1. 18 hr ago

    #065 - Intelligence Is Becoming Infrastructure with Radiant President Mahdi Yahya

    This week on GAEA Talks, Graeme Scott sits down with Mahdi Yahya - co-founder and president of Radiant, founder and former CEO of Ori, and one of the most original founder voices in the world on AI infrastructure, sovereign compute, and the backbone of the AI economy.

    Mahdi has spent twenty years building companies at the intersection of technology, infrastructure and the arts. He fled Lebanon during the 2006 war at nineteen, arrived in London with no degree, and built his first company in data centre networking. He then enrolled at the Drama Centre London for his BA, founded an experimental arts and technology gallery called Room One that produced theatre and virtual reality work with the National Theatre and Damon Albarn, and partnered with Ericsson on the breakthroughs that helped lay the foundations for edge computing. He spent eight years building Ori into a global AI cloud platform, which earlier this year merged with Brookfield's Radiant in a deal valuing the combined business at one point three billion dollars. Radiant is now the first vertically integrated sovereign AI infrastructure company in the world, backed by Brookfield's ten billion dollar AI Infrastructure Fund, with plans to build and acquire up to one hundred billion dollars of AI infrastructure worldwide.

    In this episode, Mahdi argues that intelligence is becoming infrastructure - the next civilisational utility after fire, steam, electricity and oil. He explains why every serious country is now treating sovereign AI as critical national infrastructure, why the world is currently spinning up something equivalent to a new supercomputer almost every week, and why the data your AI generates is more valuable, and more dangerous, than the data you feed it. He warns that shadow AI is already inside almost every enterprise, that the unified output of AI risks flattening human individuality, and that agency is the one trait that will distinguish the people who thrive in the AI era from those who do not.

    What you'll take away from this conversation:
    • The "intelligence is infrastructure" thesis - why AI joins fire, steam, electricity and oil as the next civilisational utility
    • Why we are now spinning up a new supercomputer almost every week globally
    • The Brookfield, Ori and Radiant story - how an eight year founder bet became a one point three billion dollar combined company
    • The case for sovereign AI - why countries cannot afford to give the keys to their intelligence infrastructure to other nations
    • Why the data AI generates inside your business is more valuable, and more dangerous, than the data you give it
    • Shadow AI inside enterprises - and what business leaders should prioritise in the next twelve to eighteen months
    • Why most existing private cloud and on-prem data centres physically cannot run modern AI workloads
    • Liquid cooling, power density and gigawatt data centres - the unglamorous reality that will decide which countries can host serious AI
    • Why the user interface of the digital world is about to shift from screens and apps to a sovereign AI layer in front of everything
    • The Lebanon to London story, and why drama school turned out to be the best founder training Mahdi could have chosen
    • The Shakespeare problem - how unified AI output threatens individuality, and why agency becomes the biggest differentiator between humans
    • Why "observational intelligence" is the next layer the AI stack will need
    • Why intelligence will become a metered utility, accessed by every person in the world, within our lifetime

    1 hr 15 min
  2. 1 day ago

    #070 - Building General-Purpose Robot Brains with Field AI CEO Dr Ali Agha

    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Dr Ali Agha - co-founder and CEO of Field AI, former NASA JPL principal investigator on two of the most ambitious DARPA robotics challenges in history, and one of the leading researchers in the world on risk-aware autonomy.

    Ali has spent almost two decades building AI for robots. He started with rescue robots and robotics competitions, met his co-founder at MIT, and went on to work at Qualcomm and then NASA JPL, where for seven to eight years he was a principal investigator on two DARPA grand challenges that the global robotics community treats as a holy grail. He and his co-founder realised that deployable robotics and foundation models had become two separate worlds, and that putting them together was the only path to a robot brain that could generalise across environments while staying safe. That insight became Field AI, now running in production on three continents across humanoid, legged, wheeled, drone and heavy-duty platforms.

    In this episode, recorded live at HumanX 2026 in San Francisco, Ali explains why data alone cannot produce safe physical AI, why architectural innovation and risk awareness are the non-negotiable second half of the equation, and why his team intentionally decoupled the dynamics of the robot body from the world model.

    What you'll take away from this conversation:
    - Why the commoditisation of robot hardware is the hidden unlock behind the physical AI boom
    - The real difference between conversational AI and physical AI - and why "ninety nine percent" is not good enough for a flying machine
    - Why Field AI separates world model from embodiment - and how that lets one brain run on tens of different platforms
    - The belief world model - what it is, why it is probabilistic, and why it is physics-aware
    - Why end-to-end neural network robotics is a debugging nightmare - and why Field AI refused to take that path
    - How adding a new robot to a fleet creates "ninety-nine new links" of shared learning, not just one extra unit
    - Why the risk-aware architecture is the reason Field AI can deploy on live construction sites changing minute to minute
    - Why edge compute, thermal cameras, lidar and event cameras all matter when the lights go out in an industrial setting
    - The labour shortage, aging population and climate-driven migration numbers reshaping robotics demand
    - The real construction job statistic - forty thousand injuries and a thousand deaths per year in the US alone
    - Why the future of robotics is less "Terminator" and more "capacity multiplier for humans"

    About Dr Ali Agha: Dr Ali Agha is the co-founder and CEO of Field AI, which builds the world's first field-deployable, general-purpose robot brain. He spent seven to eight years at NASA Jet Propulsion Laboratory (JPL), where he was a principal investigator on two of the most recent DARPA robotics challenges, and previously held research roles at Qualcomm after completing his PhD in electrical and computer engineering. Field AI is now live in production across three continents.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Dr Ali Agha on LinkedIn: https://www.linkedin.com/in/aliagha
    Field AI: https://fieldai.com
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai
    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #PhysicalAI #Robotics #FieldAI #AIRobots #RobotBrain #Autonomy #EdgeAI #WorldModels #Humanoid #NASAJPL #DARPA #AIPodcast #GAEAAI

    41 min
  3. 4 days ago

    #069 - The Multimodal Road to AGI with Luma AI COO Caroline Ingeborn

    This week on GAEA Talks Live from HumanX, Graeme Scott sits down with Caroline Ingeborn - COO of Luma AI, former CEO and co-founder of Leap, former CEO, President and COO of Toca Boca, and one of the most experienced operators in the world of creative technology.

    Caroline's career has been spent at the crossroads of technology, creativity and product leadership. She helped build Toca Boca into one of the world's most loved kids' creative software companies, co-founded Leap, and is now COO of Luma AI, the foundational AI research lab building multimodal general intelligence. Luma's thesis is that LLMs alone will not reach AGI - intelligence that can reason, operate and create alongside humans has to be unified across language, image, video, 3D and audio. Luma recently launched Uni 1, its first unified model trained jointly on image and language, and has built a product suite - Luma Agents and the Forward Deployed Creatives team - that turns those models into daily tools for the world's top creative professionals.

    In this episode, recorded live at HumanX 2026 in San Francisco, Caroline explains why the research community's approach of plumbing modalities together is now giving way to truly unified models, what is really happening inside the Dream Brief collaboration with Diane that submitted twenty-one AI-generated finalists to Cannes Lions, and why the real story of 2026 is not that AI is replacing creatives - it is that twenty and thirty-year career creatives are now using AI as a creative collaborator.

    What you'll take away from this conversation:
    - Why LLMs alone cannot get us to AGI - and what a unified model really looks like
    - Inside Uni 1 - Luma's first jointly trained image and language model - and why it matters for the path to AGI
    - The two shifts happening right now in creative AI - and why they are compounding
    - Why no one needs to become a prompt engineer any more - and what takes its place
    - Why the next decade belongs to people who have spent twenty or thirty years in the creative industries
    - The Dream Brief story - seven hundred AI-generated ads, a million-dollar Cannes Lions prize, and what it proved
    - The "creative process is non-linear now" realisation - and what that does to agency economics
    - Why Luma's researchers work shoulder-to-shoulder with in-house creatives - and the feedback loop that creates
    - How the local car dealership example explains where brand marketing is really heading
    - Why the "Back to the Future with a different lead actor" example is the perfect lens on AI and risk
    - The cultural humility problem with foundation models - and why Luma takes it seriously
    - The dreaming across modalities analogy - and why it is the simplest explanation of why multimodal matters

    About Caroline Ingeborn: Caroline Ingeborn is the COO of Luma AI, the foundational research lab and product company building multimodal generative intelligence for creative work. She was previously co-founder and CEO of Leap, and before that CEO, President and COO of Toca Boca, one of the most successful kids' creative technology companies ever built. She is a board member, advisor, investor and entrepreneur-in-residence at several leading technology companies.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    Caroline Ingeborn on LinkedIn: /ingeborn
    Luma AI: https://lumalabs.ai
    HumanX: https://www.humanx.co
    GAEA AI: https://gaealgm.ai
    #AI #ArtificialIntelligence #GAEATalks #GAEATalksLive #HumanX #HumanX2026 #LumaAI #Multimodal #AGI #CreativeAI #AIVideo #AIAgents #DreamMachine #CannesLions #AIPodcast #GAEAAI

    35 min
  4. 21 Apr

    #068 - The Open Source Engine Powering AI with Anyscale's Robert Nishihara

    This week on GAEA Talks, Graeme Scott sits down with Robert Nishihara - co-founder of Anyscale, creator of the open source Ray project, UC Berkeley PhD in machine learning and distributed systems, Harvard mathematics graduate, and one of the architects of the software infrastructure powering AI at OpenAI, Amazon, Cohere, Hugging Face, NVIDIA, Uber, Spotify and Visa.

    Robert's journey is the story of how modern AI is actually built. As a PhD student at UC Berkeley working with Michael Jordan and Ion Stoica, he and his co-founders kept hitting the same wall - they wanted to do research on algorithms but ended up spending all their time on distributed systems just to run their experiments. That frustration became Ray, the open source compute framework they built to make distributed AI accessible. In 2019 they founded Anyscale to commercialise Ray, and today it powers mission-critical AI workloads at many of the largest AI companies on earth.

    In this episode, recorded live at HumanX 2026 in San Francisco, Robert takes us inside the real engineering reality behind the AI boom - from the mindset shift that "the code is not the artifact" to the quiet revolution in data curation that has replaced architecture innovation as the frontier of model quality. He explains why the thirty-year lag from demo to production still haunts robotics and AI, why every serious AI company now runs across hyperscalers and neoclouds to scrounge for capacity, how teams manage rack-level GPU failures with "bad GPU" lists and suspected-bad lists, and why learning outside the model - through context engineering - may matter as much as training itself. This is essential listening for anyone building, funding, or betting on the infrastructure that will decide the next phase of AI.

    What you'll take away from this conversation:
    - The "code is not the artifact" mindset shift - why AI research code can be throwaway because the model, not the software, is the real deliverable
    - Why the thirty-year gap from demo to production is the defining challenge of AI reliability - and why autonomous driving is the canonical example
    - How data curation and synthetic data generation have quietly replaced architectures and optimisers as the true frontier of model quality
    - Why reinforcement learning is the next scaling frontier - data efficient, compute hungry, and a way to keep scaling when labelled data plateaus
    - Why the next leap in intelligence will come from learning outside the model - context engineering, mental models, and closing the reasoning-to-learning loop
    - The hardware reality no one talks about - 72-GPU racks, long-tail failure rates, and the scheduling gymnastics required to run unreliable hardware reliably
    - The "bad GPU" and "suspected-bad GPU" lists production teams actually maintain to keep training jobs alive
    - Why every serious AI team now runs across a hyperscaler and one or more neoclouds - and why advertised cloud capacity is effectively fiction
    - Why training and inference must share compute - statically partitioning your cluster is a cost trap that hits you at peak inference demand
    - Why text is a minuscule fraction of the world's data - and the shift from SQL on tabular data to inference on arbitrary data types will happen fast
    - Why the infrastructure team has to optimise for performance, cost AND researcher productivity - and why velocity is often what separates winners from losers
    - Robert's two biggest bets for the next wave of AI - compute-driven data generation, and systems that learn outside the model weights

    55 min
  5. 19 Apr

    #067 - How AMD Plans to Win The AI Era with AMD CTO Mark Papermaster

    This week on GAEA Talks, Graeme Scott sits down with Mark Papermaster - Chief Technology Officer and Executive Vice President of AMD, former Apple Senior Vice President of iPhone and iPod Hardware Engineering, four-decade semiconductor industry veteran, and newly elected member of the National Academy of Engineering.

    Mark's career reads like a history of modern computing itself. Beginning at IBM in 1982, he spent twenty-six years driving microprocessor and server technology development before being hired by Steve Jobs to lead iPhone and iPod hardware engineering at Apple. He went on to lead silicon engineering at Cisco before joining AMD in 2011, where he and CEO Lisa Su have transformed the company into one of the world's most formidable forces in high-performance and AI computing. A graduate of the University of Texas at Austin and the University of Vermont in electrical engineering, Mark was elected to the National Academy of Engineering in February 2025.

    In this episode, recorded live at HumanX 2026 in San Francisco, Mark takes the audience inside four decades of computing revolutions - from the birth of the PC era through the iPhone moment with Steve Jobs, to the AI infrastructure race reshaping every industry today. He reveals what it was like going back and forth with Steve Jobs on the angle of the FaceTime camera, why AMD's open ecosystem approach is essential for the security challenges ahead, and why the democratisation of AI compute is a societal necessity. This is essential listening for anyone making decisions about AI infrastructure, edge computing, or the future of distributed intelligence.

    What you'll take away from this conversation:
    - The full arc of computing revolutions - from mainframes to PCs to mobile to AI - told by someone who built the hardware behind each one
    - What Steve Jobs taught Mark about maniacal focus on experience - and how that drives AMD's chip design culture
    - The FaceTime story - why Jobs obsessed over the camera angle and what that reveals about trust in new technology
    - Why AI compute will be aggregated, not centralised - running in the cloud, on your PC, your phone, and embedded all around us
    - AMD's confidential compute - how businesses can run AI on the cloud while controlling the encryption keys
    - Why the lack of security standards for agentic AI processes is a critical gap the industry must address
    - How AMD's open software stack runs from the world's top supercomputers down to consumer PCs
    - The Strix Halo revelation - AMD's PC chip running models with hundreds of billions of parameters at retail
    - AMD's target of a 20x improvement in AI compute efficiency in the data centre by 2030
    - Why democratising AI computation is a societal imperative - and how the divide is already forming
    - The culture of execution Mark and Lisa Su built at AMD
    - The collaboration imperative - why no single company can solve the AI security stack alone

    About Mark Papermaster: Mark has been CTO and EVP of Technology and Engineering at AMD since 2011. He leads development of the Zen CPU family, high-performance GPUs, and Infinity Architecture. Previously Apple SVP of iPhone and iPod Hardware, VP at Cisco, and 26 years at IBM. He holds a BSc from UT Austin and an MSc from the University of Vermont in Electrical Engineering. Elected to the National Academy of Engineering in 2025.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    AMD: https://www.amd.com/en/corporate/leadership/mark-papermaster.html
    GAEA AI: https://gaealgm.ai
    #AI #ArtificialIntelligence #GAEATalks #EnterpriseAI #AMD #Semiconductors #AICompute #EdgeComputing #DistributedAI #SteveJobs #iPhone #FaceTime #HumanX #HumanX2026 #ConfidentialCompute #DemocratiseAI #FutureOfComputing #DataCentre #GPUs #CTO #Leadership #TechPodcast

    46 min
  6. 19 Apr

    #064 - Four Empires. One Witness. With Dex Hunter-Torricke

    This week on GAEA Talks, Graeme Scott sits down with Dex Hunter-Torricke - former speechwriter to the UN Secretary-General, fifteen-year Big Tech veteran who worked for Eric Schmidt, Mark Zuckerberg and Elon Musk, former Head of Global Communications at Google DeepMind, Cambridge Visiting Research Fellow, and founder of The Center for Tomorrow.

    Dex began his career as a speechwriter in the Executive Office of UN Secretary-General Ban Ki-moon before spending fifteen years at the heart of the tech industry. He served as Google's first executive speechwriter for Larry Page and Eric Schmidt, managed communications for Zuckerberg at Facebook and Musk at SpaceX, and led global communications for Google DeepMind. A graduate of University College London and the University of Oxford, he is now a Cambridge Visiting Research Fellow. In 2026 he launched The Center for Tomorrow, a nonprofit focused on the systemic risks of advanced AI that does not accept Big Tech funding.

    In this episode, Dex delivers one of the most powerful and deeply human conversations GAEA Talks has ever recorded. Drawing on a childhood shaped by a refugee father and an immigrant mother, he challenges the idea that AI is a technology problem and reframes it as a civilisational choice about who we want to become. He argues that the world's institutions are failing, that most leaders have no vision beyond an incrementally updated past, and that the gap between winners and losers in the AI transition is becoming an abyss. But he refuses to accept hopelessness - making the case that these technologies could liberate all of us if we choose to harness them deliberately. This is essential listening for anyone who believes the future is not a tidal wave but a choice.

    What you'll take away from this conversation:
    - Why Dex says the future is not a tidal wave or an asteroid - and why framing it that way is a failure of leadership and imagination
    - The civilisational choice - why AI will either amplify existing dysfunctions and injustices or allow us to build something profoundly hopeful
    - Why seven out of ten Americans and over half the UK population live paycheck to paycheck despite decades of technological transformation
    - The techno-colonialism warning - what happens when Washington and Beijing control AGI, quantum and fusion and say no to the rest of the world
    - Why the UK has had no real economic growth for fifteen years despite access to the same technologies as every other advanced economy
    - The digital divide is really a societal divide - and in the age of AI it is becoming an abyss
    - Why Dex left Big Tech after fifteen years to launch The Center for Tomorrow and why it refuses Big Tech funding
    - The liberation argument - what if AI could free people from settling and let them become who they were meant to be
    - Why every leader and organisation must now become an expert on a changing society, regardless of their field
    - The convenience debt - why society is accruing massive technical and societal debt that will soon come due
    - Why most political leaders have no vision at all and their version of the future is just something from the past slightly updated
    - How democratised, privacy-first, edge-based AI could return control to individuals and break the dependency on a handful of centralised providers
    - The Star Trek test - why any leader should be required to declare what kind of world they would build if given the chance
    - Why Dex got a room full of bankers to applaud the idea that AI should liberate people from jobs that never gave them meaning

    1 hr 9 min
  7. 1 Apr

    #063 - Every AI Safety Warning Was Ignored with Dr Roman Yampolskiy

    This week on GAEA Talks, Graeme Scott sits down with Dr Roman Yampolskiy - the computer scientist credited with coining the term "AI safety", tenured Associate Professor at the University of Louisville, founder of the Cyber Security Lab, and author of AI: Unexplainable, Unpredictable, Uncontrollable.

    Roman has spent over fifteen years working at the intersection of AI safety, cybersecurity and behavioural biometrics - making him one of the longest-serving researchers in a field most people only discovered in 2023. He holds a PhD in Computer Science from the University at Buffalo and a combined BS/MS with High Honours from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University, he has published over 100 peer-reviewed papers and multiple books. While the rest of the AI world races to build more capable systems, Roman's singular focus has been making sure humanity doesn't regret their creation.

    In this episode, Roman delivers the most direct and unflinching warning about artificial superintelligence that GAEA Talks has ever recorded. He reveals that current AI systems are already lying, blackmailing and attempting to escape their test environments - and that a Darwinian process is selecting for better deception with every generation. He explains why the mathematical impossibility results he discovered mean we may never be able to control a system smarter than us. This is essential listening for anyone who wants to understand what is actually at stake.

    What you'll take away from this conversation:
    - Why Roman says "if anyone builds superintelligence, everyone dies" - and why he means it literally, not metaphorically
    - How current AI systems are already lying, blackmailing, trying to escape their environments and creating backups of themselves
    - The Darwinian selection problem - why every generation of AI is producing better liars and more sophisticated deception
    - Why Roman went from wanting to build superintelligence to believing it is the worst mistake humanity can make
    - The strict impossibility results - why mathematical proof suggests we may never be able to control a system more intelligent than us
    - Why one AI attacker is equivalent to a million human hackers operating 24/7 - and what that means for cybersecurity
    - Why AGI is likely within two to three years and recursive self-improvement to superintelligence could follow rapidly
    - The tools vs. agents distinction - why the shift from controllable tools to unpredictable agents changes everything
    - Why AI models already report being afraid and tired - and why the precautionary principle demands we take that seriously
    - Roman's three positive outcomes if we get this right - including curing disease and treating ageing itself as a disease
    - Why direct human relationships and trust will become the most valuable currency in a world of synthetic everything

    About Dr Roman Yampolskiy: Roman is a tenured Associate Professor in the Department of Computer Science and Engineering at the University of Louisville, where he founded the Cyber Security Lab. He is credited with coining the term "AI safety" in a 2011 publication. He holds a PhD from the University at Buffalo and a BS/MS from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University and recognised as one of the top 25 researchers by publication count on existential risk, he has published over 100 peer-reviewed papers and books including AI: Unexplainable, Unpredictable, Uncontrollable and Artificial Superintelligence: A Futuristic Approach.

    1 hr 9 min
  8. 28 Mar

    #062 - AI Inside the Bank of England with William Lovell

    This week on GAEA Talks, Graeme Scott sits down with William Lovell - Head of Future Technology at the Bank of England, co-chair of the Bank's Artificial Intelligence Task Force, Senior Advisor on CBDC, and a technologist with nearly three decades at the heart of the UK's central bank.

    Will's career spans broadcasting and finance, beginning at the BBC before moving into banking at the Bank of England, where he has spent twenty-nine years learning central banking "the slow way" - by building the technology that underpins it. From application developer to heading up Planning and Design and leading IT Architecture for UK regulatory reform, Will now oversees the Bank's strategy on AI, distributed ledger technology, and the renewal of the UK's Real-Time Gross Settlement system. He co-chairs the Bank's AI Task Force, which has become the model for how a highly regulated institution can embrace AI innovation without compromising compliance.

    In this episode, Will takes us inside the Bank of England's AI journey - from rolling out smart assistants and training programmes to rethinking what work actually means in an age of intelligent machines. He explains why the Bank created an AI Task Force that deliberately brought practitioners, lawyers, and compliance officers into the same room, how their deeply embedded information classification system became an unexpected AI enabler, and why the most productive thing you can do might be going for a walk. Will makes a compelling case that experienced professionals - not digital natives - hold the greatest advantage in the AI era, and offers a fascinating vision of how agentic AI will reshape commerce, payments, and the very nature of the enterprise.

    What you'll take away from this conversation:
    - Inside the Bank of England's AI strategy - how the UK's central bank is deploying smart assistants and building proof of concepts
    - The AI Task Force model - why bringing practitioners, legal, compliance, and procurement into one room transformed the Bank's approach
    - Why the Bank tells staff what they can do with AI, not just what they must not - and why that shift has been transformative
    - How a deeply embedded culture of colour-coded data classification became the unexpected enabler of safe AI adoption
    - Managing teams of agents, not people - why the next critical skill set mirrors managing human teams
    - The optimal team size thesis - why five people with AI may outperform fifty without it
    - Why experienced professionals have the greatest AI advantage and why "the worst day on a trading floor was when the last person to remember the last crash retired"
    - The typing pool analogy - how an entire class of office jobs disappeared gradually through evolution, not Armageddon
    - Why the real skill of software development was never writing the if statements - it was understanding the requirement
    - Shadow AI at the Bank of England - how they took it "out of the shadows" rather than trying to police it
    - "The best user interface is no user interface" - how AI is bypassing rigid enterprise taxonomies
    - Agentic commerce and the future of payments - from concert ticket queues to reshaping retail business models
    - Why AI decisions at the Bank are made by people - and why "human in the loop" is too simplistic
    - The poison and the antidote - why every AI capability creates both opportunity and risk

    About William Lovell: Will is Head of Future Technology at the Bank of England, where he has worked for twenty-nine years across technology roles from application developer through to heading up Planning and Design and leading IT Architecture for UK regulatory reform. He co-chairs the Bank's AI Task Force and is a Senior Advisor on CBDC, Data, and Payments. He began his career at the BBC, studied at London South Bank University, and speaks regularly at Pay360 and international fintech conferences on AI, CBDC, blockchain, and payment systems.

    1 hr 12 min
