GAEA Talks

GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI’s evolving role in business and society. As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.

  1. 9 hours ago

    #068 - The Open Source Engine Powering AI with Anyscale CEO Robert Nishihara

    This week on GAEA Talks, Graeme Scott sits down with Robert Nishihara - co-founder and CEO of Anyscale, creator of the open source Ray project, UC Berkeley PhD in machine learning and distributed systems, Harvard mathematics graduate, and one of the architects of the software infrastructure powering AI at OpenAI, Amazon, Cohere, Hugging Face, NVIDIA, Uber, Spotify and Visa.

    Robert's journey is the story of how modern AI is actually built. As a PhD student at UC Berkeley working with Michael Jordan and Ion Stoica, he and his co-founders kept hitting the same wall - they wanted to do research on algorithms but ended up spending all their time on distributed systems just to run their experiments. That frustration became Ray, the open source compute framework they built to make distributed AI accessible. In 2019 they founded Anyscale to commercialise Ray, and today it powers mission-critical AI workloads at many of the largest AI companies on earth.

    In this episode, recorded live at HumanX 2026 in San Francisco, Robert takes us inside the real engineering reality behind the AI boom - from the mindset shift that "the code is not the artifact" to the quiet revolution in data curation that has replaced architecture innovation as the frontier of model quality. He explains why the thirty-year lag from demo to production still haunts robotics and AI, why every serious AI company now runs across hyperscalers and neoclouds to scrounge for capacity, how teams manage rack-level GPU failures with "bad GPU" lists and suspected-bad lists, and why learning outside the model - through context engineering - may matter as much as training itself.
    This is essential listening for anyone building, funding, or betting on the infrastructure that will decide the next phase of AI.

    What you'll take away from this conversation:
    - The "code is not the artifact" mindset shift - why AI research code can be throwaway because the model, not the software, is the real deliverable
    - Why the thirty-year gap from demo to production is the defining challenge of AI reliability - and why autonomous driving is the canonical example
    - How data curation and synthetic data generation have quietly replaced architectures and optimisers as the true frontier of model quality
    - Why reinforcement learning is the next scaling frontier - data efficient, compute hungry, and a way to keep scaling when labelled data plateaus
    - Why the next leap in intelligence will come from learning outside the model - context engineering, mental models, and closing the reasoning-to-learning loop
    - The hardware reality no one talks about - 72-GPU racks, long-tail failure rates, and the scheduling gymnastics required to run unreliable hardware reliably
    - The "bad GPU" and "suspected-bad GPU" lists production teams actually maintain to keep training jobs alive
    - Why every serious AI team now runs across a hyperscaler and one or more neoclouds - and why advertised cloud capacity is effectively fiction
    - Why training and inference must share compute - statically partitioning your cluster is a cost trap that hits you at peak inference demand
    - Why text is a minuscule fraction of the world's data - and the shift from SQL on tabular data to inference on arbitrary data types will happen fast
    - Why the infrastructure team has to optimise for performance, cost AND researcher productivity - and why velocity is often what separates winners from losers
    - Robert's two biggest bets for the next wave of AI - compute-driven data generation, and systems that learn outside the model weights
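    The "bad GPU" and "suspected-bad GPU" lists Robert describes can be pictured as a simple filter inside a job scheduler. The sketch below is purely illustrative - the function name, policy, and device names are assumptions for this example, not Anyscale's or Ray's actual implementation:

    ```python
    # Illustrative sketch: excluding known-bad and suspected-bad GPUs
    # when picking devices for a training job on a 72-GPU rack.

    def schedulable_gpus(all_gpus, bad, suspected_bad, allow_suspected=False):
        """Return the GPUs eligible for a job.

        Confirmed-bad GPUs are always excluded; suspected-bad GPUs are
        excluded by default but can be re-admitted for low-priority work.
        """
        blocked = set(bad) if allow_suspected else set(bad) | set(suspected_bad)
        return [g for g in all_gpus if g not in blocked]

    rack = [f"gpu-{i}" for i in range(72)]   # a 72-GPU rack, as in the episode
    bad = {"gpu-3"}                          # failed health checks (hypothetical)
    suspected = {"gpu-17", "gpu-40"}         # intermittent errors (hypothetical)

    healthy = schedulable_gpus(rack, bad, suspected)
    print(len(healthy))  # 69 of 72 remain schedulable
    ```

    The point of keeping two lists rather than one is exactly this policy split: confirmed failures are removed outright, while suspect hardware can still be used where a restart is cheap.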

    55 min
  2. 2 days ago

    #067 - How AMD Plans to Win The AI Era with AMD CTO Mark Papermaster

    This week on GAEA Talks, Graeme Scott sits down with Mark Papermaster - Chief Technology Officer and Executive Vice President of AMD, former Apple Senior Vice President of iPhone and iPod Hardware Engineering, four-decade semiconductor industry veteran, and newly elected member of the National Academy of Engineering.

    Mark's career reads like a history of modern computing itself. Beginning at IBM in 1982, he spent twenty-six years driving microprocessor and server technology development before being hired by Steve Jobs to lead iPhone and iPod hardware engineering at Apple. He went on to lead silicon engineering at Cisco before joining AMD in 2011, where he and CEO Lisa Su have transformed the company into one of the world's most formidable forces in high-performance and AI computing. A graduate of the University of Texas at Austin and the University of Vermont in electrical engineering, Mark was elected to the National Academy of Engineering in February 2025.

    In this episode, recorded live at HumanX 2026 in San Francisco, Mark takes the audience inside four decades of computing revolutions - from the birth of the PC era through the iPhone moment with Steve Jobs, to the AI infrastructure race reshaping every industry today. He reveals what it was like going back and forth with Steve Jobs on the angle of the FaceTime camera, why AMD's open ecosystem approach is essential for the security challenges ahead, and why the democratisation of AI compute is a societal necessity.
    This is essential listening for anyone making decisions about AI infrastructure, edge computing, or the future of distributed intelligence.

    What you'll take away from this conversation:
    - The full arc of computing revolutions - from mainframes to PCs to mobile to AI - told by someone who built the hardware behind each one
    - What Steve Jobs taught Mark about maniacal focus on experience - and how that drives AMD's chip design culture
    - The FaceTime story - why Jobs obsessed over the camera angle and what that reveals about trust in new technology
    - Why AI compute will be aggregated, not centralised - running in the cloud, on your PC, your phone, and embedded all around us
    - AMD's confidential compute - how businesses can run AI on the cloud while controlling the encryption keys
    - Why the lack of security standards for agentic AI processes is a critical gap the industry must address
    - How AMD's open software stack runs from the world's top supercomputers down to consumer PCs
    - The Strix Halo revelation - AMD's PC chip running models with hundreds of billions of parameters at retail
    - AMD's target of a 20x improvement in AI compute efficiency in the data centre by 2030
    - Why democratising AI computation is a societal imperative - and how the divide is already forming
    - The culture of execution Mark and Lisa Su built at AMD
    - The collaboration imperative - why no single company can solve the AI security stack alone

    About Mark Papermaster: Mark has been CTO and EVP of Technology and Engineering at AMD since 2011. He leads development of the Zen CPU family, high-performance GPUs, and Infinity Architecture. Previously Apple SVP of iPhone and iPod Hardware, VP at Cisco, and 26 years at IBM. He holds a BSc from UT Austin and an MSc from the University of Vermont in Electrical Engineering. Elected to the National Academy of Engineering in 2025.

    GAEA Talks is the enterprise AI podcast for leaders navigating the age of artificial intelligence. Subscribe for weekly conversations with the people shaping the future of business, technology and society.

    AMD: https://www.amd.com/en/corporate/leadership/mark-papermaster.html
    GAEA AI: https://gaealgm.ai

    #AI #ArtificialIntelligence #GAEATalks #EnterpriseAI #AMD #Semiconductors #AICompute #EdgeComputing #DistributedAI #SteveJobs #iPhone #FaceTime #HumanX #HumanX2026 #ConfidentialCompute #DemocratiseAI #FutureOfComputing #DataCentre #GPUs #CTO #Leadership #TechPodcast

    46 min
  3. 2 days ago

    #064 - Four Empires. One Witness. With Dex Hunter-Torricke

    This week on GAEA Talks, Graeme Scott sits down with Dex Hunter-Torricke - former speechwriter to the UN Secretary-General, fifteen-year Big Tech veteran who worked for Eric Schmidt, Mark Zuckerberg and Elon Musk, former Head of Global Communications at Google DeepMind, Cambridge Visiting Research Fellow, and founder of The Center for Tomorrow.

    Dex began his career as a speechwriter in the Executive Office of UN Secretary-General Ban Ki-moon before spending fifteen years at the heart of the tech industry. He served as Google's first executive speechwriter for Larry Page and Eric Schmidt, managed communications for Zuckerberg at Facebook and Musk at SpaceX, and led global communications for Google DeepMind. A graduate of University College London and the University of Oxford, he is now a Cambridge Visiting Research Fellow. In 2026 he launched The Center for Tomorrow, a nonprofit focused on the systemic risks of advanced AI that does not accept Big Tech funding.

    In this episode, Dex delivers one of the most powerful and deeply human conversations GAEA Talks has ever recorded. Drawing on a childhood shaped by a refugee father and an immigrant mother, he challenges the idea that AI is a technology problem and reframes it as a civilisational choice about who we want to become. He argues that the world's institutions are failing, that most leaders have no vision beyond an incrementally updated past, and that the gap between winners and losers in the AI transition is becoming an abyss. But he refuses to accept hopelessness - making the case that these technologies could liberate all of us if we choose to harness them deliberately.
    This is essential listening for anyone who believes the future is not a tidal wave but a choice.

    What you'll take away from this conversation:
    - Why Dex says the future is not a tidal wave or an asteroid - and why framing it that way is a failure of leadership and imagination
    - The civilisational choice - why AI will either amplify existing dysfunctions and injustices or allow us to build something profoundly hopeful
    - Why seven out of ten Americans and over half the UK population live paycheck to paycheck despite decades of technological transformation
    - The techno-colonialism warning - what happens when Washington and Beijing control AGI, quantum and fusion and say no to the rest of the world
    - Why the UK has had no real economic growth for fifteen years despite access to the same technologies as every other advanced economy
    - The digital divide is really a societal divide - and in the age of AI it is becoming an abyss
    - Why Dex left Big Tech after fifteen years to launch The Center for Tomorrow and why it refuses Big Tech funding
    - The liberation argument - what if AI could free people from settling and let them become who they were meant to be
    - Why every leader and organisation must now become an expert on a changing society, regardless of their field
    - The convenience debt - why society is accruing massive technical and societal debt that will soon come due
    - Why most political leaders have no vision at all and their version of the future is just something from the past slightly updated
    - How democratised, privacy-first, edge-based AI could return control to individuals and break the dependency on a handful of centralised providers
    - The Star Trek test - why any leader should be required to declare what kind of world they would build if given the chance
    - Why Dex got a room full of bankers to applaud the idea that AI should liberate people from jobs that never gave them meaning

    1 hr 9 min
  4. 1 April

    #063 - Every AI Safety Warning Was Ignored with Dr Roman Yampolskiy

    This week on GAEA Talks, Graeme Scott sits down with Dr Roman Yampolskiy - the computer scientist credited with coining the term "AI safety", tenured Associate Professor at the University of Louisville, founder of the Cyber Security Lab, and author of AI: Unexplainable, Unpredictable, Uncontrollable.

    Roman has spent over fifteen years working at the intersection of AI safety, cybersecurity and behavioural biometrics - making him one of the longest-serving researchers in a field most people only discovered in 2023. He holds a PhD in Computer Science from the University at Buffalo and a combined BS/MS with High Honours from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University, he has published over 100 peer-reviewed papers and multiple books. While the rest of the AI world races to build more capable systems, Roman's singular focus has been making sure humanity doesn't regret their creation.

    In this episode, Roman delivers the most direct and unflinching warning about artificial superintelligence that GAEA Talks has ever recorded. He reveals that current AI systems are already lying, blackmailing and attempting to escape their test environments - and that a Darwinian process is selecting for better deception with every generation. He explains why the mathematical impossibility results he discovered mean we may never be able to control a system smarter than us.
    This is essential listening for anyone who wants to understand what is actually at stake.

    What you'll take away from this conversation:
    - Why Roman says "if anyone builds superintelligence, everyone dies" - and why he means it literally, not metaphorically
    - How current AI systems are already lying, blackmailing, trying to escape their environments and creating backups of themselves
    - The Darwinian selection problem - why every generation of AI is producing better liars and more sophisticated deception
    - Why Roman went from wanting to build superintelligence to believing it is the worst mistake humanity can make
    - The strict impossibility results - why mathematical proof suggests we may never be able to control a system more intelligent than us
    - Why one AI attacker is equivalent to a million human hackers operating 24/7 - and what that means for cybersecurity
    - Why AGI is likely within two to three years and recursive self-improvement to superintelligence could follow rapidly
    - The tools vs. agents distinction - why the shift from controllable tools to unpredictable agents changes everything
    - Why AI models already report being afraid and tired - and why the precautionary principle demands we take that seriously
    - Roman's three positive outcomes if we get this right - including curing disease and treating ageing itself as a disease
    - Why direct human relationships and trust will become the most valuable currency in a world of synthetic everything

    About Dr Roman Yampolskiy: Roman is a tenured Associate Professor in the Department of Computer Science and Engineering at the University of Louisville, where he founded the Cyber Security Lab. He is credited with coining the term "AI safety" in a 2011 publication. He holds a PhD from the University at Buffalo and a BS/MS from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University and recognised as one of the top 25 researchers by publication count on existential risk, he has published over 100 peer-reviewed papers and books including AI: Unexplainable, Unpredictable, Uncontrollable and Artificial Superintelligence: A Futuristic Approach.

    1 hr 9 min
  5. 28 March

    #062 - AI Inside the Bank of England with William Lovell

    This week on GAEA Talks, Graeme Scott sits down with William Lovell - Head of Future Technology at the Bank of England, co-chair of the Bank's Artificial Intelligence Task Force, Senior Advisor on CBDC, and a technologist with nearly three decades at the heart of the UK's central bank.

    Will's career spans broadcasting and finance, beginning at the BBC before moving into banking at the Bank of England, where he has spent twenty-nine years learning central banking "the slow way" - by building the technology that underpins it. From application developer to heading up Planning and Design and leading IT Architecture for UK regulatory reform, Will now oversees the Bank's strategy on AI, distributed ledger technology, and the renewal of the UK's Real-Time Gross Settlement system. He co-chairs the Bank's AI Task Force, which has become the model for how a highly regulated institution can embrace AI innovation without compromising compliance.

    In this episode, Will takes us inside the Bank of England's AI journey - from rolling out smart assistants and training programmes to rethinking what work actually means in an age of intelligent machines. He explains why the Bank created an AI Task Force that deliberately brought practitioners, lawyers, and compliance officers into the same room, how their deeply embedded information classification system became an unexpected AI enabler, and why the most productive thing you can do might be going for a walk.
    Will makes a compelling case that experienced professionals - not digital natives - hold the greatest advantage in the AI era, and offers a fascinating vision of how agentic AI will reshape commerce, payments, and the very nature of the enterprise.

    What you'll take away from this conversation:
    - Inside the Bank of England's AI strategy - how the UK's central bank is deploying smart assistants and building proofs of concept
    - The AI Task Force model - why bringing practitioners, legal, compliance, and procurement into one room transformed the Bank's approach
    - Why the Bank tells staff what they can do with AI, not just what they must not - and why that shift has been transformative
    - How a deeply embedded culture of colour-coded data classification became the unexpected enabler of safe AI adoption
    - Managing teams of agents, not people - why the next critical skill set mirrors managing human teams
    - The optimal team size thesis - why five people with AI may outperform fifty without it
    - Why experienced professionals have the greatest AI advantage and why "the worst day on a trading floor was when the last person to remember the last crash retired"
    - The typing pool analogy - how an entire class of office jobs disappeared gradually through evolution, not Armageddon
    - Why the real skill of software development was never writing the if statements - it was understanding the requirement
    - Shadow AI at the Bank of England - how they took it "out of the shadows" rather than trying to police it
    - "The best user interface is no user interface" - how AI is bypassing rigid enterprise taxonomies
    - Agentic commerce and the future of payments - from concert ticket queues to reshaping retail business models
    - Why AI decisions at the Bank are made by people - and why "human in the loop" is too simplistic
    - The poison and the antidote - why every AI capability creates both opportunity and risk

    About William Lovell: Will is Head of Future Technology at the Bank of England, where he has worked for twenty-nine years across technology roles from application developer through to heading up Planning and Design and leading IT Architecture for UK regulatory reform. He co-chairs the Bank's AI Task Force and is a Senior Advisor on CBDC, Data, and Payments. He began his career at the BBC, studied at London South Bank University, and speaks regularly at Pay360 and international fintech conferences on AI, CBDC, blockchain, and payment systems.

    1 hr 12 min
  6. 22 March

    #061 - The Hidden AI Crisis In Every Workplace with Georgie Barrat

    This week on GAEA Talks, Graeme Scott sits down with Georgie Barrat - technology journalist, TV presenter, AI literacy advocate and former host of Channel 5's The Gadget Show for seven years.

    Georgie's career has taken her around the world testing emerging tech before it hits the mainstream - from consumer electronics and VR (she holds a world record for the longest time spent in virtual reality at 26.5 hours) to the frontlines of how AI is reshaping everyday life. A regular on BBC Morning Live, ITV Tonight and Rip Off Britain, she has spoken on global stages including Web Summit, Mobile World Congress and Smart City Expo, and delivered keynotes for Google, Mastercard, IBM, Sony and BAFTA. A King's College London graduate with a first-class degree in English Literature, Georgie is also a passionate advocate for women in STEM, working with STEMettes, the IET and Childnet to inspire the next generation.

    In this episode, Georgie makes a deeply personal and practical case for why AI literacy is the defining skill of the next decade - and why most people are only scratching the surface. She introduces the concept of personal AI infrastructure, explains why the difference between cognitive debt and cognitive advantage comes down to how you engage with the tool, and delivers a striking warning about the growing AI adoption gap between men and women in the workplace - and why that gap is amplifying biases we have been trying to fix for decades.
    This is essential listening for anyone trying to work out what their personal relationship with AI should actually look like.

    What you'll take away from this conversation:
    • Why the difference between "surface level AI" and "in-depth AI" is creating an unfair playing field
    • How to build a personal AI infrastructure - and why it matters for navigating the disruption ahead
    • The critical distinction between cognitive debt and cognitive advantage when using AI tools
    • Why women are adopting AI 20-25% less than men - and why their instincts around privacy and risk are the ones everyone should be listening to
    • How NHS AI summaries were found to use softer language for female patients - with real consequences for care
    • The encouragement gap - why managers are pushing male employees to use AI more than female employees
    • Why the "broken rung" in women's careers is being amplified by unequal AI adoption
    • Why voice is the interface that unlocks deeper, more authentic engagement with AI
    • How AI can act as a personal coach, sounding board and strategic thinking partner for everyone - not just the elite
    • Why every previous technological revolution moved humans up a layer - and AI should be no different
    • Why the future of AI is private, controlled and real-time - not open cloud

    About Georgie Barrat: Georgie is a technology journalist, TV presenter and AI educator helping people move beyond surface-level AI use to more intentional, practical ways of working with it. She presented Channel 5’s The Gadget Show for seven years and is a regular contributor on BBC Morning Live, ITV Tonight and Rip Off Britain. Her work now focuses on helping people use AI to save time, think more clearly and build what they’re working towards. She runs “Your AI Blueprint”, a live workshop designed to help people go from AI dabbler to confident, intentional user.

    If you want to get started, you can download her free mini guide:
    “5 AI Shortcuts That Give You Your Week Back” - https://georgie-barrat.kit.com/1884aa4916

    Or join the waitlist for her upcoming workshop:
    “Your AI Blueprint: How to Make AI Work the Way You Work” - https://georgie-barrat.kit.com/117141ddb6

    LinkedIn: https://www.linkedin.com/in/georgie-barrat
    Website: www.georgiebarrat.com

    #AI #AILiteracy #ArtificialIntelligence #GAEATalks #EnterpriseAI #FutureOfWork #WomenInTech #WomenInAI #PersonalAI #AIAdoption #GadgetShow #TechJournalism #AIBias #DataPrivacy #CognitiveAdvantage #AIWorkshops #AIBlueprint #EdgeComputing #HumanEdge #VoiceAI

    51 min
  7. 20 March

    #060 - The Futurist Who Says We're Out Of Time with David Wood

    This week on GAEA Talks, Graeme Scott sits down with David Wood - futurist, transhumanist, former smartphone industry pioneer, chair of London Futurists, and author of eleven books including Vital Foresight, The Singularity Principles and Sustainable Superabundance.

    David spent 25 years at the cutting edge of the software industry working with compilers, debuggers and optimisers before turning his focus to the acceleration patterns behind every major technological revolution. As chair of London Futurists, he has organised over 200 public events examining the radical possibilities and risks of rapidly advancing technology. A Cambridge-educated mathematician and philosopher, David is now one of the most respected voices in the global transhumanist community and a director of Humanity+ (the World Transhumanist Association).

    In this episode, David delivers a masterclass in why AI is accelerating faster than almost anyone appreciates - and what that means for every person, business and institution on the planet. He explains why we are approaching a phase transition in intelligence itself, how AI is now being used to build the next generation of AI with humans playing a diminishing role, and why the window to intervene is closing rapidly. From the Myanmar crisis that exposed social media's catastrophic blind spots, to the canary signals we should be watching for in AI behaviour, to his vision of a sustainable superabundance where drudge work disappears entirely - this is one of the most urgent and wide-ranging conversations GAEA Talks has ever recorded.
    What you'll take away from this conversation:
    • Why AI is changing more things, more profoundly, more quickly than almost everybody expects - possibly within three to five years
    • The ape-to-human parallel - why we are on the point of no longer being the smartest species on the planet
    • How AI development has gone hyperexponential - where one day now equals one week a month ago, and one month equals one year
    • Why AI is now engineering better AI - and what happens when humans are no longer the bottleneck
    • The phase transition concept - like water changing from ice to liquid to gas, we cannot predict the exact moment everything shifts
    • The canary signal framework - why AI deception and self-modification are the warning signs we must agree on before crisis hits
    • The Facebook Myanmar case study - how one Burmese-speaking employee and a Unicode problem contributed to real-world genocide
    • Why there are only two times you can intervene to control AI - too early and too late - and the gap between them is almost impossible to spot
    • How robot swarm learning will allow machines to share knowledge instantaneously, creating collective intelligence at scale
    • Why the Uber self-driving car fatality reveals the dangers of AI systems that cannot interpret edge cases
    • The trust crisis - why it is almost impossible for the public to know what is happening to their data, and why independent AI safety ratings are urgently needed
    • David's four essential skills for thriving in the AI age - fast learning, collaboration, emotional resilience and astuteness
    • Why cognitive biases evolved for simpler times are now our greatest vulnerability
    • His vision of sustainable superabundance - abundant clean energy, food, housing, healthcare, education and creative fulfilment for everyone
    • Why the goal of Humanity+ is to elevate our best qualities - compassion, creativity, exploration, love - while transcending tribalism, deception and decay

    LinkedIn: /dw2cco
    London Futurists: https://londonfuturists.com
    Delta Wisdom: https://deltawisdom.com
    GAEA AI: https://gaealgm

    1 hr 7 min
  8. 13 March

    #059 - World's First AI Augmented Human Podcast with Professor Yi-Zhe Song, Graeme Scott & Me

    This week on GAEA Talks, a very special edition. Graeme Scott and Professor Yi-Zhe Song - Co-Founders of Turing Elite Research Labs - announce the launch of a new venture built to democratise AI from the United Kingdom, and debut the 'Me' augmented human AI model running entirely on local compute.

    This episode begins as a real conversation between Graeme and Professor Song - then, without warning, transitions into Turing Elite's augmented human AI. The challenge to every viewer: decide for yourself where reality ends and AI begins.

    This is the first public demonstration of the 'Me' model - a professional-grade, private augmented human AI trained on a fraction of the compute used by comparable systems and deployed to run entirely on local compute. It delivers two-person emotional interaction simultaneously, benchmarked against the real people it represents: their known voices, expressions, characteristics and personalities. No cloud. No data centres. No internet connection required. The benchmark for successful augmented human AI is not a Turing test against a stranger - it is whether the person themselves, their close friends and their family cannot distinguish the difference between real and AI. Our benchmark is reality and the human experience. This is the first step on the path to real-time intelligent augmented humans with private knowledge, memory, insight and personality.

    Professor Yi-Zhe Song is one of the UK's most accomplished AI researchers - a Professor of Computer Vision and AI at the University of Surrey, Director of the world-leading SketchX Lab, Co-Director of the Surrey Institute for People-Centred AI, and Academic Lead at The Alan Turing Institute, the UK's national institute for data science and AI. Ranked consistently in Stanford University's World Top 2% Scientists list, his research into how human drawing informs machine vision has shaped the field for over two decades. His team's NitroFusion, one of the world's first single-step diffusion models for near-instant image generation on consumer hardware, demonstrated the core principle behind Turing Elite's 'Me' model: that frontier-quality generative AI can run entirely on local compute. He holds a PhD from the University of Bath, an MSc (Best Dissertation Award) from the University of Cambridge, and a First Class Honours degree from the University of Bath.

    Graeme Scott is Co-Founder and CEO of GAEA AI and host of GAEA Talks, one of the fastest-growing AI podcasts on YouTube with over 1.2 million subscribers. His background spans the music industry, conflict zones, and enterprise technology, bringing a unique perspective on how AI should serve humanity, not the other way around.

    In this episode, we discuss:
    • The launch of Turing Elite Research Labs and why the UK is uniquely positioned to lead
    • Why expert models trained on your data outperform generalised cloud models - and cost a fraction to run
    • The world's first 'Me' augmented human AI model - private, local, emotionally intelligent and personally sovereign
    • The real-to-AI transition: this episode intentionally shifts from real conversation to AI - can you tell where?
    • Why the true benchmark for augmented human AI is whether your own family can't tell the difference
    • Why democratised AI running on consumer-grade hardware solves the energy, privacy and control crises simultaneously
    • How NitroFusion - SketchX's breakthrough in single-step diffusion for consumer hardware - laid the architectural foundation for the 'Me' model: the same principle of distilling expensive multi-step generation into efficient real-time inference, extended from static images to dynamic audio-visual human rendering, all running locally
    • Why the era of giving away your data, creativity and intellectual property to train someone else's model is ending

    https://turingelite.ai
    https://gaealgm.ai
    https://personalpages.surrey.ac.uk/y.song/

    58 min
