London Futurists


Anticipating and managing exponential impact - hosts David Wood and Calum Chace

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of the Economic Singularity Foundation, a think tank focused on the future of jobs. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom (see https://deltawisdom.com/), executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

  1. The AI disconnect: understanding vs motivation, with Nate Soares

    4D AGO


    Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute (MIRI). MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI. Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he has been a key figure in the AI safety community. In a blog post at the time he joined MIRI, he observed: "I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty." MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we explore what drives that view, and whether there is any room for hope.

    Selected follow-ups:
    Nate Soares - MIRI
    Yudkowsky and Soares Announce Major New Book: "If Anyone Builds It, Everyone Dies" - MIRI
    The Bayesian model of probabilistic reasoning
    During safety testing, o1 broke out of its VM - Reddit
    Leo Szilard - Physics World
    David Bowie - Five Years - Old Grey Whistle Test
    Amara's Law - IEEE
    Robert Oppenheimer calculation of p(doom)
    JD Vance commenting on AI-2027
    SolidGoldMagikarp - LessWrong
    ASML
    Chicago Pile-1 - Wikipedia
    Castle Bravo - Wikipedia

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    50 min
  2. Anticipating an Einstein moment in the understanding of consciousness, with Henry Shevlin

    MAY 28


    Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives.  He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.  In his 2024 paper, “Consciousness, Machines, and Moral Status,” Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in and emotional attachment to AIs. Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called Prism (Partnership for Research Into Sentient Machines). Prism is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new Prism podcast focusing entirely on Conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast. 
    Selected follow-ups:
    PRISM podcast
    Henry Shevlin - personal site
    Kinds of Intelligence - Leverhulme Centre for the Future of Intelligence
    Consciousness, Machines, and Moral Status - 2024 paper by Henry Shevlin
    Apply rich psychological terms in AI with care - by Henry Shevlin and Marta Halina
    What insects can tell us about the origins of consciousness - by Andrew Barron and Colin Klein
    Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - by Patrick Butlin, Robert Long, et al
    Association for the Study of Consciousness

    Other researchers mentioned: Blake Lemoine, Thomas Nagel, Ned Block, Peter Senge, Galen Strawson, David Chalmers, David Benatar, Thomas Metzinger, Brian Tomasik, Murray Shanahan

    42 min
  3. The case for a conditional AI safety treaty, with Otto Barten

    MAY 9


    How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That's the dilemma we examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI. Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we discuss today, "International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty". Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

    Selected follow-ups:
    Existential Risk Observatory
    There Is a Solution to AI's Existential Risk Problem - Time
    International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty - Otto Barten and colleagues
    The Precipice: Existential Risk and the Future of Humanity - book by Toby Ord
    Grand futures and existential risk - lecture by Anders Sandberg in London attended by Otto
    PauseAI
    StopAI
    Responsible Scaling Policies - METR
    Meta warns of 'worse' experience for European users - BBC News
    Accidental Nuclear War: a Timeline of Close Calls - FLI
    The Vulnerable World Hypothesis - Nick Bostrom
    Semiconductor Manufacturing Optics - Zeiss
    California Institute for Machine Consciousness
    Tipping point for large-scale social change? Just 25 percent - Penn Today

    38 min
  4. Humanity's final four years? with James Norris

    APR 30


    In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

    Selected follow-ups:
    James Norris website
    Upgrade your life & legacy - Upgradable
    The 7 Habits of Highly Effective People (Stephen Covey)
    Beneficial AI 2017 - Asilomar conference
    "...superintelligence in a few thousand days" - Sam Altman blog post
    Amara's Law - DevIQ
    The Probability of Nuclear War (JFK estimate)
    AI Designs Chemical Weapons - The Batch
    The Vulnerable World Hypothesis - Nick Bostrom
    We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
    Instrumental convergence - Wikipedia
    Neanderthal extinction - Wikipedia
    Matrioshka brain - Wikipedia
    Will there be a 'WW3' before 2050? - Manifold prediction market
    Existential Safety Action Pledge
    An Urgent Call for Global AI Governance - IAIGA petition
    Build your survival sanctuary

    Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng

    50 min
  5. Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh

    APR 23


    Our subject in this episode may seem grim: the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time. These scenarios aren't pleasant to contemplate, but there's a school of thought that urges us to take them seriously, to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Seán Ó hÉigeartaigh. Seán is the author of a recent summary article from Cambridge University Press that we discuss, "Extinction of the human species: What could cause it and how likely is it to occur?" He is presently based in Cambridge, where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.

    Selected follow-ups:
    Seán Ó hÉigeartaigh - Leverhulme Centre profile
    Extinction of the human species - by Seán Ó hÉigeartaigh
    Herman Kahn - Wikipedia
    Moral.me - by Conscium
    Classifying global catastrophic risks - by Shahar Avin et al
    Defence in Depth Against Human Extinction - by Anders Sandberg et al
    The Precipice - book by Toby Ord
    Measuring AI Ability to Complete Long Tasks - by METR
    Cold Takes - blog by Holden Karnofsky
    What Comes After the Paris AI Summit? - article by Seán
    ARC-AGI - by François Chollet
    Henry Shevlin - Leverhulme Centre profile
    Eleos (includes Rosie Campbell and Robert Long)
    NeurIPS talk by David Chalmers

    43 min
  6. The best of times and the worst of times, updated, with Ramez Naam

    MAR 26


    Our guest in this episode, Ramez Naam, is described on his website as "climate tech investor, clean energy advocate, and award-winning author". But that hardly starts to convey the range of deep knowledge that Ramez brings to a wide variety of fields. It was his 2013 book, "The Infinite Resource: The Power of Ideas on a Finite Planet", that first alerted David to the breadth of his insight about future possibilities, both good and bad. He still vividly remembers its opening words, quoting Charles Dickens's "A Tale of Two Cities": "'It was the best of times; it was the worst of times' - the opening line of Charles Dickens's 1859 masterpiece applies equally well to our present era. We live in unprecedented wealth and comfort, with capabilities undreamt of in previous ages. We live in a world facing unprecedented global risks - risks to our continued prosperity, to our survival, and to the health of our planet itself. We might think of our current situation as 'A Tale of Two Earths'." Twelve years after the publication of "The Infinite Resource", it seems that the Earth has become even better, but also even worse. Where does this leave the power of ideas? Or do we need more than ideas, as ominous storm clouds continue to gather on the horizon?

    Selected follow-ups:
    Ramez Naam - personal website
    The Infinite Resource: The Power of Ideas on a Finite Planet
    The Nexus Trilogy (Nexus, Crux, Apex)
    Jesse Jenkins (Princeton)
    Six Degrees: Our Future on a Hotter Planet - book by Mark Lynas
    1991 eruption of Mount Pinatubo - Wikipedia
    We cool Earth, with reflective clouds - Make Sunsets
    Direct Air Capture (DAC) - Wikipedia
    Frontier: An advance market commitment to accelerate carbon removal
    Toward a Responsible Solar Geoengineering Research Program - by David Keith
    South Korea scales down plans for nuclear power
    Microsoft chooses infamous nuclear site for AI power
    Machines of Loving Grace: How AI Could Transform the World for the Better - essay by Dario Amodei

    46 min
  7. PAI at Paris: the global AI ecosystem evolves, with Rebecca Finlay

    FEB 27


    In this episode, our guest is Rebecca Finlay, the CEO of Partnership on AI (PAI). Rebecca previously joined us in Episode 62, back in October 2023, in the run-up to the Global AI Safety Summit in Bletchley Park in the UK. Times have moved on, and earlier this month, Rebecca and the Partnership on AI participated in the latest global summit in that same series, held this time in Paris. This summit, breaking with the previous naming, was called the AI Action Summit. We hear from Rebecca how things have evolved since we last spoke, and what the future may hold. Prior to joining Partnership on AI, Rebecca founded the AI & Society program at global research organization CIFAR, one of the first international, multistakeholder initiatives on the impact of AI in society. Rebecca's insights have been featured in books and media including The Financial Times, The Guardian, Politico, and Nature Machine Intelligence. She is a Fellow of the American Association for the Advancement of Science and sits on advisory bodies in Canada, France, and the U.S.

    Selected follow-ups:
    Partnership on AI
    Rebecca Finlay
    Our previous episode featuring Rebecca
    CIFAR (The Canadian Institute for Advanced Research)
    "It is more than time that we move from science fiction" - remarks by Anne Bouverot
    International AI Safety Report 2025 - report from expert panel chaired by Yoshua Bengio
    The Inaugural Conference of the International Association for Safe and Ethical AI (IASEAI)
    A.I. Pioneer Yoshua Bengio Proposes a Safe Alternative Amid Agentic A.I. Hype
    US and UK refuse to sign Paris summit declaration on 'inclusive' AI
    Current AI
    Collaborative event on AI accountability
    CERN for AI
    AI Summit Day 1: Harnessing AI for the Future of Work
    The Economic Singularity

    40 min
  8. AI agents: challenges ahead of mainstream adoption, with Tom Davenport

    FEB 3


    The most highly anticipated development in AI this year is probably the expected arrival of AI agents, also referred to as "agentic AI". We are told that AI agents have the potential to reshape how individuals and organizations interact with technology. Our guest to help us explore this is Tom Davenport, Distinguished Professor in Information Technology and Management at Babson College, and a globally recognized thought leader in the areas of analytics, data science, and artificial intelligence. Tom has written, co-authored, or edited about twenty books, including "Competing on Analytics" and "The AI Advantage". He has worked extensively with leading organizations and has a unique perspective on the transformative impact of AI across industries. He recently co-authored an article in the MIT Sloan Management Review, "Five Trends in AI and Data Science for 2025", which included a section on AI agents, which is why we invited him to talk about the subject.

    Selected follow-ups:
    Tom Davenport - personal site
    Five Trends in AI and Data Science for 2025 - MIT Sloan Management Review
    Michael Martin Hammer - Wikipedia
    AI winter - Wikipedia
    AI is coming for the OnlyFans chat industry - Fortune
    How Gen AI and Analytical AI Differ — and When to Use Each - Harvard Business Review
    Truth Terminal - The AI Bot That Became a Crypto Millionaire - a16z
    Jim Simons - Wikipedia
    Why The "Godfather of AI" Now Fears His Own Creation - Curt Jaimungal interviews Geoffrey Hinton
    Attention Is All You Need - Google researchers
    Apple suspends error-strewn AI generated news alerts - BBC News
    Gen AI cuts costs by 30% - London Futurists Podcast episode featuring David Wakeling, partner at A&O Shearman
    The path to agentic automation is UiPath - UiPath
    Microsoft CEO Predicts: "AI Agents Will Replace ALL Software" - AI Insights Explorer
    NVIDIA CEO Jensen Huang Keynote at CES 2025 - Nvidia
    Pioneering Safe, Efficient AI - Conscium
    A New Survey Of Generative AI Shows Lots Of Work To Do - October 2023 article by Tom Davenport

    34 min

Ratings & Reviews

4.7 out of 5 (9 Ratings)

