London Futurists

Anticipating and managing exponential impact - hosts David Wood and Calum Chace (full host biographies in the About section below).

  1. Anticipating 2026

    01/07/2026

    Anticipating 2026

    When we started this podcast back in August 2022, we, Calum and David, announced its theme as "Anticipating and managing exponential impact". We talked about three sub-themes: developing the skills of exponential foresight; distinguishing between scenarios, asking which are plausible or implausible, and which are desirable or undesirable; and supporting the community of collaborative exponential foresight. 126 episodes later, as we reach the transition between 2025 and 2026, it's a good time for the two of us to take stock. Accordingly, in this episode, we each pick out a number of events from the last 12 months which we see as potential signals of larger exponential impact ahead.

    Selected follow-ups:
    - An MIT report that 95% of AI pilots fail spooked investors - by Jeremy Kahn
    - The Shape of AI: Jaggedness, Bottlenecks and Salients - by Ethan Mollick
    - The Road To Superintelligence - by Calum
    - AI Doomers, Accelerationists & Scouts - Digital Disruption
    - The Economic Singularity - book by Calum
    - How can better foresight actually improve the world? - Webinar in the series "From forecasts to levers"
    - Disrupting the first reported AI-orchestrated cyber espionage campaign - Anthropic
    - Major Neuromorphic Computing projects - listed by Conscium
    - Why AI Agent Verification Is A Critical Industry - by Calum
    - Climate change and populism: Grounds for optimism? - LFP episode with Matt Burgess
    - What's Our Problem? - book by Tim Urban
    - OpenAI and Retro Biosciences achieve 50x increase in expressing stem cell reprogramming markers
    - Progress at LEVF, December 2025 - by David
    - UK Biobank
    - The THRIVE Act - Regenerative Medicine Foundation

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    52 min
  2. The puzzle pieces that can defuse the US-China AI race dynamic, with Kayla Blomquist

    12/23/2025

    The puzzle pieces that can defuse the US-China AI race dynamic, with Kayla Blomquist

    Almost every serious discussion about options to constrain the development of advanced AI results in someone raising the question: "But what about China?" The worry behind this question is that slowing down AI research and development in the US and Europe will allow China to race ahead. It's true: the relationship between China and the rest of the world has many complications. That's why we're delighted that our guest in this episode is Kayla Blomquist, the Co-founder and Director of the Oxford China Policy Lab, or OCPL for short. OCPL describes itself as a global community of China and emerging technology researchers at Oxford, who produce policy-relevant research to navigate risks in the US-China relationship and beyond. In parallel with her role at OCPL, Kayla is pursuing a DPhil at the Oxford Internet Institute. She is a recent fellow at the Centre for Governance of AI, and the lead researcher and contributing author to the Oxford China Briefing Book. She holds an MSc from the Oxford Internet Institute and a BA with Honours in International Relations, Public Policy, and Mandarin Chinese from the University of Denver. She also studied at Peking University and is professionally fluent in Mandarin. Kayla previously worked as a diplomat in the U.S. Mission to China, where she specialized in the governance of emerging technologies, human rights, and improving the use of new technology within government services.

    Selected follow-ups:
    - Kayla Blomquist - Personal site
    - Oxford China Policy Lab
    - The Oxford Internet Institute (OII)
    - Google AI defeats human Go champion (Ke Jie)
    - AI Safety Summit 2023 (Bletchley Park, UK)
    - United Kingdom: Balancing Safety, Security, and Growth - OCPL
    - China wants to lead the world on AI regulation - report from APEC 2025
    - China's WAICO proposal and the reordering of global AI governance
    - Impact of AI on cyber threat from now to 2027
    - Options for the future of the global governance of AI - London Futurists Webinar
    - A Tentative Draft of a Treaty - Online appendix to the book If Anyone Builds It, Everyone Dies
    - An International Agreement to Prevent the Premature Creation of Artificial Superintelligence

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    35 min
  3. Jensen Huang and the zero billion dollar market, with Stephen Witt

    12/16/2025

    Jensen Huang and the zero billion dollar market, with Stephen Witt

    Our guest in this episode is Stephen Witt, an American journalist and author who writes about the people driving technological revolutions. He is a regular contributor to The New Yorker, and is famous for deep-dive investigations. Stephen's new book is "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip", which has just won the 2025 Financial Times and Schroders Business Book of the Year Award. It is a definitive account of the rise of Nvidia, from its foundation in a Denny's restaurant in 1993 as a video game component manufacturer, to becoming the world's most valuable company, and the hardware provider for the current AI boom. Stephen's previous book, "How Music Got Free", is a history of music piracy and the MP3, and was also a finalist for the FT Business Book of the Year.

    Selected follow-ups:
    - Stephen Witt - personal site
    - Articles by Stephen Witt on The New Yorker
    - The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip - book site
    - Stephen Witt wins FT and Schroders Business Book of the Year - Financial Times
    - Nvidia Executives
    - Battle Royale (Japanese film) - IMDb
    - The Economic Singularity - book by Calum Chace
    - A Cubic Millimeter of a Human Brain Has Been Mapped in Spectacular Detail - Nature
    - NotebookLM - by Google

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    45 min
  4. What's your p(Pause)? with Holly Elmore

    12/05/2025

    What's your p(Pause)? with Holly Elmore

    Our guest in this episode is Holly Elmore, who is the Founder and Executive Director of PauseAI US. The website pauseai-us.org starts with this headline: "Our proposal is simple: Don't build powerful AI systems until we know how to keep them safe. Pause AI." But PauseAI isn't just a talking shop. They're probably best known for organising public protests. The UK group has demonstrated in Parliament Square in London, with Big Ben in the background, and also outside the offices of Google DeepMind. A group of 30 PauseAI protesters gathered outside the OpenAI headquarters in San Francisco. Other protests have taken place in New York, Portland, Ottawa, Sao Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney, among other cities. Previously, Holly was a researcher at the think tank Rethink Priorities in the area of Wild Animal Welfare. And before that, she studied evolutionary biology in Harvard's Organismic and Evolutionary Biology department.

    Selected follow-ups:
    - Holly Elmore - Substack
    - PauseAI US
    - PauseAI - global site
    - Wild Animal Suffering... and why it matters
    - Hard problem of consciousness - Wikipedia
    - The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik - by Michael Plant
    - Leading Evolution Compassionately - Herbivorize Predators
    - David Pearce (philosopher) - Wikipedia
    - The AI industry is racing toward a precipice - Machine Intelligence Research Institute (MIRI)
    - Nick Bostrom's new views regarding AI/AI safety - reddit
    - AI is poised to remake the world; Help us ensure it benefits all of us - Future of Life Institute
    - On being wrong about AI - by Scott Aaronson, on his previous suggestion that it might take "a few thousand years" to reach superhuman AI
    - California Institute of Machine Consciousness - organisation founded by Joscha Bach
    - Pausing AI is the only safe approach to digital sentience - article by Holly Elmore
    - Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers - book by Geoffrey Moore

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    44 min
  5. Real-life superheroes and troubled institutions, with Tom Ough

    10/31/2025

    Real-life superheroes and troubled institutions, with Tom Ough

    Popular movies sometimes feature leagues of superheroes who are ready to defend the Earth against catastrophe. In this episode, we're going to be discussing some real-life superheroes, as chronicled in the new book by our guest, Tom Ough. The book is entitled "The Anti-Catastrophe League: The Pioneers And Visionaries On A Quest To Save The World". Some of these heroes are already reasonably well known, but others were new to David, and, he suspects, to many of the book's readers. Tom is a London-based journalist. Earlier in his career he worked in newspapers, mostly for the Telegraph, where he was a staff feature-writer and commissioning editor. He is currently a senior editor at UnHerd, where he commissions essays and occasionally writes them. Perhaps one reason why he writes so well is that he has a BA in English Language and Literature from Oxford University, where he was a Casberd scholar.

    Selected follow-ups:
    - About Tom Ough
    - The Anti-Catastrophe League - The book's webpage
    - On novel methods of pandemic prevention
    - What is effective altruism? (EA)
    - Sam Bankman-Fried - Wikipedia (also covers FTX)
    - Open Philanthropy
    - Conscium
    - Here Comes the Sun - book by Bill McKibben
    - The 10 Best Beatles Songs (Based on Streams)
    - Carrington Event - Wikipedia
    - Mirror life - Wikipedia
    - Future of Humanity Institute 2005-2024: final report - by Anders Sandberg
    - Oxford FHI Global Catastrophic Risks - FHI Conference, 2008
    - Forethought
    - Review of Nick Bostrom's Deep Utopia - by Calum
    - DeepMind and OpenAI claim gold in International Mathematical Olympiad
    - What the Heck is Hubble Tension?
    - The Decade Ahead - by Leopold Aschenbrenner
    - AI 2027
    - Anglofuturism

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    40 min
  6. Safe superintelligence via a community of AIs and humans, with Craig Kaplan

    10/10/2025

    Safe superintelligence via a community of AIs and humans, with Craig Kaplan

    Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon. Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street, and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).

    Selected follow-ups:
    - iQ Company
    - Superintelligence - by iQ Company
    - Herbert A. Simon - Wikipedia
    - Amara's Law and Its Place in the Future of Tech - Pohan Lin
    - The Society of Mind - book by Marvin Minsky
    - AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
    - Statement on AI Risk - Center for AI Safety
    - I've Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
    - Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
    - London Futurists Podcast episode featuring David Brin
    - Reason in Human Affairs - book by Herbert Simon
    - US and China will intervene to halt 'suicide race' of AGI - Max Tegmark
    - If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
    - AGI-25 - conference in Reykjavik
    - The First Global Brain Workshop - Brussels 2001
    - Center for Integrated Cognition
    - Paul S. Rosenbloom
    - Tatiana Shavrina, Meta
    - Henry Minsky launches AI startup inspired by father's MIT research

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    42 min
  7. How progress ends: the fate of nations, with Carl Benedikt Frey

    09/17/2025

    How progress ends: the fate of nations, with Carl Benedikt Frey

    Many people expect improvements in technology over the next few years, but fewer people are optimistic about improvements in the economy. Especially in Europe, there's a narrative that productivity has stalled, that the welfare state is over-stretched, and that the regions of the world where innovation will be rewarded are the US and China, although there are lots of disagreements about which of these two countries will gain the upper hand. To discuss these topics, our guest in this episode is Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute. Carl is also a Fellow at Mansfield College, University of Oxford, and is Director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School. Carl's new book has the ominous title, "How Progress Ends". The subtitle is "Technology, Innovation, and the Fate of Nations". A central premise of the book is that our ability to think clearly about the possibilities for progress and stagnation today is enhanced by looking backward at the rise and fall of nations around the globe over the past thousand years. The book contains fascinating analyses of how countries at various times made significant progress, and at other times stagnated. The book also considers what we might deduce about the possible futures of different economies worldwide.

    Selected follow-ups:
    - Professor Carl-Benedikt Frey - Oxford Martin School
    - How Progress Ends: Technology, Innovation, and the Fate of Nations - Princeton University Press
    - Stop Acting Like This Is Normal - Ezra Klein ("Stop Funding Trump's Takeover")
    - OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
    - A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick - Vice
    - The future of employment: How susceptible are jobs to computerisation? - Carl Benedikt Frey and Michael A. Osborne
    - Europe's Choice: Policies for Growth and Resilience - Alfred Kammer, IMF
    - MIT Radiation Laboratory ("Rad Lab")

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    38 min
  8. Tsetlin Machines, Literal Labs, and the future of AI, with Noel Hurley

    09/08/2025

    Tsetlin Machines, Literal Labs, and the future of AI, with Noel Hurley

    Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide. Today, he's a co-founder of Literal Labs, where he's developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are machine learning models that are energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks. AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman's terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning. For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow thinking in Kahneman's terminology. Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI. (A minimal illustrative sketch of such an automaton follows the notes below.)

    Selected follow-ups:
    - Noel Hurley - Literal Labs
    - A New Generation of Artificial Intelligence - Literal Labs
    - Michael Tsetlin - Wikipedia
    - Thinking, Fast and Slow - book by Daniel Kahneman
    - 54x faster, 52x less energy - MLPerf Inference metrics
    - Introducing the Model Context Protocol (MCP) - Anthropic
    - Pioneering Safe, Efficient AI - Conscium
    - Smartphones and Beyond - a personal history of Psion and Symbian
    - The Official History of Arm - Arm
    - Interview with Sir Robin Saxby - IT Archive
    - How Spotify came to be worth billions - BBC

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
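    To make the "simple automata" idea a little more concrete, here is a minimal sketch in Python of a single two-action Tsetlin automaton, the basic building block named after Michael Tsetlin. This is an illustrative toy under stated assumptions, not Literal Labs' implementation: the class name, state count, and toy environment are our own, and a full Tsetlin Machine composes many such automata into clauses that vote on the output. The reward/penalty state transitions below are the core of the learning mechanism.

```python
import random


class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2*n states.

    States 1..n select action 0; states n+1..2n select action 1.
    A reward pushes the state deeper into the current action's half
    (more confidence); a penalty pushes it toward the boundary and,
    if repeated, eventually flips the chosen action.
    """

    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        # Start at the boundary, on a random side (no initial bias).
        self.state = random.choice([self.n, self.n + 1])

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reinforce the current action.
        if self.state <= self.n:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken the current action.
        if self.state <= self.n:
            self.state += 1
        else:
            self.state -= 1


# Toy environment: action 1 is rewarded 90% of the time, action 0
# only 10% of the time, so the automaton should settle on action 1.
automaton = TsetlinAutomaton()
for _ in range(1000):
    chosen = automaton.action()
    rewarded = random.random() < (0.9 if chosen == 1 else 0.1)
    if rewarded:
        automaton.reward()
    else:
        automaton.penalize()

print("Learned action:", automaton.action())  # almost always 1
```

    Roughly speaking, in a full Tsetlin Machine one such automaton decides whether each input literal is included in each clause, and the clauses then vote on the output; that clause structure is where the rule-based, interpretable flavour comes from, while the reward/penalty updates keep the learning statistical and adaptive.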

    37 min

Ratings & Reviews

4.7 out of 5 (9 ratings)

About

Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
