The Existential Hope Podcast

Foresight Institute

The Existential Hope Podcast features in-depth conversations with people working on positive, high-tech futures. We explore how the future could be much better than today—if we steer it wisely. Hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite the scientists, founders, and philosophers shaping tomorrow's breakthroughs: AI, nanotech, longevity biotech, neurotech, space, smarter governance, and more.

About Foresight Institute: For 40 years the independent nonprofit Foresight Institute has mapped how emerging technologies can serve humanity. Its Existential Hope program is the North Star: mapping the futures worth aiming for and the breakthroughs needed to reach them. This podcast is that exploration in public. Follow along and help tip the century toward success.

Explore more: Transcript, listed resources, and more: https://www.existentialhope.com/podcasts
Follow on X.

Hosted on Acast. See acast.com/privacy for more information.

  1. APR 28

    Teaching AI empathy using brain signals

    AIs could get much better at understanding what we truly value if we gave them access to our brain signals. And doing that is becoming easier than ever before. In this episode, we talk with Thorsten Zander, professor at Brandenburg University of Technology and co-founder of Zander Labs. He coined the concept of passive brain-computer interfaces: devices that read brain signals to decode a user's mental state, non-invasively and without any effort on their part.

    We cover:
    - What non-invasive brain-computer interfaces (BCIs) can actually pick up from brain signals, and why that's very different from reading your thoughts or internal monologue
    - The hardware and software breakthroughs that are finally making passive BCIs wearable and affordable
    - How continuous neural feedback could dramatically improve AI training compared to current methods based on human ratings
    - Why Thorsten believes passive BCIs may offer the most concrete path to solving the AI alignment problem
    - The risk of social networks exploiting unconscious brain reactions to manipulate people, and why regulation alone is unlikely to be enough

    0:00 Cold open
    0:56 What are passive brain-computer interfaces, and how are they different from Neuralink?
    3:23 What are the applications of passive brain-computer interfaces?
    4:33 What people get wrong about BCIs: reading thoughts vs. mental states
    6:14 How passive BCIs could transform AI training and help AI understand you better
    11:40 The misuse risk: how social networks could exploit unconscious brain reactions to manipulate political opinions
    16:00 How close is mass adoption? The hardware and software breakthroughs making BCIs wearable
    20:08 Why Germany's cybersecurity agency invested €30M in passive BCI research
    24:22 Invasive vs non-invasive: how Europe and the US are taking different approaches to brain-computer interfaces
    28:52 Should AI act on your first instinct?
    32:56 How passive BCIs could solve the AI alignment problem (and why previous approaches have fallen short)
    35:26 From professor to startup founder: what Thorsten learned making the leap
    41:27 Best case scenario: what the world looks like when AI truly understands human values
    46:03 How to get started in neuroadaptive AI and passive BCIs
    48:18 The best advice Thorsten ever received

    50 min
  2. APR 15

    How to build a career that actually changes the world

    More and more people want to make a real-world difference with their career. Very few of them do. Why are careers in consultancy or finance still so much more mainstream than careers tackling the world's biggest problems? In this episode, we talk with Jan-Willem van Putten, co-founder of the School for Moral Ambition, an organization that is building clear pathways for people who want to do work that actually changes the world.

    We discuss:
    - The three main bottlenecks stopping talented people from doing high-impact work
    - How to find important yet neglected causes to work on, and the School for Moral Ambition's top picks
    - Why movements that want to change the world often fail, and what effective advocates do differently
    - How to figure out which problems your specific background and skills are best placed to solve
    - The real struggles of leaving a prestigious career behind, from lifestyle creep to peer support, and what makes people say it was worth it

    Timestamps:
    0:00 Cold open
    2:12 From thesis on talent waste to joining consultancy: Jan-Willem's journey
    4:29 Why did you step away from management consulting?
    6:35 Focusing on impact vs. status: can you persuade people?
    8:40 What is the School for Moral Ambition?
    11:58 Is there now a real field for impact-driven careers?
    12:58 Cause areas: food transition and tobacco control
    17:10 How to prioritize problems to work on: the Triple-S framework
    21:11 Next cause areas: tax fairness and democracy
    23:00 What does the fellowship journey look like?
    25:06 The profile of an ambitious idealist: startup drive meets activist values
    27:43 Noble losers: why social movements fail
    30:56 Is moral ambition only for the privileged?
    36:04 How to cultivate a higher level of ambition in society
    40:31 Feeling hopeless about big problems? New tools change the game
    42:19 What holds people back from making the leap to meaningful work
    46:12 What do fellows find most rewarding?
    47:32 What does success look like in 10 years?
    51:25 Where to start if you want to shift to a career that makes a difference
    55:28 Best advice ever received: the case for taking action

    58 min
  3. APR 2

    How AI could improve the lives of trillions of animals

    We think a lot about how AI will affect humanity, and for good reason. But AI could have an enormous impact on the trillions of animals that share our world (for better or worse), and almost nobody is talking about it. In this episode, we talk with Constance Li, founder of Sentient Futures, an organization working to make sure AI and other emerging technologies improve the lives of animals rather than harm them.

    We touch on:
    - The enormous scale of animal suffering today, and why AI could either worsen or improve it depending on the decisions we make
    - Using computer vision and sensors to monitor animals and optimize for their welfare rather than just productivity
    - The research that's being done to use AI to communicate with animals, and what it's already telling us about their well-being
    - Other sentient beings that could be impacted by emerging technologies, like artificial minds and biocomputing

    Timestamps:
    0:00 Cold open
    1:57 Why AI and animals is an overlooked combination
    4:46 The staggering scale of factory farming
    8:26 How a physician became an animal welfare advocate
    10:19 What Sentient Futures does day-to-day
    11:38 What "AI for animals" actually means
    14:23 Why the organization was renamed Sentient Futures, and the question of AI moral patients
    18:08 The biggest misconceptions about AI for animals
    20:26 What is precision livestock farming?
    24:46 Best and worst-case scenarios for AI in farms
    27:46 Communication across species: promise and limitations
    35:56 Genetic welfare and using genetics in farms
    43:34 What a best-case scenario for AI and animals looks like in the next 5–10 years
    47:11 The biggest hurdles: funding and attention
    48:39 How to get involved with Sentient Futures
    50:44 What gives Constance hope

    53 min
  4. MAR 19

    How dating an AI could improve your real love life | David Eagleman

    Having an AI boyfriend or girlfriend might seem creepy, but what if it helped you get better at human relationships? In this episode, we talk with David Eagleman, a professor of neuroscience at Stanford, bestselling author, and science communicator. We discuss how AI and other technologies can help us become better humans – wiser, kinder, and more empathetic, not just more productive.

    We get a neuroscientist's take on how human and artificial intelligence interact, including:
    - How to use AI to better understand other people and improve our relationships
    - Using debate AIs in schools to make younger generations better at critical thinking and grasping both sides of an argument
    - Is AI making our lives too easy by removing the friction we need to learn?
    - Technologies that could expand what's possible with our brain, from mind uploading to brain-to-brain communication

    Timestamps:
    0:00 Cold open
    1:38 How David Eagleman became a neuroscientist
    4:46 How malleable is the brain?
    6:29 Can AI make us better humans? The Reddit debate bot experiment
    11:00 AI relationships and becoming better at dating real people
    14:24 Using AI to hear his late father's voice again
    18:26 Mind uploading and digital immortality
    23:27 What technology could make us more kind and empathetic
    24:04 How AI could revolutionize debate education and critical thinking
    28:30 Why AI needs a "tough love" mode to help us grow
    30:17 Does AI making life easier rob us of useful friction for learning?
    34:21 Why brain-to-brain communication probably won't help us understand each other
    37:29 Could neurotechnology let us experience the world as another species?
    41:58 The current state of neuroscience and where it's heading
    48:05 How to get started if you're inspired by this conversation

    51 min
  5. FEB 27

    How the whole world can exceed Swiss living standards by 2100 (backed by data)

    What would the world look like if the poorest country were as rich as Switzerland is today? It turns out we could actually see that happen by 2100, with economic growth similar to what we have experienced over the past 20 years. In this episode, we talk with Marc Canal, Senior Fellow at the McKinsey Global Institute and co-author of the book A Century of Plenty. We unpack what a hundred years of data tells us about human progress, and map out the steps to an ambitious scenario we can build by the end of the century.

    We discuss:
    - How much the world has actually changed since 1925: from one in five children dying before age five in Spain, to life expectancy growing by 40 years globally
    - What it would take to make today's Swiss living standards the world's floor by 2100 (while richer countries grow far beyond it), from energy efficiency to birth rates and geopolitics
    - How data shows economic growth is actually good for the climate and for human happiness
    - Why achieving a prosperous world currently depends more on our collective belief that progress is possible than on resource constraints
    - How you can thrive in an AI world, where 57% of work hours can be automated, by leaning into the "messy" jobs

    Timestamps:
    0:00 - Cold open
    1:54 - Why the McKinsey Global Institute wrote "A Century of Plenty"
    5:20 - What was the world like in 1925?
    10:04 - The most surprising stats from 100 years of progress
    16:03 - Defining the "empowerment line" vs. the poverty line
    19:30 - Projecting 2100: can we make Switzerland the global "floor"?
    22:26 - The 5 conditions for achieving a world of plenty
    26:14 - Can we grow the economy without sacrificing the environment?
    28:23 - Economic growth vs. climate change: mitigation and adaptation
    34:05 - What are the biggest challenges to the "progress machine"?
    36:30 - The demographic crisis, and solving falling fertility rates
    45:20 - Will AI speed up human innovation?
    48:21 - Geopolitics: is the world really de-globalizing?
    52:30 - The crisis of hope: why are we so pessimistic?
    56:26 - How different nations reach the frontier of progress
    58:49 - Building a new culture of growth
    1:01:09 - Does economic progress actually make us happier?
    1:05:39 - How you can help make a century of plenty probable

    1h 9m
  6. FEB 19

    How your personal moral compass helps you build a better world | SJ Beard

    To make the future go well, we might not need a perfect model for its end state, or an abstract philosophical theory to guide us. Can your own sense of "the right thing to do" actually help make the world better? In this episode we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk, and author of the book "Existential Hope".

    Some of the topics we discuss:
    - How to shift our focus from "preventing the end of the world" to actively building a future worth living
    - Why aiming for a "happy ever after" state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire
    - Relying on our own sense of "the right thing to do" as a practical guide to make the world better
    - Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top
    - Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow

    Timestamps:
    [01:31] SJ's background in philosophy and existential risk
    [02:02] Why write a book on existential hope?
    [04:43] Defining existential hope, and its relationship with existential risks and existential anxiety
    [11:09] Human agency without the guilt
    [13:59] Why there are no truly "natural" disasters
    [16:49] Why we shouldn't try to build a perfect utopia
    [19:05] Protopia: is iterative improvement enough?
    [22:19] Defining progress: what does it mean to "get better"?
    [26:13] Protopia vs. viatopia: setting goals and achieving a great future
    [29:48] Existential safety as a collective project
    [35:06] Using participatory tools to make global decisions
    [36:32] Making existential hope reasonably demanding
    [40:06] Can we achieve systemic change in a tech-focused world?
    [46:00] Concrete socio-technical projects for AI safety
    [49:02] Aligning AI by building its character
    [51:45] The importance of history in building a good future
    [54:24] Key 17th-century ideas that are shaping modern society
    [58:20] Cultivating "humanity as a virtue"
    [01:04:37] Lessons from nuclear near-misses: the example of Petrov
    [01:09:20] The trade-offs of a humanistic, bottom-up approach to decision-making
    [01:12:16] Literacy vs. orality: how ideas become simplified
    [01:16:45] Meme culture and the transmission of deep context
    [01:18:48] How writing the book changed SJ's mind
    [01:21:38] SJ Beard's vision for existential hope

    1h 26m
  7. FEB 4

    Raising science ambition: how to identify the highest-impact research for an AI world | Anastasia Gamick

    Most scientists do "safe" research to secure their next grant. But what if more of them worked on the most important problems instead? In this episode, we talk with Anastasia Gamick, co-founder of Convergent Research, about how to raise our level of ambition for what science can actually achieve. Convergent Research incubates Focused Research Organizations: small, startup-style teams that build critical "public good" tech, which both academia and for-profits ignore.

    We discuss:
    - What makes a research project truly high-impact in view of an AI world
    - Concrete examples of these projects: maps of brain synapses, software that's provably safe, drug screening, good data for AI-powered scientific research, and more
    - How to prioritize defensive technology, such as biosafety tools, instead of just pushing every frontier as fast as possible
    - How young scientists can find the work that matters most for the future

    [00:00] Cold open
    [01:52] Introducing Anastasia Gamick and the mission of Convergent Research
    [02:44] Defining Focused Research Organizations (FROs) and their unique characteristics
    [09:46] Backcasting from 2075: what research to prioritize now to prepare for the intelligence age
    [19:08] The four types of projects Convergent decides not to fund
    [25:35] Biological and ecological dark matter: why we need better datasets for AI science
    [28:28] Why academia and industry aren't incentivized to build tech capabilities for the public good
    [29:32] Defining "moonshot projects": how boring drug screening creates massive downstream impact
    [32:56] The future of neuroscience: capturing videos of synapses firing
    [35:46] How the FRO model is catching on internationally
    [36:25] Steering vs. accelerating: selecting defense-dominant technology
    [41:22] Increasing human agency and how scientists can choose high-impact research areas
    [46:51] The evolution of scientific funding and the role of new philanthropy
    [48:05] Finding existential hope in the community of future-builders

    49 min
  8. JAN 21

    Jason Crawford on how technology expands human choice and control

    Our fast-paced world isn't spinning out of our control; we're actually becoming more capable of steering it than ever before. Throughout history, technological progress has expanded human agency: our ability to choose our destiny rather than being subject to the whims of nature. Jason Crawford, founder of the Roots of Progress Institute, joins the podcast to discuss The Techno-Humanist Manifesto, his book exploring his philosophy of progress centered around human life and wellbeing.

    In our conversation, we dive into the core arguments of the manifesto:
    - How we are more in control of our lives than ever before
    - Why we should reframe the goal of "stopping climate change" into "controlling climate change" and work toward installing a "thermostat for the Earth"
    - The value of nature and its interaction with humanity
    - Allowing ourselves to celebrate human achievement and industrial civilization
    - The concept of "solutionism": a kind of optimism that acknowledges risks while keeping a proactive attitude toward solving problems
    - Why two common fears around the slowing of progress (that we could run out of natural resources or new ideas) are actually unfounded
    - The possibility that AI represents a transformation as significant as the Industrial Revolution or the invention of agriculture
    - How to rebuild a culture of progress in the 21st century, from reforming scientific institutions to creating new, non-dystopian science fiction

    Chapters:
    [00:00] Cold open
    [01:30] Intro: Jason Crawford and the Techno-Humanist Manifesto
    [04:10] Defining progress as the expansion of human agency
    [06:16] How to use our newfound agency to live a meaningful life
    [10:07] Climate control: installing a "thermostat" for the Earth
    [13:26] Anthropocentrism and the value of nature
    [19:41] Ode to man: celebrating human achievement
    [20:53] Solutionism: believing in our problem-solving abilities to tackle risks
    [26:26] Why pessimism sounds smart but misses the solution space
    [31:29] The myth of finite natural resources and the power of knowledge
    [34:27] Why we are getting better at finding ideas faster than they get harder to find
    [39:03] The Intelligence Age: a new mode of production
    [41:19] Amplifying human agency in an AI-driven world
    [43:09] Developing a healthy relationship with AI and attention
    [46:28] The culture of progress and why we soured on the future
    [50:10] Building the infrastructure for a global progress movement
    [53:54] A 20-year vision for progress studies in the mainstream
    [57:33] High-leverage regulations for progress: from nuclear to supersonic flight
    [58:57] Jason Crawford's existential hope vision

    1h 1m


