Imagine A World

Future of Life Institute

The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster, major wars and the risks from advanced Artificial Intelligence. How? This was what we asked the entrants of our Worldbuilding Contest to imagine last year. Our new podcast series digs deeper into the eight winning entries, their ideas and solutions, the diverse teams behind them and the challenges they faced.

You might love some; others you might not choose to inhabit. FLI is not endorsing any one idea. Rather, we hope to grow the conversation about what futures people get excited about. Ask yourself, with each episode: is this a world you’d want to live in? And if not, what would you prefer?

Listen on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts. Explore the worldbuilds discussed in these episodes here: https://worldbuild.ai/winners

This podcast was produced by the Future of Life Institute. FLI is a non-profit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.

Episodes

  1. Imagine: What if AI advisors helped us make better decisions?

    17/10/2023

    Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the eighth and final episode of Imagine A World we explore the fictional worldbuild titled 'Computing Counsel', one of the third-place winners of FLI's worldbuilding contest. Guillaume Riesen talks to Mark L, one of the three members of the team behind the entry. Mark is a machine learning expert with a chemical engineering degree, as well as an amateur writer. His teammates are Patrick B, a mechanical engineer and graphic designer, and Natalia C, a biological anthropologist and amateur programmer.

    This world paints a vivid, nuanced picture of how emerging technologies shape society. We have advertisers competing with ad-filtering technologies in an escalating arms race that eventually puts an end to the internet as we know it. There is AI-generated art so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers. While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become. The impact of any technology on society is complex and multifaceted, and this world does a great job of capturing that.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/computing-counsel

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:
    Corpus Callosum - https://en.wikipedia.org/wiki/Corpus_callosum
    Eliezer Yudkowsky on 'Superstimuli' - https://www.lesswrong.com/posts/Jq73GozjsuhdwMLEG/superstimuli-and-the-collapse-of-western-civilization
    Universal culture - https://slatestarcodex.com/2016/07/25/how-the-west-was-won/
    Max Harms' Crystal Trilogy - https://www.goodreads.com/book/show/28678856-crystal-society
    Universal basic income (UBI) - https://en.wikipedia.org/wiki/Universal_basic_income
    Kim Stanley Robinson - https://en.wikipedia.org/wiki/Kim_Stanley_Robinson

    1 h
  2. Imagine: What if narrow AI fractured our shared reality?

    10/10/2023

    Let's imagine a future where AGI is developed but kept at a distance from practically impacting the world, while narrow AI remakes the world completely. Inequality sticks around and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and therapy, and those bubbles help to sustain their inhabitants. Can you get excited about a world with these tradeoffs?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the seventh episode of Imagine A World we explore a fictional worldbuild titled 'Hall of Mirrors', which was a third-place winner of FLI's worldbuilding contest. Michael Vassar joins Guillaume Riesen to discuss his imagined future, which he created with the help of Matija Franklin and Bryce Hidysmith. Vassar was formerly the president of the Singularity Institute and co-founded MetaMed; more recently he has worked on communication across political divisions. Franklin is a PhD student at UCL working on AI Ethics and Alignment. Finally, Hidysmith began in fashion design, passed through fortune-telling, and wound up in finance and policy research, at places like Numerai, the Median Group, Bismarck Analysis, and Eco.com.

    Hall of Mirrors is a deeply unstable world where nothing is as it seems. The structures of power that we know today have eroded away, survived only by shells of expectation and appearance. People are isolated by perceptual bubbles and struggle to agree on what is real. This team put a lot of effort into creating a plausible, empirically grounded world, but their work is also notable for its irreverence and dark humor. In some ways, this world is kind of a caricature of the present. We see deeper isolation and polarization caused by media, and a proliferation of powerful but ultimately limited AI tools that further erode our sense of objective reality. A deep instability threatens. And yet, on a human level, things seem relatively calm. It turns out that the stories we tell ourselves about the world have a lot of inertia, and so do the ways we live our lives.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/hall-of-mirrors

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media referenced in the episode:
    https://en.wikipedia.org/wiki/Neo-Confucianism
    https://en.wikipedia.org/wiki/Who_Framed_Roger_Rabbit
    https://en.wikipedia.org/wiki/Seigniorage
    https://en.wikipedia.org/wiki/Adam_Smith
    https://en.wikipedia.org/wiki/Hamlet
    https://en.wikipedia.org/wiki/The_Golden_Notebook
    https://en.wikipedia.org/wiki/Star_Trek%3A_The_Next_Generation
    https://en.wikipedia.org/wiki/C-3PO
    https://en.wikipedia.org/wiki/James_Baldwin

    51 min
  3. Imagine: What if AI enabled us to communicate with animals?

    03/10/2023

    What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the sixth episode of Imagine A World we explore the fictional worldbuild titled 'AI for the People', a third-place winner of the worldbuilding contest. Our host Guillaume Riesen welcomes Chi Rainer Bornfree, part of this three-person worldbuilding team alongside her husband Micah White and their collaborator, J.R. Harris. Chi has a PhD in Rhetoric from UC Berkeley and has taught at Bard, Princeton, and NY State correctional facilities, all while writing fiction, essays, letters, and more. Micah, best known as the co-creator of the 'Occupy Wall Street' movement and the author of 'The End of Protest', now focuses primarily on the social potential of cryptocurrencies, while Harris is a freelance illustrator and comic artist.

    The name 'AI for the People' does a great job of capturing this team's activist perspective and their commitment to empowerment. They imagine social and political shifts that bring power back into the hands of individuals, whether that means serving as lawmakers on randomly selected committees, or gaining income by choosing to sell their personal data online. But this world isn't just about human people. Its biggest bombshell is an AI breakthrough that allows humans to communicate with other animals. What follows is an existential reconsideration of humanity's place in the universe. This team has created an intimate, complex portrait of a world shared by multiple parties: AIs, humans, other animals, and the environment itself. As these entities find their way forward together, their goals become enmeshed and their boundaries increasingly blurred.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/ai-for-the-people

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and resources referenced in the episode:
    https://en.wikipedia.org/wiki/Life_3.0
    https://en.wikipedia.org/wiki/1_the_Road
    https://ignota.org/products/pharmako-ai
    https://en.wikipedia.org/wiki/The_Ministry_for_the_Future
    https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/
    https://en.wikipedia.org/wiki/Occupy_Wall_Street
    https://en.wikipedia.org/wiki/Sortition
    https://en.wikipedia.org/wiki/Iroquois
    https://en.wikipedia.org/wiki/The_Ship_Who_Sang
    https://en.wikipedia.org/wiki/The_Sparrow_(novel)
    https://en.wikipedia.org/wiki/After_Yang

    1 h 4 min
  4. Imagine: What if AI-enabled life extension allowed some people to live forever?

    26/09/2023

    If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the fifth episode of Imagine A World, we explore the fictional worldbuild titled 'To Light'. Our host Guillaume Riesen speaks to Mako Yass, the first-place winner of the FLI Worldbuilding Contest we ran last year. Mako lives in Auckland, New Zealand. He describes himself as a 'stray philosopher-designer', and has a background in computer programming and analytic philosophy.

    Mako's world is particularly imaginative, with richly interwoven narrative threads and high-concept sci-fi inventions. By 2045, his world has been deeply transformed. There's an AI-designed miracle pill that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed freely by dove-shaped drones. There's a kind of mind uploading which lets anyone become whatever they wish, live indefinitely and gain augmented intelligence. The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. Some people move into space, building massive structures around the sun where they practice esoteric arts in pursuit of a more perfect peace.

    While this peaceful, flourishing end state is deeply optimistic, Mako is also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He's particularly concerned with the risks presented by artificial intelligence systems as they surpass us. An AI system that is more capable than a human at all tasks, not just playing chess or driving a car, is what we'd call an Artificial General Intelligence, or 'AGI'. Mako proposes that we could build safe AIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they are released into the world.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/to-light

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:
    https://en.wikipedia.org/wiki/Terra_Ignota
    https://en.wikipedia.org/wiki/The_Transparent_Society
    https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
    https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain
    https://en.wikipedia.org/wiki/The_Matrix
    https://aboutmako.makopool.com/

    59 min
  5. Imagine: What if we developed digital nations untethered to geography?

    19/09/2023

    How do low-income countries affected by climate change imagine their futures? How do they overcome these twin challenges? Will all nations eventually choose, or be forced, to go digital?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the fourth episode of Imagine A World, we explore the fictional worldbuild titled 'Digital Nations'. Conrad Whitaker and Tracey Kamande join Guillaume Riesen to talk about their worldbuild, which they created with their teammate, Dexter Findley. All three worldbuilders were based in Kenya while crafting their entry, though Dexter has just recently moved to the UK. Conrad is a Nairobi-based startup advisor and entrepreneur, Dexter works in humanitarian aid, and Tracey is the co-founder of FunKe Science, a platform that promotes interactive learning of science among school children.

    As the name suggests, this world is a deep dive into virtual communities. It explores how people might find belonging and representation on the global stage through digital nations that aren't tied to any physical location. This world also features a fascinating and imaginative kind of artificial intelligence that they call 'digital persons'. These are inspired by biological brains and have a rich internal psychology. Rather than being trained on data, they are raised in digital nurseries. They have a nuanced but mostly loving relationship with humanity, with some even going on to found their own digital nations for us to join.

    In an incredible turn of events, last year the South Pacific state of Tuvalu became the first nation to "go virtual" in response to sea levels threatening its physical territory. This happened in real life just months after it was written into this imagined world in our worldbuilding contest, showing how rapidly ideas that seem 'out there' can become reality. Will all nations eventually go digital? And might AGIs be assimilated, 'brought up' rather than merely trained, as 'digital people', citizens who live communally alongside humans in these futuristic states?

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/digital-nations

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:
    https://www.tuvalu.tv/
    https://en.wikipedia.org/wiki/Trolley_problem
    https://en.wikipedia.org/wiki/Climate_change_in_Kenya
    https://en.wikipedia.org/wiki/John_von_Neumann
    https://en.wikipedia.org/wiki/Brave_New_World
    https://thenetworkstate.com/the-network-state
    https://en.wikipedia.org/wiki/Culture_series

    56 min
  6. Imagine: What if global challenges led to a more centralized world?

    12/09/2023

    What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states, and do we want this?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the third episode of Imagine A World, we explore the fictional worldbuild titled 'Core Central'. How does a team of seven academics agree on one cohesive worldbuild? That's a question the team behind 'Core Central' had to figure out as they went along. The results speak for themselves: this worldbuild took joint second place in the FLI Worldbuilding Contest, and its realistic sense of multipolarity and messiness reflects well on its organic formulation. The team settled on one core, centralized AGI system as the governance model for their entire world. This eventually moves their world 'beyond' nation states. Could this really work?

    In this episode of 'Imagine a World', Guillaume Riesen speaks to John Burden and Henry Shevlin, representing the team that created 'Core Central'. The full team includes seven members, three of whom (Henry, John and Beba Cibralic) are researchers at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and five of whom (Jessica Bland, Lara Mani, Clarissa Rios Rojas, Catherine Richards and John) work with the Centre for the Study of Existential Risk, also at the University of Cambridge.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this imagined world: https://worldbuild.ai/core-central

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:
    https://en.wikipedia.org/wiki/Culture_series
    https://en.wikipedia.org/wiki/The_Expanse_(TV_series)
    https://www.vox.com/authors/kelsey-piper
    https://en.wikipedia.org/wiki/Gratitude_journal
    https://en.wikipedia.org/wiki/The_Diamond_Age
    https://www.scientificamerican.com/article/the-mind-of-an-octopus/
    https://en.wikipedia.org/wiki/Global_workspace_theory
    https://en.wikipedia.org/wiki/Alien_hand_syndrome
    https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)

    1 h
  7. Imagine: What if we designed and built AI in an inclusive way?

    05/09/2023

    How does who is involved in the design of AI affect the possibilities for our future? Why isn't the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In this second episode of Imagine A World we explore the fictional worldbuild titled 'Crossing Points', a second-place entry in FLI's worldbuilding contest. Joining Guillaume Riesen on the Imagine a World podcast this time are two members of the Crossing Points team, Elaine Czech and Vanessa Hanschke, both academics at the University of Bristol. Elaine has a background in art and design and is studying the accessibility of technologies for the elderly. Vanessa is studying the responsible AI practices of technologists, using methods like storytelling to promote diverse voices in AI research. Their teammates in the contest were Tashi Namgyal, a University of Bristol PhD student studying the controllability of deep generative models, Dr. Susan Lechelt, who researches the applications and implications of emerging technologies at the University of Edinburgh, and Nicol Ogston, a British civil servant.

    This world puts an emphasis on the unanticipated impacts of new technologies on those who weren't considered during their development. From urban families in Indonesia to anti-technology extremists in America, we're shown that there's something to learn from every human story. This world emphasizes the importance of broadening our lens and empowering marginalized voices in order to build a future that would be bright for more than just a privileged few.

    The world of Crossing Points looks pretty different from our own, with advanced AIs debating philosophy on TV and hybrid 3D-printed meats in grocery stores. But the people in this world are still basically the same. Our hopes and dreams haven't fundamentally changed, and neither have our blind spots and shortcomings. Crossing Points embraces humanity in all its diversity and looks for the solutions that human nature presents alongside the problems. It shows that there's something to learn from everyone's experience and that even the most radical attitudes can offer insights that help to build a better world.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/crossing-points

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Works referenced in this episode:
    https://en.wikipedia.org/wiki/The_Legend_of_Zelda
    https://en.wikipedia.org/wiki/Ainu_people
    https://www.goodreads.com/book/show/34846958-radicals
    http://www.historyofmasks.net/famous-masks/noh-mask/

    53 min
  8. Imagine: What if new governance mechanisms helped us coordinate?

    05/09/2023

    Are today's democratic systems well enough equipped to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In this first episode of Imagine A World we explore the fictional worldbuild titled 'Peace Through Prophecy', a second-place entry in FLI's Worldbuilding Contest. Host Guillaume Riesen speaks to its makers: the worldbuild was created by Jackson Wagner, Diana Gurvich and Holly Oatley. In the episode, Jackson and Holly discuss just a few of the many ideas bubbling around in their imagined future.

    At its core, this world is arguably about community. It asks how technology might bring us closer together and allow us to reinvent our social systems. Many roads are explored: a whole garden of governance systems bolstered by Artificial Intelligence and other technologies. Overall, there's a shift towards more intimate and empowered communities. Even the AI systems eventually come to see their emotional and creative potentials realized. While progress is uneven, and littered with many human setbacks, a pretty good case is made for how everyone's best interests can lead us to a more positive future.

    Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

    Explore this imagined world: https://worldbuild.ai/peace-through-prophecy

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:
    https://en.wikipedia.org/wiki/Prediction_market
    https://forum.effectivealtruism.org/
    'Veil of ignorance' thought experiment: https://en.wikipedia.org/wiki/Original_position
    https://en.wikipedia.org/wiki/Isaac_Asimov
    https://en.wikipedia.org/wiki/Liquid_democracy
    https://en.wikipedia.org/wiki/The_Dispossessed
    https://en.wikipedia.org/wiki/Terra_Ignota
    https://equilibriabook.com/
    https://en.wikipedia.org/wiki/John_Rawls
    https://en.wikipedia.org/wiki/Radical_transparency
    https://en.wikipedia.org/wiki/Audrey_Tang
    https://en.wikipedia.org/wiki/Quadratic_voting#Quadratic_funding

    1 h 3 min
