
Future of Life Institute Podcast
Future of Life Institute
Technology • 159 episodes • 4.8 (87 Ratings)
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.
FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education
Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Connor Leahy on AI Safety and Why the World is Fragile
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?
Connor Leahy on AI Progress, Chimps, Memes, and Markets
Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Introduction
01:00 Defining artificial general intelligence
04:52 What makes humans more powerful than chimps?
17:23 Would AIs have to be social to be intelligent?
20:29 Importing humanity's memes into AIs
23:07 How do we measure progress in AI?
42:39 Gut feelings about AI progress
47:29 Connor's predictions about AGI
52:44 Is predicting AGI soon betting against the market?
57:43 How accurate are prediction markets about AGI?
Sean Ekins on Regulating AI Drug Discovery
On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery.
Timestamps:
00:00 Introduction
00:31 Ethical guidelines and regulation of AI drug discovery
06:11 How do we balance innovation and safety in AI drug discovery?
13:12 Keeping dangerous chemical data safe
21:16 Sean’s personal story of voicing concerns about AI drug discovery
32:06 How Sean will continue working on AI drug discovery
Sean Ekins on the Dangers of AI Drug Discovery
On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They discuss how Sean's team rediscovered an extremely toxic chemical, the nerve agent VX, by inverting an AI drug discovery algorithm so that it rewarded toxicity instead of penalizing it.
Timestamps:
00:00 Introduction
00:46 Sean’s professional journey
03:45 Can computational models replace animal models?
07:24 The risks of AI drug discovery
12:48 Should scientists disclose dangerous discoveries?
19:40 How should scientists handle dual-use technologies?
22:08 Should we open-source potentially dangerous discoveries?
26:20 How do we control autonomous drug creation?
31:36 Surprising chemical discoveries made by black-box AI systems
36:56 How could the dangers of AI drug discovery be mitigated?
Anders Sandberg on the Value of the Future
Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future.
Learn more about Anders' work: https://www.fhi.ox.ac.uk
Timestamps:
00:00 Introduction
00:54 Humanity as an immature teenager
04:24 How should we respond to our values changing over time?
18:53 How quickly should we change our values?
24:58 Are there limits to what future morality could become?
29:45 Could the universe contain infinite value?
36:00 How do we balance weird philosophy with common sense?
41:36 Lightning round: mind uploading, aliens, interstellar travel, cryonics
Customer Reviews

Science-smart interviewer asks very good questions!
Great, in-depth interviews.

Fantastic contribution to mankind! Thanks!
🤗👍🏻

Great show!
Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can't-miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone who listens!