The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Sneha Revanur on the Social Effects of AI
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org
00:00 Encode Justice
06:11 AI ethics and AI safety
15:49 Humans in the loop
23:59 AI in social media
30:42 Deteriorating social skills?
36:00 AIs identifying as AIs
43:36 AI influence in elections
50:32 AIs interacting with human systems
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/
00:00 Is AI like a Shoggoth?
09:50 Scaling laws
16:41 Are humans more general than AIs?
21:54 Are AI models explainable?
27:49 Using AI to explain AI
32:36 Evidence for AI being uncontrollable
40:29 AI verifiability
46:08 Will AI be aligned by default?
54:29 Creating human-like AI
1:03:41 Robotics and safety
1:09:01 Obstacles to AI in the economy
1:18:00 AI innovation with current models
1:23:55 AI accidents in the past and future
Special: Flo Crivello on AI as a New Form of Life
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.
00:00 Technological progress
07:59 Regulatory capture and AI
11:53 AI as a new form of life
15:44 Can AI development be paused?
20:12 Biden's executive order on AI
22:54 How would a GPU kill switch work?
27:00 Regulating models or applications?
32:13 AGI in 2-8 years
42:00 China and US collaboration on AI
Carl Robichaud on Preventing Nuclear War
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/
00:00 A new nuclear arms race
08:07 How much do world leaders matter?
18:04 How much does ideology matter?
22:14 Do nuclear weapons cause stable peace?
31:29 North Korea
34:01 Have we overestimated nuclear risk?
43:24 Time pressure in nuclear decisions
52:00 Why so many nuclear warheads?
1:02:17 Has containment been successful?
1:11:34 Coordination mechanisms
1:16:31 Technological innovations
1:25:57 Public perception of nuclear risk
1:29:52 Easier access to nuclear weapons
1:33:31 Reaching a stable, low-risk era
Frank Sauer on Autonomous Weapon Systems
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/
00:00 Autonomy in weapon systems
12:19 Balance of offense and defense
20:05 Killer drone systems
28:53 Is autonomy like nuclear weapons?
37:20 Low-tech defenses against drones
48:29 Autonomy and power balance
1:00:24 Tricking autonomous systems
1:07:53 Unpredictability of autonomous systems
1:13:16 Will we trust autonomous systems too much?
1:27:28 Legal terminology
1:32:12 Political possibilities
Darren McKee on Uncontrollable Superintelligence
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
00:00 Uncontrollable superintelligence
16:41 AI goals and the "virus analogy"
28:36 Speed of AI cognition
39:25 Narrow AI and autonomy
52:23 Reliability of current and future AI
1:02:33 Planning for multiple AI scenarios
1:18:57 Will AIs seek self-preservation?
1:27:57 Is there a unified solution to AI alignment?
1:30:26 Concrete AI safety proposals
A Must Listen!
This podcast is a must-listen for anyone who wants to stay ahead of the curve when it comes to the future of technology. Every episode leaves me with a new perspective to ponder long after it ends. Whether you're a tech enthusiast or simply curious about the ever-evolving role of technology in our lives, this show is an excellent resource. I highly recommend it to anyone who wants to stay informed and inspired!
Science-smart interviewer asks very good questions!
Great, in-depth interviews.
Fantastic contribution to mankind! Thanks!