The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.8 • 99 Ratings
    • 200 episodes

    Sneha Revanur on the Social Effects of AI

    Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org

    Timestamps:
    00:00 Encode Justice
    06:11 AI ethics and AI safety
    15:49 Humans in the loop
    23:59 AI in social media
    30:42 Deteriorating social skills?
    36:00 AIs identifying as AIs
    43:36 AI influence in elections
    50:32 AIs interacting with human systems

    • 57 min
    Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

    Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/

    Timestamps:
    00:00 Is AI like a Shoggoth?
    09:50 Scaling laws
    16:41 Are humans more general than AIs?
    21:54 Are AI models explainable?
    27:49 Using AI to explain AI
    32:36 Evidence for AI being uncontrollable
    40:29 AI verifiability
    46:08 Will AI be aligned by default?
    54:29 Creating human-like AI
    1:03:41 Robotics and safety
    1:09:01 Obstacles to AI in the economy
    1:18:00 AI innovation with current models
    1:23:55 AI accidents in the past and future

    • 1 hr 31 min
    Special: Flo Crivello on AI as a New Form of Life

    On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risks regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.

    Timestamps:
    00:00 Technological progress
    07:59 Regulatory capture and AI
    11:53 AI as a new form of life
    15:44 Can AI development be paused?
    20:12 Biden's executive order on AI
    22:54 How would a GPU kill switch work?
    27:00 Regulating models or applications?
    32:13 AGI in 2-8 years
    42:00 China and US collaboration on AI

    • 47 min
    Carl Robichaud on Preventing Nuclear War

    Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/

    Timestamps:
    00:00 A new nuclear arms race
    08:07 How much do world leaders matter?
    18:04 How much does ideology matter?
    22:14 Do nuclear weapons cause stable peace?
    31:29 North Korea
    34:01 Have we overestimated nuclear risk?
    43:24 Time pressure in nuclear decisions
    52:00 Why so many nuclear warheads?
    1:02:17 Has containment been successful?
    1:11:34 Coordination mechanisms
    1:16:31 Technological innovations
    1:25:57 Public perception of nuclear risk
    1:29:52 Easier access to nuclear weapons
    1:33:31 Reaching a stable, low-risk era

    • 1 hr 39 min
    Frank Sauer on Autonomous Weapon Systems

    Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/

    Timestamps:
    00:00 Autonomy in weapon systems
    12:19 Balance of offense and defense
    20:05 Killer drone systems
    28:53 Is autonomy like nuclear weapons?
    37:20 Low-tech defenses against drones
    48:29 Autonomy and power balance
    1:00:24 Tricking autonomous systems
    1:07:53 Unpredictability of autonomous systems
    1:13:16 Will we trust autonomous systems too much?
    1:27:28 Legal terminology
    1:32:12 Political possibilities

    • 1 hr 42 min
    Darren McKee on Uncontrollable Superintelligence

    Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.

    Timestamps:
    00:00 Uncontrollable superintelligence
    16:41 AI goals and the "virus analogy"
    28:36 Speed of AI cognition
    39:25 Narrow AI and autonomy
    52:23 Reliability of current and future AI
    1:02:33 Planning for multiple AI scenarios
    1:18:57 Will AIs seek self-preservation?
    1:27:57 Is there a unified solution to AI alignment?
    1:30:26 Concrete AI safety proposals

    • 1 hr 40 min

Customer Reviews

4.8 out of 5
99 Ratings

Andieo1997

A Must Listen!

This podcast is a must-listen for anyone who wants to stay ahead of the curve when it comes to the future of technology. Every episode leaves me with a new perspective to ponder long after it ends. Whether you're a tech enthusiast or simply curious about the ever-evolving role of technology in our lives, this show is an excellent resource. I highly recommend it to anyone who wants to stay informed and inspired!

457/26777633

Science-smart interviewer asks very good questions!

Great, in depth interviews.

VV7425795

Fantastic contribution to mankind! Thanks!

🤗👍🏻
