205 Episodes

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 5.0 • 6 ratings

    Dan Faggella on the Race to AGI

    Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

    Timestamps:
    00:00 Value differences in AI
    12:07 Should we eventually create AGI?
    28:22 What is a worthy successor?
    43:19 AI changing power dynamics
    59:00 Open source AI
    01:05:07 What drives AI progress?
    01:16:36 What limits AI progress?
    01:26:31 Which industries are using AI?

    • 1 hr 45 min

    Liron Shapira on Superintelligence Goals

    Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

    Timestamps:
    00:00 Intelligence as optimization-power
    05:18 Will LLMs imitate human values?
    07:15 Why would AI develop dangerous goals?
    09:55 Goal-completeness
    12:53 Alignment to which values?
    22:12 Is AI just another technology?
    31:20 What is FOOM?
    38:59 Risks from centralized power
    49:18 Can AI defend us against AI?
    56:28 An Apollo program for AI safety
    01:04:49 Do we only have one chance?
    01:07:34 Are we living in a crucial time?
    01:16:52 Would superintelligence be fragile?
    01:21:42 Would human-inspired AI be safe?

    • 1 hr 26 min

    Annie Jacobsen on Nuclear War - a Second by Second Timeline

    Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

    Timestamps:
    00:00 A scenario of nuclear war
    06:56 Who would launch an attack?
    13:50 Detecting nuclear attacks
    19:37 The first critical seconds
    29:42 Decisions under time pressure
    34:27 Lessons from insiders
    44:18 Submarines
    51:06 How did we end up like this?
    59:40 Interceptor missiles
    01:11:25 Nuclear weapons and cyberattacks
    01:17:35 Concentration of power

    • 1 hr 26 min

    Katja Grace on the Largest Survey of AI Researchers

    Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, the capabilities required for continued AI-driven transformation, the idea of discontinuous progress, the impacts of AI on either side of the human-level intelligence threshold, the relationship between intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

    Timestamps:
    00:20 AI Impacts surveys
    18:11 What AI will look like in 20 years
    22:43 Experts’ extinction risk predictions
    29:35 Opinions on slowing down AI development
    31:25 AI “arms races”
    34:00 AI risk areas with the most agreement
    40:41 Do “high hopes and dire concerns” go hand-in-hand?
    42:00 Intelligence explosions
    45:37 Discontinuous progress
    49:43 Impacts of AI crossing the human-level intelligence threshold
    59:39 What does AI learn from human culture?
    01:02:59 AI scaling
    01:05:04 What should we do?

    • 1 hr 8 min

    Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

    Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

    Timestamps:
    00:00 Pausing AI
    10:23 Risks during an AI pause
    19:41 Hardware overhang
    29:04 Technological progress
    37:00 Safety research during a pause
    54:42 Social dynamics of AI risk
    01:10:00 What prevents cooperation?
    01:18:21 What about China?
    01:28:24 Protesting AGI corporations

    • 1 hr 36 min

    Sneha Revanur on the Social Effects of AI

    Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org

    Timestamps:
    00:00 Encode Justice
    06:11 AI ethics and AI safety
    15:49 Humans in the loop
    23:59 AI in social media
    30:42 Deteriorating social skills?
    36:00 AIs identifying as AIs
    43:36 AI influence in elections
    50:32 AIs interacting with human systems

    • 57 min

Customer Reviews

5.0 out of 5
6 ratings

rmoehn,

‘Split interviews’ are a good idea

I like that the episodes are not too long. Unlike the 80,000 Hours Podcast, which has multi-hour interviews with one guest, the FLI Podcast has one guest for multiple shorter episodes. This makes them easier to commit to, especially for someone like me who often doesn’t find the topics intrinsically interesting, but still thinks they’re important to learn about.

Top Podcasts in Technology

Acquired
Ben Gilbert and David Rosenthal
Lex Fridman Podcast
Lex Fridman
Mac & i - der Apple-Podcast
Mac & i
Apple Events (video)
Apple
Mission Klima – Lösungen für die Krise
NDR Info
KREWKAST
Felix Bahlinger, Julian Völzke

You Might Also Like

Dwarkesh Podcast
Dwarkesh Patel
Machine Learning Street Talk (MLST)
Machine Learning Street Talk (MLST)
Clearer Thinking with Spencer Greenberg
Spencer Greenberg
Conversations with Tyler
Mercatus Center at George Mason University
Eye On A.I.
Craig S. Smith
No Priors: Artificial Intelligence | Technology | Startups
Conviction | Pod People