Future of Life Institute Podcast

Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

  1. Why Building Superintelligence Means Human Extinction (with Nate Soares)

    18 SEP

    Why Building Superintelligence Means Human Extinction (with Nate Soares)

    Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.

    LINKS:
    If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
    Machine Intelligence Research Institute - https://intelligence.org
    Nate Soares - https://intelligence.org/team/nate-soares/

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) Episode Preview
    (01:05) Introduction and Book Discussion
    (03:34) Psychology of AI Alarmism
    (07:52) Intelligence Threshold Effects
    (11:38) Growing vs Crafting AI
    (18:23) Illusion of AI Control
    (26:45) Why Iteration Won't Work
    (34:35) The No Retries Problem
    (38:22) Computer Security Lessons
    (49:13) The Cursed Problem
    (59:32) Multiple Curses and Complications
    (01:09:44) AI's Infrastructure Advantage
    (01:16:26) Grading Humanity's Response
    (01:22:55) Time Needed for Solutions
    (01:32:07) International Ban Necessity

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    1 h 40 min
  2. What Markets Tell Us About AI Timelines (with Basil Halperin)

    1 SEP

    What Markets Tell Us About AI Timelines (with Basil Halperin)

    Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress.

    Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf
    Read more about Basil's work here: https://basilhalperin.com/

    CHAPTERS:
    (00:00) Episode Preview
    (00:49) Introduction and Background
    (05:19) Efficient Market Hypothesis Explained
    (10:34) Markets and Low Probability Events
    (16:09) Information Diffusion on Wall Street
    (24:34) Stock Prices vs Interest Rates
    (28:47) New Goods Counter-Argument
    (40:41) Why Focus on Interest Rates
    (45:00) AI Secrecy and Market Efficiency
    (50:52) Short Timeline Disagreements
    (55:13) Wealth Concentration Effects
    (01:01:55) Alternative Economic Indicators
    (01:12:47) Benchmarks vs Economic Impact
    (01:25:17) Open Research Questions

    SOCIAL LINKS:
    Website: https://future-of-life-institute-podcast.aipodcast.ing
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    PRODUCED BY: https://aipodcast.ing

    1 h 36 min
  3. AGI Security: How We Defend the Future (with Esben Kran)

    22 AUG

    AGI Security: How We Defend the Future (with Esben Kran)

    Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments.

    Learn more about Esben's work at: https://blog.kran.ai

    00:00 – Intro and preview
    01:13 – AGI security vs traditional cybersecurity
    02:36 – Rebuilding societal infrastructure for embedded security
    03:33 – Sentware: adaptive, self-improving malware
    04:59 – New attack surfaces
    05:38 – Social media as misaligned AI
    06:46 – Personal vs societal defenses
    09:13 – Why private companies underinvest in security
    13:01 – Security as the foundation for any AI deployment
    14:15 – Oversight without a surveillance state
    17:19 – Protocols for safe agent communication
    20:25 – The expensive internet hypothesis
    23:30 – Distributed safety for companies and governments
    28:20 – Cloudflare’s “agent labyrinth” example
    31:08 – Positive vision for distributed security
    33:49 – Human value when labor is automated
    41:19 – Encoding law for machines: contracts and enforcement
    44:36 – DarkBench: detecting manipulative LLM behavior
    55:22 – The AGI endgame: default path vs designed future
    57:37 – Powerful tool AI
    01:09:55 – Fast takeoff risk
    01:16:09 – Realistic optimism

    1 h 18 min
  4. Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)

    15 AUG

    Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)

    Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.

    Follow Benjamin's work at: https://benjamintodd.substack.com

    Timestamps:
    00:00 What are reasoning models?
    04:04 Reinforcement learning supercharges reasoning
    05:06 Reasoning models vs. agents
    10:04 Economic impact of automated math/code
    12:14 Compute as a bottleneck
    15:20 Shift from giant pre-training to post-training/agents
    17:02 Three feedback loops: algorithms, chips, robots
    20:33 How fast could an algorithmic loop run?
    22:03 Chip design and production acceleration
    23:42 Industrial/robotics loop and growth dynamics
    29:52 Society’s slow reaction; “warning shots”
    33:03 Robotics: software and hardware bottlenecks
    35:05 Scaling robot production
    38:12 Robots at ~$0.20/hour?
    43:13 Regulation and humans-in-the-loop
    49:06 Personal prep: why it still matters
    52:04 Build an information network
    55:01 Save more money
    58:58 Land, real estate, and scarcity in an AI world
    01:02:15 Valuable skills: get close to AI, or far from it
    01:06:49 Fame, relationships, citizenship
    01:10:01 Redistribution, welfare, and politics under AI
    01:12:04 Try to become more resilient
    01:14:36 Information hygiene
    01:22:16 Seven-year horizon and scaling limits by ~2030

    1 h 27 min
  5. How AI Could Help Overthrow Governments (with Tom Davidson)

    17 JUL

    How AI Could Help Overthrow Governments (with Tom Davidson)

    On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats.

    Learn more about Tom's work here: https://www.forethought.org

    Timestamps:
    00:00:00 Preview: why preventing AI-enabled coups matters
    00:01:24 What do we mean by an “AI-enabled coup”?
    00:01:59 Capabilities AIs would need (persuasion, strategy, productivity)
    00:02:36 Cyber-offense and the road to robotized militaries
    00:05:32 Step-by-step example of an AI-enabled military coup
    00:08:35 How AI-enabled coups would differ from historical coups
    00:09:24 Democratic backsliding (Venezuela, Hungary, U.S. parallels)
    00:12:38 Singular loyalties, secret loyalties, exclusive access
    00:14:01 Secret-loyalty scenario: CEO with hidden control
    00:18:10 From sleeper agents to sophisticated covert AIs
    00:22:22 Exclusive-access threat: one project races ahead
    00:29:03 Could one country outgrow the rest of the world?
    00:40:00 Could a single company dominate global GDP?
    00:47:01 Autocracies vs democracies
    00:54:43 Mitigations for singular and secret loyalties
    01:06:25 Guardrails, monitoring, and controlled-use APIs
    01:12:38 Using AI itself to preserve checks-and-balances
    01:24:53 Risk indicators to watch for AI-enabled coups
    01:33:05 Tom’s risk estimates for the next 5 and 30 years
    01:46:50 How you can help – research, policy, and careers

    1 h 54 min

Ratings and Reviews

5
out of 5
2 ratings

