Future of Life Institute Podcast

Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

  1. Can Defense in Depth Work for AI? (with Adam Gleave)

    5 HRS AGO

    Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI's vertically integrated approach spanning technical research, policy advocacy, and field-building. LINKS: Adam Gleave - https://www.gleave.me FAR.AI - https://www.far.ai The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) A Positive Post-AGI Vision (10:07) Surviving Gradual Disempowerment (16:34) Defining Powerful AIs (27:02) Solving Continual Learning (35:49) The Just-in-Time Safety Problem (42:14) Can Defense-in-Depth Work? (49:18) Fixing Alignment Problems (58:03) Safer Training Formulas (01:02:24) The Role of Interpretability (01:09:25) FAR.AI's Vertically Integrated Approach (01:14:14) Hiring at FAR.AI (01:16:02) The Future of Governance SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    1 hr 19 min
  2. Why Building Superintelligence Means Human Extinction (with Nate Soares)

    SEPT 18

    Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence. LINKS: If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com Machine Intelligence Research Institute - https://intelligence.org Nate Soares - https://intelligence.org/team/nate-soares/ PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:05) Introduction and Book Discussion (03:34) Psychology of AI Alarmism (07:52) Intelligence Threshold Effects (11:38) Growing vs Crafting AI (18:23) Illusion of AI Control (26:45) Why Iteration Won't Work (34:35) The No Retries Problem (38:22) Computer Security Lessons (49:13) The Cursed Problem (59:32) Multiple Curses and Complications (01:09:44) AI's Infrastructure Advantage (01:16:26) Grading Humanity's Response (01:22:55) Time Needed for Solutions (01:32:07) International Ban Necessity SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    1 hr 40 min
  3. What Markets Tell Us About AI Timelines (with Basil Halperin)

    SEPT 1

    Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress. Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf Read more about Basil's work here: https://basilhalperin.com/ CHAPTERS: (00:00) Episode Preview (00:49) Introduction and Background (05:19) Efficient Market Hypothesis Explained (10:34) Markets and Low Probability Events (16:09) Information Diffusion on Wall Street (24:34) Stock Prices vs Interest Rates (28:47) New Goods Counter-Argument (40:41) Why Focus on Interest Rates (45:00) AI Secrecy and Market Efficiency (50:52) Short Timeline Disagreements (55:13) Wealth Concentration Effects (01:01:55) Alternative Economic Indicators (01:12:47) Benchmarks vs Economic Impact (01:25:17) Open Research Questions SOCIAL LINKS: Website: https://future-of-life-institute-podcast.aipodcast.ing Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP PRODUCED BY: https://aipodcast.ing

    1 hr 36 min
  4. AGI Security: How We Defend the Future (with Esben Kran)

    AUG 22

    Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments. Learn more about Esben's work at: https://blog.kran.ai 00:00 – Intro and preview 01:13 – AGI security vs traditional cybersecurity 02:36 – Rebuilding societal infrastructure for embedded security 03:33 – Sentware: adaptive, self-improving malware 04:59 – New attack surfaces 05:38 – Social media as misaligned AI 06:46 – Personal vs societal defenses 09:13 – Why private companies underinvest in security 13:01 – Security as the foundation for any AI deployment 14:15 – Oversight without a surveillance state 17:19 – Protocols for safe agent communication 20:25 – The expensive internet hypothesis 23:30 – Distributed safety for companies and governments 28:20 – Cloudflare's "agent labyrinth" example 31:08 – Positive vision for distributed security 33:49 – Human value when labor is automated 41:19 – Encoding law for machines: contracts and enforcement 44:36 – DarkBench: detecting manipulative LLM behavior 55:22 – The AGI endgame: default path vs designed future 57:37 – Powerful tool AI 01:09:55 – Fast takeoff risk 01:16:09 – Realistic optimism

    1 hr 18 min
  5. Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)

    AUG 15

    Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene. Follow Benjamin's work at: https://benjamintodd.substack.com Timestamps: 00:00 What are reasoning models? 04:04 Reinforcement learning supercharges reasoning 05:06 Reasoning models vs. agents 10:04 Economic impact of automated math/code 12:14 Compute as a bottleneck 15:20 Shift from giant pre-training to post-training/agents 17:02 Three feedback loops: algorithms, chips, robots 20:33 How fast could an algorithmic loop run? 22:03 Chip design and production acceleration 23:42 Industrial/robotics loop and growth dynamics 29:52 Society's slow reaction; "warning shots" 33:03 Robotics: software and hardware bottlenecks 35:05 Scaling robot production 38:12 Robots at ~$0.20/hour? 43:13 Regulation and humans-in-the-loop 49:06 Personal prep: why it still matters 52:04 Build an information network 55:01 Save more money 58:58 Land, real estate, and scarcity in an AI world 01:02:15 Valuable skills: get close to AI, or far from it 01:06:49 Fame, relationships, citizenship 01:10:01 Redistribution, welfare, and politics under AI 01:12:04 Try to become more resilient 01:14:36 Information hygiene 01:22:16 Seven-year horizon and scaling limits by ~2030

    1 hr 27 min

Ratings and Reviews

5 out of 5 (2 ratings)
