Future of Life Institute Podcast

Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

  1. What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)

    2 DAYS AGO

    Karl Koch is the founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and the challenges of maintaining transparency as AI development accelerates.

    LINKS:
    - About the AI Whistleblower Initiative
    - Karl Koch

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) Episode Preview
    (00:55) Starting the Whistleblower Initiative
    (05:43) Current State of Protections
    (13:04) Path to Optimal Policies
    (23:28) A Whistleblower's First Steps
    (32:29) Life After Whistleblowing
    (39:24) Evaluating Company Policies
    (48:19) Alternatives to Whistleblowing
    (55:24) High-Stakes Future Scenarios
    (01:02:27) AI and National Security

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    DISCLAIMERS:
    - AIWI does not request, encourage or counsel potential whistleblowers or listeners of this podcast to act unlawfully.
    - This is not legal advice, and if you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations.

    1h 8m
  2. Can Machines Be Truly Creative? (with Maya Ackerman)

    24 OCT

    Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.

    LINKS:
    - Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
    - Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Defining Human Creativity
    (02:58) Machine and AI Creativity
    (06:25) Measuring Subjective Creativity
    (10:07) Creativity in Animals
    (13:43) Alignment Damages Creativity
    (19:09) Creativity is Hallucination
    (26:13) Humble Creative Machines
    (30:50) Incentives and Replacement
    (40:36) Analogies for the Future
    (43:57) Collaborating with AI
    (52:20) Reinforcement Learning & Slop
    (55:59) AI in Education

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    1h 2m
  3. From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)

    14 OCT

    Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry.

    LINKS:
    - Parmy Olson on X (Twitter): https://x.com/parmy
    - Parmy Olson’s Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson
    - Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) Episode Preview
    (01:18) Introducing Parmy Olson
    (02:37) Personalities Driving AI
    (06:45) From Research to Products
    (12:45) Has the Mission Changed?
    (19:43) The Role of Regulators
    (21:44) Skepticism of AI Utopia
    (28:00) The Human Cost
    (33:48) Embracing Controversy
    (40:51) The Role of Journalism
    (41:40) Big Tech's Influence

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    47 min
  4. Can Defense in Depth Work for AI? (with Adam Gleave)

    3 OCT

    Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI’s vertically integrated approach spanning technical research, policy advocacy, and field-building.

    LINKS:
    - Adam Gleave: https://www.gleave.me
    - FAR.AI: https://www.far.ai
    - The Cognitive Revolution Podcast: https://www.cognitiverevolution.ai

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) A Positive Post-AGI Vision
    (10:07) Surviving Gradual Disempowerment
    (16:34) Defining Powerful AIs
    (27:02) Solving Continual Learning
    (35:49) The Just-in-Time Safety Problem
    (42:14) Can Defense-in-Depth Work?
    (49:18) Fixing Alignment Problems
    (58:03) Safer Training Formulas
    (01:02:24) The Role of Interpretability
    (01:09:25) FAR.AI's Vertically Integrated Approach
    (01:14:14) Hiring at FAR.AI
    (01:16:02) The Future of Governance

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    1h 19m
  5. Why Building Superintelligence Means Human Extinction (with Nate Soares)

    18 SEPT

    Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.

    LINKS:
    - If Anyone Builds It, Everyone Dies: https://ifanyonebuildsit.com
    - Machine Intelligence Research Institute: https://intelligence.org
    - Nate Soares: https://intelligence.org/team/nate-soares/

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) Episode Preview
    (01:05) Introduction and Book Discussion
    (03:34) Psychology of AI Alarmism
    (07:52) Intelligence Threshold Effects
    (11:38) Growing vs Crafting AI
    (18:23) Illusion of AI Control
    (26:45) Why Iteration Won't Work
    (34:35) The No Retries Problem
    (38:22) Computer Security Lessons
    (49:13) The Cursed Problem
    (59:32) Multiple Curses and Complications
    (01:09:44) AI's Infrastructure Advantage
    (01:16:26) Grading Humanity's Response
    (01:22:55) Time Needed for Solutions
    (01:32:07) International Ban Necessity

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    1h 40m
  6. What Markets Tell Us About AI Timelines (with Basil Halperin)

    1 SEPT

    Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress.

    Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf
    Read more about Basil's work here: https://basilhalperin.com/

    CHAPTERS:
    (00:00) Episode Preview
    (00:49) Introduction and Background
    (05:19) Efficient Market Hypothesis Explained
    (10:34) Markets and Low Probability Events
    (16:09) Information Diffusion on Wall Street
    (24:34) Stock Prices vs Interest Rates
    (28:47) New Goods Counter-Argument
    (40:41) Why Focus on Interest Rates
    (45:00) AI Secrecy and Market Efficiency
    (50:52) Short Timeline Disagreements
    (55:13) Wealth Concentration Effects
    (01:01:55) Alternative Economic Indicators
    (01:12:47) Benchmarks vs Economic Impact
    (01:25:17) Open Research Questions

    SOCIAL LINKS:
    Website: https://future-of-life-institute-podcast.aipodcast.ing
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    PRODUCED BY: https://aipodcast.ing

    1h 36m
