EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 1D AGO

    “Taking ethics seriously, and enjoying the process” by kuhanj

    Here's a talk I gave at an EA university group organizers’ retreat recently, which I've been strongly encouraged to share on the forum. I'd like to make it clear that I don't recommend or endorse everything discussed in this talk (one example in particular, which hopefully will be self-evident), but I do think serious shifts in how we engage with ethics and EA would be quite beneficial for the world.

    Part 1: Taking ethics seriously

    To set context for this talk, I want to go through an Our World in Data-style bird's-eye view of how things are trending across key issues often discussed in EA. This is to help get better intuitions for questions like “How well will the future go by default?” and “Is the world on track to eventually solve the most pressing problems?” - which can inform high-level strategy questions like “Should we generally be doing more [...]

    Outline:
    (00:32) Part 1: Taking ethics seriously
    (04:26) Incentive shifts and moral progress
    (05:07) What is incentivized by society?
    (07:08) Heroic Responsibility
    (11:30) Excerpts from Strangers Drowning
    (14:37) Opening our eyes to what is unbearable
    (18:07) Increasing effectiveness vs. increasing altruism
    (20:20) Cognitive dissonance
    (21:27) Paragons of moral courage
    (23:15) The monk who set himself on fire to protect Buddhism, and didn't flinch an inch
    (27:46) What do I most deeply want to honour in this life?
    (29:43) Moral Courage and defending EA
    (31:55) Acknowledging opportunity cost and grappling with guilt
    (33:33) Part 2: Enjoying the process
    (33:38) Celebrating what's really beautiful - what our hearts care about
    (42:08) Enjoying effective altruism
    (44:43) Training our minds to cultivate the qualities we endorse
    (46:54) Meditation isn't a silver bullet
    (52:35) The timeless words of MLK

    First published: October 4th, 2025
    Source: https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process

    Narrated by TYPE III AUDIO.

    54 min
  2. 3D AGO

    [Linkpost] “The Four Pillars: A Hypothesis for Countering Catastrophic Biological Risk” by ASB

    This is a link post. Biological risks are more severe than has been widely appreciated. Recent discussions of mirror bacteria highlight an extreme scenario: a single organism that could infect and kill humans, plants, and animals, exhibit environmental persistence in soil or dust, and might be capable of spreading worldwide within several months. In the worst-case scenario, this could pose an existential risk to humanity, especially if the responses/countermeasures were inadequate. Less severe pandemic pathogens could still cause hundreds of millions (or billions) of casualties if they were engineered to cause harm. Preventing such catastrophes should be a top priority for humanity. However, if prevention fails, it would also be prudent to have a backup plan. One way of doing this would be to enumerate the types of pathogens that might be threatening (e.g. viruses, bacteria, fungi, etc.), enumerate the subtypes (e.g. adenoviruses, coronaviruses, paramyxoviruses, etc.), analyze the [...]

    Outline:
    (04:20) PPE
    (09:56) Biohardening
    (14:36) Detection
    (17:00) Expression of interest and acknowledgements

    The original text contained 34 footnotes which were omitted from this narration.

    First published: October 2nd, 2025
    Source: https://forum.effectivealtruism.org/posts/33t5jPzxEcFXLCPjq/the-four-pillars-a-hypothesis-for-countering-catastrophic
    Linkpost URL: https://defensesindepth.bio/the-four-pillars-a-hypothesis-for-countering-catastrophic-biological-risk/

    Narrated by TYPE III AUDIO.

    18 min
  3. 3D AGO

    “The day Elon Musk’s AI became a Nazi (and what it means for AI safety) | New video from AI in Context” by ChanaMessinger, Aric Floyd

    If you just want a link to the video, watch it here!

    What's AI in Context? (Skip if you already know)

    AI in Context is 80,000 Hours’ new(ish) YouTube channel, hosted by Aric Floyd. We’re trying to do high-production storytelling that also informs people about transformative AI and its risks (but there are a lot of paths our future strategy could take). We talk about our launch more here.

    The MechaHitler video

    The EA Forum probably knows disproportionately well what MechaHitler is, but not everyone is terminally online, so, a summary: Earlier this year, Elon Musk's AI model, Grok, which can interact with users and post directly to Twitter, suddenly turned from being a fairly neutral commentator on events to a sexually harassing, Nazi-minded troll calling itself ‘MechaHitler’. Our new video is about that incident and how it happened, which means talking about what specifically happened (an accidental system [...]

    Outline:
    (00:20) What's AI in Context? (Skip if you already know)
    (00:45) The MechaHitler video
    (01:29) Why this?
    (03:29) Logistics (only read if you're interested)
    (04:24) Strategy and future of the video program
    (05:29) Subscribing and sharing
    (05:53) Request for feedback

    First published: October 2nd, 2025
    Source: https://forum.effectivealtruism.org/posts/trh4Km9KRedYSn3K3/the-day-elon-musk-s-ai-became-a-nazi-and-what-it-means-for

    Narrated by TYPE III AUDIO.

    7 min
  4. 3D AGO

    “Andrew Snyder-Beattie on the low-tech plan to patch humanity’s greatest weakness” by 80000_Hours

    By Robert Wiblin
    Watch on YouTube | Listen on Spotify | Read the transcript

    Episode summary

    “If we can get their cost down to $10, this becomes one of the most cost-effective ways of preventing respiratory transmission. The shelf life is 20 years. That means basically 50 cents per person per year of protection. … If you’re a government it makes a lot of sense to just stockpile enough to cover your entire population. Right now we spend about $10 billion a year on missile defence. Stockpiling one of these for every single person in the US would be 1% the cost of that.” — Andrew Snyder-Beattie (a rough check of this arithmetic appears in the sketch after the episode list)

    Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity's single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong. Andrew's job at Open Philanthropy is to spend [...]

    Outline:
    (00:28) Episode summary
    (03:59) The interview in a nutshell
    (04:25) 1. Two primary classes of biological threats could pose an existential risk
    (05:27) 2. The four pillars plan offers a robust, defence-in-depth strategy
    (08:22) 3. Other catastrophic biorisks, like agricultural collapse, are less concerning
    (09:21) 4. We urgently need entrepreneurial people to execute this plan
    (10:17) Highlights
    (10:20) The worst-case scenario: mirror bacteria
    (15:44) Why antibiotics aren't enough to fight mirror bacteria
    (18:58) The most cost-effective way governments could prevent respiratory transmission among their populations
    (22:17) Why Andrew works on biorisks rather than AI
    (26:54) Everyone was wrong: biorisks are defence dominant in the limit
    (32:23) The four pillars plan -- and how listeners can help

    First published: October 2nd, 2025
    Source: https://forum.effectivealtruism.org/posts/JopMdWgtthCbEFxk2/andrew-snyder-beattie-on-the-low-tech-plan-to-patch-humanity

    Narrated by TYPE III AUDIO.

    37 min
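The cost claim in Andrew Snyder-Beattie's quote in episode 4 compresses a quick calculation. Here is a minimal back-of-the-envelope sketch in Python, assuming a US population of roughly 330 million and annualising the one-off stockpile purchase over the quoted 20-year shelf life; the population figure and the annualisation choice are assumptions, not figures from the episode.

```python
# Rough check of the stockpiling arithmetic quoted in the episode summary.
# Quoted figures: $10 per unit, 20-year shelf life, ~$10 billion/year spent
# on missile defence. Assumed (not from the source): a US population of
# ~330 million, and spreading the one-off purchase cost over the shelf life.

unit_cost_usd = 10                    # quoted target cost per unit of PPE
shelf_life_years = 20                 # quoted shelf life
us_population = 330_000_000           # assumption: rough 2020s US population
missile_defence_usd_per_year = 10e9   # quoted ~$10 billion per year

cost_per_person_per_year = unit_cost_usd / shelf_life_years
one_off_stockpile_usd = unit_cost_usd * us_population
annualised_stockpile_usd = one_off_stockpile_usd / shelf_life_years

print(f"Per person per year: ${cost_per_person_per_year:.2f}")              # $0.50
print(f"One-off stockpile:   ${one_off_stockpile_usd / 1e9:.1f} billion")   # $3.3 billion
print(f"Annualised share of missile-defence spending: "
      f"{annualised_stockpile_usd / missile_defence_usd_per_year:.1%}")     # ~1.7%
```

On these assumptions the 50-cents-per-person-per-year figure falls straight out of the quoted numbers, and the stockpile comes out at a low single-digit percentage of annual missile-defence spending; the exact "1%" in the quote will depend on the population and cost assumptions the speaker used.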
