EA Forum Podcast (Curated & popular)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

  1. 21H AGO

    “GHD discussion here is slowly dying” by NickLaing

    Epistemic status: A bit sad (I know that's not an epistemic status) The best development Forum on the internet? 3 years ago a headline “FTX SBF blah blah blah” triggered my memory “oh that's right, that effective altruism thing”. A few years earlier I had read “Doing Good Better” in our Northern Ugandan hut, and was excited by how the ideas matched my experience of seeing the BINGOs [1] on the ground here doing not-much-good at all. Soon after my wife dragged me to Cambridge for a year and I joined an EA group. I was drawn into a beautiful crew of good, earnest people trying to do the best they could with their lives [2], something I’d only seen before among a few people at church. I was most impressed by their veganism, practising what they preached. But after going back to Uganda I forgot about the whole EA thing. Then 3 years later the FTX headlines and a Google search led me to the EA forum, which to my delight turned out to be the best place on the internet to discuss global health and development. My first foray was a not-very-good post [...] --- Outline: (00:16) The best development Forum on the internet? (01:29) A steady decline (02:59) Why? (04:34) Is this fine? (05:02) Is this less fine? (06:29) How to Boost GHD discourse? --- First published: March 15th, 2026 Source: https://forum.effectivealtruism.org/posts/4jbbjTTJ87baMrkY4/ghd-discussion-here-is-slowly-dying --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    7 min
  2. 2D AGO

    “The case for AI safety capacity-building work” by abergal

    I work on the capacity-building team on the Global Catastrophic Risks half of Coefficient Giving (formerly known as Open Philanthropy). Our remit is, roughly, to increase the amount of talent aiming to prevent unprecedented, globally catastrophic events. These days, we’re mostly focused on AI, and we’ve funded a number of projects and grantees that readers of this post might be familiar with, including MATS, BlueDot Impact, Constellation, 80,000 Hours, CEA, the Curve, FAR.AI's events, university groups, and many other workshops and projects. The post aims to make the case that, broadly, capacity-building work (including on AI risk) has been and continues to be extremely impactful, and to encourage people to consider pursuing relevant projects and careers. This post is written from my personal perspective; that said, my sense is that a number of CG staff and others in the AI safety space share my views. I include some quotes from them at the end of this post. I’m writing this post partly out of a desire to correct what I perceive as an asymmetry in terms of how excited I and others at Coefficient Giving are about this kind of work vs. how much people in the EA and AI [...] --- Outline: (02:15) The case for capacity-building work (04:11) Surveys (06:49) Testimonials (08:21) Neel Nanda (Senior Research Scientist at Google DeepMind) (11:15) Max Nadeau (Associate Program Officer (Technical AI Safety) at Coefficient Giving) (12:51) Rachel Weinberg (founder and former head of The Curve, currently at AI Futures Project) (14:30) Marius Hobbhahn (CEO and founder of Apollo Research) (16:38) Adam Kaufman (member of technical staff at Redwood Research) (18:10) Gabriel Wu (member of technical staff (alignment) at OpenAI) (19:37) Catherine Brewer (Senior Program Associate (AI Governance) at Coefficient Giving) (21:12) Aric Floyd (video host for AI in Context) (23:12) Ryan Kidd (Director of MATS) (25:43) What tends to work? (28:34) What's good to do now?
(29:31) Who should be doing this work? (31:02) What would doing this work look like? (31:13) Working at an organization doing good work in the space (31:46) Constellation - CEO (32:46) Kairos - various early generalist positions (33:42) Starting or running your own capacity-building project or organization (34:07) Working on a capacity-building project part-time (34:30) Subscribing to Multiplier, a Substack with thoughts from our team (and other AI grantmaking staff at CG) (34:39) Letting our team know (35:03) Social proof (35:25) Julian Hazell, AI governance and policy at Coefficient Giving (36:19) Trevor Levin, AI governance and policy at Coefficient Giving (36:51) Ryan Greenblatt, Chief Scientist at Redwood Research (37:21) Buck Shlegeris, CEO of Redwood Research (39:52) Appendix --- First published: March 10th, 2026 Source: https://forum.effectivealtruism.org/posts/rAqKSSXankvys2Fzu/the-case-for-ai-safety-capacity-building-work --- Narrated by TYPE III AUDIO.

    42 min
  3. 4D AGO

    “Some good news: Ahold Delhaize to go cage-free” by ElliotTep

    For those not working in the space, this probably isn't on your radar, but the animal advocacy movement just secured a huge win with Ahold Delhaize, convincing the fourth-largest supermarket company in the US to set the strongest cage-free policy of any large US retailer: A roadmap with benchmarks to fully eliminate caged egg cartons, expand cage-free offerings, and increase the percentage of cage-free sales. A pledge to annually report on its progress. At all 2,000+ locations, placing large, promotional shelf tags in front of cage-free cartons to differentiate cage-free and caged cartons for consumers.  This was a giant campaign. My understanding is that other companies were watching to see if this campaign would succeed or fail, to see if they would need to follow suit. In addition to the animals helped, this win will add pressure for competitors to do the same. This was a coordinated effort among many groups, including: Center For Responsible Food Business; Animal Equality; International Council for Animal Welfare; The Humane League; Mercy For Animals; Compassion in World Farming; Coalition to Abolish the Fur Trade and Animal Activist Collective. Animal Equality states that this will affect 5-7 million hens. I know a lot [...] --- First published: March 4th, 2026 Source: https://forum.effectivealtruism.org/posts/2wePKArWWr4Xx6Zvf/some-good-news-ahold-delhaize-to-go-cage-free --- Narrated by TYPE III AUDIO.

    2 min
  4. MAR 8

    [Linkpost] “Effective Altruism Will Be Great Again” by Mjreard

    This is a link post. Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open. Sequence thinkers will be forgiven and rejoice In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism's ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur was the darling of the financial industry, the first EA running for Congress calling in enormous financial and personnel resources for his campaign. More than those public-facing facts though, was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen. I actually regret how slow I was to see it at the time. You could just do things, and yes, that's always been true, but at that time, you didn’t even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from [...] --- Outline: (02:37) The Retreat (07:03) What Greatness Demands (10:59) Effective Altruism is Good and Right --- First published: March 7th, 2026 Source: https://forum.effectivealtruism.org/posts/uHhcqagBBkhFTTGpq/effective-altruism-will-be-great-again Linkpost URL:https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    14 min
  5. MAR 5

    “Responsible Scaling Policy v3” by Holden Karnofsky

    All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background. Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update. First, the big picture: I expect some people will be upset about the move away from a “hard commitments”/“binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.) I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]
--- Outline: (05:32) How it started: the original goals of RSPs (11:25) How it's going: the good and the bad (11:51) A note on my general orientation toward this topic (14:56) Goal 1: forcing functions for improved risk mitigations (15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard (18:24) A mixed success/failure story: impact on information security (20:42) ASL-4 and ASL-5 prep: the wrong incentives (25:00) When forcing functions do and don't work well (27:52) Goal 2 (testbed for practices and policies that can feed into regulation) (29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations) (30:59) RSP v3's attempt to amplify the good and reduce the bad (36:01) Do these benefits apply only to the most safety-oriented companies? (37:40) A revised, but not overturned, vision for RSPs (39:08) Q&A (39:10) On the move away from implied unilateral commitments (39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk? (40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma? (42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk? (43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious? (45:10) Why did you have to do this now - couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?
(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as what we will do if we invoke our escape clause? (47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments? (48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action. (49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea? (50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y. (51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you? (53:19) On other components of the new RSP (53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assurance against a prisoner's dilemma world where each relevant company wishes it could be more careful, but rushes due to pressure from others. (55:29) Why is external review only required at an extreme capability level? Why not just require it now? (58:06) The new commitments are mostly about Risk Reports and Roadmap - what stops companies from just making these really perfunctory? (59:18) Why isn't the RSP more adversarially designed such that once a company adopts it, it will improve their practices even if nobody at the company values safety at all? (01:00:18) What are the consequences of missing your Roadmap commitments? If they aren't dire, will anyone care about them? (01:00:29) OK, but does that apply to other companies too?
How will Roadmaps force other companies to get things done? (01:00:40) Why aren't the recommendations for industry-wide safety more specific? Why is it built around safety cases instead of ASLs with specific lists of needed risk mitigations? (01:02:06) What is the point of making commitments if you can revise them anytime? --- First published: February 24th, 2026 Source: https://forum.effectivealtruism.org/posts/DGZNAGL2FNJfftwgE/responsible-scaling-policy-v3-1 --- Narrated by TYPE III AUDIO.

    1h 3m
  6. MAR 1

    “Here’s to the Polypropylene Makers” by Jeff Kaufman 🔸

    Six years ago, as covid-19 was rapidly spreading through the US, my sister was working as a medical resident. One day she was handed an N95 and told to "guard it with her life", because there weren't any more coming. N95s are made from meltblown polypropylene, produced from plastic pellets manufactured in a small number of chemical plants. Building more would take too long: we needed these plants producing all the pellets they could. Braskem America operated plants in Marcus Hook PA and Neal WV. If there were infections on-site, the whole operation would need to shut down, and the factories that turned their pellets into mask fabric would stall. Companies everywhere were figuring out how to deal with this risk. The standard approach was staggering shifts, social distancing, temperature checks, and lots of handwashing. This reduced risk, but it was still significant: each shift change was an opportunity for someone to bring an infection from the community into the factory. I don't know who had the idea, but someone said: what if we never left? About eighty people, across both plants, volunteered to move in. The plan was four weeks, twelve-hour [...] --- First published: February 27th, 2026 Source: https://forum.effectivealtruism.org/posts/DBbgMgbPthABqn2No/here-s-to-the-polypropylene-makers --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    4 min
