EA Forum Podcast (Curated & popular)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

  1. 1D AGO

    [Linkpost] “Effective Altruism Will Be Great Again” by Mjreard

    This is a link post. Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open. Sequence thinkers will be forgiven and rejoice. In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism's ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur was the darling of the financial industry, the first EA running for Congress calling in enormous financial and personnel resources for his campaign. More than those public-facing facts though, was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen. I actually regret how slow I was to see it at the time. You could just do things, and yes, that's always been true, but at that time, you didn't even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from [...] --- Outline: (02:37) The Retreat (07:03) What Greatness Demands (10:59) Effective Altruism is Good and Right --- First published: March 7th, 2026 Source: https://forum.effectivealtruism.org/posts/uHhcqagBBkhFTTGpq/effective-altruism-will-be-great-again Linkpost URL: https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    14 min
  2. 5D AGO

    “Responsible Scaling Policy v3” by Holden Karnofsky

    All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background. Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update. First, the big picture: I expect some people will be upset about the move away from a “hard commitments”/”binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.) I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...] 
--- Outline: (05:32) How it started: the original goals of RSPs (11:25) How it's going: the good and the bad (11:51) A note on my general orientation toward this topic (14:56) Goal 1: forcing functions for improved risk mitigations (15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard (18:24) A mixed success/failure story: impact on information security (20:42) ASL-4 and ASL-5 prep: the wrong incentives (25:00) When forcing functions do and don't work well (27:52) Goal 2 (testbed for practices and policies that can feed into regulation) (29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations) (30:59) RSP v3's attempt to amplify the good and reduce the bad (36:01) Do these benefits apply only to the most safety-oriented companies? (37:40) A revised, but not overturned, vision for RSPs (39:08) Q&A (39:10) On the move away from implied unilateral commitments (39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk? (40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma? (42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk? (43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious? (45:10) Why did you have to do this now - couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out? 
(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as what we will do if we invoke our escape clause? (47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments? (48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action. (49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea? (50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y. (51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you? (53:19) On other components of the new RSP (53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assurance against a prisoner's dilemma world where each relevant company wishes it could be more careful, but rushes due to pressure from others. (55:29) Why is external review only required at an extreme capability level? Why not just require it now? (58:06) The new commitments are mostly about Risk Reports and Roadmap - what stops companies from just making these really perfunctory? (59:18) Why isn't the RSP more adversarially designed such that once a company adopts it, it will improve their practices even if nobody at the company values safety at all? (01:00:18) What are the consequences of missing your Roadmap commitments? If they aren't dire, will anyone care about them? (01:00:29) OK, but does that apply to other companies too? 
How will Roadmaps force other companies to get things done? (01:00:40) Why aren't the recommendations for industry-wide safety more specific? Why is it built around safety cases instead of ASLs with specific lists of needed risk mitigations? (01:02:06) What is the point of making commitments if you can revise them anytime? --- First published: February 24th, 2026 Source: https://forum.effectivealtruism.org/posts/DGZNAGL2FNJfftwgE/responsible-scaling-policy-v3-1 --- Narrated by TYPE III AUDIO.

    1h 3m
  3. MAR 1

    “Here’s to the Polypropylene Makers” by Jeff Kaufman 🔸

    Six years ago, as covid-19 was rapidly spreading through the US, my sister was working as a medical resident. One day she was handed an N95 and told to "guard it with her life", because there weren't any more coming. N95s are made from meltblown polypropylene, produced from plastic pellets manufactured in a small number of chemical plants. Building more would take too long: we needed these plants producing all the pellets they could. Braskem America operated plants in Marcus Hook PA and Neal WV. If there were infections on-site, the whole operation would need to shut down, and the factories that turned their pellets into mask fabric would stall. Companies everywhere were figuring out how to deal with this risk. The standard approach was staggering shifts, social distancing, temperature checks, and lots of handwashing. This reduced risk, but it was still significant: each shift change was an opportunity for someone to bring an infection from the community into the factory. I don't know who had the idea, but someone said: what if we never left? About eighty people, across both plants, volunteered to move in. The plan was four weeks, twelve-hour [...] --- First published: February 27th, 2026 Source: https://forum.effectivealtruism.org/posts/DBbgMgbPthABqn2No/here-s-to-the-polypropylene-makers --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    4 min
  4. FEB 28

    “You’re not burning out because you’re tired” by stefan.torges

    I burned out badly a few years ago. I've since had several conversations with people in the EA community who are heading toward burnout themselves, and I noticed they were sometimes thinking about it in ways that I worry wouldn't help them. So I want to share what I think is actually going on, and what I wish someone had told me earlier. A theory of burnout There are good models of the mechanism of burnout already out there. Anna Salamon has written about willpower as a kind of internal currency: your conscious planner "earns" trust with your deeper, more visceral processes by choosing actions that nourish them, and goes "credibility-broke" when it spends that trust without replenishing it. Cate Hall describes something similar with her metaphor of the elephant and the rider: the rider promises the elephant rewards in exchange for effort, and burnout is what happens when those promises are broken too many times. I usually explain this in terms of an energy imbalance: you're putting more into your work than you're getting back. Not just in terms of rest, but in terms of meaning, autonomy, connection, a sense of accomplishment, positive feedback. All the things that [...] --- Outline: (00:29) A theory of burnout (02:23) Why EA culture builds effective cages (06:11) What it actually felt like (07:10) What I want to push back on (08:31) What I'd encourage if you're in the grey zone (10:50) What recovery actually looked like (11:55) What I learned, and didn't learn --- First published: February 27th, 2026 Source: https://forum.effectivealtruism.org/posts/2veCceQkhjovCfdbg/you-re-not-burning-out-because-you-re-tired --- Narrated by TYPE III AUDIO.

    14 min
  5. FEB 27

    “CEA’s response to sexual harassment” by Fran

    In this piece, I discuss the sexual harassment I experienced at the Centre for Effective Altruism, the organisation's response, the outcomes of two independent legal reviews, and the final settlement. In the second part of this piece, I make cultural critiques of CEA and EA more broadly. Everything shared here reflects my own experience and perspective. I have anonymised the perpetrator, but I reference specific leadership roles where I believe this to be appropriate and necessary. Trigger warnings: non-specific reference to rape and specific discussion of sexual harassment TL;DR (One-page summary) After I was raped (outside of and unrelated to work), a colleague at CEA wrote and circulated a document that included a sexualised description of my rape, speculation about my mental health, and commentary on my personal life, all without my consent. Several senior leaders, including the CEO and the now-former COO, received this document and took no safeguarding action for approximately nine months. I was never officially informed of its existence; I only learned about it informally through one of the recipients. After I filed a harassment report, the incident was independently investigated and determined to be harassment. Despite this, I was denied access to the document [...] --- Outline: (00:47) TL;DR (One-page summary) (03:38) A more detailed account (03:42) The sexual harassment incident (06:42) The investigation (10:38) The appeal and final report (14:02) Public accountability versus internal processes (16:59) The final settlement agreement (18:54) I still think there is a lot of good in effective altruism (20:33) Various cultural reflections (20:50) 1) Sexual harassment is not the natural result of an open and high-trust culture, it is the natural result of misogyny. (22:46) 2) The danger of EA's fixation on intent and why "he didn't mean it" is not good enough. (24:11) 3) Cowardice and deference at CEA. 
(26:30) 4) Women in EA are often encouraged to try and settle things informally or to trust their organisations -- another abuse of high-trust culture. (28:45) 5) A harmful misunderstanding of trauma and mitigating vs. aggravating factors. (30:27) 6) I have encountered so many EAs who believe it is easy for victims to speak publicly, or to share their experiences with other community members. And thus, if they aren't regularly hearing from victims, harassment must be rare. (33:02) To any women who have faced something similar (34:40) Acknowledgements --- First published: February 27th, 2026 Source: https://forum.effectivealtruism.org/posts/XxXnPoGQ2eKsQx3FE/cea-s-response-to-sexual-harassment --- Narrated by TYPE III AUDIO.

    36 min
  6. FEB 22

    “500k mid-career professionals want to do more good with their careers. Can we help them?” by Dom Jackman

    I'm Dom Jackman. I founded Escape the City in 2010 to help people leave corporate jobs and find work that matters. 16 years later, 500k+ professionals have used the platform - mostly people 5-15 years into careers at places like McKinsey, Deloitte, Google, the big banks - who feel a growing gap between what they do all day and what they actually care about. I'm not from the EA community. I'm writing this because I think there's a real overlap between the people I work with and what the EA talent ecosystem actually needs. I want to test that before investing serious time in it. What I've noticed Reading through talent discussions on this forum, there's a consistent theme: the pipeline is strongest for early-career people. 80,000 Hours does great work for students and recent grads. Probably Good provides broad guidance. BlueDot, MATS, Talos build skills for specific cause areas. But mid-career professionals with real commercial experience keep coming up as underserved. The "Gaps and opportunities in the EA talent & recruiting landscape" post nails it: these people "don't have 'EA capital,' may be poorly networked and might feel alienated by current messaging." The post calls for "custom entry [...] --- Outline: (00:51) What I've noticed (01:40) What I see every day (02:28) What I'm thinking about building (03:24) Honest questions (04:39) Not looking for funding (04:58) Artifacts --- First published: February 11th, 2026 Source: https://forum.effectivealtruism.org/posts/H9pb6DEasgzjCff9a/500k-mid-career-professionals-want-to-do-more-good-with --- Narrated by TYPE III AUDIO.

    6 min
  7. FEB 19

    “Our Levels of Ambition Should Match The Problems We’re Solving” by Matt Beard

    [I am a career advisor at 80,000 Hours. I've been thinking about something Will MacAskill said recently in an interview with my shrimp-friend Matt: "should people be more ambitious? I genuinely think yes. I think people systematically aren't ambitious enough, so the answer is almost always yes. Again, the ambition you have should match the scale of the problems that we're facing—and the scale of those problems is very large indeed." This post is my reflection on these ideas.] ************ My last post argued that if you want to have a great career, your goal should not be to get a job. Instead, you should choose an important problem to work on, then “get good and be known.” Building skills will allow you to solve problems and reap the benefits. In the ~500 career advising calls I’ve hosted in the past year, the most common response I’ve heard has been: “Okay, how good? How well known? How many hours of practice will get me there?” Most people want to calibrate their ambitions so that the time and energy they invest feels worth it to them. I empathize with this, but when I’m honest– with myself for my own [...] --- Outline: (06:28) Jensen Huang is more ambitious than you (12:58) Most extreme ambition is misplaced (17:45) Okay, how can altruistic people aim higher and work harder? (21:17) Ambition at the End of the Human Era (24:03) Closing Caveats - Efficiency, Burnout, and Choosing What Matters --- First published: February 12th, 2026 Source: https://forum.effectivealtruism.org/posts/7qsisgX3cwETJuPNz/our-levels-of-ambition-should-match-the-problems-we-re --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    26 min
