EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 1 day ago

    “What giving people money doesn’t fix” by GiveDirectly

    by Caitlin Tulloch, Senior Director of Research, Learning, and Product at GiveDirectly. Caitlin is an expert in evidence-based policy and cost-effectiveness, and formerly served as Deputy Director in USAID's Office of the Chief Economist.

    Summary:
    ❌ Cash isn’t a silver bullet and it can’t replace missing or broken systems. It doesn’t pave roads, improve schools, or stock pharmacies.
    💸 Yet when it comes to reducing poverty, there are no interventions with more consistent evidence for their effectiveness than cash.
    ⚙️ GiveDirectly follows the evidence: we use cash where it works best, and test alternatives when the goal isn’t just poverty reduction.

    GiveDirectly has written a lot about what giving people money is consistently very good at (e.g., recipients earn more, spend more, own more assets, and the local economy gets a boost), the myths that aren’t true (e.g., cash doesn’t make people work less or drink more), and the fact that it's what most recipients prefer. But the fact that cash improves many things doesn’t mean it improves everything. And frankly, that's an unrealistic bar to set. I’ve spent the last 15 years doing cost-effectiveness analyses across education, health, nutrition, and livelihoods—first at MIT's J-PAL, then [...]

    Outline:
    (01:52) Giving individuals money won't build infrastructure or systems
    (05:22) When poverty reduction is the goal, cash works; when the goal is something else, we test
    (07:45) This isn't an either/or--it's about using the right tool for the outcome

    First published: November 6th, 2025
    Source: https://forum.effectivealtruism.org/posts/iMDoPK92GHWN2Csss/what-giving-people-money-doesn-t-fix
    Narrated by TYPE III AUDIO.

    9 minutes
  2. 1 day ago

    “Strangers calling: the value of warm responses to cold outreach” by Sam Anschell

    Tl;dr: I think responding to cold outreach (LinkedIn DMs, emails) is a high-leverage way to help someone pursue an impactful career. Spending 15 minutes on Zoom or 5 minutes to share resources in writing can: Introduce someone to job boards, career advising and other professional development resources (EA Anywhere Slack, EA Virtual Programs, EA Opportunities Board, Effective Thesis, High Impact Professionals, etc.) that they hadn’t previously been aware of. More people hear about EA from personal contacts than from any other single source. Rethink Priorities’ recent surveys suggest that only ~1% of the American general public is aware of EA. Your response may be the reason someone discovers EA career resources, or you may significantly speed up their discovery of these resources. Support a job-seeker's motivation to continue putting themselves out there for impactful roles. Strengthen the reputation/practices of your team, organization, or the EA movement to be more inclusive and welcoming. Answer someone's questions and help them learn about new areas of work.

    My sense is that some EAs are reluctant to answer cold messages, especially if they come from someone between jobs, in school, or working at a “non-EA” org. Someone reaching out in this position [...]

    Outline:
    (04:31) Advice for sending cold messages
    (06:09) Advice for responding to outreach
    (07:55) Doesn't it make sense for the professionals at 80k/Probably Good/Animal Advocacy Careers to manage career advising?
    (08:28) I find it draining to talk to strangers, especially if they're going to ask for a favor.
    (09:19) I'm really busy.

    First published: November 4th, 2025
    Source: https://forum.effectivealtruism.org/posts/WgNXmrL4ax44bMmjq/strangers-calling-the-value-of-warm-responses-to-cold
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    11 minutes
  3. 1 day ago

    “A personal take on why (and why not) to work on Open Philanthropy’s AI teams” by cb

    You may have noticed that Open Philanthropy is hiring for several roles in our GCR division: senior generalists across our global catastrophic risks team, and grantmakers for our technical AI safety team. (We're also hiring for recruiting and operations roles! I know very little about either field, so I'm not going to talk about them here.) I work as a grantmaker on OP's AI governance team, and inspired by Lizka's recent excellent post giving her personal take on working at Forethought, I wanted to share some personal takes on reasons for and against working on the AI teams at Open Philanthropy. A few things to keep in mind as you read this: I'm mostly going to talk about personal fit and the day-to-day experience of working at OP, rather than getting into high-level strategy and possible disagreements people might have with it. On strategy: if you have some substantive big-picture disagreements with OP's approach, I think that firstly, you're in good company (this describes many OP staff!), and secondly, you'd still enjoy working here. But if you disagree with many of our strategic choices, or have especially foundational or basic disagreements, you'd probably be kind of miserable working here. [...]

    Outline:
    (01:36) The case for working at Open Philanthropy's AI team
    (01:46) Impact
    (02:19) Now is an especially exciting time to work on AI safety and governance
    (02:54) We're currently capacity constrained
    (03:17) My colleagues are very cool and very nice
    (03:55) Internal disagreement and developing your own views
    (04:59) Support for professional development
    (05:36) Culture
    (06:52) Some downsides/possible reasons not to apply
    (06:57) General considerations about grantmaking (and grantmaking-enabling roles) vs direct work
    (09:44) If you're very much in research/figuring things out mode
    (11:23) Risk of your takes atrophying
    (11:59) Social dynamics and funding relationships
    (13:11) Feedback loops and uncertainty
    (14:10) Slowness
    (14:28) Capacity constraints and difficulty switching off
    (15:28) Imposter syndrome?
    (16:52) Career progression?
    (17:58) Mostly remote work (more specific to AIGP)
    (18:45) So should you apply?

    First published: November 7th, 2025
    Source: https://forum.effectivealtruism.org/posts/JSJRCGpaQAkjnqn7L/a-personal-take-on-why-and-why-not-to-work-on-open
    Narrated by TYPE III AUDIO.

    21 minutes
  4. 2 days ago

    “EA Summit 2025: Paris: Report” by GV 🔸

    On September 13th, 2025, we (EA France) hosted the EA Summit: Paris, in a very central location in the French capital. This event was part of the international series of EA Summits funded by the Centre for Effective Altruism. This report outlines what went well, what we could improve, and the lessons we learned, with the hope that it can serve as a resource for others planning similar events. Key facts and figures: 217 participants (including volunteers) checked in, along with 20 speakers and 3 photographers/filming crew members, totaling 240 attendees (the largest EA Summit to date). How familiar were the participants with EA? When RSVPing, 89% declared at least “some familiarity, even though I couldn’t clearly define EA”, and 41% declared at least “really familiar, I’ve spent a lot of time thinking about effectiveness and within the community”. From experience, we think that many people overestimate their own understanding of EA. How well known were the participants to EA France's staff? Half the participants were completely unknown to the organizers (which does not mean they were all new to EA). 12 sessions covered global health, climate change, animal welfare, AI risks, nuclear security, and more. 15 organizations participated [...]

    Outline:
    (03:56) Context
    (05:13) Preparation phase
    (05:16) Venue
    (05:45) Communication and advertising
    (07:00) Miscellaneous details
    (08:42) How the event went
    (14:17) Results
    (14:20) Feedback from participants
    (15:19) Impact on the popularity of EA France's activities: limited
    (17:02) Impact on the existing community
    (18:02) Costs
    (18:05) Expenses break-down
    (18:26) Time costs
    (19:22) Acknowledgments

    First published: November 6th, 2025
    Source: https://forum.effectivealtruism.org/posts/aHzcXNnL3gQYxRbH2/ea-summit-2025-paris-report
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    21 minutes
  5. 3 days ago

    “The Protein Problem” by LewisBollard

    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. People can’t get enough protein. Fully 61% of Americans say they ate more protein last year — and 85% intended to eat more this year. Last week, dairy giant Danone said it can’t keep up with US demand for its high-protein yogurt. Other food makers are rushing to pack protein into everything from Doritos to Pop-Tarts. The craze is global. The net percentage of Europeans wanting more protein has more than doubled since 2023, driven by protein-hungry Brits, Poles, and Spaniards. (The epicurean French and Italians remain holdouts.) Chinese per capita protein supply recently overtook already-high American levels. Young people are leading the charge. Across Asia, Europe, and the US, most Gen Z’ers want more protein, suggesting this trend may persist. In one recent British university survey, “protein” was the top reason students gave for not giving up meat. Doctors are also telling the 6-10% of Americans now taking GLP-1 weight loss drugs to eat more protein to prevent muscle loss. This is [...]

    First published: November 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/P7NuYbwbMMNTM45Cz/the-protein-problem
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    11 minutes
  6. 3 days ago

    [Linkpost] “Are we wrong to stop factory farms?” by Ben Stevenson

    This is a link post. I'm sharing an article by Rose Patterson from Animal Rising (AR), which responds to an EA criticism of AR's campaigns to block new factory farms in the UK. When we started our Communities Against Factory Farming campaign to stop every new factory farm from being built, we thought it would be a crowd-pleaser! Who in the animal movement could argue that this could actually be a bad thing? However, we’ve heard it repeatedly over the past year, particularly coming from some in the Effective Altruist community: “The production will just move to other countries, where conditions are even worse.” This argument reveals a fundamental misunderstanding of how social movements work and what we’re actually trying to achieve. Let me explain why. Rose goes on to cover several counter-arguments: that some lives will be "saved" before production elsewhere scales up, that campaigns to block new factory farms can be coupled with campaigns to prohibit low welfare imports, and that the campaigns help to build the animal movement and set an important precedent. I think that the piece is discussing an important criticism. As context, here's a version of the argument from Martin Gould at Open Philanthropy: [...]

    First published: November 4th, 2025
    Source: https://forum.effectivealtruism.org/posts/Lx9NEjTvvhQkaQR8e/are-we-wrong-to-stop-factory-farms
    Linkpost URL: https://animalrising.substack.com/p/could-stopping-new-factory-farms
    Narrated by TYPE III AUDIO.

    4 minutes
  7. 3 days ago

    “AnimalHarmBench 2.0: Evaluating LLMs on reasoning about animal welfare” by Sentient Futures (formerly AI for Animals)

    We are pleased to introduce AnimalHarmBench (AHB) 2.0, a new standardized LLM benchmark designed to measure multi-dimensional moral reasoning towards animals, now available to use on Inspect AI. As LLMs' influence over humanity's policies and behaviors grows, their biases and blind spots will grow in importance too. With the original and now-updated AnimalHarmBench, Sentient Futures aims to provide an evaluation suite to judge LLM reasoning in an area in which blind spots are especially unlikely to get corrected through other forms of feedback: consideration of animal welfare. In this post, we explain why we iterated upon the original benchmark and present the results and use cases of this new eval. What Needed to Change: AHB 1.0 — presented at the AI for Animals and FAccT conferences in 2025 — attempts to measure the risk of harm that LLM outputs can have on animals. It can still play an important role in certain activities that require this, such as compliance with parts of the EU AI Act Code of Practice. However, it faced several practical and conceptual challenges: Reasoning evaluation: While AHB 1.0 was good for measuring how much LLM outputs increase the risk of harm to [...]

    Outline:
    (00:59) What Needed to Change
    (02:33) A More Comprehensive Approach
    (02:37) Multiple dimensions
    (04:53) Other new features
    (05:31) What we found
    (05:56) Example Q&A scores
    (06:32) Results
    (08:04) Why This Matters
    (09:16) Acknowledgements
    (09:29) Future Plans

    First published: November 5th, 2025
    Source: https://forum.effectivealtruism.org/posts/nBnRKpQ8rzHgFSJz9/animalharmbench-2-0-evaluating-llms-on-reasoning-about
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    10 minutes

About

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

You might also like