EA Forum Podcast (Curated & popular)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

  1. MAR 21

    “Broad Timelines” by Toby_Ord

    No-one knows when AI will begin having transformative impacts upon the world. People aren’t sure and shouldn’t be sure: there just isn’t enough evidence to pin it down. But we don’t need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines? I’ll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. And it has even more implications for how we act together — for our portfolio of work aimed towards this end.

    AI Timelines

    By AI timelines, I refer to how long it will be before AI has truly transformative effects on the world. People often think about this using terms such as artificial general intelligence (AGI), human-level AI, transformative AI, or superintelligence. Each term is used differently by different people, making it challenging to compare their stated timelines. Indeed, even an individual's own definition of their favoured term will be somewhat vague, such that even after their threshold has been crossed, they might have [...]

    Outline:
    (00:58) AI Timelines
    (04:38) Short vs Long Timelines
    (07:05) Broad Timelines
    (17:55) Implications
    (19:46) Hedging
    (20:58) A Different World
    (24:00) Longterm Actions
    (28:33) Conclusions

    First published: March 19th, 2026
    Source: https://forum.effectivealtruism.org/posts/HCR2AE9it279ggiZT/broad-timelines
    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.

    30 min
  2. MAR 18

    “What I didn’t expect about being a funder” by JamesÖz 🔸

    Crossposted from my blog. I am very fortunate to have my job in many ways – I get to talk to, learn from, and give money to amazing people and nonprofits all around the world. I get to allocate a modest amount of resources to incredible organisations that I think are doing some of the best work to improve the world. I don’t have to fundraise for my or my team's salaries anymore. However, there are some things I’ve learned since becoming a philanthropic grantmaker that were either surprising or affected me more strongly than I expected. I will outline some of these below. These are not meant to invoke feelings of “oh poor grantmakers who have access to money and influence” but rather “oh, I never considered things from that perspective”. Hopefully, they will also lead to more productive working relationships between funders and advocacy groups. Here, I discuss: how challenging the trade-offs are that funders face; the extremely poor feedback mechanisms that nonprofits have; how people treat you differently once you have access to funding, and how that changes you; the weight of saying no to good groups; and some things that make me feel cynical. Trade-offs [...]

    Outline:
    (01:18) Trade-offs are hard and money is scarce
    (06:35) Nonprofits have bad feedback mechanisms
    (13:00) How people treat you differently (and how that changes you)
    (14:52) It's hard to say no to people
    (16:08) It's easy to become cynical
    (19:38) Wrapping up

    First published: March 11th, 2026
    Source: https://forum.effectivealtruism.org/posts/umicYzuRsm6okFRKA/what-i-didn-t-expect-about-being-a-funder
    Narrated by TYPE III AUDIO.

    21 min
  3. MAR 17

    “GHD discussion here is slowly dying” by NickLaing

    Epistemic status: A bit sad (I know that's not an epistemic status).

    The best development Forum on the internet?

    3 years ago a headline “FTX SBF blah blah blah” triggered my memory: “oh that's right, that effective altruism thing”. A few years earlier I had read “Doing Good Better” in our Northern Ugandan hut, and was excited by how the ideas matched my experience of seeing the BINGOs [1] on the ground here doing not-much-good at all. Soon after, my wife dragged me to Cambridge for a year and I joined an EA group. I was drawn in to a beautiful crew of good, earnest people trying to do the best they could with their lives [2] — something I’d only seen before among a few people at church. I was most impressed by their veganism, practising what they preached. But after going back to Uganda I forgot about the whole EA thing. But 3 years later the FTX headlines and a Google search led me to the EA Forum, which to my delight turned out to be the best place on the internet to discuss global health and development. My first foray was a not-very-good post [...]

    Outline:
    (00:16) The best development Forum on the internet?
    (01:29) A steady decline
    (02:59) Why?
    (04:34) Is this fine?
    (05:02) Is this less fine?
    (06:29) How to Boost GHD discourse?

    First published: March 15th, 2026
    Source: https://forum.effectivealtruism.org/posts/4jbbjTTJ87baMrkY4/ghd-discussion-here-is-slowly-dying
    Narrated by TYPE III AUDIO.

    7 min
  4. MAR 15

    “The case for AI safety capacity-building work” by abergal

    I work on the capacity-building team on the Global Catastrophic Risks half of Coefficient Giving (formerly known as Open Philanthropy). Our remit is, roughly, to increase the amount of talent aiming to prevent unprecedented, globally catastrophic events. These days, we’re mostly focused on AI, and we’ve funded a number of projects and grantees that readers of this post might be familiar with, including MATS, BlueDot Impact, Constellation, 80,000 Hours, CEA, the Curve, FAR.AI's events, university groups, and many other workshops and projects. This post aims to make the case that, broadly, capacity-building work (including on AI risk) has been and continues to be extremely impactful, and to encourage people to consider pursuing relevant projects and careers. This post is written from my personal perspective; that said, my sense is that a number of CG staff and others in the AI safety space share my views. I include some quotes from them at the end of this post. I’m writing this post partly out of a desire to correct what I perceive as an asymmetry in terms of how excited I and others at Coefficient Giving are about this kind of work vs. how much people in the EA and AI [...]

    Outline:
    (02:15) The case for capacity-building work
    (04:11) Surveys
    (06:49) Testimonials
    (08:21) Neel Nanda (Senior Research Scientist at Google DeepMind)
    (11:15) Max Nadeau (Associate Program Officer (Technical AI Safety) at Coefficient Giving)
    (12:51) Rachel Weinberg (founder and former head of The Curve, currently at AI Futures Project)
    (14:30) Marius Hobbhahn (CEO and founder of Apollo Research)
    (16:38) Adam Kaufman (member of technical staff at Redwood Research)
    (18:10) Gabriel Wu (member of technical staff (alignment) at OpenAI)
    (19:37) Catherine Brewer (Senior Program Associate (AI Governance) at Coefficient Giving)
    (21:12) Aric Floyd (video host for AI in Context)
    (23:12) Ryan Kidd (Director of MATS)
    (25:43) What tends to work?
    (28:34) What's good to do now?
    (29:31) Who should be doing this work?
    (31:02) What would doing this work look like?
    (31:13) Working at an organization doing good work in the space
    (31:46) Constellation - CEO
    (32:46) Kairos - various early generalist positions
    (33:42) Starting or running your own capacity-building project or organization
    (34:07) Working on a capacity-building project part-time
    (34:30) Subscribing to Multiplier, a Substack with thoughts from our team (and other AI grantmaking staff at CG)
    (34:39) Letting our team know
    (35:03) Social proof
    (35:25) Julian Hazell, AI governance and policy at Coefficient Giving
    (36:19) Trevor Levin, AI governance and policy at Coefficient Giving
    (36:51) Ryan Greenblatt, Chief Scientist at Redwood Research
    (37:21) Buck Shlegeris, CEO of Redwood Research
    (39:52) Appendix

    First published: March 10th, 2026
    Source: https://forum.effectivealtruism.org/posts/rAqKSSXankvys2Fzu/the-case-for-ai-safety-capacity-building-work
    Narrated by TYPE III AUDIO.

    42 min
  5. MAR 13

    “Some good news: Ahold Delhaize to go cage-free” by ElliotTep

    For those not working in the space this probably isn't on your radar, but the animal advocacy movement just secured a huge win with Ahold Delhaize, convincing the fourth-largest supermarket company in the US to set the strongest cage-free policy of any large US retailer:

    - A roadmap with benchmarks to fully eliminate caged egg cartons, expand cage-free offerings, and increase the percentage of cage-free sales.
    - A pledge to annually report on its progress.
    - At all 2,000+ locations, placing large, promotional shelf tags in front of cage-free cartons to differentiate cage-free and caged cartons for consumers.

    This was a giant campaign. My understanding is that other companies were watching to see whether this campaign would succeed or fail, and whether they would need to follow suit. In addition to the animals helped, this win will add pressure on competitors to do the same. This was a coordinated effort among many groups, including: Center For Responsible Food Business; Animal Equality; International Council for Animal Welfare; The Humane League; Mercy For Animals; Compassion in World Farming; Coalition to Abolish the Fur Trade and Animal Activist Collective. Animal Equality states that this will affect 5-7 million hens. I know a lot [...]

    First published: March 4th, 2026
    Source: https://forum.effectivealtruism.org/posts/2wePKArWWr4Xx6Zvf/some-good-news-ahold-delhaize-to-go-cage-free
    Narrated by TYPE III AUDIO.

    2 min
  6. MAR 8

    [Linkpost] “Effective Altruism Will Be Great Again” by Mjreard

    This is a link post. Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open.

    Sequence thinkers will be forgiven and rejoice

    In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism's ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur the darling of the financial industry, the first EA running for Congress calling in enormous financial and personnel resources for his campaign. More than those public-facing facts, though, was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen. I actually regret how slow I was to see it at the time. You could just do things, and yes, that's always been true, but at that time, you didn’t even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from [...]

    Outline:
    (02:37) The Retreat
    (07:03) What Greatness Demands
    (10:59) Effective Altruism is Good and Right

    First published: March 7th, 2026
    Source: https://forum.effectivealtruism.org/posts/uHhcqagBBkhFTTGpq/effective-altruism-will-be-great-again
    Linkpost URL: https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great
    Narrated by TYPE III AUDIO.

    14 min

About

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

You Might Also Like