EA Forum Podcast (Curated & popular)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

  1. 6D AGO

    “We should be more uncertain about cause prioritization based on philosophical arguments” by Rethink Priorities, Marcus_A_Davis

    Summary: In this article, I argue that most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn't robust enough to justify high degrees of certainty that any given intervention (or class of interventions) is "best" above all others. I hold this to be true generally because of the reliance of such cross-cause prioritization judgments on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions about which interventions are, all things considered, best seems to rely on particular approaches to handling normative uncertainty. The evidence for these approaches is weak, and different approaches can produce radically different recommendations, which suggests that cross-cause prioritization rankings or conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted. I think the reliance of cross-cause prioritization conclusions on philosophical evidence that isn't robust has been previously underestimated in EA circles [...]

    Outline:
    (00:14) Summary
    (06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak
    (06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions
    (09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak
    (17:35) Aggregation methods disagree
    (21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical
    (24:07) Objections and Replies
    (24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?
    (25:28) Can't I just use my intuitions or my priors about the right answers to these questions? I agree philosophical evidence is weak, so we should just do what our intuitions say
    (27:27) We can use common sense / a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that
    (30:22) I'm an anti-realist about philosophical questions, so I think that whatever I value is right, by my lights; so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?
    (31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like the Against Malaria Foundation?
    (34:08) I have high confidence in MEC (or some other aggregation method) and/or some narrower set of normative theories, so cause prioritization is more predictable than you are suggesting despite some uncertainty in which theories I give some credence to
    (41:44) Conclusion (or, well, what do I recommend?)
    (44:05) Acknowledgements

    The original text contained 20 footnotes, which were omitted from this narration.

    First published: July 3rd, 2025
    Source: https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based

    Narrated by TYPE III AUDIO.

    46 min
  2. 6D AGO

    “80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!” by ChanaMessinger, Aric Floyd

    About the program

    Hi! We're Chana and Aric, from the new 80,000 Hours video program. For over a decade, 80,000 Hours has been talking about the world's most pressing problems in newsletters, articles, and many extremely lengthy podcasts. But today's world calls for video, so we've started a video program[1], and we're so excited to tell you about it!

    80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long- and short-form videos about the risks of transformative AI, and what people can do about them. [Chana has also been experimenting with making short-form videos, which you can check out here; we're still deciding on what form her content creation will take.] We hope to bring our own personalities and perspectives on these issues [...]

    Outline:
    (00:18) About the program
    (01:40) Our first long-form video
    (03:14) Strategy and future of the video program
    (04:18) Subscribing and sharing
    (04:57) Request for feedback

    First published: July 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/ERuwFvYdymRsuWaKj/80-000-hours-is-producing-ai-in-context-a-new-youtube

    Narrated by TYPE III AUDIO.

    6 min
  3. 6D AGO

    “A shallow review of what transformative AI means for animal welfare” by Lizka, Ben_West🔸

    Epistemic status: This post — the result of a loosely timeboxed ~2-day sprint[1] — is more like "research notes with rough takes" than "report with solid answers." You should interpret the things we say as best guesses, and not give them much more weight than that.

    Summary: There's been some discussion of what "transformative AI may arrive soon" might mean for animal advocates. After a very shallow review, we've tentatively concluded that radical changes to the animal welfare (AW) field are not yet warranted. In particular:
    - Some ideas in this space seem fairly promising, but are in the "maybe a researcher should look into this" stage rather than "shovel-ready."
    - We're skeptical of the case for most speculative "TAI>AW" projects.
    - We think the most common version of this argument underrates how radically weird post-"transformative"-AI worlds would be, and how much this harms our ability to predict the longer-run [...]

    Outline:
    (00:28) Summary
    (02:17) 1. Paradigm shifts, how they screw up our levers, and the eras we might target
    (02:26) If advanced AI transforms the world, a lot of our assumptions about the world will soon be broken
    (04:13) Should we be aiming to improve animal welfare in the long-run future (in transformed eras)?
    (06:45) A Note on Pascalian Wagers
    (08:36) Discounting for obsoletion & the value of normal-world-targeting interventions given a coming paradigm shift
    (11:16) 2. Considering some specific interventions
    (11:47) 2.1. Interventions that target normal(ish) eras
    (11:53) 🔹 Leveraging AI progress to boost standard animal welfare work
    (12:59) ❌ Trying to prevent near-term uses of AI that worsen conditions in factory farming (bad PLF)
    (14:18) 2.2. Interventions that try to improve animal welfare in the long run (past the paradigm shift)
    (14:27) ⭐ Guarding against misguided prohibitions & other bad lock-ins
    (16:33) 🔹 Exploring wild animal welfare & not over-indexing on farming
    (17:59) 💬 Shaping AI values
    (25:34) 💬 Other kinds of interventions? (/overview of post-paradigm AW interventions)
    (27:57) 3. Potentially important questions
    (32:58) Conclusion
    (34:34) Appendices
    (34:37) Some previous work & how this piece fits in
    (36:21) How our work fits in

    The original text contained 34 footnotes, which were omitted from this narration.

    First published: July 8th, 2025
    Source: https://forum.effectivealtruism.org/posts/tGdWott5GCnKYmRKb/a-shallow-review-of-what-transformative-ai-means-for-animal

    Narrated by TYPE III AUDIO.

    38 min
  4. JUL 6

    [Linkpost] “Eating Honey is (Probably) Fine, Actually” by Linch

    This is a link post. I wrote a reply to the Bentham's Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion.

    "One pump of honey?" the barista asked.

    "Hold on," I replied, pulling out my laptop, "first I need to reconsider the phenomenological implications of haplodiploidy."

    Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle.

    Bentham's Bulldog's Case Against Honey

    Bentham's Bulldog, a young and intelligent [...]

    Outline:
    (01:16) Bentham's Bulldog's Case Against Honey
    (02:42) Where I agree with Bentham's Bulldog
    (03:08) Where I disagree

    First published: July 2nd, 2025
    Source: https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually
    Linkpost URL: https://linch.substack.com/p/eating-honey-is-probably-fine-actually

    Narrated by TYPE III AUDIO.

    6 min

Ratings & Reviews

4.9 out of 5 (8 Ratings)

