EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 20H AGO

    [Linkpost] “Animal Welfare at the Start of the Industrial Revolution” by Niki Dupuis

    This is a link post.

    I am not a historian! Fact check me please, dear God!

    We are at the beginning of another industrial revolution. The first was the automation of muscle; the current one is the automation of mind. To get into the right frame of mind, I want to step back and imagine: We are living in the UK[1], the year is 1800, and we are trying to end animal suffering. We know nothing about how the tech is about to develop, nor how the economy or politics is about to be transformed, but there are faint clues. Steam engines have started powering textile factories and milling grain, but no trains yet, no electricity. There are almost a billion people in the world, and a similar number of livestock living almost entirely on small farms. The first factory farms won't go up for almost 150 years. Let's go back in time.

    Farmed animals

    All animals kept for food are living on small farms. The conditions are not great: they have little protection from the cold, no veterinary care, and they are slaughtered crudely[2] and in public. However, compared to life on a factory farm, things look pretty idyllic.

    Outline:
    (01:10) Farmed animals
    (02:23) Work animals
    (03:02) Blood sports
    (03:46) Animal testing
    (04:57) Fishing and whaling
    (05:55) Wild animals
    (06:24) Culture and philosophy
    (07:21) Data availability
    (08:05) Politics
    (09:17) Colonialism and (human) slavery
    (10:23) Some specific takeaways
    (12:49) Conclusion

    First published: February 5th, 2026
    Source: https://forum.effectivealtruism.org/posts/CHQdcXjBudqq4QFk9/animal-welfare-at-the-start-of-the-industrial-revolution
    Linkpost URL: https://lovedoesnotscale.substack.com/p/animal-welfare-at-the-start-of-the

    Narrated by TYPE III AUDIO.

    14 min
  2. 22H AGO

    [Linkpost] “The shrimp bet: When big numbers outsprint the evidence” by Vasco Grilo🔸

    This is a link post.

    Subtitle: Sentience and moral priority-setting

    This is a crosspost for The shrimp bet: When big numbers outsprint the evidence by Rob Velzeboer, which was originally published on 27 January 2026.

    TLDR: Shrimp welfare looks like the ultimate “scale + tractability” slam-dunk: massive numbers, cheap fixes, grim-sounding deaths. But the flagship farmed species—the penaeid shrimp L. vannamei—is an evidential outlier: beyond basic nociception, the sentience case is close to empty, and the limited evidence we do have points the wrong way on key markers. In the report that kicked off this wave, it was included for administrative clarity, not because sentience looked likely. If you let precaution plus expected-value reasoning run on that evidential bar, you don’t stop at shrimp; you get pulled into insects and the rest of modern life's collateral killing. My view is that we shouldn’t let raw numbers and optimistic assumptions about sentience guide our moral priorities: most weight should go to high-confidence, severe, tractable suffering, and extremely low-confidence beings with high numbers should be treated as explicit research-and-standards bets, unless at least some higher-order evidence actually suggests pain.

    “At least I’m not a shrimp.” It's a line I’d often repeat [...]

    Outline:
    (03:37) PART 1: SHRIMP
    (03:41) Why focus on shrimp
    (08:26) The evidence for shrimp pain
    (15:01) Skepticism
    (19:11) PART 2: PRIORITIZATION
    (19:16) The implications of this view
    (25:04) Pain severity
    (32:05) Prioritization
    (36:18) References

    First published: February 6th, 2026
    Source: https://forum.effectivealtruism.org/posts/ruG5GrTcE2DBiCDfW/the-shrimp-bet-when-big-numbers-outsprint-the-evidence
    Linkpost URL: https://robvelzeboer.substack.com/p/the-shrimp-bet

    Narrated by TYPE III AUDIO.
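    The expected-value move the post pushes back on is easy to make concrete. The toy sketch below is mine, not the author's: all populations, probabilities, and category names are illustrative placeholders. It shows how an astronomical headcount dominates the product even under tiny sentience probabilities, which is the slide from shrimp to insects the author describes.

    ```python
    # Toy illustration of the expected-value reasoning the post critiques.
    # All numbers are illustrative placeholders, not figures from the post.
    candidates = {
        "farmed shrimp": (4e11, 0.05),   # (individuals/year, P(sentience)), assumed
        "insects":       (1e18, 0.001),  # assumed
    }
    for name, (n, p) in candidates.items():
        print(f"{name}: expected sentient individuals ~ {n * p:.1e}")
    # Even at a 0.1% sentience probability, insects' sheer headcount (~1e15
    # expected sentient individuals) swamps shrimp (~2e10): raw expected value
    # keeps pulling toward ever-lower-confidence, ever-larger groups.
    ```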

    37 min
  3. 1D AGO

    [Linkpost] “Some tools for collective epistemics” by Forethought, Owen Cotton-Barratt, Lizka, Oliver Sourbut

    This is a link post.

    We’ve recently published a set of design sketches for AI tools that help with collective epistemics. We think that these tools could be a pretty big deal:

    - If it gets easier to track what's trustworthy and what isn’t, we might end up in an equilibrium which rewards honesty
    - This could make the world saner in a bunch of ways, and in particular could give us a better shot at handling the transition to more advanced AI systems

    We’re excited for people to get started on building tech that gets us closer to that world. We’re hoping that our design sketches will make this area more concrete, and inspire people to get started. The (overly-)specific technologies we sketch out are:

    - Community notes for everything — Anywhere on the internet, content that may be misleading comes served with context that a large proportion of readers find helpful
    - Rhetoric highlighting — Sentences which are persuasive-but-misleading, or which misrepresent cited work, are automatically flagged to readers or writers
    - Reliability tracking — Users can effortlessly discover the track record of statements on a given topic from a given actor; those with bad records come with health warnings
    - Epistemic [...]

    First published: February 6th, 2026
    Source: https://forum.effectivealtruism.org/posts/zMuDoeXA9nBSeAc5g/some-tools-for-collective-epistemics
    Linkpost URL: https://www.forethought.org/research/design-sketches-collective-epistemics

    Narrated by TYPE III AUDIO.
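    To make one of these sketches concrete, here is a minimal, hypothetical illustration of the "reliability tracking" idea: per-(actor, topic) track records of resolved claims, with a health warning attached below an accuracy threshold. The class, thresholds, and method names are my assumptions, not part of Forethought's design sketches.

    ```python
    from collections import defaultdict
    from typing import Optional

    # Hypothetical sketch of "reliability tracking" (names and thresholds assumed).
    class ReliabilityTracker:
        def __init__(self, warn_below: float = 0.6, min_claims: int = 5):
            self.records = defaultdict(list)  # (actor, topic) -> [True/False, ...]
            self.warn_below = warn_below      # accuracy below this gets a warning
            self.min_claims = min_claims      # don't judge tiny track records

        def record(self, actor: str, topic: str, held_up: bool) -> None:
            """Log whether a checked claim by `actor` on `topic` held up."""
            self.records[(actor, topic)].append(held_up)

        def health_warning(self, actor: str, topic: str) -> Optional[str]:
            outcomes = self.records[(actor, topic)]
            if len(outcomes) < self.min_claims:
                return None  # too few resolved claims to judge
            accuracy = sum(outcomes) / len(outcomes)
            if accuracy < self.warn_below:
                return f"{actor} on {topic}: {accuracy:.0%} of checked claims held up"
            return None
    ```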

    3 min
  4. 1D AGO

    “Building sustainable momentum: progress report on CEA’s 2025-26 strategy” by Zachary Robinson🔸, Oscar Howie

    “I” refers to Zach, the Centre for Effective Altruism's CEO. Oscar is CEA's Chief of Staff. We are grateful to all the CEA staff and community members who have contributed to the development and implementation of our strategy. Mistakes are of course our own.

    Executive summary

    One year into our 2025-26 strategy, we have reversed our programs’ negative growth trajectory from 2023-24 and will sustain that momentum in 2026 while preparing to hit much more ambitious goals from 2027 onwards:

    - CEA grew the number of people engaging with our programs by 20–25% year-over-year across each tier of our engagement funnel, beating our targets of 7.5–10% without increasing spending, and reversing the moderate decreases in engagement with our programs throughout 2023–24.
    - We laid the foundations for furthering our contribution to EA funding diversification by merging with EA Funds and hiring an accomplished new Director (Loic Watine).
    - And we strengthened CEA's own foundations, establishing our in-house Operations Team and growing our headcount from 42 to 66 while increasing talent density, including another incoming experienced Director for our new Strategy and M&E function (Rory Fenton).

    We also faced some challenges: Beyond the impact of EA growth on the perception of the [...]

    Outline:
    (00:34) Executive summary
    (03:05) Stewardship and sustainable momentum
    (04:22) Growing the EA community
    (05:05) 2025
    (08:17) 2026
    (09:35) Improving the EA brand
    (10:02) 2025
    (15:24) 2026
    (16:30) Diversifying EA funding
    (16:50) 2025
    (19:53) 2026
    (21:20) Strengthening CEA
    (21:38) Operations Team
    (22:07) Staffing
    (23:38) Spending
    (24:46) Monitoring & evaluation
    (25:40) EV, still

    First published: February 6th, 2026
    Source: https://forum.effectivealtruism.org/posts/Dy4iGHbAkKAQ4t2Dw/building-sustainable-momentum-progress-report-on-cea-s-2025

    Narrated by TYPE III AUDIO.

    27 min
  5. 1D AGO

    “Agent Economics: a BOTEC on feasibility” by Margot Stakenborg

    Summary: I built a simple back-of-the-envelope model of AI agent economics that combines Ord's half-life analysis of agent reliability with real inference costs. The core idea is that agent cost per successful outcome scales exponentially with task length, while human cost scales linearly. This creates a sharp viability boundary that cost reductions alone cannot meaningfully shift. The only parameter that matters much is the agent's half-life (reliability horizon), which is precisely the thing that requires the continual learning breakthrough (which I think is essential for AGI-level agents) that some place 5-20 years away. I think this has underappreciated implications for the $2T+ AI infrastructure investment thesis.

    The setup

    Toby Ord's "Half-Life" analysis (2025) demonstrated that AI agent success rates on tasks decay exponentially with task length, following a pattern analogous to radioactive decay. If an agent completes a 1-hour task with 50% probability, it completes a 2-hour task with roughly 25% probability and a 4-hour task with about 6%. There is a constant per-step failure probability, and because longer tasks chain more steps, success decays exponentially. METR's 2025 data showed the 50% time horizon for the best agents was roughly 2.5-5 hours (model-dependent) and had been doubling every ~7 [...]

    Outline:
    (00:57) The setup
    (02:04) The model
    (03:26) Results: base case
    (05:01) Finding 1: cost reductions cannot beat the exponential
    (06:24) Finding 2: the half-life is the whole game
    (08:02) Finding 3: task decomposition helps but has limits
    (09:33) What this means for the investment thesis
    (11:38) Interactive model
    (11:57) Caveats and limitations

    First published: February 6th, 2026
    Source: https://forum.effectivealtruism.org/posts/2Zn23gCZrgzSLuDCn/agent-economics-a-botec-on-feasibility

    Narrated by TYPE III AUDIO.
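    The model as summarized is simple enough to reproduce. The sketch below is a reconstruction under stated assumptions, not Stakenborg's actual code: the dollar figures and the 4-hour half-life are placeholder parameters, and failed attempts are (conservatively) assumed to burn the full task length.

    ```python
    # Back-of-the-envelope reconstruction of the post's model (illustrative
    # parameters, not the author's numbers or code).
    AGENT_COST_PER_HOUR = 5.0    # $/hour of inference (assumed)
    HUMAN_COST_PER_HOUR = 50.0   # $/hour of human labour (assumed)
    HALF_LIFE_HOURS = 4.0        # agent's 50% time horizon h (assumed)

    def agent_success_prob(task_hours: float) -> float:
        """Ord-style decay: constant per-step failure rate => p = 0.5 ** (t / h)."""
        return 0.5 ** (task_hours / HALF_LIFE_HOURS)

    def agent_cost_per_success(task_hours: float) -> float:
        """Expected cost per successful outcome: retrying until success takes
        1/p attempts on average, so cost grows like t * 2**(t/h)."""
        return task_hours * AGENT_COST_PER_HOUR / agent_success_prob(task_hours)

    def human_cost(task_hours: float) -> float:
        return task_hours * HUMAN_COST_PER_HOUR  # linear in task length

    for t in [1, 2, 4, 8, 16, 32]:
        a, h = agent_cost_per_success(t), human_cost(t)
        print(f"{t:>2}h task: agent ${a:>12,.2f} vs human ${h:>8,.2f}"
              f" -> {'agent' if a < h else 'human'} cheaper")
    ```

    With these placeholder numbers the crossover sits near t = h * log2(cost ratio) ≈ 13 hours; cutting AGENT_COST_PER_HOUR tenfold only pushes it out by another h * log2(10) ≈ 13 hours, while doubling HALF_LIFE_HOURS doubles every viable task length, matching the post's Findings 1 and 2.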

    14 min
  6. 2D AGO

    “AI benchmarking has a Y-axis problem” by Lizka

    TLDR: People plot benchmark scores over time and then do math on them, looking for speed-ups & inflection points, interpreting slopes, or extending apparent trends. But that math doesn’t actually tell you anything real unless the scores have natural units. Most don’t. Think of benchmark scores as funhouse-mirror projections of “true” capability-space, which stretch some regions and compress others by assigning warped scores for how much accomplishing that task counts in units of “AI progress”. A plot on axes without canonical units will look very different depending on how much weight we assign to different bits of progress.[1]

    Epistemic status: I haven’t vetted this post carefully, and have no real background in benchmarking or statistics.

    Benchmark scores vs "units of AI progress"

    Benchmarks look like rulers; they give us scores that we want to treat as (noisy) measurements of AI progress. But since most benchmark scores are expressed in quite squishy units, that can be quite misleading. The typical benchmark is a grab-bag of tasks along with an aggregate scoring rule like “fraction completed”.[2]

    ✅ Scores like this can help us...

    - Loosely rank models (“is A>B on coding ability?”)
    - Operationalize & track milestones (“can [...]

    Outline:
    (01:00) Benchmark scores vs units of AI progress
    (02:42) Exceptions: benchmarks with more natural units
    (04:48) Does aggregation help?
    (06:27) Where does this leave us?
    (06:30) Non-benchmark methods often seem better
    (07:32) Mind the Y-axis problem
    (09:05) Bonus notes / informal appendices
    (09:13) I. A more detailed example of the Y-axis problem in action
    (11:53) II. An abstract sketch of what's going on (benchmarks as warped projections)

    First published: February 6th, 2026
    Source: https://forum.effectivealtruism.org/posts/P8jsAySQzfgkeoDgb/ai-benchmarking-has-a-y-axis-problem

    Narrated by TYPE III AUDIO.
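    The funhouse-mirror point can be made concrete with a toy example (mine, not Lizka's; the two "benchmarks" and their logistic score mappings are assumptions): the same linearly growing latent capability yields one curve that looks like saturating progress and another that looks like a sudden takeoff, so slopes and inflection points are artefacts of the score mapping.

    ```python
    import math

    def capability(t: float) -> float:
        """Latent 'true' capability, assumed (for illustration) to grow linearly."""
        return float(t)

    # Two hypothetical benchmarks: monotone squashes of the same latent scale.
    def bench_easy(c: float) -> float:
        """Mostly easy tasks: scores saturate at low capability."""
        return 1 / (1 + math.exp(-(c - 2)))

    def bench_hard(c: float) -> float:
        """Mostly hard tasks: scores stay near zero until high capability."""
        return 1 / (1 + math.exp(-(c - 8)))

    for t in range(0, 11, 2):
        c = capability(t)
        print(f"t={t:>2}: easy benchmark {bench_easy(c):.2f}, hard benchmark {bench_hard(c):.2f}")
    # Same underlying progress; 'easy' shows decelerating gains while 'hard'
    # shows an apparent inflection around t=8. Neither slope is a fact about
    # the models being measured.
    ```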

    15 min

About

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.