EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 9 hours ago

    “Cost-Effectiveness of THL’s Corporate Cage-free Campaigns” by Caroline Mills, ASuchy, BSweeney

    Summary
    Using historical data from 2015-2024, The Humane League (THL) estimates that each dollar spent on cage-free corporate campaigns has spared approximately two hens from cages. This estimate covers both the acquisition of new corporate commitments and the accountability work required to ensure those commitments are fulfilled across THL's US and Global corporate cage-free efforts. The calculation draws on:
    - Corporate cage-free policy fulfillment data collected by THL
    - Estimates of the number of hens spared by each corporate policy THL helped secure
    - Historical program cost data, adjusted for inflation and including proportional overhead
    It is important to note that this is not an estimate of THL's overall cost-effectiveness; the analysis covers only THL's corporate cage-free campaigns. Other programs, such as THL's movement-building work via the Open Wing Alliance or its broiler welfare campaigns, are excluded. The goal of the analysis is twofold: to inform internal resource allocation, and to offer a transparent, accessible response to one of the most common questions asked by THL supporters. The methodology prioritizes accessibility over granularity at some points, as detailed in the "Additional Assumptions & Limitations" section. While this model is simplified, we are encouraged that the resulting [...]
    (An illustrative sketch of the kind of hens-per-dollar calculation described here appears after the episode list.)
    Outline:
    (00:13) Summary
    (02:27) Methodology & Assumptions
    (07:25) Additional Assumptions & Limitations
    (10:28) Future Work
    First published: November 11th, 2025
    Source: https://forum.effectivealtruism.org/posts/Fbx9hf2e6MaLfoNwD/cost-effectiveness-of-thl-s-corporate-cage-free-campaigns
    Narrated by TYPE III AUDIO.

    11 minutes
  2. 13 hours ago

    “The overall cost-effectiveness of an intervention often matters less than the counterfactual use of its funding” by abrahamrowe

    Cross-posted from Good Structures. For impact-minded donors, it's natural to focus on doing the most cost-effective thing. Suppose you're genuinely neutral on what you do, as long as it maximizes the good. If you're donating money, you want to look for the most cost-effective opportunity (on the margin) and donate to it. But many organizations and individuals who care about cost-effectiveness try to influence the giving of others. This includes:
    - Research organizations that try to influence the allocation or use of charitable funds.
    - Donor advisors who work with donors to find promising opportunities.
    - People making arguments to community members on venues like the EA Forum.
    - Charity recommenders like GiveWell and Animal Charity Evaluators.
    These are endeavors where you're specifically trying to influence the giving of others. And when you influence the giving of others, you don't get full credit for their decisions! You should only get credit for how much better the thing you convinced them to do is than what they would otherwise have done. This is something that many people in EA and related communities take for granted and find obvious in the abstract, but I think its implications aren't always fully digested by the [...]
    (An illustrative sketch of this counterfactual adjustment appears after the episode list.)
    Outline:
    (03:34) Impact is largely a function of what the donor would have done otherwise.
    (04:36) Is improving the use of effective or ineffective charitable dollars easier?
    (06:14) How do people respond to these lower-impact interventions?
    (08:14) What are the implications of paying a lot more attention to funding counterfactuals?
    (10:21) Objections to this argument.
    First published: November 12th, 2025
    Source: https://forum.effectivealtruism.org/posts/YrMFHJm7mbswJd7Me/the-overall-cost-effectiveness-of-an-intervention-often
    Narrated by TYPE III AUDIO.

    13 minutes
  3. 19 hours ago

    “The Crisis EA Cannot Afford to Ignore” by Allegra_P

    More than two years ago, Sudan descended into war. What began as a clash between the army and a paramilitary force quickly became one of the world's worst humanitarian crises. Millions have been displaced. Families are going hungry. Communities are cut off from medicine. Violence and disease spread in silence. And the world keeps looking away, even as the human cost grows daily. So why are you seeing more about Sudan in the news now? On October 26, the RSF captured El Fasher after an 18-month siege. What followed:
    - 460+ patients and companions killed at a maternity hospital.
    - 82,000 people fled on foot.
    - Mass graves visible in satellite imagery.
    Since the war began in April 2023:
    - 150,000+ killed (most likely an undercount).
    - 13 million displaced, the largest displacement crisis in the world.
    - 30 million people, over half of Sudan's population, in need of humanitarian assistance.
    - Famine confirmed in El Fasher and elsewhere.
    - Acute malnutrition rates of up to 35% in children.
    Listen to today's episode of The Daily, where New York Times correspondent Declan Walsh explains how this became one of the worst humanitarian crises in decades. But here's what makes this different from a "hopeless" crisis. The infrastructure to save lives [...]
    First published: November 11th, 2025
    Source: https://forum.effectivealtruism.org/posts/vqaK5y5ksiDSfMzqd/the-crisis-ea-cannot-afford-to-ignore
    Narrated by TYPE III AUDIO.

    5 minutes
  4. 1 day ago

    [Linkpost] “Introducing LEAP: The Longitudinal Expert AI Panel” by Forecasting Research Institute

    This is a link post. Every month, the Forecasting Research Institute asks top computer scientists, economists, industry leaders, policy experts and superforecasters for their AI predictions. Here's what we learned from the first three months of forecasts: AI is already reshaping labor markets, culture, science, and the economy, yet experts debate its value, its risks, and how fast it will integrate into everyday life. Leaders of AI companies forecast near futures in which AI cures all diseases, replaces whole classes of jobs, and supercharges GDP growth. Skeptics see small gains at best, with AI's impact amounting to little more than a modest boost in productivity, if anything at all. Despite these clashing narratives, there is little work systematically mapping the full spectrum of views among computer scientists, economists, technologists in the private sector, and the public. We fill this gap with LEAP, a monthly survey tracking the probabilistic forecasts of experts, superforecasters, and the public. Expert participants include top-cited AI and ML scientists, prominent economists, key technical staff at frontier AI companies, and influential policy experts from a broad range of NGOs. LEAP operates on three key principles: Accountability: LEAP forecasts are detailed and verifiable, encouraging disciplined thinking and allowing us [...]
    Outline:
    (02:52) The LEAP Panel
    (04:55) LEAP Waves 1-3
    (05:25) Insights from Waves 1-3
    (05:29) 1. Experts expect sizable societal effects from AI by 2040.
    (09:03) 2. Experts disagree and express substantial uncertainty about the trajectory of AI.
    (09:48) 3. The median expert expects significantly less AI progress than leaders of frontier AI companies.
    (13:23) 4. Experts predict much faster AI progress than the general public.
    (16:41) 5. There are few differences in prediction between superforecasters and experts, but, where there is disagreement, experts tend to expect more AI progress. We don't see systematic differences between the beliefs of computer scientists, economists, industry professionals, and policy professionals.
    (19:34) Future LEAP waves
    The original text contained 2 footnotes, which were omitted from this narration.
    First published: November 10th, 2025
    Source: https://forum.effectivealtruism.org/posts/PjGRFxXrGENQTTkWm/introducing-leap-the-longitudinal-expert-ai-panel
    Linkpost URL: https://forecastingresearch.substack.com/p/introducing-leap
    Narrated by TYPE III AUDIO.

    22 minutes
  5. 2 days ago

    “Visionary Pragmatism: A Third Way for Animal Advocacy” by Dilan Fernando

    This work is my own, written in my spare time, and doesn't reflect the views of my employer. Less than ~3% of the text is AI-generated. Thank you to Laila Kassam, Haven King-Nobles, Lincoln Quirk, Tom Billington and Harley McDonald-Eckersall for their feedback, which doesn't imply their endorsement of the ideas presented.
    Summary
    Most animal advocates want sweeping change: to end factory farming at the very least, and often to go even further. But across the movement, we rarely talk in depth about how we'll actually achieve these kinds of long-term goals. Instead, I believe much of the movement has adopted a mindset I call short-term pragmatism: a focus on measurable, near-term wins that has delivered real victories, but which risks leaving us without a path to our ultimate aims. I suspect the convergence towards this mentality is a reaction to another dominant mindset, passionate idealism. This post argues that to achieve the long-term goals we truly aspire to, we must think differently. I make the case for visionary pragmatism, a third way that starts with an ambitious end goal and applies clear thinking to achieve it. To illustrate how animal advocates can position ourselves as [...]
    Outline:
    (00:32) Summary
    (02:00) Introductory Context
    (04:04) 1. The problem: our current way of thinking risks long-term failure
    (05:24) 1.1. Short-term pragmatism: our dominant mindset
    (08:20) 1.2. Why (I think) short-term pragmatism draws us in
    (10:18) 1.3. The strategic limitations of short-term pragmatism
    (12:12) 2. The solution: visionary pragmatism
    (12:33) 2.1. Defining visionary pragmatism
    (13:31) 2.2. Six core qualities of visionary pragmatism
    (42:02) 2.3. Summarising visionary pragmatism
    (42:45) 3. Next steps: cultivating visionary pragmatism
    (44:31) 4. Over to you!
    First published: November 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/38K7pn8SSpRgRGKGq/visionary-pragmatism-a-third-way-for-animal-advocacy
    Narrated by TYPE III AUDIO.

    46 minutes
  6. 2 days ago

    “How and Why to Make EA Cool” by JustinPortela

    EA has always had a coolness crisis. The name itself is clunky and overly precise. The logo is fine-but-not-great, and the visual branding has never excited anybody. EA orgs have spent over a decade struggling to tell exciting stories or attract serious numbers of social media followers. In short: it's never been sexy; it's never been cool. Maybe you don't think being cool matters. That's a fine opinion if you're OK with EA being a group of 10,000 people, ~70% male and ~75% white, circularly spending Dustin Moskovitz's money. But imagine a movement of a million people. A million people donating a percentage of their income to create a community fund as large as Open Phil's. A million people working at high-impact organizations. Getting to a million people means being cool in public campaigns. It means cool branding and cool social media posts and cool copy in emails. The unfortunate truth is that the attention economy consumes us, and the world is a popularity contest. EAs often have this dastardly intuition that they can convince people just by being right. I once watched an EA tell a man to rip up his backyard grass and [...]
    Outline:
    (03:24) 1. Heroes need villains
    (05:10) 2. Please, for God's sake, hire non-EA creative talent
    (05:58) 3. Invest in creatives
    (06:59) 4. Learn to drop the caveats
    First published: November 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/aBoA8Hri8nfZMzkEB/how-and-why-to-make-ea-cool
    Narrated by TYPE III AUDIO.

    8 minutes
  7. 2 days ago

    “Problems I’ve Tried to Legibilize” by Wei Dai

    Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I've organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI researchers, funders, company leaders, government policymakers, their advisors (including future AI advisors), and the general public.
    Philosophical problems
    - Probability theory
    - Decision theory
    - Beyond astronomical waste (possibility of influencing vastly larger universes beyond our own)
    - Interaction between bargaining and logical uncertainty
    - Metaethics
    - Metaphilosophy: 1, 2
    Problems with specific philosophical and alignment ideas
    - Utilitarianism: 1, 2
    - Solomonoff induction
    - "Provable" safety
    - CEV
    - Corrigibility
    - IDA (and many scattered comments)
    - UDASSA
    - UDT
    Human-AI safety (x- and s-risks arising from the interaction between human nature and AI design)
    - Value differences/conflicts between humans
    - "Morality is scary" (human morality is often the result of status games amplifying random aspects of human value, with frightening results)
    [...]
    First published: November 9th, 2025
    Source: https://forum.effectivealtruism.org/posts/vYQciLZYP4yTtRvWk/problems-i-ve-tried-to-legibilize
    Narrated by TYPE III AUDIO.

    4 minutes
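As a supplement to the first episode's summary, here is a minimal sketch of the kind of hens-spared-per-dollar arithmetic it describes: hens covered by each fulfilled corporate policy, divided by inflation-adjusted program costs including overhead. Everything below (the hens_spared_per_dollar function, the field names, and all numbers) is an illustrative assumption, not THL's actual model or data.

```python
# Hypothetical sketch of a hens-spared-per-dollar estimate.
# All names and figures are placeholders, not THL's actual methodology or data.

def hens_spared_per_dollar(policies, annual_costs_usd, inflation_factors):
    """Estimate hens spared per inflation-adjusted dollar of program spending."""
    # Count only the share of each corporate commitment that is actually fulfilled.
    total_hens_spared = sum(
        p["hens_affected_per_year"] * p["years_of_impact"] * p["fulfillment_rate"]
        for p in policies
    )
    # Convert each year's spending (overhead included) to current dollars, then sum.
    total_adjusted_cost = sum(
        cost * inflation_factors[year] for year, cost in annual_costs_usd.items()
    )
    return total_hens_spared / total_adjusted_cost

# Toy inputs: two secured policies and two years of spending.
policies = [
    {"hens_affected_per_year": 1_000_000, "years_of_impact": 5, "fulfillment_rate": 0.8},
    {"hens_affected_per_year": 400_000, "years_of_impact": 5, "fulfillment_rate": 0.6},
]
annual_costs_usd = {2015: 1_200_000, 2024: 1_500_000}
inflation_factors = {2015: 1.3, 2024: 1.0}

ratio = hens_spared_per_dollar(policies, annual_costs_usd, inflation_factors)
print(f"~{ratio:.1f} hens spared per dollar")  # ~1.7 with these made-up inputs
```

With these made-up inputs the sketch prints roughly 1.7 hens per dollar; THL's published figure of approximately two hens per dollar comes from its own historical fulfillment and cost data, not from this toy example.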
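The second episode's core claim reduces to a one-line formula: credit for influencing a donation equals the dollars moved times the difference between the cost-effectiveness of the new use and the cost-effectiveness of what the donor would have done anyway. A minimal sketch of that adjustment, with all numbers and units made up, follows.

```python
# Hypothetical sketch of counterfactual credit for influencing another donor's giving.
# "Units of good per dollar" stands in for any cost-effectiveness measure; all numbers are made up.

def counterfactual_credit(dollars_moved, new_effectiveness, counterfactual_effectiveness):
    """Credit = improvement over what the donor would have done with the money anyway."""
    return dollars_moved * (new_effectiveness - counterfactual_effectiveness)

# Naive view: treat the moved money as if it would otherwise have done nothing.
naive_credit = counterfactual_credit(100_000, new_effectiveness=10, counterfactual_effectiveness=0)

# Counterfactual view: the donor would have given to a charity producing 8 units per dollar,
# so the influencer only gets credit for the 2-units-per-dollar improvement.
adjusted_credit = counterfactual_credit(100_000, new_effectiveness=10, counterfactual_effectiveness=8)

print(naive_credit)     # 1000000 units of good
print(adjusted_credit)  # 200000 units of good
```

The gap between the naive and adjusted figures illustrates the post's point: the counterfactual use of the funding can matter more than the headline cost-effectiveness of the intervention itself.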
