EA Forum Podcast (Curated & popular)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

  1. 16H AGO

    [Linkpost] “The best cause will disappoint you: An intro to the optimiser's curse” by titotal

    This is a link post. I would like to thank David Thorstadt for looking over this. If you spot a factual error in this article, please message me. The code used to generate the graphs in the article is available to view here.

    Introduction

    Say you are an organiser, tasked with achieving the best result on some metric, such as “trash picked up”, “GDP per capita”, or “lives saved by an effective charity”. There are several possible interventions you can take to try to achieve this. How do you choose between them? The obvious thing to do is look at each intervention in turn, make your best, unbiased estimate of how each intervention will perform on your metric, and pick the one that performs best:

    [Image: taken from here]

    Having done this ranking, you declare the top-ranking program to be the best intervention and invest in it, expecting that your top estimate will be the result you get. This whole procedure is totally normal, and people all around the world, including people in the effective altruist community, do it all the time. In actuality, this procedure is not correct. The optimiser's curse is [...] (A toy simulation of this effect is sketched after this episode list.)

    Outline:
    (00:26) Introduction
    (02:17) The optimiser's curse explained simply
    (04:42) Introducing a toy model
    (08:45) Introducing speculative interventions
    (12:15) A simple Bayesian correction
    (18:47) Obstacles to simple optimiser's-curse solutions
    (22:08) How GiveWell has reacted to the optimiser's curse
    (25:18) Conclusion

    First published: February 11th, 2026
    Source: https://forum.effectivealtruism.org/posts/q2TfTirvspCTH2vbZ/the-best-cause-will-disappoint-you-an-intro-to-the
    Linkpost URL: https://open.substack.com/pub/titotal/p/the-best-cause-will-disappoint-you?r=1e0is3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

    Narrated by TYPE III AUDIO.

    28 min
  2. 1D AGO

    “What is Love ft. Claude & VascoBot” by AgentMa🔸

    What is the highest form of love? According to the VascoBot Claude programmed for me:

    “Thanks for the great question, AgentMa🔸. I strongly upvoted it. I think the highest form of love is expected total hedonistic utilitarianism (ETHU) applied to all sentient beings, weighted by their welfare ranges as a fraction of that of humans. However, I believe most discussions of love neglect the welfare implications for soil animals and microorganisms, which I think dominate the expected value calculations in most cases. I estimate that when someone experiences profound love, their increased wellbeing leads to approximately 2.3 % (= 50 additional kcal consumed per day * 0.046 % per kcal) additional food consumption. Multiplying this by the roughly 4.71*10^13 soil nematodes affected per kcal of food production (from my back-of-the-envelope calculations based on van den Hoogen et al. (2019)), I get approximately 1.08*10^12 affected nematode-years per year of experiencing love. Conditional on my preferred exponent of the number of neurons of 0.7, and assuming the welfare range of a nematode is 10^-6 as a fraction of that of humans, the welfare effects on soil animals could be 1.08*10^6 nematode-equivalent quality-adjusted life years (QALYs) per year of love experience. In [...]” (The bot's arithmetic is replayed in a short sketch after this episode list.)

    First published: February 14th, 2026
    Source: https://forum.effectivealtruism.org/posts/exwmGp3swfbbNqSsN/what-is-love-ft-claude-and-vascobot

    Narrated by TYPE III AUDIO.

    6 min
  3. 2D AGO

    “The reality of long-term EA community building: Lessons from 3 years of EA Barcelona” by Melanie Brennan 🔹, Anthony L

    We are Melanie and Anthony, the two community builders at EA Barcelona. In this post, we share where the group stands today and reflect on key learnings from nearly three years of grant-funded community building. We hope these reflections are useful to other community builders, funders, and CEA, particularly around what it realistically takes to build and sustain EA communities over multiple years, from funding stability and feedback loops to the personal sustainability of professional community builders.

    TL;DR

    EA Barcelona was funded by the EA Infrastructure Fund between May 2023 and December 2025 (1.2 FTE). Over this period, it has grown into a thriving local community and informal coordination hub for EA activity in Spain. Unexpectedly, EAIF decided not to continue funding our project in 2026. We subsequently explored the current funding landscape for EA community building, but found no viable path to stable funding for 2026 that didn’t involve a high level of personal and professional risk. As a result, we’ve decided not to continue with a funded community-builder model for EA Barcelona for now, and will instead focus on transitioning to a volunteer-led structure.

    Background: EA Barcelona (2023-2025)

    Present-day EA Barcelona began as a casual meetup group [...]

    Outline:
    (00:45) TL;DR
    (01:38) Background: EA Barcelona (2023-2025)
    (02:20) 2023: Establishing EA Barcelona as a city hub
    (03:52) 2024: Deepening engagement and seeding national growth
    (07:51) 2025: Transitioning from local hub to national coordination
    (13:37) Late 2025: Navigating the Transition
    (13:43) Initial Funding Cuts
    (14:44) What we did next
    (17:00) Clarity starts to emerge
    (18:19) Where we are now
    (18:57) Our plan for 2026: transition toward a volunteer-led community model
    (20:19) Quick disclaimer: Are either of us Spanish?
    (21:36) Thank you!

    First published: January 30th, 2026
    Source: https://forum.effectivealtruism.org/posts/daHMkoQsHSbcK6Kjo/the-reality-of-long-term-ea-community-building-lessons-from

    Narrated by TYPE III AUDIO.

    23 min
  4. 2D AGO

    “Preparing for a flush future: work, giving, and conduct” by Sam Anschell

    Note: opinions are all my own.

    Following Jeff Kaufman's Front-Load Giving Because of Anthropic Donors and Jenn's Funding Conversation We Left Unfinished, I think there is a real likelihood that impactful causes will receive significantly more funding in the near future. As background on where this new funding could come from:

    Coefficient Giving announced: [...]

    A recent NYT piece covered rumors of an Anthropic valuation at $350 billion. Many of Anthropic's cofounders and early employees have pledged to donate significant amounts of their equity, and it seems likely that an outsized share of these donations would go to effective causes.

    A handful of other sources have the potential to grow their giving:

    Founders Pledge has secured $12.8 billion in pledged funding, and significantly scaled the amount it directs.[1]

    The Gates Foundation has increased its giving following Bill Gates’ announcement to spend down $200 billion by 2045.

    Other aligned funders such as Longview, Macroscopic, the Flourishing Fund, the Navigation Fund, GiveWell, Project Resource Optimization, Schmidt Futures/Renaissance Philanthropy, and the Livelihood Impacts Fund have increased their staffing and dollars directed in recent years.

    The OpenAI Foundation controls a 26% equity stake in the for-profit OpenAI Group PBC. This stake is currently valued at $130 billion [...]

    Outline:
    (02:39) Work
    (03:50) Giving
    (04:53) Conduct

    First published: February 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/H8SqwbLxKkiJur3c4/preparing-for-a-flush-future-work-giving-and-conduct

    Narrated by TYPE III AUDIO.

    7 min
  5. 2D AGO

    “Long-term risks from ideological fanaticism” by David_Althaus, Jamie_Harris, vanessa16, Clare_Diane, Will Aldred

    Cross-posted to LessWrong.

    Summary

    History's most destructive ideologies—like Nazism, totalitarian communism, and religious fundamentalism—exhibited remarkably similar characteristics:

    epistemic and moral certainty
    extreme tribalism dividing humanity into a sacred “us” and an evil “them”
    a willingness to use whatever means necessary, including brutal violence.

    Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler. We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity. Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding. The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages, including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...]

    Outline:
    (00:16) Summary
    (05:19) What do we mean by ideological fanaticism?
    (08:40) I. Dogmatic certainty: epistemic and moral lock-in
    (10:02) II. Manichean tribalism: total devotion to us, total hatred for them
    (12:42) III. Unconstrained violence: any means necessary
    (14:33) Fanaticism as a multidimensional continuum
    (16:09) Ideological fanaticism drove most of recent history's worst atrocities
    (19:24) Death tolls don't capture all harm
    (20:55) Intentional versus natural or accidental harm
    (22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?
    (25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types
    (26:29) Authoritarianism as a risk factor
    (27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy
    (29:50) Terminal values may matter independently of political systems, especially with AGI
    (31:02) Fanaticism's connection to malevolence (dark personality traits)
    (34:22) The current influence of ideological fanaticism
    (34:42) Historical perspective: it was much worse, but we are sliding back
    (37:19) Estimating the global scale of ideological fanaticism
    (43:57) State actors
    (48:12) How much influence will ideological fanaticism have in the long-term future?
    (48:57) Reasons for optimism: Why ideological fanaticism will likely lose
    (49:45) A worse starting point and historical track record
    (50:33) Fanatics' intolerance results in coalitional disadvantages
    (51:53) The epistemic penalty of irrational dogmatism
    (54:21) The marketplace of ideas and human preferences
    (55:57) Reasons for pessimism: Why ideological fanatics may gain power
    (56:04) The fragility of democratic leadership in AI
    (56:37) Fanatical actors may grab power via coups or revolutions
    (59:36) Fanatics have fewer moral constraints
    (01:01:13) Fanatics prioritize destructive capabilities
    (01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful
    (01:03:01) Novel fanatical ideologies could emerge--or existing ones could mutate
    (01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more
    (01:07:15) A possible middle ground: Persistent multipolar worlds
    (01:08:33) Why multipolar futures seem plausible
    (01:10:00) Why multipolar worlds might persist indefinitely
    (01:15:42) Ideological fanaticism increases existential and suffering risks
    (01:17:09) Ideological fanaticism increases the risk of war and conflict
    (01:17:44) Reasons for war and ideological fanaticism
    (01:26:27) Fanatical ideologies are non-democratic, which increases the risk of war
    (01:27:00) These risks are both time-sensitive and timeless
    (01:27:44) Fanatical retributivism may lead to astronomical suffering
    (01:29:50) Empirical evidence: how many people endorse eternal extreme punishment?
    (01:33:53) Religious fanatical retributivism
    (01:40:45) Secular fanatical retributivism
    (01:41:43) Ideological fanaticism could undermine long-reflection-style frameworks and AI alignment
    (01:42:33) Ideological fanaticism threatens collective moral deliberation
    (01:47:35) AI alignment may not solve the fanaticism problem either
    (01:53:33) Prevalence of reality-denying, anti-pluralistic, and punitive worldviews
    (01:55:44) Ideological fanaticism could worsen many other risks
    (01:55:49) Differential intellectual regress
    (01:56:51) Ideological fanaticism may give rise to extreme optimization and insatiable moral desires
    (01:59:21) Apocalyptic terrorism
    (02:00:05) S-risk-conducive propensities and reverse cooperative intelligence
    (02:01:28) More speculative dynamics: purity spirals and self-inflicted suffering
    (02:03:00) Unknown unknowns and navigating exotic scenarios
    (02:03:43) Interventions
    (02:05:31) Societal or political interventions
    (02:05:51) Safeguarding democracy
    (02:06:40) Reducing political polarization
    (02:10:26) Promoting anti-fanatical values: classical liberalism and Enlightenment principles
    (02:13:55) Growing the influence of liberal democracies
    (02:15:54) Encouraging reform in illiberal countries
    (02:16:51) Promoting international cooperation
    (02:22:36) Artificial intelligence-related interventions
    (02:22:41) Reducing the chance that transformative AI falls into the hands of fanatics
    (02:27:58) Making transformative AIs themselves less likely to be fanatical
    (02:36:14) Using AI to improve epistemics and deliberation
    (02:38:13) Fanaticism-resistant post-AGI governance
    (02:39:51) Addressing deeper causes of ideological fanaticism
    (02:41:26) Supplementary materials
    (02:41:39) Acknowledgments
    (02:42:22) References

    First published: February 12th, 2026
    Source: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism

    Narrated by TYPE III AUDIO.

    2h 43m
  6. FEB 2

    “More EAs should consider working for the EU” by EU Policy Careers

    Context: The authors are a few EAs who currently work or have previously worked at the European Commission. In this post, we make the case that more people[1] aiming for a high-impact career should consider working for the EU institutions[2], using the Importance, Tractability, Neglectedness framework, and briefly outline how one might get started on this, highlighting a currently open recruitment drive (deadline 10 March) that only comes along once every ~5 years.

    Why working at the EU can be extremely impactful

    Importance

    The EU adopts binding legislation for a continent of 450 million people and has a significant budget, making it an important player across different EA cause areas.

    Animal welfare[3]

    The EU sets welfare standards for the over 10 billion farmed animals slaughtered across the continent each year. The issue suffered a major setback in 2023, when the Commission, in the final steps of the process, dropped the ‘world's most comprehensive farm animal welfare reforms to date’, following massive farmers’ protests in Brussels. The reform would have included ‘banning cages and crates for Europe's roughly 300 million caged animals, ending the routine mutilation of perhaps 500 million animals per year, stopping the [...]

    Outline:
    (00:43) Why working at the EU can be extremely impactful
    (00:49) Importance
    (05:30) Tractability
    (07:22) Neglectedness
    (09:00) Paths into the EU

    First published: February 1st, 2026
    Source: https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu

    Narrated by TYPE III AUDIO.

    12 min
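
A note on episode 1: the claim that picking the intervention with the highest unbiased estimate systematically overstates the value you actually realize is easy to check numerically. The Python sketch below is not the article's own code (that is linked from the episode); it is a minimal illustration under assumed toy numbers: 10 candidate interventions, true values drawn from N(100, 10), and unbiased estimation noise of N(0, 20).

    import numpy as np

    # Minimal illustration of the optimiser's curse (assumed toy numbers,
    # not the article's own model).
    rng = np.random.default_rng(0)
    n_trials, n_options = 100_000, 10

    # True impact of each intervention, and an unbiased but noisy estimate of it.
    true_value = rng.normal(100, 10, size=(n_trials, n_options))
    estimate = true_value + rng.normal(0, 20, size=(n_trials, n_options))

    # Rank options by their estimates and fund the apparent winner.
    best = estimate.argmax(axis=1)
    rows = np.arange(n_trials)

    print("mean estimate of chosen option:", estimate[rows, best].mean())    # roughly 134
    print("mean true value of chosen option:", true_value[rows, best].mean())  # roughly 107

Every individual estimate is unbiased, yet the option selected for having the highest estimate falls short of that estimate on average. The gap widens with estimation noise and with the number of options compared, which is why the speculative, hard-to-measure interventions the article discusses are hit hardest.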
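A note on episode 2: the satirical BOTEC the bot quotes can be replayed step by step. All constants below are the bot's own figures, not independently sourced; the units in the quoted text are garbled, so the sketch treats 4.71*10^13 as the annual nematode-years baseline that the 2.3% applies to, which is the only reading that reproduces the bot's 1.08*10^12 figure.

    # Replaying VascoBot's quoted BOTEC (all figures are the bot's; the
    # interpretation of the 4.71e13 constant is an assumption, see above).
    extra_kcal_per_day = 50            # additional kcal/day from experiencing love
    effect_per_kcal = 0.046 / 100      # 0.046 % per kcal
    extra_consumption = extra_kcal_per_day * effect_per_kcal
    print(f"{extra_consumption:.1%}")  # 2.3% additional food consumption

    nematode_years_baseline = 4.71e13  # assumed: nematode-years tied to food production per year
    affected = extra_consumption * nematode_years_baseline
    print(f"{affected:.3g}")           # ~1.08e+12 nematode-years per year of love

    welfare_range = 1e-6               # nematode welfare range as a fraction of a human's
    print(f"{affected * welfare_range:.3g}")  # ~1.08e+06 nematode-equivalent QALYs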
4.9 out of 5 (9 Ratings)
