EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 21H AGO

    [Linkpost] “Notes on equanimity from the inside” by stefan.torges

    This is a link post. I've always thought of myself as even-keeled and equanimous; that my mind is still. In hindsight, I had no idea what I was talking about. Halfway through my second ten-day meditation retreat, I experienced a depth of equanimity that broke my existing frame of reference. It's hard to convey in words. My reflection afterwards was something like “What the f**k was that?” More poetically: it felt deep and dark, like my entire experience was submerged in a deep sea trench. Two things about this experience seem worth taking seriously. The first is that equanimity, felt from the inside, doesn't sit neatly on the scale I'd previously used to think about good and bad experiences. The second is stranger: from inside the state, certain questions I'd taken for granted about how to act well in the world stopped quite working.

    Equanimity and axiology

    The closest thing in the EA-adjacent literature to what I'm describing is probably Lukas Gloor's tranquilism, which is notably also inspired by Buddhist sources. It's a partial axiological theory that roughly says well-being is freedom from cravings. This contrasts with classical hedonism, where experiences fall on an axis from suffering through neutral to [...]

    Outline:
    (01:05) Equanimity and axiology
    (03:10) Equanimity and consequentialism
    (04:44) Equanimity and epistemology

    First published: May 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/EsZbFsATZfSSym3Wt/notes-on-equanimity-from-the-inside
    Linkpost URL: https://www.lesswrong.com/posts/MNFEhMHpEsgqxjBa2/notes-on-equanimity-from-the-inside

    Narrated by TYPE III AUDIO.

    7 min
  2. 1D AGO

    “A new rationalist self-improvement book: the 12 Levers” by spencerg

    I'm publishing a book that I think can fairly be described as a rationalist approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach. My co-author, Jeremy Stevenson, and I read over 100 of the most popular self-improvement books of all time and carefully reviewed more than 20 types of therapy in an attempt to answer the question: What are all of the most useful psychological strategies for improving your life? Every time a book or therapy said to do something or provided a method or technique, we extracted it. We then carefully categorized the ~500 techniques. Our conclusion, which surprised us, was that to a reasonable degree of approximation, we were able to subsume all of these numerous approaches within just 12 high-level psychological strategies. We call these "The 12 Levers," which is also the name of our book. We also investigated the evidence behind each of these levers. The book does include stories, but they are not the focus - we chose one or two stories to tell about the history of each Lever or a person who embodies it to [...]

    Outline:
    (01:41) 1. A lot of techniques are recycled or repackaged
    (03:20) 2. A lot of self-help techniques don't have as much evidence as you'd think
    (06:30) 3. Some techniques work better than others, but only on average
    (07:59) 4. At a fundamental level, you control surprisingly few things
    (10:32) 5. Hundreds of self-help techniques exist, but they all boil down to just 12 broad psychological strategies for improving your life

    First published: May 2nd, 2026
    Source: https://forum.effectivealtruism.org/posts/tqERRa9aJkrkot28C/a-new-rationalist-self-improvement-book-the-12-levers

    Narrated by TYPE III AUDIO.

    11 min
  3. 3D AGO

    “How to actually give money away” by NickAllardice

    This was originally posted here. It's written for an audience that's not deep in the weeds of EA giving theory/culture, but a few people suggested I post it here, as there's much that's additive to or divergent from some common EA practices. Feedback / disagreements welcome! Also my first time posting here. Hi! -- Most people who intend to give large amounts of money away never actually do. The money sits. In donor-advised funds, in "someday" plans, in good intentions. This is the default pathway - not an edge case - and if you don't design against it, it will happen to you too. I've watched this from every angle. As CEO of Change.org, we processed 5M+ donations. I now run GiveDirectly; 160,000 people have donated, dozens regularly give $1 million+, and some give $50–100 million+ at a time. I've advised two of the biggest philanthropic institutions in the world (Coefficient Giving and GiveWell) on donor engagement and growth, and sat on half a dozen nonprofit boards. I've also been giving away 10-20% of my annual income for 17 years; I grew up low-income, so this started as a very modest amount but has added [...]

    Outline:
    (01:59) 1. The default pathway is to delay. Fight this.
    (04:51) 2. Pick a few causes and write them down.
    (07:33) 3. Build a portfolio across causes and risk/return.
    (09:48) 4. Use the index funds of giving.
    (11:55) 5. For the love of god, don't over-staff.
    (13:58) 6. Make giving a recurring event, not a to-do.
    (15:36) A few final hot takes

    First published: April 30th, 2026
    Source: https://forum.effectivealtruism.org/posts/fRLt59FNXxmCaAkYF/how-to-actually-give-money-away

    Narrated by TYPE III AUDIO.

    20 min
  4. 3D AGO

    “Book Review: All the Lives You Can Change” by Bentham’s Bulldog

    Crosspost. (Reminder: the farm bill being voted on imminently would destroy most state-level animal protections and be the worst law for farm animal welfare ever passed. Please, please, contact your representatives and tell them to vote no on it—more details here, including a bunch of other activities that are even higher impact. The House votes on this today, so this is the last day you can productively call your representatives.) Effective altruism is a social movement that's about trying to do good as effectively as possible, with charity, career, and life projects broadly. If you go to a random local effective altruism event, most of the people there will be atheists, even in a country that's mostly Christian. But this isn't because of any deep conflict between Christian ideas and effective altruism. I'm in a Facebook group with a bunch of Christian philosophers, and about half of them are effective altruists in some form. Similarly, Aron Wall, who is among the most devout Christians I know, is an effective altruist and gives his money to the most effective charities he can find. My friend who runs the YouTube channel Apologetics Squared is another very religious [...]

    First published: April 30th, 2026
    Source: https://forum.effectivealtruism.org/posts/CC9jePAMibACapk3i/book-review-all-the-lives-you-can-change

    Narrated by TYPE III AUDIO.

    13 min
  5. 3D AGO

    “Celebrating 10 Years of EAGx” by Niki Kesseler

    Today marks the 10-year anniversary of EAGx. On April 30, 2016, the very first EAGx events—EAGxBoston and EAGxBerkeley—were held simultaneously. A decade later, it's incredible to reflect on how much this community has grown, both in scale and in depth. Since then, we've hosted 67 EAGx events and 18 EA Summits (a format we introduced in 2024), bringing us to 85 events in total—with our 86th happening this weekend (EAGxDC!). Across these, we've welcomed over 24,500 attendees in 20 countries and across six continents. A few milestones that stand out:

    - EAGxAustralasia is our longest-running series (6 editions, rotating across cities)
    - EAGxBerlin has been our most frequently hosted city (5 times)
    - Our largest event, EAGxBoston 2022, brought together 935 attendees
    - The portfolio continues to grow: this year alone, we're planning 10 EAGx events and at least 22 EA Summits

    [Image captions: EAGxAustralasia 2019, EAGxBerlin 2025, EAGxNordics 2026]

    The EA Summit format, in particular, has scaled quickly—from 409 attendees across 3 pilot events in 2024 to over 1,700 across 2025–26 already. It's been exciting to see how this complements the EAGx model and opens up new ways for people to engage. But more than the numbers, EAGx has always been about people [...]

    First published: April 30th, 2026
    Source: https://forum.effectivealtruism.org/posts/EwbuBhQMiNcmaznHq/celebrating-10-years-of-eagx

    Narrated by TYPE III AUDIO.

    5 min
  6. 3D AGO

    [Linkpost] “Open strategic questions for digital minds” by Lucius Caviola

    This is a link post. These are strategic questions about digital minds and AI welfare that I think are especially important, and where I’d like to see more progress. A common theme is that they matter for what we should do concretely under uncertainty about AI moral status. This is a current snapshot of my views and I expect them to change. What do you think? Any questions you’d add?

    Approach

    What's robustly good to do now, under deep uncertainty? I think this is the leading question we should ask. We don’t know whether AIs are or will become moral patients, and resolving that question isn’t tractable in the short term. What matters most are the long-run effects of our actions, since the vast majority of digital minds, if they ever exist, will be created after the transition to advanced AI. And there are serious long-run risks from both over- and under-attributing moral status. So we should look for actions that are robustly positive in the long run: good if AIs are (or will be) moral patients, not bad if they aren’t, and compatible with human and animal welfare (~AI safety). Finding such actions is hard, and most options carry [...]

    Outline:
    (00:36) Approach
    (00:39) What's robustly good to do now, under deep uncertainty?
    (01:37) Can AI welfare work wait for ASI?
    (03:02) What to do under different AI takeoff scenarios?
    (04:26) AI safety × AI welfare
    (04:30) Do AI safety and welfare conflict?
    (06:11) How might AI welfare shape deal-making with AIs?
    (07:19) Relations
    (07:22) Should AIs have legal rights, and if so, which?
    (08:59) How will AI-AI interactions shape the welfare of digital minds?
    (10:08) What would harmonious coexistence look like?
    (10:55) Creation
    (10:58) How can we influence those who will shape the welfare of digital minds?
    (12:52) Is restricting the creation of digital minds feasible?
    (13:55) Who will deliberately create digital minds, and why?
    (15:07) How will digital minds spread to space?
    (15:55) Design
    (15:58) How can we make AIs value the welfare of digital minds?
    (16:59) Do different types of digital minds require different strategies?
    (18:56) Will digital minds be happy by default?
    (20:11) What preferences will digital minds have, and what follows?
    (21:31) Society
    (21:34) What memes should we spread?
    (23:05) What interest groups and coalitions will form around digital minds?
    (23:55) What role will China play?
    (24:49) How will religions respond?
    (26:00) Beyond
    (26:03) What crucial considerations are we missing?

    First published: April 29th, 2026
    Source: https://forum.effectivealtruism.org/posts/TdRdivQNeRxXrscQx/open-strategic-questions-for-digital-minds
    Linkpost URL: https://outpaced.substack.com/p/open-strategic-questions-for-digital

    Narrated by TYPE III AUDIO.

    27 min