317 episodes

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.

If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

EA Forum Podcast (Curated & popular)
EA Forum Team

    • Society & Culture
    • 4.9 • 7 Ratings

    “A Scar Worth Bearing: My Improbable Story of Kidney Donation” by Elizabeth Klugh

    TL;DR: I donated my kidney and you can too. If that's too scary, consider blood donation, the bone marrow registry, post-mortem organ donation, or other living donations (birth tissue, liver donation).
    Kidney donation sucks. It's scary, painful, disruptive, scarring. My friends and family urged me not to; words were exchanged, tears were shed. My risk of preeclampsia tripled, and my risk of end-stage renal disease increased fivefold. I had to turn down two job offers while prepping for donation.
    It is easy to read philosophical arguments in favor of donation, agree with them, and put the book back on the shelf. But it is different when your friend needs a kidney: Love bears all things, believes all things, hopes all things, endures all things.
    Eighteen months ago, at 28 years old, my friend Alan started losing weight. He developed a distinctive butterfly-shaped rash and became too weak to eat. On February [...]
    ---

    First published: May 30th, 2024
    Source: https://forum.effectivealtruism.org/posts/xiDKb3XvJxKiwNevJ/a-scar-worth-bearing-my-improbable-story-of-kidney-donation

    ---
    Narrated by TYPE III AUDIO.

    • 4 min
    “Introducing Ansh: A Charity Entrepreneurship Incubated Charity” by Supriya

    Executive Summary
    Ansh, a one-year-old Charity Entrepreneurship-incubated charity, has been delivering an evidence-based intervention called Kangaroo Care to low-birth-weight and premature babies in two government hospitals in India since January 2024. Ansh estimates that its programs are saving, on average, 4 lives a month per facility, for a total of 98 lives per year. The cost of one life saved is approximately $2,077 (based on current costs, not a projection). Ansh is now replicating the programs in two additional hospitals, which will double its impact before the end of this year.
    According to the World Health Organization (WHO), neonatal conditions[1] are among the top 3 causes of death worldwide[2]. This makes neonatal mortality one of the largest-scale causes of suffering and death today. In 2022, 2.3 million babies died in the first 28 days of life (i.e. the newborn/neonatal period) (World Health Organization, 2024). Let's compare [...]
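    As a rough sanity check, here is how the figures quoted above fit together (an editor's sketch in Python; the two-facility, twelve-month assumption is inferred from the summary, not stated as a formula in the original post):

        # Rough consistency check on the quoted figures.
        # Assumptions (inferred, not from the post): 2 facilities running
        # for 12 months, and "98 lives per year" as a program-wide total.
        lives_per_month_per_facility = 4
        facilities = 2
        implied_lives_per_year = lives_per_month_per_facility * facilities * 12  # 96, close to the reported 98
        cost_per_life_usd = 2077
        implied_annual_cost_usd = 98 * cost_per_life_usd  # roughly $203,500
        print(implied_lives_per_year, implied_annual_cost_usd)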
    ---
    Outline:
    (00:06) Executive Summary
    (02:33) I. The Problem and Solution
    (04:29) II. Introducing Ansh
    (08:36) III. Our Impact To Date
    (09:02) Baseline Neonatal Mortality
    (10:49) Lives Saved
    (12:56) Cost-Effectiveness
    (15:36) IV. Our Plans For The Future
    (16:04) (1) KC Improvements
    (18:48) (2) Scale Up
    (20:34) V. Acknowledgments and Partnerships
    The original text contained 12 footnotes which were omitted from this narration.
    ---

    First published: May 29th, 2024
    Source: https://forum.effectivealtruism.org/posts/hTEaKau8D4Ah3NPcu/introducing-ansh-a-charity-entrepreneurship-incubated

    ---
    Narrated by TYPE III AUDIO.

    • 21 min
    “Against a Happiness Ceiling: Replicating Killingsworth & Kahneman (2022)” by charlieh943

    Epistemic status: somewhat confident; I may have made coding mistakes. R code is here if you feel like checking.
    Introduction: 
    In their 2022 article, Matthew Killingsworth and Daniel Kahneman looked to reconcile the results from two of their papers. Kahneman (2010) had reported that above a certain income level ($75,000 USD), extra income had no association with increases in individual happiness. Killingsworth (2021) suggested that it did.
    Kahneman and Killingsworth (henceforth KK) claimed they had resolved this conflict by (correctly) hypothesizing that:
    1) There is an unhappy minority, whose unhappiness diminishes with rising income up to a threshold, then shows no further progress (i.e., Kahneman's leveling off);
    2) In the happier majority, happiness continues to rise with income even in the high range of incomes (i.e., Killingsworth's continued log-linear finding).
    (More info on this discussion can be found in Spencer Greenberg's thoroughly enjoyable blog post. Spencer [...]
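    For readers who want to try the replication themselves, here is a minimal sketch of the analysis described above: quantile regressions of happiness on log income at several percentiles. This is in Python with statsmodels rather than the post's R (which is linked above), and the column names are hypothetical:

        # Quantile (percentile) regressions of happiness on log(income),
        # mirroring the analysis described above. 'happiness' and 'income'
        # are hypothetical column names; the post's actual code is in R.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        def quantile_slopes(df: pd.DataFrame) -> pd.DataFrame:
            df = df.assign(log_income=np.log(df["income"]))
            rows = []
            for q in [0.15, 0.30, 0.50, 0.70, 0.85]:
                fit = smf.quantreg("happiness ~ log_income", df).fit(q=q)
                rows.append({"quantile": q, "slope": fit.params["log_income"]})
            # A positive slope even at the low quantiles would cut against
            # a hard happiness ceiling for the unhappy minority.
            return pd.DataFrame(rows)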
    ---
    Outline:
    (00:18) Introduction:
    (03:04) Summary of Findings
    (04:07) Results
    (05:07) Median Regressions
    (05:21) Figure 1
    (06:16) Regressions at Various Percentiles
    (06:55) Figure 2
    (08:38) Implications
    (10:50) Table 1: Happiness at Different Percentiles (above, KK; below, me)
    The original text contained 2 footnotes which were omitted from this narration.
    ---

    First published: May 28th, 2024
    Source: https://forum.effectivealtruism.org/posts/A5voYMFhPkWTrGkuJ/against-a-happiness-ceiling-replicating-killingsworth-and

    ---
    Narrated by TYPE III AUDIO.

    • 11 min
    “AI companies aren’t really using external evaluators” by Zach Stein-Perlman

    From my new blog: AI Lab Watch. All posts will be crossposted to LessWrong. Subscribe on Substack.
    Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it access to models before they are deployed. This is incorrect. METR (then called ARC Evals) did pre-deployment evaluation for GPT-4 and Claude 2 in the first half of 2023, but it seems to have had no special access since then.[1] Other model evaluators also seem to have little access before deployment.
    Clarification: there are many kinds of audits. This post is about model evals for dangerous capabilities. But I'm not aware of the labs using other kinds of audits to prevent extreme risks, excluding normal security/compliance audits.
    Frontier AI labs' pre-deployment risk assessment should involve external model evals for dangerous capabilities.[2] External evals can improve a lab's risk assessment and—if the evaluator can publish [...]
    The original text contained 5 footnotes which were omitted from this narration.
    ---

    First published: May 26th, 2024
    Source: https://forum.effectivealtruism.org/posts/ZPyhxiBqupZXLxLNd/ai-companies-aren-t-really-using-external-evaluators-1

    ---
    Narrated by TYPE III AUDIO.

    • 8 min
    “89% of cage-free egg commitments with deadlines of 2023 or earlier have been fulfilled” by ASuchy

    This is a link post. The report concludes that the cage-free fulfillment rate is maintaining its momentum at 89%. The producer, retailer, and manufacturing sectors are among the furthest along on fulfillment. Major companies across sectors that fulfilled their commitments in 2023 (or years ahead of schedule) include Hershey (Global), Woolworths (South Africa), Famous Brands (Africa), Scandic Hotels (Europe), Monolog Coffee (Indonesia), Special Dog (Brazil), Azzuri Group (Europe), McDonald's (US), TGI Fridays (US), and The Cheesecake Factory (US).
    ---

    First published: May 24th, 2024
    Source: https://forum.effectivealtruism.org/posts/SG38cPw5C7wLXAeFn/89-of-cage-free-egg-commitments-with-deadlines-of-2023-or

    ---
    Narrated by TYPE III AUDIO.

    • 1 min
    “Articles about recent OpenAI departures” by bruce

    This is a link post. A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). I will add other relevant media pieces below as I come across them.
    Some quotes perhaps worth highlighting:
    Even when the team was functioning at full capacity, that “dedicated investment” was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there’ll be much focus on avoiding catastrophic risk from future AI models.
    - Jan, suggesting that compute for safety may have been deprioritised despite the 20% commitment. (Wired reports that OpenAI confirmed its "superalignment team is no more".)
    “I joined with substantial hope that OpenAI [...]


    The original text contained 1 footnote which was omitted from this narration.
    ---

    First published: May 17th, 2024
    Source: https://forum.effectivealtruism.org/posts/ckYw5FZFrejETuyjN/articles-about-recent-openai-departures

    ---
    Narrated by TYPE III AUDIO.

    • 2 min

Customer Reviews

4.9 out of 5
7 Ratings

jpsteadman,

Thank you for making this

This really helps me access the forum a lot more than I otherwise would.

Top Podcasts In Society & Culture

Animal
The New York Times
Stuff You Should Know
iHeartPodcasts
Shawn Ryan Show
Shawn Ryan | Cumulus Podcast Network
This American Life
This American Life
Fail Better with David Duchovny
Lemonada Media
Disrespectfully
Katie Maloney, Dayna Kathan

You Might Also Like

Clearer Thinking with Spencer Greenberg
Spencer Greenberg
Astral Codex Ten Podcast
Jeremiah
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
Dwarkesh Podcast
Dwarkesh Patel
The Ezra Klein Show
New York Times Opinion