570 episodes

EA Forum Podcast (All audio)
EA Forum Team

    • Society & Culture

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.

If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

    “In DC, a new wave of AI lobbyists gains the upper hand” by Chris Leong

    This is a link post. "The new influence web is pushing the argument that AI is less an existential danger than a crucial business opportunity, and arguing that strict safety rules would hand America's AI edge to China. It has already caused key lawmakers to back off some of their more worried rhetoric about the technology. ... The effort, a loosely coordinated campaign led by tech giants IBM and Meta, includes wealthy new players in the AI lobbying space such as top chipmaker Nvidia, as well as smaller AI startups, the influential venture capital firm Andreessen Horowitz and libertarian billionaire Charles Koch.
    ... Last year, Rep. Ted Lieu (D-Calif.) declared himself “freaked out” by cutting-edge AI systems, also known as frontier models, and called for regulation to ward off several scary scenarios. Today, Lieu co-chairs the House AI Task Force and says he's unconvinced by claims that Congress must [...]

    ---
    First published: May 13th, 2024
    Source: https://forum.effectivealtruism.org/posts/QBCHcqZ8mCqfdtARn/in-dc-a-new-wave-of-ai-lobbyists-gains-the-upper-hand
    ---
    Narrated by TYPE III AUDIO.

    • 2 min
    “Impact Accelerator Program for EA Professionals” by High Impact Professionals

    High Impact Professionals is excited to announce that applications are now open for the Summer 2024 round of our Impact Accelerator Program (IAP). The IAP is a 6-week program designed to equip experienced EA-aligned professionals (not currently working at an EA organization) with the knowledge and tools necessary to make a meaningful impact and empower them to start taking actionable steps right away.
    To date, the program has been a success, with several alumni having already changed careers to new impactful roles and nearly all participants planning to do the same in the next ~6 months. IAP alumni are also volunteering an average of ~100 hours at EA-aligned orgs/projects and donating on average more than 7% of their annual salary to effective charities. We are currently running the Spring 2024 round, which features a larger number of participants and cohorts.
    We’re pleased to open up this new Summer [...]
    ---
    Outline:
    (01:13) Program Objectives
    (01:43) Program Overview
    (03:48) Why Should You Apply?
    (04:57) Who Should Apply?
    ---
    First published: May 10th, 2024
    Source: https://forum.effectivealtruism.org/posts/rm2GeCZ5BfqGDxZ9W/impact-accelerator-program-for-ea-professionals-2
    ---
    Narrated by TYPE III AUDIO.

    • 6 min
    [Linkpost] “‘Cool Things Our GHW Grantees Have Done in 2023’ — Open Philanthropy” by Lizka

    Open Philanthropy[1] recently shared a blog post with a list of some cool things accomplished in 2023 by grantees of their Global Health and Wellbeing (GHW) programs (including farm animal welfare). The post “aims to highlight just a few updates on what our grantees accomplished in 2023, to showcase their impact and make [OP's] work a little more tangible.”
    I'm link-posting because I found it valuable to read about these projects, several of which I hadn’t heard of. And I like that despite its brevity, the post manages to include a lot of relevant information (and links), along with explanations of the key relevant theories of change and opportunity.
    For people who don’t want to click through to the post itself, I’m including an overview of what's included and a selection of excerpts below.
    [Image caption: Open Philanthropy's current Global Health and Wellbeing focus areas.]
    Overview
    [...]
    ---
    Outline:
    (01:04) Overview
    (02:25) Examples/excerpts from the post
    (02:40) 1.1 Dr. Sachchida Tripathi (air quality sensors)
    (04:27) 2.2. SAVAC (accelerating the development and implementation of strep A vaccines)
    (05:58) 3.2 Dr. Allan Basbaum (pain research)
    (07:29) 5.1 Institute for Progress
    (08:35) 6.1 Open Wing Alliance
    (09:57) 6.2 Aquaculture Stewardship Council
    (10:57) Quick closing thoughts (from myself)
    The original text contained 6 footnotes which were omitted from this narration.
    ---
    First published: May 8th, 2024
    Source: https://forum.effectivealtruism.org/posts/bL5qWoyMf42ANtBdX/cool-things-our-ghw-grantees-have-done-in-2023-open
    Linkpost URL: https://www.openphilanthropy.org/research/cool-things-our-grantees-have-done-in-2023/
    ---
    Narrated by TYPE III AUDIO.

    • 12 min
    “Notes on risk compensation” by trammell

    This article contains more than 100 uses of logical or mathematical notation, so an audio narration would be too hard to follow. You'll find a link to the original text in the episode description.
    ---
    First published: May 12th, 2024
    Source: https://forum.effectivealtruism.org/posts/cbgT2JTckTc8d7yGs/notes-on-risk-compensation
    ---
    Narrated by TYPE III AUDIO.

    • 33 sec
    “MATS Winter 2023-24 Retrospective” by Rocket

    Co-Authors: @Rocket, @Ryan Kidd, @LauraVaughan, @McKennaFitzgerald, @Christian Smith, @Juan Gil, @Henry Sleight
    The ML Alignment & Theory Scholars program (MATS) is an education and research mentorship program for researchers entering the field of AI safety. This winter, we held the fifth iteration of the MATS program, in which 63 scholars received mentorship from 20 research mentors. In this post, we motivate and explain the elements of the program, evaluate our impact, and identify areas for improving future programs.
    Summary. Key details about the Winter Program:
    The four main changes we made after our Summer program were:
    - Reducing our scholar stipend from $40/h to $30/h based on alumni feedback;
    - Transitioning Scholar Support to Research Management;
    - Using the full Lighthaven campus for office space as well as housing;
    - Replacing Alignment 201 with AI Strategy Discussions.
    Educational attainment of MATS scholars: 48% of scholars were pursuing a bachelor's degree, master's [...]
    ---
    Outline:
    (06:49) Theory of Change
    (08:40) Winter Program Overview
    (08:44) Schedule
    (09:42) Mentor Selection
    (09:46) Approach
    (12:56) Scholar Allocation
    (13:51) Winter Mentor Portfolio
    (15:00) Mentors’ Counterfactual Winters
    (15:26) Other Mentorship Programs
    (15:58) Scholar Selection
    (18:56) Educational Attainment of Scholars
    (19:37) Scholars’ Counterfactual Winters
    (21:03) Engineering Tests
    (22:49) Stipends
    (25:59) Mentor Suggestions
    (28:04) Neel Nanda's Training Phase (Nov 20-Dec 22)
    (30:02) Research Phase Elements (Jan 8-Mar 15)
    (30:43) Mentorship
    (32:43) Research Management
    (38:32) Seminars and Workshops
    (39:35) Milestone Assignments
    (41:56) Lighthaven Office
    (42:35) Strategy Discussions
    (45:04) Networking Events
    (46:27) Social Events
    (47:00) Community Health
    (49:23) Extension Phase (Apr 1 - Jul 19)
    (51:29) Winter Program Evaluation
    (51:43) Evaluating Program Elements
    (52:38) Overall Program
    (53:29) Mentorship
    (57:10) Research Management
    (01:10:54) Seminars and Workshops
    (01:12:01) Lighthaven Office
    (01:15:33) Strategy Discussions
    (01:17:17) Networking Events
    (01:20:06) Community Health
    (01:23:05) Evaluating Key Scholar Outcomes
    (01:23:10) Scholar Self-Reports
    (01:26:26) Mentor Evaluations
    (01:28:28) Milestone Assignments
    (01:30:19) Funding and Other Career Obstacles
    (01:34:19) Evaluating Key Mentor Outcomes
    (01:34:24) Self-Reported Benefits
    (01:35:14) Biggest Impact
    (01:37:31) Testimonials
    (01:39:38) Improved Mentor Abilities
    (01:40:00) Lessons and Changes for Future Programs
    (01:40:05) Advisory Board for Mentorship Selection
    (01:40:32) Fewer Mechanistic Interpretability Mentors
    (01:41:05) More AI Governance Mentors
    (01:42:09) Pre-Screening with CodeSignal
    (01:42:32) Research Manager Hiring
    (01:42:56) Modified Discussion Groups
    (01:43:32) Acknowledgements
    The original text contained 8 footnotes which were omitted from this narration.
    ---
    First published: May 11th, 2024
    Source: https://forum.effectivealtruism.org/posts/Cz4tE2zwcwwakqBo8/mats-winter-2023-24-retrospective
    ---
    Narrated by TYPE III AUDIO.

    • 1 hr 45 min
    “Introducing Senti - Animal Ethics AI Assistant” by Animal_Ethics

    Animal Ethics has recently launched Senti, an ethical AI assistant designed to answer questions related to animal ethics, wild animal suffering, and longtermism. We at Animal Ethics believe that while AI technologies could pose significant risks to animals, they could benefit all sentient beings if used responsibly. For example, animal advocates can leverage AI to amplify our message and improve how we share information about animal ethics with a wider audience.
    There is a lack of knowledge today, not just among the general public but also among people sympathetic to nonhuman animals, about the basic concepts and arguments underpinning the critique of speciesism, animal exploitation, concern for wild animal suffering, and future sentient beings. Many of these ideas are also unintuitive, so it helps people to be able to chat and ask follow-up questions in order to cement their understanding. We hope this tool will [...]
    ---
    First published: May 9th, 2024
    Source: https://forum.effectivealtruism.org/posts/prQkGagzHRjBskemd/introducing-senti-animal-ethics-ai-assistant
    ---
    Narrated by TYPE III AUDIO.

    • 3 min
