570 episodes

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.

If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

EA Forum Podcast (All audio) by EA Forum Team

    • Society & Culture

    “89% of cage-free egg commitments with deadlines of 2023 or earlier have been fulfilled” by ASuchy

    This is a link post. The report concludes that the cage-free fulfillment rate is holding steady at 89%. The producer, retailer, and manufacturer industries are among the sectors furthest ahead on cage-free fulfillment. Some major companies across sectors that fulfilled their commitments in 2023 (or years ahead of schedule) include Hershey (Global), Woolworths (South Africa), Famous Brands (Africa), Scandic Hotels (Europe), Monolog Coffee (Indonesia), Special Dog (Brazil), Azzuri Group (Europe), McDonald's (US), TGI Fridays (US), and The Cheesecake Factory (US).
    ---

    First published:

    May 24th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/SG38cPw5C7wLXAeFn/89-of-cage-free-egg-commitments-with-deadlines-of-2023-or

    ---
    Narrated by TYPE III AUDIO.

    • 1 min
    “AI companies aren’t really using external evaluators” by Zach Stein-Perlman

    New blog: AI Lab Watch. Subscribe on Substack.
    Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it access to models before they are deployed. This is incorrect. METR (then called ARC Evals) did pre-deployment evaluation for GPT-4 and Claude 2 in the first half of 2023, but it seems to have had no special access since then.[1] Other model evaluators also seem to have little access before deployment.
    Frontier AI labs' pre-deployment risk assessment should involve external model evals for dangerous capabilities.[2] External evals can improve a lab's risk assessment and—if the evaluator can publish its results—provide public accountability.
    The evaluator should get deeper access than users will get.
    To evaluate threats from a particular deployment protocol, the evaluator should get somewhat deeper access than users will — then the evaluator's failure to elicit dangerous capabilities is stronger evidence [...] The original text contained 5 footnotes which were omitted from this narration.
    ---

    First published:

    May 24th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/L8NZyq3A3aqbNLHam/ai-companies-aren-t-really-using-external-evaluators

    ---
    Narrated by TYPE III AUDIO.

    • 7 min
    “Talent Needs in Technical AI Safety” by Ryan Kidd

    Co-Authors: William Brewer, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd
    MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety orgs. As the field has grown, it's become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2]
    In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our [...]
    ---
    Outline:
    (01:22) Needs by Organization Type
    (01:29) Archetypes
    (02:23) Connectors / Iterators / Amplifiers
    (05:34) Needs by Organization Type (Expanded)
    (09:11) Impact, Tractability, and Neglectedness (ITN)
    (11:36) ITN Within AIS Field-building
    (14:40) So How Do You Make an AI Safety Professional?
    (15:26) The Development of a Connector
    (18:27) The Development of an Iterator
    (19:50) The Development of an Amplifier
    (21:30) So What is MATS Doing?
    (27:07) Acknowledgements
    The original text contained 4 footnotes which were omitted from this narration.
    ---

    First published:

    May 24th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/AgjDbFhpErpRN5ZbD/talent-needs-in-technical-ai-safety

    ---
    Narrated by TYPE III AUDIO.

    • 28 min
    “Big Picture AI Safety: Introduction” by EuanMcLean

    tldr: I conducted 17 semi-structured interviews with AI safety experts about their big-picture strategic view of the AI safety landscape: how human-level AI will play out, how things might go wrong, and what the AI safety community should be doing. While many respondents held “traditional” views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field might infer.
    What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what should we do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested, and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many [...]
    ---
    Outline:
    (02:00) Questions
    (03:46) Participants
    (06:29) A very brief summary of what people said
    (06:33) What will happen?
    (07:21) What should we do about it?
    (07:52) What mistakes have been made?
    (08:34) Limitations
    (10:11) Subsequent posts
    ---

    First published:

    May 23rd, 2024


    Source:

    https://forum.effectivealtruism.org/posts/uJioXCz5Foo9eqpJ9/big-picture-ai-safety-introduction

    ---
    Narrated by TYPE III AUDIO.

    • 11 min
    “What mistakes has the AI safety movement made?” by EuanMcLean

    This is the third of three posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how AGI will play out, how things might go wrong, and what the AI safety community should be doing. See here for a list of the participants and the standardized list of questions I asked.
    This post summarizes the responses I received from asking “Are there any big mistakes the AI safety community has made in the past or are currently making?”
    A rough decomposition of the main themes brought up. The figures omit some less popular themes, and double-count respondents who brought up more than one theme.
    “Yeah, probably most things people are doing are mistakes. This is just some random group of people. Why would they be making good decisions on priors? When I look at most things people are [...]
    The original text contained 1 footnote which was omitted from this narration.
    ---

    First published:

    May 23rd, 2024


    Source:

    https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made

    ---
    Narrated by TYPE III AUDIO.

    “An invitation to the Berlin EA co-working space TEAMWORK” by Johanna Schröder

      TL;DR
    TEAMWORK, a co-working and event space in Berlin run by Effektiv Spenden, is available for use by the Effective Altruism (EA) community. We offer up to 15 desk spaces in a co-working office for EA professionals and a workshop and event space for a broad range of EA events, all free of charge at present (and at least for the rest of 2024).
    A lot has changed since the space was established in 2021. After a remodeling project in September last year, there has been a notable improvement in the acoustics and soundproofing, leading to a more focused and productive work environment.
    Apply here if you would like to join our TEAMWORK community.
    [Images: small co-working office; meeting room]
    What TEAMWORK offers
    TEAMWORK is a co-working space for EA professionals, operated by Effektiv Spenden and located in Berlin. Following a remodeling project in fall 2023, we were able to improve [...]
    ---

    First published:

    May 23rd, 2024


    Source:

    https://forum.effectivealtruism.org/posts/SqZgzKdYrmgKqLyYA/an-invitation-to-the-berlin-ea-co-working-space-teamwork

    ---
    Narrated by TYPE III AUDIO.

    • 5 min

Top Podcasts In Society & Culture

Fail Better with David Duchovny
Lemonada Media
Stuff You Should Know
iHeartPodcasts
Shawn Ryan Show
Shawn Ryan | Cumulus Podcast Network
This American Life
This American Life
Freakonomics Radio
Freakonomics Radio + Stitcher
The Ezra Klein Show
New York Times Opinion

You Might Also Like

Astral Codex Ten Podcast
Jeremiah
EconTalk
Russ Roberts
Conversations with Tyler
Mercatus Center at George Mason University
The Psychology Podcast
iHeartPodcasts
People I (Mostly) Admire
Freakonomics Radio + Stitcher
Making Sense with Sam Harris
Sam Harris