EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 7H AGO

    “Leaving Open Philanthropy, going to Anthropic” by Joe_Carlsmith

    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.) Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I’m speaking only for myself and not for Open Phil or Anthropic.) On my time at Open Philanthropy: I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new “Worldview Investigations” team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...] --- Outline: (00:51) On my time at Open Philanthropy (08:11) On going to Anthropic --- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/EFF6wSRm9h7Xc6RMt/leaving-open-philanthropy-going-to-anthropic --- Narrated by TYPE III AUDIO.

    32min
  2. 10H AGO

    “Recruitment is extremely important and impactful. Some people should be completely obsessed with it.” by abrahamrowe

    Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes: Candidates generally have a terrible time Work tests are often unpleasant (and the best candidates have to complete many of them), there are hundreds or thousands of candidates for each role, and generally, people can't get the jobs they’ve been told are the best path to impact. Organizations are often somewhat to moderately unhappy with their candidate pools Organizations really struggle to find the talent they want, despite the number of candidates who apply. Organizations can't find or retain the recruiting talent they want It's extremely hard to find people to do recruitment in this space. Talented recruiters rarely want to stay in their roles. I think the first two points need more discussion, but I haven't seen much discussion about the last. I think this is a major issue: recruitment is probably the most important function for a growing organization, and a skilled recruiter has a fairly large counterfactual impact for the organization they support. So why is it [...] --- Outline: (01:33) Recruitment is high leverage and high impact (03:33) Organizations struggle to hire recruiters (07:52) Many of the people applying to recruitment roles emphasize their experience in recruitment. This isn't the background organizations need (08:44) Almost no one is appropriately obsessed with hiring (10:29) The state of evidence on hiring practices is bad (13:22) Retaining strong recruiters is really hard (14:51) Why might this be less important than I think? (16:40) I'm trying to find people interested in this kind of approach to hiring. If this is you, please reach out. --- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/HLktkw5LXeqSLCchH/recruitment-is-extremely-important-and-impactful-some-people --- Narrated by TYPE III AUDIO.

    17min
  3. 1 DAY AGO

    “Will Welfareans Get to Experience the Future?” by MichaelDickens

    Cross-posted from my website. Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they're true, so I'm not even going to try. If welfare is important, and if the value of welfare scales something-like-linearly, and if there is nothing morally special about the human species[1], then these two things are probably also true: The best possible universe isn't filled with humans or human-like beings. It's filled with some other type of being that's much happier than humans, or has much richer experiences than humans, or otherwise experiences much more positive welfare than humans, for whatever "welfare" means. Let's call these beings Welfareans. A universe filled with Welfareans is much better than a universe filled with humanoids. (Historically, people referred to these beings as "hedonium". I dislike that term because hedonium sounds like a thing. It doesn't sound like something that matters. It's supposed to be the opposite of that—it's supposed to be the most profoundly innately valuable sentient being. So I think it's better to describe the beings as Welfareans. I [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: November 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/gFTHuA3LvrZC2qDgx/will-welfareans-get-to-experience-the-future --- Narrated by TYPE III AUDIO.

    5min
  4. 2 DAYS AGO

    “Humanizing Expected Value” by kuhanj

    I often use the following thought experiment as an intuition pump to make the ethical case for taking expected value, risk/uncertainty-neutrality,[1] and math in general seriously in the context of doing good. There are, of course, issues with pure expected value maximization (e.g. Pascal's Mugging, St. Petersburg paradox and infinities, etc), that I won’t go into in this post. I think these considerations are often given too much weight and applied improperly (which I also won’t discuss in this post). This thought experiment isn’t original, but I forget where I first encountered it. You can still give me credit if you want. Thanks! The graphics below are from Claude, but you can also give me credit if you like them (and blame Claude if you don’t). Thanks again. Thought experiment: Say there's this new, obscure, debilitating disease that affects 5 million people, who will die in the next ten years if not treated. It has no known cure, and for whatever reason, I know nobody else is going to work on it. I’m considering two options for what I could do with my career to tackle this disease. Option A: I can become a doctor and estimate [...] --- First published: November 1st, 2025 Source: https://forum.effectivealtruism.org/posts/aPJuhDz5NC72bdWYM/humanizing-expected-value --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    5min
  5. 2 DAYS AGO

    “Every Forum Post on EA Career Choice & Job Search” by Tristan W

    TLDR: I went through the entirety of the career choice, career advising, career framework, career capital, working at EA vs. non-EA orgs, and personal fit tags and have filtered to arrive at a list of all posts[1] relevant to figuring out the career aspect of the EA journey[2] up until now (10/25/25). The “Career Choice” section walks through: different career frameworks (how to go about making a career), career factors (aspects specific to yourself which change what sort of career you might have), cause area advice (focusing on how one might make a career in various causes), graduate school and career advice (what studies or careers might be pathways to impact, and what working in them looks like), and community experiences (how others have made career choices and lessons learned). The “Job Search” section walks through searching, applying, and getting rejected from jobs, as well as a section on how EA recruitment works (to better understand the process), and on the EA job market (how it's looking now and how it has changed over time). Introduction Behold! After trudging in the hellscape of tabs for many hours (see my atrocious window below) and trying to craft an overarching structure [...] --- Outline: (01:17) Introduction (06:02) General Advice (10:22) Career Choice (10:26) Resources (11:43) General (14:33) Career Frameworks (14:37) By Cause Area (path) ala 80k (19:24) By Career Aptitudes ala Holden (22:36) Career Factors (22:57) Cause Prioritization (incomplete) (25:40) Career Stage (34:37) Personal Fit (36:38) Location (37:38) Working Outside vs. Inside EA (47:53) Other Factors (49:59) Cause Area Advice (50:03) Animal Welfare (53:00) AI (01:02:22) Biosecurity (01:03:22) Community Building (01:05:47) Earning to Give (01:10:19) GCR Reduction (01:11:10) Global Health (01:12:08) Meta EA (01:16:26) Nuclear (01:16:55) Graduate School Advice (01:18:58) Specific Career Advice (01:20:13) Projects (01:22:52) Charities (01:23:12) Communications (01:23:28) Consulting (01:25:13) Grantmaking (01:25:29) Managing (01:25:42) Operations (01:28:42) Policy (01:35:24) Research (01:39:51) Startups (01:40:49) Other Careers (01:46:31) Community Experiences (01:56:57) Job Search (01:57:08) The Search (02:01:42) The Application (02:04:08) The Rejection (02:06:38) How EA Recruitment Works (02:08:39) The EA Job Market --- First published: October 31st, 2025 Source: https://forum.effectivealtruism.org/posts/XcB4rMuEvSh8Y4XoB/every-forum-post-on-ea-career-choice-and-job-search --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    2h15min
  6. 3 DAYS AGO

    “AI, Animals & Digital Minds NYC 2025: Retrospective” by Jonah Woodward, Sentient Futures (formerly AI for Animals), Constance Li, Caroline Oliveira

    Our Mission: Rapidly scale up the size and influence of the community trying to make AI and other transformative technologies go well for sentient nonhumans. One of the key ways we do this is through our events. This article gives insight into our most recent event, AI, Animals and Digital Minds NYC 2025 including: Lightning talks Topics and ideas covered Attendee feedback Event finances Lessons for future events Acknowledgements More information on our community Our next conference, Sentient Futures Summit Bay Area 2026, is now confirmed for February 6th-8th (the weekend before Effective Altruism Global)! You can register here (Early Bird tickets valid until December 1st), and there is more information at the end of this article. We will continue pairing our conferences with EA Global events going forwards – so join our community to stay updated! Overview: AI, Animals & Digital Minds (AIADM) New York City took place from October 9th-10th 2025, at Prime Produce in Hell's Kitchen. It was both our first event in NYC, and our first event that was exclusively an ‘unconference’ – a format which allows attendees to pitch their own ideas for talks, discussions and workshops during the event. The [...] --- Outline: (01:40) Overview (02:39) Content (02:46) Lightning Talks (04:37) Unconference Sessions Highlights (07:36) Event Feedback (09:58) Finances (11:35) Lessons for Future Events (13:30) Acknowledgements (14:39) Get Involved (15:32) Contact --- First published: October 31st, 2025 Source: https://forum.effectivealtruism.org/posts/Lee5BvordoqXa44hC/ai-animals-and-digital-minds-nyc-2025-retrospective --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    16min
  7. 3 DAYS AGO

    “Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes” by 80000_Hours

    By Robert Wiblin | Watch on Youtube | Listen on Spotify | Read transcript Episode summary Whatever your skills are, whatever your interests are, we’re out of the world where you have to be a conceptual self-starter, theorist mathematician, or a policy person — we’re into the world where whatever your skills are, there is probably a way to use them in a way that is helping make maybe humanity's most important event ever go better. — Holden Karnofsky For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating and low feedback. According to Anthropic's Holden Karnofsky, this situation has now reversed completely. There are now large amounts of useful, concrete, shovel-ready projects with clear goals and deliverables. Holden thinks people haven’t appreciated the scale of the shift, and wants everyone to see the large range of ‘well-scoped object-level work’ they could personally help with, in both technical and non-technical areas. In today's interview, Holden — previously cofounder and CEO of Open Philanthropy — lists 39 projects he's excited to see happening, including: [...] --- Outline: (00:19) Episode summary (04:17) The interview in a nutshell (04:41) 1. The AI race is NOT a coordination problem -- too many actors genuinely want to race (05:30) 2. Anthropic demonstrates it's possible to be both competitive and safety focused (06:56) 3. Success is possible even with terrible execution -- but we shouldn't count on it (07:54) 4. Concrete work in AI safety is now tractable and measurable (09:21) Highlights (09:24) The world is handling AGI about as badly as possible (12:33) Is rogue AI takeover easy or hard? (15:34) The AGI race isn't a coordination failure (21:27) Lessons from farm animal welfare we can use in AI (25:40) The case for working at Anthropic (30:31) Overrated AI risk: Persuasion (35:06) Holden thinks AI companions are bad news --- First published: October 31st, 2025 Source: https://forum.effectivealtruism.org/posts/Ee43ztGBsks3X5byb/holden-karnofsky-on-dozens-of-amazing-opportunities-to-make --- Narrated by TYPE III AUDIO.

    41min
