EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 1 day ago

    “Will Welfareans Get to Experience the Future?” by MichaelDickens

    Cross-posted from my website. Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they're true, so I'm not even going to try. If welfare is important, and if the value of welfare scales something-like-linearly, and if there is nothing morally special about the human species[1], then these two things are probably also true: The best possible universe isn't filled with humans or human-like beings. It's filled with some other type of being that's much happier than humans, or has much richer experiences than humans, or otherwise experiences much more positive welfare than humans, for whatever "welfare" means. Let's call these beings Welfareans. A universe filled with Welfareans is much better than a universe filled with humanoids. (Historically, people referred to these beings as "hedonium". I dislike that term because hedonium sounds like a thing. It doesn't sound like something that matters. It's supposed to be the opposite of that—it's supposed to be the most profoundly innately valuable sentient being. So I think it's better to describe the beings as Welfareans. I [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: November 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/gFTHuA3LvrZC2qDgx/will-welfareans-get-to-experience-the-future --- Narrated by TYPE III AUDIO.
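    The "linear aggregation" premise the excerpt leans on can be stated compactly. As a minimal illustrative formalization (not from the original essay; the symbols below are assumptions introduced here, with u_i denoting the welfare of being i):

    ```latex
    % Illustrative only: V is the value of a universe, u_i the welfare of being i.
    % Linear aggregation: value is the unweighted sum of individual welfares,
    % with no species-indexed weighting (antispeciesism).
    \[ V = \sum_{i} u_i \]
    % Under these premises, N Welfareans with welfare u_W each outweigh
    % N humans with welfare u_H each exactly when u_W > u_H:
    \[ N \cdot u_W > N \cdot u_H \iff u_W > u_H \]
    ```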

    5 min
  2. 1 day ago

    “Humanizing Expected Value” by kuhanj

    I often use the following thought experiment as an intuition pump to make the ethical case for taking expected value, risk/uncertainty-neutrality,[1] and math in general seriously in the context of doing good. There are, of course, issues with pure expected value maximization (e.g. Pascal's Mugging, St. Petersburg paradox and infinities, etc.) that I won’t go into in this post. I think these considerations are often given too much weight and applied improperly (which I also won’t discuss in this post). This thought experiment isn’t original, but I forget where I first encountered it. You can still give me credit if you want. Thanks! The graphics below are from Claude, but you can also give me credit if you like them (and blame Claude if you don’t). Thanks again. Thought experiment: Say there's this new, obscure, debilitating disease that affects 5 million people, who will die in the next ten years if not treated. It has no known cure, and for whatever reason, I know nobody else is going to work on it. I’m considering two options for what I could do with my career to tackle this disease. Option A: I can become a doctor and estimate [...] --- First published: November 1st, 2025 Source: https://forum.effectivealtruism.org/posts/aPJuhDz5NC72bdWYM/humanizing-expected-value --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
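    The expected-value comparison the thought experiment sets up can be sketched numerically. The excerpt is truncated before the options' figures are given, so every number below is a hypothetical assumption, not the author's:

    ```python
    # Hypothetical illustration of the two-option expected-value comparison.
    # All figures are assumed; the excerpt cuts off before the post's actual numbers.
    PATIENTS = 5_000_000  # people who die within ten years if untreated

    # Option A: treat patients directly -- near-certain but bounded impact.
    lives_saved_doctor = 10_000  # assumed lifetime total for one doctor

    # Option B: pursue a cure -- a small chance of a very large impact.
    p_cure = 0.01                            # assumed 1% chance of success
    expected_lives_cure = p_cure * PATIENTS  # 0.01 * 5,000,000 = 50,000

    print(f"Option A (near-certain): {lives_saved_doctor:,} lives saved")
    print(f"Option B (expected):     {expected_lives_cure:,.0f} lives saved")
    # A risk-neutral expected-value maximizer picks Option B even though it
    # most likely saves no one -- the intuition the thought experiment probes.
    ```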

    5 min
  3. 2 days ago

    “Every Forum Post on EA Career Choice & Job Search” by Tristan W

    TLDR: I went through the entirety of the career choice, career advising, career framework, career capital, working at EA vs. non-EA orgs, and personal fit tags and have filtered to arrive at a list of all posts[1] relevant to figuring out the career aspect of the EA journey[2] up until now (10/25/25). The “Career Choice” section walks through: different career frameworks (how to go about making a career), career factors (aspects specific to yourself which change what sort of career you might have), cause area advice (focusing on how one might make a career in various causes), graduate school and career advice (what studies or careers might be pathways to impact, and what working in them looks like), and community experiences (how others have made career choices and lessons learned). The “Job Search” section walks through searching, applying, and getting rejected from jobs, as well as a section on how EA recruitment works (to better understand the process), and on the EA job market (how it's looking now and how it has changed over time). Introduction Behold! After trudging in the hellscape of tabs for many hours (see my atrocious window below) and trying to craft an overarching structure [...] --- Outline: (01:17) Introduction (06:02) General Advice (10:22) Career Choice (10:26) Resources (11:43) General (14:33) Career Frameworks (14:37) By Cause Area (path) ala 80k (19:24) By Career Aptitudes ala Holden (22:36) Career Factors (22:57) Cause Prioritization (incomplete) (25:40) Career Stage (34:37) Personal Fit (36:38) Location (37:38) Working Outside vs. Inside EA (47:53) Other Factors (49:59) Cause Area Advice (50:03) Animal Welfare (53:00) AI (01:02:22) Biosecurity (01:03:22) Community Building (01:05:47) Earning to Give (01:10:19) GCR Reduction (01:11:10) Global Health (01:12:08) Meta EA (01:16:26) Nuclear (01:16:55) Graduate School Advice (01:18:58) Specific Career Advice (01:20:13) Projects (01:22:52) Charities (01:23:12) Communications (01:23:28) Consulting (01:25:13) Grantmaking (01:25:29) Managing (01:25:42) Operations (01:28:42) Policy (01:35:24) Research (01:39:51) Startups (01:40:49) Other Careers (01:46:31) Community Experiences (01:56:57) Job Search (01:57:08) The Search (02:01:42) The Application (02:04:08) The Rejection (02:06:38) How EA Recruitment Works (02:08:39) The EA Job Market --- First published: October 31st, 2025 Source: https://forum.effectivealtruism.org/posts/XcB4rMuEvSh8Y4XoB/every-forum-post-on-ea-career-choice-and-job-search --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    2 hr 15 min
  4. 2 days ago

    “AI, Animals & Digital Minds NYC 2025: Retrospective” by Jonah Woodward, Sentient Futures (formerly AI for Animals), Constance Li, Caroline Oliveira

    Our Mission: Rapidly scale up the size and influence of the community trying to make AI and other transformative technologies go well for sentient nonhumans. One of the key ways we do this is through our events. This article gives insight into our most recent event, AI, Animals and Digital Minds NYC 2025, including: lightning talks, topics and ideas covered, attendee feedback, event finances, lessons for future events, acknowledgements, and more information on our community. Our next conference, Sentient Futures Summit Bay Area 2026, is now confirmed for February 6th-8th (the weekend before Effective Altruism Global)! You can register here (Early Bird tickets valid until December 1st), and there is more information at the end of this article. We will continue pairing our conferences with EA Global events going forwards – so join our community to stay updated! Overview  AI, Animals & Digital Minds (AIADM) New York City took place from October 9th-10th 2025, at Prime Produce in Hell's Kitchen. It was both our first event in NYC and our first event that was exclusively an ‘unconference’ – a format which allows attendees to pitch their own ideas for talks, discussions and workshops during the event. The [...] --- Outline: (01:40) Overview (02:39) Content (02:46) Lightning Talks (04:37) Unconference Sessions Highlights (07:36) Event Feedback (09:58) Finances (11:35) Lessons for Future Events (13:30) Acknowledgements (14:39) Get Involved (15:32) Contact --- First published: October 31st, 2025 Source: https://forum.effectivealtruism.org/posts/Lee5BvordoqXa44hC/ai-animals-and-digital-minds-nyc-2025-retrospective --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    16 min
  5. 3 days ago

    “Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes” by 80000_Hours

    By Robert Wiblin | Watch on YouTube | Listen on Spotify | Read transcript Episode summary Whatever your skills are, whatever your interests are, we’re out of the world where you have to be a conceptual self-starter, theorist mathematician, or a policy person — we’re into the world where whatever your skills are, there is probably a way to use them in a way that is helping make maybe humanity's most important event ever go better. — Holden Karnofsky For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating and low feedback. According to Anthropic's Holden Karnofsky, this situation has now reversed completely. There are now large amounts of useful, concrete, shovel-ready projects with clear goals and deliverables. Holden thinks people haven’t appreciated the scale of the shift, and wants everyone to see the large range of ‘well-scoped object-level work’ they could personally help with, in both technical and non-technical areas. In today's interview, Holden — previously cofounder and CEO of Open Philanthropy — lists 39 projects he's excited to see happening, including: [...] --- Outline: (00:19) Episode summary (04:17) The interview in a nutshell (04:41) 1. The AI race is NOT a coordination problem -- too many actors genuinely want to race (05:30) 2. Anthropic demonstrates it's possible to be both competitive and safety focused (06:56) 3. Success is possible even with terrible execution -- but we shouldn't count on it (07:54) 4. Concrete work in AI safety is now tractable and measurable (09:21) Highlights (09:24) The world is handling AGI about as badly as possible (12:33) Is rogue AI takeover easy or hard? (15:34) The AGI race isn't a coordination failure (21:27) Lessons from farm animal welfare we can use in AI (25:40) The case for working at Anthropic (30:31) Overrated AI risk: Persuasion (35:06) Holden thinks AI companions are bad news --- First published: October 31st, 2025 Source: https://forum.effectivealtruism.org/posts/Ee43ztGBsks3X5byb/holden-karnofsky-on-dozens-of-amazing-opportunities-to-make --- Narrated by TYPE III AUDIO.

    41 min
  6. 3 days ago

    “Support Metaculus’ First Animal-Focused Forecasting Tournament” by Aditi Basu

    I'm putting together Metaculus' first animal-focused forecasting tournament, and am reaching out to ask for your support in making this happen. What is it? This tournament will generate probabilistic forecasts on decision-relevant questions affecting animals, from alternative proteins and animal welfare policy to AI impacts and wild animal welfare. The goal is to help funders, researchers, and organizations in our community make more informed strategic decisions. Why it matters Most forecasters will be from outside the animal movement, bringing fresh perspectives to questions we care about. Better forecasts can lead to better prioritization of interventions and ultimately better outcomes for animals. How you can help I'm raising funds on Manifund to create a prize pool that will incentivize quality forecasting. The minimum funding goal is $3,000, with an ideal goal of $15,000. You can support here. For more context, I previously posted a call for questions for this tournament. Any support, whether funding or spreading the word, would be hugely appreciated! Very happy to answer any questions in the comments. --- Outline: (00:19) What is it? (00:41) Why it matters (00:57) How you can help --- First published: October 30th, 2025 Source: https://forum.effectivealtruism.org/posts/pbwCf8wwKaXmbmjmd/support-metaculus-first-animal-focused-forecasting --- Narrated by TYPE III AUDIO.

    2 min
  7. 3 days ago

    [Linkpost] “The End of OpenAI’s Nonprofit Era” by Garrison

    This is a link post. Key regulators have agreed to let the company kill its profit caps and restructure as a for-profit — with some strings attached This is the full text of a post first published on Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work. This morning, the Delaware and California attorneys general conditionally signed off on OpenAI's plan to restructure as a for-profit public benefit corporation (PBC), seemingly closing the book on a fiercely contested legal fight over the company's future. Microsoft, OpenAI's earliest investor and another party with power to block the restructuring, also said today it would sign off in exchange for changes to its partnership terms and a $135 billion stake in the new PBC. With these stakeholders mollified, OpenAI has now cleared its biggest obstacles to a potential IPO — aside from its projected $115 billion cash burn through 2029. While the news initially seemed like a total defeat for the many opponents of the [...] --- Outline: (00:12) Key regulators have agreed to let the company kill its profit caps and restructure as a for-profit -- with some strings attached (02:07) The governance wins (08:16) No more profit caps (11:54) Testing my prediction (13:22) A political, not a legal, question --- First published: October 29th, 2025 Source: https://forum.effectivealtruism.org/posts/rrPGEvvKSqFqd2bzQ/the-end-of-openai-s-nonprofit-era Linkpost URL: https://www.obsolete.pub/p/the-end-of-openais-nonprofit-era --- Narrated by TYPE III AUDIO.

    18 min
  8. 4 days ago

    “Some surprising hiring practices I follow (as a hiring manager and grantmaker in EA)” by Michelle_Hutchinson

    The best approach to take to hiring differs by industry. This means that best practice differs across industries: startups, think tanks, video production, non-profits, academia, and news outlets will all have different practices. The practices that work best in the EA ecosystem will be different again – but unfortunately the effective altruism organisation landscape is much smaller than all of those, and so people haven't written books on how to do it. I have been working as a hiring manager and sometimes grantmaker in this space for over ten years, and I have developed some views on which practices work well in this industry. And I’ve noticed that others have (semi-)independently converged on these! Below I've written out a few of the ones which I follow. I’ve especially tried to list those which might seem most surprising to people coming from industries like the ones listed above. I’m trying to do a pretty quick job of this so that it actually gets out there. [Having written that sentence, I proceeded not to publish the post for 2 years. So if you’re wondering why there are errors in it, it's because this time round I’m actually going [...] --- Outline: (01:45) Run unstructured interviews (03:18) Work tests can be feasible and useful even for high-skill roles (05:24) Informal references are useful (08:07) Get information from people with conflicts of interest (09:43) You can put numbers on things (but don't trust them too much) (10:51) Bonus suggestion for applicants: Ask about the people you'll be working with (11:59) Getting advice --- First published: October 30th, 2025 Source: https://forum.effectivealtruism.org/posts/iZDQ4yJcWY8NB2A84/some-surprising-hiring-practices-i-follow-as-a-hiring --- Narrated by TYPE III AUDIO.

    14 min
