EA Forum Podcast (All audio)

EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

  1. 11H AGO

    “Mox is the largest AI Safety community space in San Francisco. We’re fundraising!” by Rachel Shu

    Overview Who we are Mox is SF's largest AI safety coworking space, and also its primary Effective Altruism community space. We opened just over a year ago, and over the last year, we’ve served high-impact work in and around AI safety by hosting conferences, fellowships, events, and incubating new organizations. Our theory of change is to provide good infrastructure (offices, event space) and a high density of collegial interactions to people and projects we admire. We're not focusing on a single specific thesis on AI safety. Instead, we aim to support many sorts of people and organizations who: agree that transformative AI is on the horizon, have a strong thesis about what it means for the world to go well, are working on a project that we think credibly advances their thesis. This includes many projects that are directly AI Safety, such as Seldon Lab, the broader Effective Altruist sphere such as Sentient Futures, and even more broadly non-EA projects by EAs or from EA and rationalist-friendly corners of the SF tech scene, such as Arbor Trading Bootcamp (pictured below). Many more examples of such work are given in the "Current Operations" heading. Our team also [...] --- Outline: (00:13) Overview (00:16) Who we are (02:09) Why we're raising (03:01) How to donate (03:14) Funding milestones (03:17) ⬜ Raise $100k by March 15th (will be 1:1 matched!) (03:57) ⬜ Raise $450k by April 1st (05:05) ⬜ Raise $1.2m by June 1st (05:59) Funding updates (06:03) Raised $550k in 2025 (06:31) Delivered above expectations (07:11) Present state of finances (07:22) Revenue: ~$100k/mo (07:46) Expenses: ~$130k/mo (08:13) Sustainable by EOY (08:52) Will still seek funding in future years (09:04) Current operations (09:08) Fellowships & programs (10:10) How does Mox contribute to these programs' success? (10:45) Public events (12:42) Individuals & coworking (13:32) Member testimonials! 
(14:26) Private offices and partner organizations (16:01) Upcoming plans (16:04) Grow and improve our main offerings (17:01) Build an SF hub for animal welfare (17:45) Attract international talent via Global Expert Fellowship (18:30) Incubate new workshops and programs --- First published: March 6th, 2026 Source: https://forum.effectivealtruism.org/posts/YMBrQhjyiuvT83WBz/mox-is-the-largest-ai-safety-community-space-in-san --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    19 min
  2. 22H AGO

    “Announcing: AGI & Animals Debate Week” by Toby Tremlett🔹

    During the week of March 23 – 29[1], we’ll be debating the statement: “If AGI goes well for humans, it’ll probably[2] go well for animals”. I'm open to minor changes to the phrasing of this statement, but the general shape is locked in, since people might want to start writing posts for the debate week this weekend. In this post, I’ll give some background on why this topic matters and share some tips and resources to help you form a view on the topic, before the debate week begins. But first: How does a debate week work? During an EA Forum debate week, we put a debate slider on the frontpage banner. It looks something like this: Anyone who logs into the Forum can then vote on this banner. When you vote, you’re also asked if you want to leave a comment, which will appear on a discussion thread, and be attached to your icon on the banner. Alongside the conversation in the discussion thread, Forum users also write posts which might influence readers towards agreeing or disagreeing with the statement. Past debate weeks: AI Welfare Debate Week (Jul 1–7, 2024) — "AI welfare should be an EA [...] --- Outline: (00:42) How does a debate week work? (01:24) Past debate weeks: (02:02) Why this conversation matters (03:30) Questions to consider (04:42) Reading list (05:37) How to contribute to debate week --- First published: March 6th, 2026 Source: https://forum.effectivealtruism.org/posts/tscrmtjpQjDGdhvgQ/announcing-agi-and-animals-debate-week --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    7 min
  3. 1D AGO

    “Comparing alternative proteins: which ones should we prioritize? A summary of my recent scoping review” by Tom Bry-Chevalier🔸

    Context Last year, I shared on this forum my growing skepticism about cultivated meat and the case for being more critical about where we direct resources in the alternative protein space. That post focused specifically on cultivated meat, but the broader question behind it, one I had actually started working on before writing that post, is: if we look at the full landscape of alternative proteins, which ones actually deserve priority? This question matters because, as with any cause prioritization exercise, resources are limited. Every dollar, every policy push, every year of R&D directed toward a suboptimal alternative is a resource not spent on a more promising one. The recent collapse of Ynsect despite €600 million in investment is a painful example of misallocated resources in the French alternative protein landscape, but the problem runs deeper than individual company failures. If we systematically back the wrong horses, we slow down the entire protein transition, and with it our ability to act on climate change and animal welfare. This kind of prioritization thinking, which I'd say was directly inspired by EA principles, led me to write a scoping review that was recently published in npj Science of Food. The rest of [...] --- Outline: (00:15) Context (01:49) The problem with how we currently evaluate alternatives (03:16) What the paper found (03:43) Plant-based meats: the clear frontrunner (05:13) Single-cell proteins: promising but uncertain (06:24) Cultivated meat: still facing the problems I flagged last year (08:18) Insects: the least promising option, despite popular assumptions (10:27) Caveats (11:05) A personal note --- First published: March 3rd, 2026 Source: https://forum.effectivealtruism.org/posts/TMbrMxusL2ptW6pCF/comparing-alternative-proteins-which-ones-should-we --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    12 min
  4. 2D AGO

    [Linkpost] “Launching AI & Animals: A Documentary” by Animal_Ethics

    This is a link post. Following the release of the teaser earlier this year, Animal Ethics has now launched AI & Animals: A Documentary. The full film is freely available to watch and share at www.AIandAnimals.org. The documentary examines how advances in AI may affect the lives of nonhuman animals, both negatively and positively. It features 16 advocates and academics discussing: How AI systems could increase animal suffering in factory farming How AI-driven research and monitoring can scale efforts to reduce wild animal suffering, and help develop research methods that do not harm animals The risks of embedding speciesist values in large language models and other AI systems, and what can be done about them How animal advocates can use AI tools to increase the effectiveness and reach of their work Contributors include Peter Singer, Jonathan Birch, Jeff Sebo, Oscar Horta, Bernice Bovenkerk, Matti Wilks, and others. The full list is available at AIandAnimals.org. Most animal advocates are still unaware of the risks and opportunities that AI presents for animals. Meanwhile, the industry is already using AI to increase the efficiency of animal exploitation. History has shown us the cost of being reactive rather than proactive. We failed [...] --- First published: March 3rd, 2026 Source: https://forum.effectivealtruism.org/posts/Nm5utfEwtQFnHDGLh/launching-ai-and-animals-a-documentary Linkpost URL: https://www.animal-ethics.org/animal-ethics-launches-ai-animals-a-documentary/ --- Narrated by TYPE III AUDIO.

    3 min
  5. 2D AGO

    “A call for biosecurity fieldbuilding” by Abbey Chaver

    My name is Abbey, and I work at cG on the Capacity Building team. I’m posting to share an informal update about our excitement for biosecurity capacity building. We’ve seen incredible progress in the field of technical AI safety and governance, and a lot of that is thanks to fieldbuilding programs. We now think that on the margin, it's likely that biosecurity is neglected relative to AI safety. AI progress will likely significantly increase the capability of a bad actor (including a rogue AI) to engineer dangerous pathogens, and we’d like to grow the number of people preparing defenses against that scenario. Specifically, our Biosecurity and Pandemic Preparedness team's research suggests that there are four pillars of pathogen-agnostic defenses that can significantly reduce bio x-risk; and because these are defensive measures (with less dual-use potential), we want to broadly encourage people to work on them. Some projects, like red teaming, could involve infohazards that warrant more caution; you can check in with the BPP team if you want to do work like this or encourage it. The measures are: Personal protective equipment (‘PPE’) Pervasive physical barriers and layers of sterilization (‘biohardening’) Pathogen-agnostic early-warning systems (‘detection’) Rapid, reactive [...] --- First published: March 5th, 2026 Source: https://forum.effectivealtruism.org/posts/N9tDHooyZzACmSmhj/a-call-for-biosecurity-fieldbuilding --- Narrated by TYPE III AUDIO.

    4 min
  6. 2D AGO

    “CLR Summer Research Fellowship 2026” by Center on Long-Term Risk, Tristan Cook, Santeri T 🔹

    We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor. You will work on challenging research questions relevant to reducing suffering. You will be integrated into and collaborate with our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can. While this iteration retains the basic structure of previous rounds, there are several key differences: We are particularly interested in applicants who wish to engage in s-risk relevant empirical AI safety work (more details on our priority areas below). We encourage applications from individuals who may be less familiar with CLR's work on s-risk reduction but are nonetheless interested in empirical AI safety research. Our empirical agenda focuses on understanding LLM personas, in particular how malicious traits might arise. We are especially looking for individuals seriously [...] --- Outline: (01:51) About the Summer Research Fellowship (01:55) Purpose of the fellowship (03:29) Priority areas (05:36) What we look for in candidates (07:11) Program details (07:26) Program dates (07:38) Location & office space (08:10) Compensation (08:33) Program length & work quota (08:53) Application process (09:28) Stage 1 (09:52) Stage 2 (10:16) Stage 3 (10:44) Stage 4 (11:13) Why work with CLR (13:13) Inquiries --- First published: March 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/cJBgd6cCke6FQPL5p/clr-summer-research-fellowship-2026 --- Narrated by TYPE III AUDIO.

    14 min