80,000 Hours Podcast

The 80,000 Hours team

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.

  1. Will MacAskill – AI character, surviving the intelligence explosion, and the case against utopia

    1 DAY AGO

    Will MacAskill – AI character, surviving the intelligence explosion, and the case against utopia

    Hundreds of millions already turn to AI on the most personal of topics — therapy, political opinions, and how to treat others. And as AI takes over more of the economy, the character of these systems will shape culture on an even grander scale, ultimately becoming “the personality of most of the world’s workforce.” So… should they be designed to push us towards the better angels of our nature? Or simply do as we ask?

    Will MacAskill, philosopher and senior research fellow at Forethought, has been thinking through that and the other thorniest issues that come up in designing an AI personality. He’s also been exploring how we might coexist peacefully with the ‘superintelligent AI’ companies are racing to build. He concludes that we should train such systems to be very risk averse, pay them for their work, and build institutions that enable humans to make credible contracts with AIs themselves.

    Will and host Rob Wiblin also discuss what a good world after superintelligence would actually look like — a subject that has received surprisingly little attention from the people working to make it happen. Will argues that we shouldn’t aim for a specific utopian vision: we don’t know enough about what the best possible future actually is to aim directly for it, and trying to lock in today’s best guesses forever risks baking in errors we can’t yet see.

    Will and Rob explore what we can do to steer towards a good future instead, along with why a coalition of democracies building superintelligence together is safer than any single actor, how absurdly useful ChatGPT is for analytic philosophy, and more.

    Learn more, video, and full transcript: https://80k.info/wm26

    This episode was recorded on February 6, 2026.

    Chapters:
    Cold open (00:00:00)
    Will MacAskill is back — for a 6th time! (00:00:29)
    AIs’ “character” could be vital to securing a good future (00:00:59)
    The panic over sycophancy is justified (00:07:54)
    How opinionated should AI be about ethics? (00:12:59)
    Commercial pressures won’t fully determine AI character (00:29:38)
    Risk-averse AI would rather strike a deal than attempt a coup (00:36:46)
    A coalition of democracies building superintelligence is safer than one doing it alone (01:06:40)
    How selfish agents could fund the common good (01:19:13)
    Why not push for pausing AI development? (01:38:39)
    Effective altruism is making a comeback post-SBF (01:48:18)
    EA in the age of AGI (01:56:15)
    Viatopia: an alternative to utopia (02:05:08)
    The least bad alternative to total utilitarianism? (02:34:42)
    How AI could kickstart a golden age of philosophy (02:58:03)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Alex Miles
    Production: Elizabeth Cox, Nick Stockton, and Katy Moore

    3h 9m
  2. Risks from power-seeking AI systems (article narration by Zershaaneh Qureshi)

    16 APR

    Risks from power-seeking AI systems (article narration by Zershaaneh Qureshi)

    Hundreds of prominent AI scientists and other notable figures signed a statement in 2023 saying that mitigating the risk of extinction from AI should be a global priority. At 80,000 Hours, we’ve considered risks from AI to be the world’s most pressing problem since 2016.

    But what led us to this conclusion? Could AI really cause human extinction? We’re not certain, but we think the risk is worth taking very seriously. In particular, as companies create increasingly powerful AI systems, there’s a concerning chance that:

    These AI systems may develop dangerous long-term goals we don’t want.
    To pursue these goals, they may seek power and undermine the safeguards meant to contain them.
    They may even aim to disempower humanity and potentially cause our extinction.

    This article is written by Cody Fenwick and Zershaaneh Qureshi, and narrated by Zershaaneh Qureshi. It discusses why future AI systems could disempower humanity, what current AI research reveals about behaviours like power-seeking and deception, and how you can help mitigate the dangers.

    You can see the original article — packed with graphs, images, footnotes, and further resources — on the 80,000 Hours website: https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/

    Chapters:
    Risks from power-seeking AI systems (00:01:00)
    Introduction (00:01:17)
    Summary (00:03:09)
    Why are the risks from power-seeking AI a pressing world problem? (00:04:04)
    Section 1: Humans will likely build advanced AI systems with long-term goals (00:05:43)
    Section 2: AIs with long-term goals may be inclined to seek power (00:11:32)
    Section 3: These power-seeking AI systems could successfully disempower humanity (00:26:26)
    Section 4: People might create power-seeking AI systems without enough safeguards, despite the risks (00:38:34)
    Section 5: Work on this problem is neglected and tractable (00:47:37)
    Section 6: What are the arguments against working on this problem? (00:59:20)
    Section 7: How you can help (01:25:07)
    Thank you for listening (01:28:56)

    Audio editing: Dominic Armstrong
    Production: Zershaaneh Qureshi, Elizabeth Cox, and Katy Moore

    1h 30m
  3. Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

    7 APR

    Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

    What does it really take to lift millions out of poverty and prevent needless deaths? In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development.

    You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India. What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.

    Full transcript and links to learn more: https://80k.info/ghd

    Chapters:
    Cold open (00:00:00)
    Luisa’s intro (00:00:58)
    Development consultant Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds (00:02:15)
    Economist Dean Spears on the social forces and gender inequality that contribute to neonatal mortality in Uttar Pradesh (00:06:55)
    Charity founder Sarah Eustis-Guthrie on what we can learn from the massive failure of PlayPumps (00:14:33)
    Economist Rachel Glennerster on how randomised controlled trials are just one way to better understand tricky development problems (00:19:05)
    Data scientist Hannah Ritchie on why improving agricultural productivity in sub-Saharan Africa is critical to solving global poverty (00:24:36)
    Charity founder Lucia Coulter on the huge, neglected upsides of reducing lead exposure (00:47:48)
    Malaria expert James Tibenderana on using gene drives to wipe out the species of mosquitoes that cause malaria (00:53:11)
    Charity founder Varsha Venugopal on using village gossip to get kids their critical immunisations (01:04:14)
    Rachel Glennerster on solving tough global problems by creating the right incentives for innovation (01:11:31)
    Karen Levy on when governments should pay for programmes instead of NGOs (01:26:51)
    Open Philanthropy lead Alexander Berger on declining returns in global health, and finding and funding the most cost-effective interventions (01:29:40)
    GiveWell researcher James Snowden on making funding decisions with tricky moral weights (01:34:44)
    Lucia Coulter on “hits-based giving” approaches to funding global health and development projects (01:43:01)
    Rachel Glennerster on whether it’s better to fix problems in education with small-scale interventions versus systemic reforms (01:48:12)
    GiveDirectly cofounder Paul Niehaus on why it’s so important to give aid recipients a choice in how they spend their money (01:51:09)
    Sarah Eustis-Guthrie on whether more charities should scale back or shut down, and aligning incentives with beneficiaries (01:56:12)
    James Tibenderana on why we need loads better data to harness the power of AI to eradicate malaria (02:11:22)
    Lucia Coulter on rapidly scaling a light-touch intervention to more countries (02:20:14)
    Karen Levy on why pre-policy plans are so great at aligning perspectives (02:32:47)
    Rachel Glennerster on the value we get from doing the right RCTs well (02:40:04)
    Economist Mushtaq Khan on really drilling down into why “context matters” for development work (02:50:13)
    GiveWell cofounder Elie Hassenfeld on contrasting GiveWell’s approach with the subjective wellbeing approach of Happier Lives Institute (02:57:24)
    James Tibenderana on whether people actually use antimalarial bed nets for fishing — and why that’s the wrong thing to focus on (03:05:30)
    Karen Levy on working with governments to get big results (03:10:53)
    Leah Utyasheva on how a simple intervention reduced suicide in Sri Lanka by 70% (03:17:38)
    Karen Levy on working with academics to get the best results on the ground (03:29:03)
    James Tibenderana on the value of working with local researchers (03:32:15)
    Lucia Coulter on getting buy-in from both industry and government (03:35:05)
    Alexander Berger on reasons neartermist work makes sense even by longtermist standards (03:39:26)
    Economist Shruti Rajagopalan on the key skills to succeed in public policy careers, and seeing economics in everything (03:47:42)
    J-PAL lead Claire Walsh on her career advice for young people who want to get involved in global health and development (03:55:20)

    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Music: CORBIT
    Coordination, transcriptions, and web: Katy Moore

    4h 7m
  4. What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.

    3 APR

    What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.

    When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)

    Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon

    Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.

    Watch on YouTube: The Meta Leaks Are Worse Than You Think

    Chapters:
    Introduction (00:00:00)
    What Everyone is Missing about Anthropic vs The Pentagon (00:00:26)
    Charge 1: Hypocrisy (00:01:21)
    Charge 2: Naivety (00:04:55)
    Charge 3: Undemocratic (00:09:38)
    You don't have to debate on their terms (00:12:32)
    The Meta Leaks Are Worse Than You Think (00:13:43)
    Three fixes for social media's scam problem (00:16:48)
    We should regulate AI companies as strictly as banks (00:18:46)

    Video and audio editing: Dominic Armstrong and Simon Monsour
    Transcripts and web: Elizabeth Cox and Katy Moore

    21m
  5. #241 – Richard Moulange on how AI now codes viable genomes from scratch and outperforms virologists at lab work — what could go wrong?

    31 MAR

    #241 – Richard Moulange on how AI now codes viable genomes from scratch and outperforms virologists at lab work — what could go wrong?

    Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.

    That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on AI–biosecurity — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.

    For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right. But as of 2025 that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crush top human virologists even in their self-declared area of greatest specialisation and expertise — 45% to 22%. Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.

    Richard joins host Rob Wiblin to discuss all that plus:
    What AI biology tools already exist
    Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
    The three main categories of defence we can pursue
    Whether there’s a plausible path to a world where engineered pandemics become a thing of the past

    This episode was recorded on January 16, 2026. Since recording this episode, Richard has been seconded to the UK Government — please note that his views expressed here are entirely his own.

    Links to learn more, video, and full transcript: https://80k.info/rm

    Announcements:
    Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It’s a completely revised and updated edition of our existing career guide, with a big new updated section on AI — covering both the risks and the potential to steer it in a better direction, and how AI automation should affect your career planning and which skills you choose to specialise in. Preorder now: https://geni.us/80000Hours
    We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor

    Chapters:
    Cold open (00:00:00)
    Who's Richard Moulange? (00:00:31)
    AI can now design novel genomes (00:01:11)
    The end of the 'tacit knowledge' barrier (00:04:34)
    Are risks from bioterrorists overstated? (00:18:20)
    The 3 key disasters AI makes more likely (00:22:41)
    Which bad actors does AI help the most? (00:30:03)
    Experts are more scary than amateurs (00:41:17)
    Barriers to bioterrorists using AI (00:46:43)
    AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
    Advanced AI biology tools we already have or will soon (01:04:10)
    Rob argues that the situation is hopeless (01:09:49)
    Intervention #1: Limit access (01:18:16)
    Intervention #2: Get AIs to refuse to help (01:32:58)
    Intervention #3: Surveillance and attribution (01:42:38)
    Intervention #4: Universal vaccines and antivirals (01:56:38)
    Intervention #5: Screen all orders for DNA (02:10:00)
    AI companies talk about def/acc more than they fund it (02:19:52)
    Can you build a profitable business solving this problem? (02:26:32)
    This doesn't have to interfere with useful science (much) (02:30:56)
    What are the best low-tech interventions? (02:33:01)
    Richard's top request for AI companies (02:37:59)
    Grok shows governments lack many legal levers (02:53:17)
    Best ways listeners can help fix AI-bio (02:56:24)
    We might end all contagious disease in 20 years (03:03:37)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jeremy Chevillotte
    Transcripts and web: Elizabeth Cox and Katy Moore

    3h 8m
  6. #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war

    24 MAR

    #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war

    Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.

    That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. He forecasts instead a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.

    Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and collapsed diplomatic relations between the two. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.

    What he prescribes isn’t a full peace treaty; it’s a negotiated settlement that stops the killing and begins a longer negotiation that gives neither side exactly what it wants, but just enough to deter renewed aggression. Both sides stop dying and the flames of war fizzle — hopefully.

    None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.

    Links to learn more, video, and full transcript: https://80k.info/sc26

    This episode was recorded on February 27, 2026.

    Chapters:
    Cold open (00:00:00)
    Could peace in Ukraine lead to Europe’s next war? (00:00:47)
    Do Russia’s motives for war still matter? (00:11:41)
    What does a good ceasefire deal look like? (00:17:38)
    What’s still holding back a ceasefire (00:38:44)
    Why Russia might accept Ukraine’s EU membership (00:46:00)
    How to prevent a spiraling conflict with NATO (00:48:00)
    What’s next for nuclear arms control (00:49:57)
    Finland and Sweden strengthened NATO — but also raised the stakes for conflict (00:53:25)
    Putin isn’t Hitler: How to negotiate with autocrats (00:56:35)
    Why Russia still takes NATO seriously (01:02:01)
    Neither side wants to fight this war again (01:10:49)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore

    1h 12m
  7. #239 – Rose Hadshar on why automating all human labour will break our political system

    17 MAR

    #239 – Rose Hadshar on why automating all human labour will break our political system

    The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.

    That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment. She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.

    Almost nobody wants this to happen — but we may find ourselves unable to prevent it. If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we’re relying on to prevent the worst outcomes?

    Rose has answers, and they’re not all reassuring. But she’s also hopeful we can make society more robust against these dynamics. We’ve got literally centuries of thinking about checks and balances to draw on. And there are some interventions she’s excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.

    Rose discusses all of this, and more, with host Zershaaneh Qureshi in today’s episode.

    Links to learn more, video, and full transcript: https://80k.info/rh

    This episode was recorded on December 18, 2025.

    Chapters:
    Cold open (00:00:00)
    Who's Rose Hadshar? (00:01:05)
    Three dynamics that could reshape political power in the AI era (00:02:37)
    AI gives small groups the productive power of millions (00:12:49)
    Dynamic 1: When a software update becomes a power grab (00:20:41)
    Dynamic 2: When AI labour means governments no longer need their citizens (00:31:20)
    How democracy could persist in name but not substance (00:45:15)
    Dynamic 3: When AI filters our reality (00:54:54)
    Good intentions won't stop power concentration (01:08:27)
    Slower-moving worlds could still get scary (01:23:57)
    Why AI-powered tyranny will be tough to topple (01:31:53)
    How power concentration compares to "gradual disempowerment" (01:38:18)
    Some interventions are cross-cutting — and others could backfire (01:43:54)
    What fighting back actually looks like (01:55:15)
    Why power concentration researchers should avoid getting too "spicy" (02:04:10)
    Why the "Manhattan Project" approach should worry you — but truly international projects might not be safe either (02:09:18)
    Rose wants to keep humans around! (02:12:06)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    2h 14m
