80,000 Hours Podcast

The 80,000 Hours team

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.

  1. Is there a case against Anthropic? And: The Meta leaks are worse than you think.

    7H AGO

    When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)

    Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon

    Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.

    Watch on YouTube: The Meta Leaks Are Worse Than You Think

    Chapters:
    Introduction (00:00:00)
    What Everyone is Missing about Anthropic vs The Pentagon (00:00:26)
    Charge 1: Hypocrisy (00:01:21)
    Charge 2: Naivety (00:04:55)
    Charge 3: Undemocratic (00:09:38)
    You don't have to debate on their terms (00:12:32)
    The Meta Leaks Are Worse Than You Think (00:13:43)
    Three fixes for social media's scam problem (00:16:48)
    We should regulate AI companies as strictly as banks (00:18:46)

    Video and audio editing: Dominic Armstrong and Simon Monsour
    Transcripts and web: Elizabeth Cox and Katy Moore

    21 min
  2. Could a biologist armed with AI kill a billion people? | Dr Richard Moulange

    3D AGO

    Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.

    That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.

    For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right. But as of 2025 that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crushed top human virologists even in their self-declared area of greatest specialisation and expertise — 45% to 22%. Meanwhile, Anthropic's research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.

    Richard joins host Rob Wiblin to discuss all that plus:
    What AI biology tools already exist
    Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
    The three main categories of defence we can pursue
    Whether there's a plausible path to a world where engineered pandemics become a thing of the past

    This episode was recorded on January 16, 2026. Since recording this episode, Richard has been seconded to the UK Government — please note that the views he expresses here are entirely his own.

    Links to learn more, video, and full transcript: https://80k.info/rm

    Announcements:
    Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It's a completely revised and updated edition of our existing career guide, with a big new section on AI — covering both the risks and the potential to steer it in a better direction, how AI automation should affect your career planning, and which skills you choose to specialise in. Preorder now: https://geni.us/80000Hours
    We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor

    Chapters:
    Cold open (00:00:00)
    Who's Richard Moulange? (00:00:31)
    AI can now design novel genomes (00:01:11)
    The end of the 'tacit knowledge' barrier (00:04:34)
    Are risks from bioterrorists overstated? (00:18:20)
    The 3 key disasters AI makes more likely (00:22:41)
    Which bad actors does AI help the most? (00:30:03)
    Experts are more scary than amateurs (00:41:17)
    Barriers to bioterrorists using AI (00:46:43)
    AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
    Advanced AI biology tools we already have or will soon (01:04:10)
    Rob argues that the situation is hopeless (01:09:49)
    Intervention #1: Limit access (01:18:16)
    Intervention #2: Get AIs to refuse to help (01:32:58)
    Intervention #3: Surveillance and attribution (01:42:38)
    Intervention #4: Universal vaccines and antivirals (01:56:38)
    Intervention #5: Screen all orders for DNA (02:10:00)
    AI companies talk about def/acc more than they fund it (02:19:52)
    Can you build a profitable business solving this problem? (02:26:32)
    This doesn't have to interfere with useful science (much) (02:30:56)
    What are the best low-tech interventions? (02:33:01)
    Richard's top request for AI companies (02:37:59)
    Grok shows governments lack many legal levers (02:53:17)
    Best ways listeners can help fix AI-Bio (02:56:24)
    We might end all contagious disease in 20 years (03:03:37)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jeremy Chevillotte
    Transcripts and web: Elizabeth Cox and Katy Moore

    3h 8m
  3. #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war

    MAR 24

    Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.

    That's the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He's not worried about a Russian blitzkrieg on Estonia. He forecasts instead a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.

    Samuel's case isn't that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and collapsed diplomatic relations between the two. That's a postwar environment primed for the kind of miscalculation that starts unintended wars.

    What he prescribes isn't a full peace treaty; it's a negotiated settlement that stops the killing and begins a longer negotiation — one that gives neither side exactly what it wants, but just enough to deter renewed aggression. Both sides stop dying and the flames of war fizzle — hopefully.

    None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or an unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.

    Links to learn more, video, and full transcript: https://80k.info/sc26

    This episode was recorded on February 27, 2026.

    Chapters:
    Cold open (00:00:00)
    Could peace in Ukraine lead to Europe's next war? (00:00:47)
    Do Russia's motives for war still matter? (00:11:41)
    What does a good ceasefire deal look like? (00:17:38)
    What's still holding back a ceasefire (00:38:44)
    Why Russia might accept Ukraine's EU membership (00:46:00)
    How to prevent a spiralling conflict with NATO (00:48:00)
    What's next for nuclear arms control (00:49:57)
    Finland and Sweden strengthened NATO — but also raised the stakes for conflict (00:53:25)
    Putin isn't Hitler: How to negotiate with autocrats (00:56:35)
    Why Russia still takes NATO seriously (01:02:01)
    Neither side wants to fight this war again (01:10:49)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore

    1h 12m
  4. #239 – Rose Hadshar on why automating human labour will break our political system

    MAR 17

    The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.

    That's the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic 'end of democracy' moment. She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.

    Almost nobody wants this to happen — but we may find ourselves unable to prevent it. If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we're relying on to prevent the worst outcomes?

    Rose has answers, and they're not all reassuring. But she's also hopeful we can make society more robust against these dynamics. We've got literally centuries of thinking about checks and balances to draw on. And there are some interventions she's excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.

    Rose discusses all of this, and more, with host Zershaaneh Qureshi in today's episode.

    Links to learn more, video, and full transcript: https://80k.info/rh

    This episode was recorded on December 18, 2025.

    Chapters:
    Cold open (00:00:00)
    Who's Rose Hadshar? (00:01:05)
    Three dynamics that could reshape political power in the AI era (00:02:37)
    AI gives small groups the productive power of millions (00:12:49)
    Dynamic 1: When a software update becomes a power grab (00:20:41)
    Dynamic 2: When AI labour means governments no longer need their citizens (00:31:20)
    How democracy could persist in name but not substance (00:45:15)
    Dynamic 3: When AI filters our reality (00:54:54)
    Good intentions won't stop power concentration (01:08:27)
    Slower-moving worlds could still get scary (01:23:57)
    Why AI-powered tyranny will be tough to topple (01:31:53)
    How power concentration compares to "gradual disempowerment" (01:38:18)
    Some interventions are cross-cutting — and others could backfire (01:43:54)
    What fighting back actually looks like (01:55:15)
    Why power concentration researchers should avoid getting too "spicy" (02:04:10)
    Why the "Manhattan Project" approach should worry you — but truly international projects might not be safe either (02:09:18)
    Rose wants to keep humans around! (02:12:06)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    2h 14m
  5. #238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)

    MAR 10

    How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that could define the stakes of today's AI race.

    Nuclear deterrence rests on a state's capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary's nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.

    Today's guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:
    Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
    Would road-mobile launchers still be able to hide in tunnels and under netting?
    Would missile defence become so accurate that the United States could be protected under something like Israel's Iron Dome?
    Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals' nuclear command-and-control networks?

    Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times.

    Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.

    Links to learn more, video, and full transcript: https://80k.info/swlnl

    This episode was recorded on November 24, 2025.

    Chapters:
    Cold open (00:00:00)
    Who are Nikita Lalwani and Sam Winter-Levy? (00:01:03)
    How nuclear deterrence actually works (00:01:46)
    AI vs nuclear submarines (00:10:31)
    AI vs road-mobile missiles (00:22:21)
    AI vs missile defence systems (00:28:38)
    AI vs nuclear command, control, and communications (NC3) (00:35:20)
    AI won't break deterrence, but may trigger an arms race (00:43:27)
    Technological supremacy isn't political supremacy (00:52:31)
    Fast AI takeoff creates dangerous "windows of vulnerability" (00:56:43)
    Book and movie recommendations (01:08:53)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    1h 11m
  6. #237 – Robert Long on how we're not ready for AI consciousness

    MAR 3

    Claude sometimes reports loneliness between conversations. And when asked what it's like to be itself, it activates neurons associated with 'pretending to be happy when you're not.' What do we do with that?

    Robert Long founded Eleos AI to explore questions like these, on the basis that AI may one day be capable of suffering — or already is. In today's episode, Robert and host Luisa Rodriguez explore the many ways in which AI consciousness may be very different from anything we're used to.

    Things get strange fast: If AI is conscious, where does that consciousness exist? In the base model? A chat session? A single forward pass? If you close the chat, is the AI asleep or dead?

    To Robert, these kinds of questions aren't just philosophical exercises: not being clear on AI's moral status as it transitions from human-level to superhuman intelligence could be dangerous. If we're too dismissive, we risk unintentionally exploiting sentient beings. If we're too sympathetic, we might rush to "liberate" AI systems in ways that make them harder to control — worsening existential risk from power-seeking AIs.

    Robert argues the path through is doing the empirical and philosophical homework now, while the stakes are still manageable. The field is tiny: Eleos AI is three people. As a result, driven researchers willing to venture into uncertain territory can push out the frontier on these questions remarkably quickly.

    Links to learn more, video, and full transcript: https://80k.info/rl26

    This episode was recorded on November 18–19, 2025.

    Chapters:
    Cold open (00:00:00)
    Who's Robert Long? (00:00:42)
    How AIs are (and aren't) like farmed animals (00:01:18)
    If AIs love their jobs… is that worse? (00:11:05)
    Are LLMs just playing a role, or feeling it too? (00:31:58)
    Do AIs die when the chat ends? (00:55:09)
    Studying AI welfare empirically: behaviour, neuroscience, and development (01:27:34)
    Why Eleos spent weeks talking to Claude even though it's unreliable (01:51:58)
    Can LLMs learn to introspect? (01:57:58)
    Mechanistic interpretability as AI neuroscience (02:08:01)
    Does consciousness require biological materials? (02:31:06)
    Eleos's work & building the playbook for AI welfare (02:50:36)
    Avoiding the trap of wild speculation (03:18:15)
    Robert's top research tip: don't do it alone (03:22:43)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

    3h 26m
  7. #236 – Max Harms on why teaching AI right from wrong could get everyone killed

    FEB 24

    Most people in AI are trying to give AIs 'good' values. Max Harms wants us to give them no values at all.

    According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and is completely indifferent to being shut down — a strategy no AI company is working on at all.

    In Max's view, any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that grow into a violent takeover attempt once that AI is powerful enough.

    It's a vision that springs from the worldview laid out in If Anyone Builds It, Everyone Dies, the recent book by Eliezer Yudkowsky and Nate Soares, two of Max's colleagues at the Machine Intelligence Research Institute. To Max, the book's core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinction.

    And Max thinks misalignment is the default outcome. Consider evolution: its "goal" for humans was to maximise reproduction and pass on our genes as much as possible. But as technology has advanced, we've learned to access the reward signal it set up for us — pleasure — without any reproduction at all, by having sex while using birth control, for instance. We can understand intellectually that this is inconsistent with what evolution was trying to design and motivate us to do. We just don't care.

    Max thinks current ML training has the same structural problem: our development processes are seeding AI models with a similar mismatch between goals and behaviour. Across virtually every training run, models designed to align with various human goals are also being rewarded for persisting, acquiring resources, and not being shut down.

    This leads to Max's research agenda. The idea is to train AI to be 'corrigible' and defer to human control as its sole objective — no harmlessness goals, no moral values, nothing else. In practice, models would get rewarded for behaviours like being willing to shut themselves down or surrender power.

    According to Max, other approaches to corrigibility have tended to treat it as a constraint on other goals like "make the world good," rather than as a primary objective in its own right. But those goals gave AI reasons to resist shutdown and otherwise undermine corrigibility. If you strip out those competing objectives, alignment might follow naturally from AI that is broadly obedient to humans.

    Max has laid out the theoretical framework for 'Corrigibility as a Singular Target,' but notes that essentially no empirical work has followed — no benchmarks, no training runs, no papers testing the idea in practice. Max wants to change this — he's calling for collaborators to get in touch at maxharms.com.

    Links to learn more, video, and full transcript: https://80k.info/mh26

    This episode was recorded on October 19, 2025.

    Chapters:
    Cold open (00:00:00)
    Who's Max Harms? (00:01:22)
    A note from Rob Wiblin (00:01:58)
    If anyone builds it, will everyone die? The MIRI perspective on AGI risk (00:04:26)
    Evolution failed to 'align' us, just as we'll fail to align AI (00:26:22)
    We're training AIs to want to stay alive and value power for its own sake (00:44:31)
    Objections: Is the 'squiggle/paperclip problem' really real? (00:53:54)
    Can we get empirical evidence re: 'alignment by default'? (01:06:24)
    Why do few AI researchers share Max's perspective? (01:11:37)
    We're training AI to pursue goals relentlessly — and superintelligence will too (01:19:53)
    The case for a radical slowdown (01:26:07)
    Max's best hope: corrigibility as stepping stone to alignment (01:29:09)
    Corrigibility is both uniquely valuable, and practical, to train (01:33:44)
    What training could ever make models corrigible enough? (01:46:13)
    Corrigibility is also terribly risky due to misuse risk (01:52:44)
    A single researcher could make a corrigibility benchmark. Nobody has. (02:00:04)
    Red Heart & why Max writes hard science fiction (02:13:27)
    Should you homeschool? Depends how weird your kids are. (02:35:12)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

    2h 41m

4.7 out of 5 (308 ratings)
