80,000 Hours Podcast

Rob, Luisa, and the 80000 Hours team

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.

  1. Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

    4H AGO

    Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

    Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they'll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan?

    Today's guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going. She thinks there's a meaningful chance we'll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn't reach this conclusion lightly: she's had a ringside seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.

    So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?

    Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:

    Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
    Microbiology enabled bioweapons, but also faster vaccine development.
    The internet allowed lies to disseminate faster, but let fact checks spread just as quickly.

    But she also thinks this case will be much harder. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we'd need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.

    The plan might fail simply because the idea is flawed at conception: it does sound a bit crazy to use an AI you don't trust to make sure that same AI benefits humanity. But even if we find some clever technique to overcome that, we could still fail because the companies don't follow through on their promises. They say redirecting resources to alignment and security is their strategy for dealing with the risks generated by their research — but none of them has made quantitative commitments about what fraction of AI labour they'll redirect during crunch time. And the competitive pressures during a recursive self-improvement loop could be irresistible.

    In today's conversation, Ajeya and Rob discuss what assumptions this plan requires, the specific problems AI could help solve during crunch time, and why — even if we pull it off — we'll be white-knuckling it the whole way through.

    Links to learn more, video, and full transcript: https://80k.info/ac26

    This episode was recorded on October 20, 2025.

    Chapters:
    Cold open (00:00:00)
    Ajeya's strong track record for identifying key AI issues (00:00:43)
    The 1,000-fold disagreement about AI's effect on economic growth (00:02:30)
    Could any evidence actually change people's minds? (00:22:48)
    The most dangerous AI progress might remain secret (00:29:55)
    White-knuckling the 12-month window after automated AI R&D (00:46:16)
    AI help is most valuable right before things go crazy (01:10:36)
    Foundations should go from paying researchers to paying for inference (01:23:08)
    Will frontier AI even be for sale during the explosion? (01:30:21)
    Pre-crunch prep: what we should do right now (01:42:10)
    A grantmaking trial by fire at Coefficient Giving (01:45:12)
    Sabbatical and reflections on effective altruism (02:05:32)
    The mundane factors that drive career satisfaction (02:34:33)
    EA as an incubator for avant-garde causes others won't touch (02:44:07)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcriptions, and web: Katy Moore

    2h 55m
  2. #179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

    FEB 3

    #179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

    Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don't see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that's seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

    From an evolutionary perspective, that's to be expected, right? If your heart or lungs or legs or skin stop working properly while you're a teenager, you're less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

    So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

    Today's guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

    Rebroadcast: This episode originally aired in February 2024.

    Links to learn more, video, and full transcript: https://80k.info/rn

    In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

    How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
    How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
    The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
    How working as both an academic and a practicing psychiatrist shaped Randy's understanding of treating mental health problems.
    The "smoke detector principle" of why we experience so many false alarms along with true threats.
    The origins of morality and capacity for genuine love, and why Randy thinks it's a mistake to try to explain these from a selfish gene perspective.
    Evolutionary theories on why we age and die.
    And much more.

    Chapters:
    Cold Open (00:00:00)
    Rob's Intro (00:00:55)
    The interview begins (00:03:01)
    The history of evolutionary medicine (00:03:56)
    The evolutionary origin of anxiety (00:12:37)
    Design tradeoffs, diseases, and adaptations (00:43:19)
    The trickier case of depression (00:48:57)
    The purpose of low mood (00:54:08)
    Big mood swings vs barely any mood swings (01:22:41)
    Is mental health actually getting worse? (01:33:43)
    A general explanation for bodies breaking (01:37:27)
    Freudianism and the origins of morality and love (01:48:53)
    Evolutionary medicine in general (02:02:42)
    Objections to evolutionary psychology (02:16:29)
    How do you test evolutionary hypotheses to rule out the bad explanations? (02:23:19)
    Striving and meaning in careers (02:25:12)
    Why do people age and die? (02:45:16)

    Producer and editor: Keiran Harris
    Audio Engineering Lead: Ben Cordell
    Technical editing: Dominic Armstrong
    Transcriptions: Katy Moore

    2h 51m
  3. #234 – David Duvenaud on why 'aligned AI' would still kill democracy

    JAN 27

    #234 – David Duvenaud on why 'aligned AI' would still kill democracy

    Democracy might be a brief historical blip. That's the unsettling thesis of a recent paper, which argues that AI capable of doing all the work a human can do inevitably leads to the "gradual disempowerment" of humanity.

    For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution. Today's guest, David Duvenaud, used to lead the 'alignment evals' team at Anthropic, is a professor of computer science at the University of Toronto, and recently co-authored 'Gradual disempowerment.'

    Links to learn more, video, and full transcript: https://80k.info/dd

    He argues democracy wasn't the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting?

    "The reason that states have been treating us so well in the West, at least for the last 200 or 300 years, is because they've needed us," David explains. "Life can only get so bad when you're needed. That's the key thing that's going to change."

    In David's telling, once AI can do everything humans can do but cheaper, citizens become a national liability rather than an asset. With no way to make an economic contribution, their only lever becomes activism — demanding a larger share of redistribution from AI production. Faced with millions of unemployed citizens turned full-time activists, democratic governments trying to retain some "legacy" human rights may find they're at a disadvantage compared to governments that strategically restrict civil liberties.

    But democracy is just one front. The paper argues humans will lose control through economic obsolescence, political marginalisation, and a culture increasingly shaped by machine-to-machine communication — even if every AI does exactly what it's told.

    This episode was recorded on August 21, 2025.

    Chapters:
    Cold open (00:00:00)
    Who's David Duvenaud? (00:00:50)
    Alignment isn't enough: we still lose control (00:01:30)
    Smart AI advice can still lead to terrible outcomes (00:14:14)
    How gradual disempowerment would occur (00:19:02)
    Economic disempowerment: Humans become "meddlesome parasites" (00:22:05)
    Humans become a "criminally decadent" waste of energy (00:29:29)
    Is humans losing control actually bad, ethically? (00:40:36)
    Political disempowerment: Governments stop needing people (00:57:26)
    Can human culture survive in an AI-dominated world? (01:10:23)
    Will the future be determined by competitive forces? (01:26:51)
    Can we find a single good post-AGI equilibrium for humans? (01:34:29)
    Do we know anything useful to do about this? (01:44:43)
    How important is this problem compared to other AGI issues? (01:56:03)
    Improving global coordination may be our best bet (02:04:56)
    The 'Gradual Disempowerment Index' (02:07:26)
    The government will fight to write AI constitutions (02:10:33)
    "The intelligence curse" and Workshop Labs (02:16:58)
    Mapping out disempowerment in a world of aligned AGIs (02:22:48)
    What do David's CompSci colleagues think of all this? (02:29:19)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jake Morris
    Coordination, transcriptions, and web: Katy Moore

    2h 32m
  4. #145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable

    JAN 20

    #145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable

    In many ways, humanity seems to have become more humane and inclusive over time. While there's still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success.

    It's tempting to believe this was inevitable — that the arc of history "bends toward justice," and that as humans get richer, we'll make even more moral progress. But today's guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable.

    Rebroadcast: This episode was originally aired in February 2023.

    Links to learn more, video, and full transcript: https://80k.link/CLB

    While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn't believe any of the arguments for that conclusion pass muster. If he's right, a counterfactual history where slavery remains widespread in 2023 isn't so far-fetched.

    As Christopher lays out in his two key books, Moral Capital: Foundations of British Abolitionism and Arming Slaves: From Classical Times to the Modern Age, slavery has been ubiquitous throughout history. Slavery of some form was fundamental in Classical Greece, the Roman Empire, much of the Islamic civilisation, South Asia, and parts of early modern East Asia, including Korea and China. It was justified on all sorts of grounds that sound mad to us today.

    But according to Christopher, while there's evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s. That movement first conquered Britain and its empire, then eventually the whole world.

    But the fact that there's only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we'd expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail.

    Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and to crush opposition to it with violence wherever necessary. Mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So is it really so hard to imagine we might have done the same with forced labour?

    In this episode, Christopher describes the unique and peculiar set of political, social, and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off.

    Christopher and host Rob Wiblin also discuss:

    Various instantiations of slavery throughout human history
    Signs of antislavery sentiment before the 17th century
    The role of the Quakers in the early British abolitionist movement
    The importance of individual "heroes" in the abolitionist movement
    Arguments against the idea that the abolition of slavery was contingent
    Whether there have ever been any major moral shifts that were inevitable

    Chapters:
    Rob's intro (00:00:00)
    Cold open (00:01:45)
    Who's Christopher Brown? (00:03:00)
    Was abolitionism inevitable? (00:08:53)
    The history of slavery (00:14:35)
    Signs of antislavery sentiment before the 17th century (00:19:24)
    Quakers (00:32:37)
    Attitudes to slavery in other religions (00:44:37)
    Quaker advocacy (00:56:28)
    Inevitability and contingency (01:06:29)
    Moral revolution (01:16:39)
    The importance of specific individuals (01:29:23)
    Later stages of the antislavery movement (01:41:33)
    Economic theory of abolition (01:55:27)
    Influence of knowledge work and education (02:12:15)
    Moral foundations theory (02:20:43)
    Figuring out how contingent events are (02:32:42)
    Least bad argument for why abolition was inevitable (02:41:45)
    Were any major moral shifts inevitable? (02:47:29)

    Producer: Keiran Harris
    Audio mastering: Milo McGuire
    Transcriptions: Katy Moore

    2h 56m
  5. #233 – James Smith on how to prevent a mirror life catastrophe

    JAN 13

    #233 – James Smith on how to prevent a mirror life catastrophe

    When James Smith first heard about mirror bacteria, he was sceptical. But within two weeks, he'd dropped everything to work on it full time, considering it the worst biothreat he'd seen described.

    What convinced him?

    Mirror bacteria would be constructed entirely from molecules that are the mirror images of their naturally occurring counterparts. This seemingly trivial difference creates a fundamental break in the tree of life. For billions of years, the mechanisms underlying immune systems and keeping natural populations of microorganisms in check have evolved to recognise threats by their molecular shape — like a hand fitting into a matching glove.

    Learn more, video, and full transcript: https://80k.info/js26

    Mirror bacteria would break that recognition, creating two enormous problems:

    Many critical immune pathways would likely fail to activate, creating risks of fatal infection across many species.
    Mirror bacteria could have substantial resistance to natural predators: for example, they would be essentially immune to the viruses that currently keep bacterial populations in check. That could help them spread and become irreversibly entrenched across diverse ecosystems.

    Unlike ordinary pathogens, which are typically species-specific, mirror bacteria's reversed molecular structure means they could potentially infect humans, livestock, wildlife, and plants simultaneously. The same fundamental problem — reversed molecular structure breaking immune recognition — could affect most immune systems across the tree of life. People, animals, and plants could be infected by contaminated soil, dust, or other infected species.

    The discovery of these risks came as a surprise. The December 2024 Science paper that brought international attention to mirror life was coauthored by 38 leading scientists, including two Nobel Prize winners and several who had previously wanted to create mirror organisms. James is now the director of the Mirror Biology Dialogues Fund, which supports conversations among scientists and other experts about how these risks might be addressed.

    Scientists tracking the field think that mirror bacteria might be feasible in 10–30 years, or possibly sooner, and substantial components of the cellular machinery needed for mirror life have already been created. We can regulate precursor technologies before mirror life becomes technically feasible — but only if we act before the research crosses critical thresholds. Once certain capabilities exist, we can't undo that knowledge.

    Addressing these risks could actually be very tractable: unlike other technologies where massive potential benefits accompany catastrophic risks, mirror life appears to offer minimal advantages beyond academic interest. Nonetheless, James notes that fewer than 10 people currently work full-time on mirror life risks and governance. This is an extraordinary opportunity for researchers in biosecurity, synthetic biology, immunology, policy, and many other fields to help solve an entirely preventable catastrophe — James even believes the issue is on par with AI safety as a priority for some people, depending on their skill set.

    The Mirror Biology Dialogues Fund is hiring!
    Deputy director: https://80k.info/mbdfdd
    Operations lead: https://80k.info/mbdfops
    Expression of interest for other roles: https://80k.info/mbdfeoi

    This episode was recorded on November 5-6, 2025.

    Chapters:
    Cold open (00:00:00)
    Who's James Smith? (00:00:49)
    Why is mirror life so dangerous? (00:01:12)
    Mirror life and the human immune system (00:15:40)
    Nonhuman animals will also be at risk (00:28:25)
    Will plants be susceptible to mirror bacteria? (00:34:57)
    Mirror bacteria's effect on ecosystems (00:39:34)
    How close are we to making mirror bacteria? (00:52:16)
    Policies for governing mirror life research (01:06:39)
    Countermeasures if mirror bacteria are released into the world (01:22:06)
    Why hasn't mirror life evolved on its own? (01:28:37)
    Why wouldn't antibodies or antibiotics save us from mirror bacteria? (01:31:52)
    Will the environment be toxic to mirror life? (01:39:21)
    Are there too many uncertainties to act now? (01:44:18)
    The potential benefits of mirror molecules and mirror life (01:46:55)
    Might we encounter mirror life in space? (01:52:44)
    Sounding the alarms about mirror life: the backstory (01:54:55)
    How to get involved (02:02:44)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operators: Jeremy Chevillotte and Alex Miles
    Coordination, transcripts, and web: Katy Moore

    2h 10m
  6. #144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon

    JAN 9

    #144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon

    What's the opposite of cancer?

    If you answered "cure," "antidote," or "antivenom" — you've obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today's guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that's cooperating effectively in order to make that multicellular body function.

    If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead.

    Rebroadcast: this episode was originally released in January 2023.

    Links to learn more, video, and full transcript: https://80k.link/AA

    As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:

    Cells will proliferate when they shouldn't.
    Cells won't die when they should.
    Cells won't engage in the kind of division of labour that they should.
    Cells won't do the jobs that they're supposed to do.
    Cells will monopolise resources.
    And cells will trash the environment.

    When we think about animals in the wild, or even bacteria living inside our cells, we understand that they're facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics. We don't normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster. Incredibly, the scope for evolution by natural selection over the course of a single cancer's progression can easily exceed all of the evolutionary time that we have had as humans since Homo sapiens came about.

    Here's a quote from Athena: "So you have to shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we're going to map it onto anything like what we experience, a day is at least 10 years for them, right? So it's a very, very different way of thinking."

    You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don't stop with cancer. They also discuss:

    Cheating within cells themselves
    Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars
    Whether it's too out-there to think of humans as engaging in cancerous behaviour
    Why elephants get deadly cancers less often than humans, despite having way more cells
    When a cell should commit suicide
    The strategy of deliberately not treating cancer aggressively
    Superhuman cooperation

    And at the end of the episode, they cover Athena's new book Everything is Fine! How to Thrive in the Apocalypse, including:

    Staying happy while thinking about the apocalypse
    Practical steps to prepare for the apocalypse
    And whether a zombie apocalypse is already happening among Tasmanian devils

    Chapters:
    Rob's intro (00:00:00)
    The interview begins (00:02:22)
    Cooperation (00:06:12)
    Cancer (00:09:52)
    How multicellular life survives (00:20:10)
    Why our anti-contagious-cancer mechanisms are so successful (00:32:34)
    Why elephants get deadly cancers less often than humans (00:48:50)
    Life extension (01:02:00)
    Honour among cancer thieves (01:06:21)
    When a cell should commit suicide (01:14:00)
    When the human body deliberately produces tumours (01:19:58)
    Surprising approaches for managing cancer (01:25:47)
    Analogies to human cooperation (01:39:32)
    Applying the "not treating cancer aggressively" strategy to real life (01:55:29)
    Humanity on Earth, and Earth in the universe (02:01:53)
    Superhuman cooperation (02:08:51)
    Cheating within cells (02:15:17)
    Father's genes vs. mother's genes (02:26:18)
    Everything is Fine: How to Thrive in the Apocalypse (02:40:13)
    Do we really live in an era of unusual risk? (02:54:53)
    Staying happy while thinking about the apocalypse (02:58:50)
    Overrated worries about the apocalypse (03:13:11)
    The zombie apocalypse (03:22:35)

    Producer: Keiran Harris
    Audio mastering: Milo McGuire
    Transcriptions: Katy Moore

    3h 31m
  7. #142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language

    JAN 6

    #142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language

    John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, he's written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

    Rebroadcast: this episode was originally released in December 2022.

    YouTube video version: https://youtu.be/MEd7TT_nMJE

    Links to learn more, video, and full transcript: https://80k.link/JM

    We ask him what we think are the most important things everyone ought to know about linguistics, including:

    Can you communicate faster in some languages than others, or is there some constraint that prevents that?
    Does learning a second or third language make you smarter or not?
    Can a language decay and get worse at communicating what people want to say?
    If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own?
    Did Shakespeare write in a foreign language, and if so, should we translate his plays?
    How much does language really shape the way we think?
    Are creoles the best languages in the world — languages that ideally we would all speak?
    What would be the optimal number of languages globally?
    Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
    Should we bother to teach foreign languages in UK and US schools?
    Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
    Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make?

    We've also added John's talk "Why the World Looks the Same in Any Language" to the end of this episode. So stick around after the credits!

    Chapters:
    Rob's intro (00:00:00)
    Who's John McWhorter? (00:05:02)
    Does learning another language make you smarter? (00:05:54)
    Updating Shakespeare (00:07:52)
    Should we bother teaching foreign languages in school? (00:12:09)
    Language loss (00:16:05)
    The optimal number of languages for humanity (00:27:57)
    Do we reason about the world using language and words? (00:31:22)
    Can we communicate meaningful information more quickly in some languages? (00:35:04)
    Creole languages (00:38:48)
    AI and the future of language (00:50:45)
    Should we keep ums and ahs in The 80,000 Hours Podcast? (00:59:10)
    Why the World Looks the Same in Any Language (01:02:07)

    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Simon Monsour
    Video editing: Ryan Kessler and Simon Monsour
    Transcriptions: Katy Moore

    1h 35m

