80,000 Hours Podcast with Rob Wiblin
The 80,000 Hours team

169 episodes

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80,000 Hours' wherever you get podcasts.

Produced by Keiran Harris. Hosted by Rob Wiblin, Head of Research at 80,000 Hours.


    #136 – Will MacAskill on what we owe the future

    1. People who exist in the future deserve some degree of moral consideration.
    2. The future could be very big, very long, and/or very good.
    3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
    4. So trying to make the world better for future generations is a key priority of our time.

    This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

    Links to learn more, summary and full transcript.

    From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

    Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

    But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed.

    A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it.

    This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

    But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but all future generations.

    The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back.

    But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

    In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

    If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:

    • How Will was eventually won over to longtermism
    • The three best lines of argument against longtermism
    • How to avoid moral fanaticism
    • Which technologies or events are most likely to have permanent effects
    • What 'longtermists' do today in practice
    • How to predict the long-term effect of our actions
    • Whether the future is likely to be good or bad
    • Concrete ideas to make the future better
    • What Will donates his money to personally
    • Potatoes and megafauna
    • And plenty more

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.


    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    • 2 hrs 54 min

    #135 – Samuel Charap on key lessons from five months of war in Ukraine

    After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we're in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over.

    So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

    Links to learn more, summary and full transcript.

    As Sam lays out, Russia controls much of Ukraine's east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter.

    Each day the war continues, it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict.

    In today's brisk conversation, Rob and Sam cover the following topics:

    • Current territorial control and the level of attrition within Russia’s and Ukraine's military forces.
    • Russia's current goals.
    • Whether Sam's views have changed since March on topics like: Putin's motivations, the wisdom of Ukraine's strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends.
    • Why so many people incorrectly expected Russia to fully mobilise for war or persist with their original approach to the invasion.
    • Whether there's anything to learn from many of our worst fears, such as the use of bioweapons on civilians, not coming to pass.
    • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires).
    • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it's still a long shot.

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.


    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Ryan Kessler
    Transcriptions: Katy Moore

    • 54 min

    #134 – Ian Morris on what big picture history teaches us

    Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

    Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

    Why such big systematic changes — and why these changes specifically?

    That's the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

    Links to learn more, summary and full transcript.

    There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

    In Foragers, Farmers, and Fossil Fuels, Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

    On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

    There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

    Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career.

    In today's episode, we discuss all of Ian's major books, taking on topics such as:

    • Why the Industrial Revolution happened in England rather than China
    • Whether or not wars can lead to less violence
    • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions
    • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration
    • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces”
    • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa
    • In what sense Ian thinks Brexit was “10,000 years in the making”
    • The most common misconceptions about macrohistory

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.


    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    • 3 hrs 41 min

    #133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

    On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.

    That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly-capable AI systems seriously.

    Links to learn more, summary and full transcript.

    Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity's future including nuclear war, synthetic biology, and AI.

    Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a website called 'Improve The News' to help readers separate facts from spin.

    But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind.

    You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.
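
    Purely as an illustration of the kind of query being described (this sketch is not from the episode), here is roughly how one might pose that same multi-hop question to a GPT-3-era text-completion model using the older, pre-1.0 OpenAI Python SDK; the model name, SDK version, and parameters are assumptions rather than anything Max or the episode prescribes.

    ```python
    # Illustrative sketch only: sending the Mount Rushmore question to a
    # GPT-3-era completion model via the pre-1.0 OpenAI Python SDK.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is set in the environment

    prompt = (
        "I'm going to go to this mountain with the faces on it. "
        "What is the capital of the state to the east of the state that that's in?"
    )

    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3-era model name
        prompt=prompt,
        max_tokens=32,
        temperature=0,  # keep the answer deterministic for a factual query
    )

    # If the model reasons correctly, the completion should name Saint Paul, Minnesota.
    print(response.choices[0].text.strip())
    ```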

    So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.

    He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

    Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare?

    Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.

    They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations.

    They also cover:

    • Whether we could understand what superintelligent systems were doing
    • The value of encouraging people to think about the positive future they want
    • How to give machines goals
    • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
    • Whether we’re sleepwalking into disaster
    • Whether people actually just want their biases confirmed
    • Why Max is worried about government-backed fact-checking
    • And much more

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.


    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    • 2 hrs 57 min

    #132 – Nova DasSarma on why information security may be critical to the safe development of AI systems

    If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free.

    This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

    Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

    Links to learn more, summary and full transcript.

    The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

    If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

    If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off.

    As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic, perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly.

    If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

    We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

    In today's conversation, Rob and Nova cover:

    • How good or bad information security is today
    • The most secure computer systems that exist
    • How to design an AI training compute centre for maximum efficiency
    • Whether 'formal verification' can help us design trustworthy systems
    • How wide the gap is between AI capabilities and AI safety
    • How to disincentivise hackers
    • What listeners should do to strengthen their own security practices
    • And much more.

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.


    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Beppe Rådvik
    Transcriptions: Katy Moore

    • 2 hrs 42 min

    #131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world

    “We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!”

    If you were a contestant on such a TV show, you'd love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on.

    Today’s guest, Lewis Dartnell, has gone as far in compiling this information as anyone has, with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm.

    Links to learn more, summary and full transcript.

    But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collecting coal from the surface of the Earth, will be impossible the second time around.

    As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it's just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.”

    So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon and nanoscale manufacturing — essentially the same technology used to make computer microchips — so actually making solar panels would be incredibly difficult.

    Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make.

    A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world.

    The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments.

    This is clearly not a trivial task. Lewis's own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote.

    And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities?

    As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year.

    They cover:

    • The biggest impediments to bouncing back
    • The reality of humans trying to actually do this
    • The most valuable pro-resilience adjustments we can make today
    • How to recover without much coal or oil
    • How to feed the Earth in disasters
    • And the most exciting recent findings in astrobiology

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.


    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    • 1 hr 5 min

