
207 episodes

80,000 Hours Podcast
Rob, Luisa, Keiran, and the 80,000 Hours team
Education
4.8 • 255 Ratings
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.
Great power conflict (Article)
Today’s release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.
If you want to check out the links, footnotes and figures in today’s article, you can find those here.
And if you like this article, you might enjoy a couple of related episodes of this podcast:
#128 – Chris Blattman on the five reasons wars happen
#140 – Bear Braumoeller on the case that war isn’t in decline
Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris
#163 – Toby Ord on the perils of maximising the good that you do
Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?
But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”
Links to learn more, summary and full transcript.
Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.
Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.
This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.
Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they still feel fit and healthy and enjoy the other parts of their life: family, friends, and personal projects.
But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are ones so costly that they were loath to consider them before.
To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.
Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed increased only by a tiny amount, while everything else they were accomplishing dropped off a cliff.
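A toy model makes that arithmetic concrete. This is our own illustrative sketch, not one from the episode, and every name and functional form in it is an assumption: "true value" combines swimming speed (with diminishing returns on effort) and everything else in life, while the proxy being maximised counts speed alone.

```python
import numpy as np

# Toy model (illustrative assumptions only): the proxy objective measures
# good A alone (swimming speed, with diminishing returns on effort),
# while true value also includes good B (everything else in life, which
# loses whatever effort good A absorbs).

def proxy_value(effort_on_a: float) -> float:
    return np.sqrt(effort_on_a)      # diminishing returns on good A

def true_value(effort_on_a: float) -> float:
    good_a = np.sqrt(effort_on_a)
    good_b = 1.0 - effort_on_a       # good B gets the leftover effort
    return good_a + good_b

for effort in (0.25, 0.80, 1.00):
    print(f"effort on A = {effort:.2f}: "
          f"proxy = {proxy_value(effort):.3f}, "
          f"true value = {true_value(effort):.3f}")

# Pushing effort from 0.80 to 1.00 raises the proxy by only ~0.11 while
# *lowering* true value by ~0.09; the true optimum (effort = 0.25) is
# nowhere near the proxy's maximum.
```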
The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.
As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.
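The overfitting analogy is easy to see in miniature. Here's a minimal sketch using assumed toy data, not anything from the episode: polynomials of rising degree are fit to ten noisy samples of a sine wave, and the highest-degree fit "maximises" agreement with the training points while its error on fresh data blows up.

```python
import numpy as np

# Overfitting sketch (assumed toy data): fit polynomials of rising degree
# to 10 noisy samples of a sine wave, then compare error on the training
# points against a fresh, noise-free test grid.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, "
          f"test MSE = {test_mse:.4f}")

# The degree-9 polynomial drives training error to ~0 (it passes through
# every noisy point exactly) while test error explodes: maximising fit to
# the data you have wrecks performance on the data you care about.
```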
In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.
Toby and Rob also discuss:
The rise and fall of FTX and some of its impacts
What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
What utilitarianism has going for it, and what's wrong with it in Toby's view
How to mathematically model the importance of personal integrity
Which AI labs Toby thinks have been acting more responsibly than others
How having a young child affects Toby’s feelings about AI risk
Whether infinite…
The 80,000 Hours Career Guide (2023)
An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon and on Audible.
If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.
#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI
Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.
But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.
Links to learn more, summary and full transcript.
On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.
And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.
In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:
1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium
As Mustafa puts it, "AI is a technology with almost every use case imaginable", and in time that will demand we rethink everything.
Rob and Mustafa discuss the above, as well as:
Whether we should be open sourcing AI models
Whether Mustafa's policy views are consistent with his timelines for transformative AI
How people with very different views on these issues get along at AI labs
The failed efforts (so far) to get a wider range of people involved in these decisions
Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
Whether we'll be blown away by AI progress over the next year
What mandatory regulations governments should be imposing on AI labs right now
Appropriate priorities for the UK's upcoming AI safety summit
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite
"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892.
However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980.
So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" — Michael Webb
In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market.
Links to learn more, summary and full transcript.
They cover:
The jobs most and least exposed to AI
Whether we’ll see mass unemployment in the short term
How long it took other technologies like electricity and computers to have economy-wide effects
Whether AI will increase or decrease inequality
Whether AI will lead to explosive economic growth
What we can learn from history, and reasons to think this time is different
Career advice for a world of LLMs
Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
Michael's take as a musician on AI-generated music
And plenty more
If you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment
"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie
In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.
Links to learn more, summary and full transcript.
They cover:
Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
Her new book about how we could be the first generation to build a sustainable planet
Whether climate change is the most worrying environmental issue
How we reduced outdoor air pollution
Why Hannah is worried about the state of biodiversity
Solutions that address multiple environmental issues at once
How the world coordinated to address the hole in the ozone layer
Surprises from Our World in Data’s research
Psychological challenges that come up in Hannah’s work
And plenty more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Customer Reviews
Nina
Is Nina AI? Is that accent Dublin, Berlin, Kansas City?
Essential listening for all
Delightful podcast - very educational for anyone interested in the world, not just people interested in effective altruism. The hosts are incredibly well prepared and have a lot to add to the conversation without making it about them. The long (many-hour) nature of the conversations allows you to really get a deep understanding of the guests' views, something that is rare for interviewees of this caliber.
Always thoughtful discussions with experts in unexpected fields
This podcast does a great job of taking complex topics and breaking them down into understandable concepts - all without dumbing it down for its audience. Without getting caught up in the typical conversations/debates dominating the news cycle, Rob and his guests discuss some of the most pressing issues facing the world (and add in some good career tips for anyone in that phase of life). Also love their After Hours sister podcast for extra content.