80,000 Hours Podcast

Rob, Luisa, Keiran, and the 80,000 Hours team

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.

  1. #205 – Sébastien Moro on the most insane things fish can do

    4 DAYS AGO

    "You have a tank split in two parts: if the fish gets in the compartment with a red circle, it will receive food, and food will be delivered in the other tank as well. If the fish takes the blue triangle, this fish will receive food, but nothing will be delivered in the other tank. So we have a prosocial choice and antisocial choice. When there is no one in the other part of the tank, the male is choosing randomly. If there is a male, a possible rival: antisocial — almost 100% of the time. Now, if there is his wife — his female, this is a prosocial choice all the time. "And now a question: Is it just because this is a female or is it just for their female? Well, when they're bringing a new female, it’s the antisocial choice all the time. Now, if there is not the female of the male, it will depend on how long he's been separated from his female. At first it will be antisocial, and after a while he will start to switch to prosocial choices." —Sébastien Moro In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience. Links to learn more, highlights, and full transcript. They cover: The insane capabilities of fish in tests of memory, learning, and problem-solving.Examples of fish that can beat primates on cognitive tests and recognise individual human faces.Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.Whether fish can experience emotions, and how this is even studied.The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.Ethical issues raised by evidence that fish may be conscious and experience suffering.And plenty more.Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

    3 h 11 min
  2. #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

    16 OCT.

    Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

    Links to learn more, highlights, video, and full transcript.

    On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

    But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

    Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

    Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome. Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

    But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:

    - How would Nate spend $10 billion differently than today’s philanthropists influenced by EA?
    - Is anyone else competitive with EA in terms of impact per dollar?
    - Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
    - Is EA too big a tent to function?
    - What global problems could EA be ignoring?
    - Should EA be more willing to court controversy?
    - Does EA’s niceness leave it vulnerable to exploitation?
    - What moral philosophy would he have modelled EA on?

    Rob and Nate also talk about:

    - Nate’s theory of Sam Bankman-Fried’s psychology.
    - Whether we had to “raise or fold” on COVID.
    - Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
    - “Winners’ tilt.”
    - Whether it’s selfish to slow down AI progress.
    - The ridiculous 13 Keys to the White House.
    - Whether prediction markets are now overrated.
    - Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
    - And plenty more.

    Chapters:

    - Cold open (00:00:00)
    - Rob's intro (00:01:03)
    - The interview begins (00:03:08)
    - Sam Bankman-Fried and trust in the effective altruism community (00:04:09)
    - Expected value (00:19:06)
    - Similarities and differences between Sam Altman and SBF (00:24:45)
    - How would Nate do EA differently? (00:31:54)
    - Reservations about utilitarianism (00:44:37)
    - Game theory equilibrium (00:48:51)
    - Differences between EA culture and rationalist culture (00:52:55)
    - What would Nate do with $10 billion to donate? (00:57:07)
    - COVID strategies and tradeoffs (01:06:52)
    - Is it selfish to slow down AI progress? (01:10:02)
    - Democratic legitimacy of AI progress (01:18:33)
    - Dubious election forecasting (01:22:40)
    - Assessing how reliable election forecasting models are (01:29:58)
    - Are prediction markets overrated? (01:41:01)
    - Venture capitalists and risk (01:48:48)

    Producer and editor: Keiran Harris
    Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Video engineering: Simon Monsour
    Transcriptions: Katy Moore

    1 h 58 min
  3. #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

    3 OCT.

    "In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it. "The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process." —Peter Godfrey-Smith In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World. Links to learn more, highlights, and full transcript. They cover: Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.How the role of culture has been crucial in enabling human technological progress.Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.Whether we can and should avoid death by uploading human minds.And plenty more.Chapters: Cold open (00:00:00)Luisa's intro (00:00:57)The interview begins (00:02:12)Wild animal suffering and rewilding (00:04:09)Thinking about death (00:32:50)Uploads of ourselves (00:38:04)Culture and how minds make things happen (00:54:05)Challenges for water-based animals (01:01:37)The importance of sea-to-land transitions in animal life (01:10:09)Luisa's outro (01:23:43)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

    1 h 25 min
  4. Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame

    27 SEPT.

    In this episode from our second show, 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.

    Links to learn more, highlights, and full transcript.

    They cover:

    - Keiran’s views on free will, and how he came to hold them
    - What it’s like not experiencing sustained guilt, shame, and anger
    - Whether Luisa would become a worse person if she felt less guilt and shame — specifically whether she’d work fewer hours, or donate less money, or become a worse friend
    - Whether giving up guilt and shame also means giving up pride
    - The implications for love
    - The neurological condition ‘Jerk Syndrome’
    - And some practical advice on feeling less guilt, shame, and anger

    Who this episode is for:

    - People sympathetic to the idea that free will is an illusion
    - People who experience tons of guilt, shame, or anger
    - People worried about what would happen if they stopped feeling tonnes of guilt, shame, or anger

    Who this episode isn’t for:

    - People strongly in favour of retributive justice
    - Philosophers who can’t stand random non-philosophers talking about philosophy
    - Non-philosophers who can’t stand random non-philosophers talking about philosophy

    Chapters:

    - Cold open (00:00:00)
    - Luisa's intro (00:01:16)
    - The chat begins (00:03:15)
    - Keiran's origin story (00:06:30)
    - Charles Whitman (00:11:00)
    - Luisa's origin story (00:16:41)
    - It's unlucky to be a bad person (00:19:57)
    - Doubts about whether free will is an illusion (00:23:09)
    - Acting this way just for other people (00:34:57)
    - Feeling shame over not working enough (00:37:26)
    - First person / third person distinction (00:39:42)
    - Would Luisa become a worse person if she felt less guilt? (00:44:09)
    - Feeling bad about not being a different person (00:48:18)
    - Would Luisa donate less money? (00:55:14)
    - Would Luisa become a worse friend? (01:01:07)
    - Pride (01:08:02)
    - Love (01:15:35)
    - Bears and hurricanes (01:19:53)
    - Jerk Syndrome (01:24:24)
    - Keiran's outro (01:34:47)

    Get more episodes like this by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type "80k After Hours" into your podcasting app.

    Producer: Keiran Harris
    Audio mastering: Milo McGuire
    Transcriptions: Katy Moore

    1 h 36 min
  5. #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

    19 SEPT.

    "For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki Ramakrishnan In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality. Links to learn more, highlights, and full transcript. They cover: What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.Why eliminating major age-related diseases might only extend average lifespan by 15 years.The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.And plenty more.Chapters: Cold open (00:00:00)Luisa's intro (00:01:04)The interview begins (00:02:21)Reasons to explore why we age and die (00:02:35)Evolutionary pressures and animals that don't biologically age (00:06:55)Why does ageing cause us to die? (00:12:24)Is there a hard limit to the human lifespan? (00:17:11)Evolutionary tradeoffs between fitness and longevity (00:21:01)How ageing resets with every generation, and what we can learn from clones (00:23:48)Younger blood (00:31:20)Freezing cells, organs, and bodies (00:36:47)Are the goals of anti-ageing research even realistic? (00:43:44)Dementia (00:49:52)Senescence (01:01:58)Caloric restriction and metabolic pathways (01:11:45)Yamanaka factors (01:34:07)Cancer (01:47:44)Mitochondrial dysfunction (01:58:40)Population effects of extended lifespan (02:06:12)Could increased longevity increase inequality? (02:11:48)What’s surprised Venki about this research (02:16:06)Luisa's outro (02:19:26)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

    2 h 20 min
  6. #201 – Ken Goldberg on why your robot butler isn’t here yet

    13 SEPT.

    "Perception is quite difficult with cameras: even if you have a stereo camera, you still can’t really build a map of where everything is in space. It’s just very difficult. And I know that sounds surprising, because humans are very good at this. In fact, even with one eye, we can navigate and we can clear the dinner table. But it seems that we’re building in a lot of understanding and intuition about what’s happening in the world and where objects are and how they behave. For robots, it’s very difficult to get a perfectly accurate model of the world and where things are. So if you’re going to go manipulate or grasp an object, a small error in that position will maybe have your robot crash into the object, a delicate wine glass, and probably break it. So the perception and the control are both problems." —Ken Goldberg In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies. Links to learn more, highlights, and full transcript. They cover: Why training robots is harder than training large language models like ChatGPT.The biggest engineering challenges that still remain before robots can be widely useful in the real world.The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.Whether we should be worried about robot labour affecting human employment.Recent breakthroughs in robotics, and what cutting-edge robots can do today.Ken’s work as an artist, where he explores the complex relationship between humans and technology.And plenty more.Chapters: Cold open (00:00:00)Luisa's intro (00:01:19)General purpose robots and the “robotics bubble” (00:03:11)How training robots is different than training large language models (00:14:01)What can robots do today? (00:34:35)Challenges for progress: fault tolerance, multidimensionality, and perception (00:41:00)Recent breakthroughs in robotics (00:52:32)Barriers to making better robots: hardware, software, and physics (01:03:13)Future robots in home care, logistics, food production, and medicine (01:16:35)How might robot labour affect the job market? (01:44:27)Robotics and art (01:51:28)Luisa's outro (02:00:55)Producer: Keiran HarrisAudio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon MonsourContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

    2 h 2 min
  7. #200 – Ezra Karger on what superforecasters and experts think about existential risks

    4 SEPT.

    "It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks. Links to learn more, highlights, and full transcript. They cover: How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.The challenges of predicting low-probability, high-impact events.Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.Whether large language models could help or outperform human forecasters.How people can improve their calibration and start making better forecasts personally.Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.And plenty more.Chapters: Cold open (00:00:00)Luisa’s intro (00:01:07)The interview begins (00:02:54)The Existential Risk Persuasion Tournament (00:05:13)Why is this project important? (00:12:34)How was the tournament set up? (00:17:54)Results from the tournament (00:22:38)Risk from artificial intelligence (00:30:59)How to think about these numbers (00:46:50)Should we trust experts or superforecasters more? (00:49:16)The effect of debate and persuasion (01:02:10)Forecasts from the general public (01:08:33)How can we improve people’s forecasts? (01:18:59)Incentives and recruitment (01:26:30)Criticisms of the tournament (01:33:51)AI adversarial collaboration (01:46:20)Hypotheses about stark differences in views of AI risk (01:51:41)Cruxes and different worldviews (02:17:15)Ezra’s experience as a superforecaster (02:28:57)Forecasting as a research field (02:31:00)Can large language models help or outperform human forecasters? (02:35:01)Is forecasting valuable in the real world? (02:39:11)Ezra’s book recommendations (02:45:29)Luisa's outro (02:47:54) Producer: Keiran HarrisAudio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon MonsourContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

    2 h 49 min
  8. #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy

    29 AUG.

    "I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going. "I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan Calvin In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature. Links to learn more, highlights, and full transcript. They cover: What’s actually in SB 1047, and which AI models it would apply to.The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.Why California is taking state-level action rather than waiting for federal regulation.How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.And plenty more.Chapters: Cold open (00:00:00)Luisa's intro (00:00:57)The interview begins (00:02:30)What risks from AI does SB 1047 try to address? (00:03:10)Supporters and critics of the bill (00:11:03)Misunderstandings about the bill (00:24:07)Competition, open source, and liability concerns (00:30:56)Model size thresholds (00:46:24)How is SB 1047 different from the executive order? (00:55:36)Objections Nathan is sympathetic to (00:58:31)Current status of the bill (01:02:57)How can listeners get involved in work like this? (01:05:00)Luisa's outro (01:11:52)Producer and editor: Keiran HarrisAudio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

    1 h 13 min

4.8 out of 5 (273 ratings)
