80,000 Hours - Narrations

80,000 Hours

Narrations of articles from 80000hours.org. Expect evidence-based career advice and research on the world’s most pressing problems. For interviews and discussions, subscribe to The 80,000 Hours Podcast. Some articles are narrated by the authors, while others are read by AI.

  1. 12/12/2025

    [Problem profile] “Extreme power concentration” by Rose Hadshar

    Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections. This is a problem in its own right. Still, power remains substantially distributed: global income inequality is falling, over two billion people live in electoral democracies, no single country accounts for more than a quarter of global GDP, and no company earns as much as 1%. But in the future, advanced AI could enable much more extreme power concentration than we’ve seen so far. Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leaving the vast majority of people with much less economic and political power; and unless we take action to prevent it, they may end up being controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they’re built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future. Source: https://80000hours.org/problem-profiles/extreme-power-concentration/ Narrated by a human. --- Outline: (00:00) Introduction (02:15) Summary (07:02) Section 1: Why might AI-enabled power concentration be a pressing problem? (45:02) Section 2: What are the top arguments against working on this problem? (56:36) Section 3: What can you do to help?

    1 hr
  2. 17/09/2025

    [Problem profile] “Using AI to enhance societal decision making” by Zershaaneh Qureshi

    The arrival of AGI could “compress a century of progress in a decade”, forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead. We’d be excited to see more people trying to speed up the development and adoption of these tools. We think that for the right person, this path could be very impactful. That said, this is not a mature area. There’s significant uncertainty about what work will actually be most useful, and getting involved has potential downside risks. So our guess is that, at this stage, it’d be great if something like a few hundred particularly thoughtful and entrepreneurial people worked on using AI to improve societal decision making. If the field proves promising, they could pave the way for more people to get involved later. Narrated by AI. --- Outline: (00:11) Summary (01:19) Our overall view (01:45) Why advancing AI decision-making tools might matter a lot (03:36) AI tools could help us make much better decisions (07:13) We might be able to differentially speed up the rollout of AI decision-making tools (08:49) What are the arguments against working to advance AI decision-making tools? (09:04) These technologies might be developed by default anyway. (11:15) Wouldn’t this make dangerous AI capabilities arrive faster, when we should be slowing things down? (13:24) People might use these tools in dangerous ways. (15:44) So should you work on this? (17:49) How to work in this area (18:00) Help build AI decision-making tools (18:48) Complementary work (20:02) Position yourself to help in future (20:49) What opportunities are there? (21:03) Want one-on-one advice on pursuing this path? (21:21) Learn more (22:29) Acknowledgements The original text contained 8 footnotes which were omitted from this narration. --- First published: September 17th, 2025 Source: https://80000hours.org/problem-profiles/ai-enhanced-decision-making --- Narrated by TYPE III AUDIO.

    23 min
  3. 11/08/2025

    [Career review] “Founder of new projects tackling top problems” by Benjamin Todd

    This path involves founding new organisations that tackle bottlenecks in the problems we think are most pressing. In particular, it involves identifying an idea, testing it, and then helping to build an organisation by investing in strategy, hiring, management, culture and so on, with the aim that the organisation can continue to function well without you in the long term. Our focus is on non-profit models since they have the greatest need right now, but for-profits can also be a route to impact. Narrated by AI. --- Outline: (01:33) Recommended (01:42) Review status (01:48) Why might founding a new project be high impact? (04:35) What does it take to succeed? (05:05) A good enough idea (06:11) You need to be able to convince donors (09:21) An idea that really motivates you (10:45) Leadership potential (11:20) Generalist skills (12:07) Enough knowledge of the area (12:41) Good judgement (13:02) The ability, willingness, and resilience to work on something that might not work out (13:48) Examples of people pursuing this path (13:54) Helen Toner (14:31) Holden Karnofsky (15:05) Next steps if you already have an idea (19:14) Next steps if you don't have an idea yet (22:49) Lists of ideas (24:16) Podcast episodes with founders (24:52) Want one-on-one advice on pursuing this path? (25:11) Learn about other high-impact careers The original text contained 2 footnotes which were omitted from this narration. --- First published: November 10th, 2021 Last updated: August 11th, 2025 Source: https://80000hours.org/career-reviews/founder-impactful-organisations --- Narrated by TYPE III AUDIO.

    26 min
  4. 17/07/2025

    [Problem profile] “Risks from power-seeking AI systems” by Cody Fenwick, Zershaaneh Qureshi

    The future of AI is difficult to predict. But while AI systems could have substantial positive effects, there's a growing consensus about the dangers of AI. Narrated by AI. --- Outline: (01:45) Summary (02:38) Our overall view (02:59) Why are risks from power-seeking AI a pressing world problem? (04:24) 1. Humans will likely build advanced AI systems with long-term goals (09:02) 2. AIs with long-term goals may be inclined to seek power and aim to disempower humanity (10:08) We don't know how to reliably control the behaviour of AI systems (13:44) There's good reason to think that AIs may seek power to pursue their own goals (17:26) Advanced AI systems seeking power might be motivated to disempower humanity (20:31) 3. These power-seeking AI systems could successfully disempower humanity and cause an existential catastrophe (21:04) The path to disempowerment (25:49) Why this would be an existential catastrophe (27:29) How likely is an existential catastrophe from power-seeking AI? (30:16) 4. People might create power-seeking AI systems without enough safeguards, despite the risks (31:00) People may think AI systems are safe, when they in fact are not (34:20) People may dismiss the risks or feel incentivised to downplay them (37:06) 5. Work on this problem is neglected and tractable (38:40) Technical safety approaches (44:34) Governance and policy approaches (46:52) What are the arguments against working on this problem? (47:13) Maybe advanced AI systems won't pursue their own goals; they'll just be tools controlled by humans. (49:30) Even if AI systems develop their own goals, they might not seek power to achieve them. (53:16) If this argument is right, why aren't all capable humans dangerously power-seeking? (55:39) Maybe we won't build AIs that are smarter than humans, so we don't have to worry about them taking over. (57:01) We might solve these problems by default anyway when trying to make AI systems useful. (59:23) Powerful AI systems of the future will be so different that work today isn't useful. (01:01:25) The problem might be extremely difficult to solve. (01:02:07) Couldn't we just unplug an AI that's pursuing dangerous goals? (01:04:00) Couldn't we just sandbox any potentially dangerous AI until we know it's safe? (01:05:47) A truly intelligent system would know not to do harmful things. (01:07:05) How you can help (01:09:29) Want one-on-one advice on pursuing this path? (01:10:03) Learn more (01:12:52) Acknowledgements The original text contained 42 footnotes which were omitted from this narration. --- First published: July 17th, 2025 Source: https://80000hours.org/problem-profiles/risks-from-power-seeking-ai --- Narrated by TYPE III AUDIO.

    1h 14m
  5. 16/07/2025

    [Problem profile] “Preventing an AI-related catastrophe” by Benjamin Hilton

    The recording may not reflect the most recent changes to this article. Why is it that humans, and not chimpanzees, control the fate of the world? Humans have shaped every corner of our planet. Chimps, despite being pretty smart compared to other nonhuman animals, have not. This is (roughly) because of humans’ intelligence. What do we mean by ‘intelligence’ in this context? Something like “the ability to predictably influence the future.” This involves understanding the world well enough to make plans that can actually work, and the ability to carry out those plans. Humans having the ability to predictably influence the future means they have been able to shape the world around them to fit their goals and desires. We go into more detail on the importance of the ability to make and execute plans later in this article. Companies and governments are spending billions of dollars a year developing AI systems — and as these systems grow more advanced, they could (eventually) displace humans as the most intelligent things on the planet. As we’ll see, they’re making progress. Source: https://80000hours.org/problem-profiles/artificial-intelligence/ Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO. --- Outline: (00:34) Introduction (02:35) Summary (04:52) Main Article Text Begins (06:15) 1. Many AI experts think there’s a non-negligible chance AI will lead to outcomes as bad as extinction (10:56) 2. We’re making advances in AI extremely quickly (15:09) Current trends show rapid progress in the capabilities of ML systems (17:56) When can we expect transformative AI? (18:43) Footnote 21 (19:06) (Text resumes) (22:33) 3. Power-seeking AI could pose an existential threat to humanity (24:21) It’s likely we’ll build advanced planning systems (26:50) These systems seem technically possible and we’ll have strong incentives to build them (27:40) Footnote 26 (28:17) (Main text resumes) (29:30) Footnote 27 (29:57) (Text resumes) (30:33) Footnote 28 (31:41) (Text Resumes) (31:48) Advanced planning systems could easily be dangerously ‘misaligned’ (32:57) Three examples of “misalignment” in a variety of systems (35:43) Footnote 32 (37:15) Why these systems could (by default) be dangerously misaligned (41:36) It might be hard to find ways to prevent this sort of misalignment (48:41) At this point, you may have questions like: (49:28) Disempowerment by AI systems would be an existential catastrophe (51:00) People might deploy misaligned AI systems despite the risk (54:51) This all sounds very abstract. What could an existential catastrophe caused by AI actually look like? (58:13) How could a power-seeking AI actually take power? (58:52) 1. Hacking (01:01:05) 2. Gaining financial resources (01:01:48) 3. Persuading or coercing humans (01:03:24) 4. Gaining broader social influence (01:04:08) 5. Developing new technology (01:05:20) 6. Scaling up its own capabilities (01:06:30) 7. Developing destructive capacity (01:07:39) How could the full story play out? (01:08:56) Existential catastrophe through getting what you measure (01:13:11) Existential catastrophe through a single extremely advanced artificial intelligence (01:18:24) 4.
Even if we find a way to avoid power-seeking, there are still risks (01:18:47) AI could worsen war (01:19:34) Footnote 41 (01:21:38) (Text resumes) (01:22:08) AI could be used to develop dangerous new technology (01:22:18) Footnote 42 (01:23:09) AI could empower totalitarian governments (01:23:31) Footnote 45 (01:24:14) (Text resumes) (01:24:29) Other risks from AI (01:25:40) So, how likely is an AI-related catastrophe? (01:28:23) Footnote 46 (01:29:03) (Text resumes) (01:31:10) 5. We can tackle these risks (01:31:59) Technical AI safety research (01:32:56) AI governance research and implementation (01:33:55) Here are some more questions you might have: (01:34:24) 6. This work is extremely neglected (01:36:54) What do we think are the best arguments we’re wrong? (01:37:48) We might have a lot of time to work on this problem (01:40:12) AI might improve gradually over time (01:41:49) We might need to solve alignment anyway to make AI useful (01:44:27) The problem could be extremely difficult to solve (01:46:00) We could be wrong that strategic AI systems are likely to seek power (01:50:56) Arguments against working on AI risk to which we think there are strong responses (01:52:37) Is it even possible to produce artificial general intelligence? (01:54:59) Why can't we just unplug a dangerous AI? (01:56:36) Couldn't we just 'sandbox' any potentially dangerous AI system until we know it's safe? (01:58:02) Surely a truly intelligent AI system would know not to disempower everyone? (01:59:31) Can't you just not give an AI system bad goals? (02:02:07) Isn't the real danger from actual current AI — not some sort of futuristic superintelligence? (02:04:19) But can't AI also do a lot of good? (02:05:23) You'd have to be really stupid to build or use a system that could genuinely kill everyone, right? (02:07:27) Footnote 50 (02:08:17) Why shouldn't I dismiss this as motivated reasoning by a group of people who just like playing with computers and want to think that's important? (02:09:40) This all reads, and feels, like science fiction (02:12:14) Can it make sense to dedicate my career to solving an issue based on a speculative story about a technology that may or may not ever exist? (02:13:51) Is this a form of Pascal's mugging — taking a big bet on tiny probabilities? (02:17:58) What you can do concretely to help (02:19:02) Technical AI safety (02:19:05) Approaches (02:21:29) Key organisations (02:23:45) Conceptual AI safety labs: (02:24:59) AI Safety In Academia (02:26:55) AI governance and strategy (02:26:59) Approaches (02:29:00) Key organisations (02:32:12) Complementary (yet crucial) roles (02:33:09) Other ways to help (02:34:49) Want one-on-one advice on pursuing this path? (02:35:26) Find vacancies on our job board (02:35:37) Top resources to learn more (02:35:57) Note from the author: (02:36:21) Footnote 5 (02:38:02) (Text Resumes)

    3h 9m
  6. 24/06/2025

    [Problem profile] “Catastrophic AI misuse” by The 80,000 Hours team

    On July 16, 1945, humanity had a disturbing first: scientists tested a technology — nuclear weapons — that could cause the destruction of civilisation. Since the attacks on Hiroshima and Nagasaki, humanity hasn’t launched any more nuclear strikes. In part, this is because our institutions developed international norms that, while imperfect, managed to prevent more nuclear attacks. We expect advanced AI will speed up technological advances, with some expecting a century of scientific progress in a decade. Faster scientific progress could have enormous benefits, from cures for deadly diseases to space exploration. Yet this breakneck pace could lower the barriers to creating devastating new weapons while outpacing our ability to build the safeguards needed to control them. Without proper controls, a country, group, or individual could use AI-created weapons of mass destruction to cause a global catastrophe. Advanced AI systems may dramatically accelerate scientific progress, potentially compressing decades of research into just a few years. This rapid advancement could enable the development of devastating new weapons of mass destruction — including enhanced bioweapons and entirely new categories of dangerous technologies — faster than we can build adequate safeguards. Without proper controls, state and non-state actors could use AI-developed [...] Narrated by AI. --- Outline: (01:09) Summary (01:59) Advanced AI could accelerate scientific progress (04:56) What kinds of weapons could advanced AI create? (05:01) Bioweapons (07:27) Cyberweapons (08:22) New dangerous technologies (09:56) These weapons would pose global catastrophic risks (12:33) There are promising approaches to reducing these risks (12:53) Governance and policy approaches (14:36) Technical approaches to reduce misuse risks The original text contained 5 footnotes which were omitted from this narration. --- First published: June 24th, 2025 Source: https://80000hours.org/problem-profiles/catastrophic-ai-misuse --- Narrated by TYPE III AUDIO.

    18 min
