80,000 Hours - Narrations

80000 Hours

Narrations of articles from 80000hours.org. Expect evidence-based career advice and research on the world’s most pressing problems. For interviews and discussions, subscribe to The 80,000 Hours Podcast. Some articles are narrated by the authors, while others are read by AI.

  1. 12/12/2025

    [Problem profile] “Extreme power concentration” by Rose Hadshar

    Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections. This is a problem in its own right. Still, power remains substantially distributed: global income inequality is falling, over two billion people live in electoral democracies, no country accounts for more than a quarter of global GDP, and no company for even 1%. But in the future, advanced AI could enable much more extreme power concentration than we've seen so far. Many believe that within the next decade, the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leaving the vast majority of people with far less economic and political power; and unless we take action to prevent it, they may end up controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they're built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

    Source: https://80000hours.org/problem-profiles/extreme-power-concentration/

    Narrated by a human.

    ---

    Outline:
    (00:00) Introduction
    (02:15) Summary
    (07:02) Section 1: Why might AI-enabled power concentration be a pressing problem?
    (45:02) Section 2: What are the top arguments against working on this problem?
    (56:36) Section 3: What can you do to help?

    1 h
  2. 10/12/2025

    The US AI policy landscape: where to work to have the biggest impact

    The US government may be the single most important actor in shaping how AI develops. If you want to improve the trajectory of AI and reduce catastrophic risks, you could have an outsized impact by working on US policy. But the US policy ecosystem is huge and confusing, and AI policy is made by specific people in specific places — so where you work matters enormously.

    Narrated by AI.

    ---

    Outline:
    (01:09) Part 1: How to find the most impactful places to work on AI policy
    (01:31) Prioritise building career capital
    (03:37) Work backwards from the most important issues
    (05:32) Find levers of influence
    (07:07) Prepare for 'policy windows'
    (10:21) Consider personal fit
    (11:44) Part 2: Our best guess at the most impactful places (right now)
    (11:58) 1. Executive Office of the President
    (18:43) 2. Federal departments and agencies
    (24:25) 3. Congress
    (32:28) 4. State governments
    (37:47) 5. Think tanks and advocacy organisations
    (41:04) Conclusion
    (41:49) Want one-on-one advice on pursuing this path?
    (42:10) Learn more about how and why to pursue a career in US AI policy
    (42:16) Top recommendations
    (43:11) Further reading

    The original text contained 45 footnotes which were omitted from this narration.

    ---

    First published: November 17th, 2025

    Source: https://80000hours.org/articles/the-us-ai-policy-landscape-where-to-have-the-biggest-impact

    Narrated by TYPE III AUDIO.

    44 min
  3. 17/09/2025

    [Problem profile] “Using AI to enhance societal decision making” by Zershaaneh Qureshi

    Summary

    The arrival of AGI could “compress a century of progress in a decade”, forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

    We’d be excited to see more people trying to speed up the development and adoption of these tools. We think that for the right person, this path could be very impactful. That said, this is not a mature area. There's significant uncertainty about what work will actually be most useful, and getting involved has potential downside risks. So our guess is that, at this stage, it’d be great if something like a few hundred particularly thoughtful and entrepreneurial people worked on using AI to improve societal decision making. If the field proves promising, they could pave the way for more people to get involved later.

    Narrated by AI.

    ---

    Outline:
    (00:11) Summary
    (01:19) Our overall view
    (01:45) Why advancing AI decision-making tools might matter a lot
    (03:37) AI tools could help us make much better decisions
    (07:13) We might be able to differentially speed up the rollout of AI decision-making tools
    (08:49) What are the arguments against working to advance AI decision-making tools?
    (09:03) These technologies might be developed by default anyway.
    (11:15) Wouldn't this make dangerous AI capabilities arrive faster, when we should be slowing things down?
    (13:24) People might use these tools in dangerous ways.
    (15:44) So should you work on this?
    (17:49) How to work in this area
    (18:00) Help build AI decision-making tools
    (18:48) Complementary work
    (20:02) Position yourself to help in future
    (20:49) What opportunities are there?
    (21:03) Want one-on-one advice on pursuing this path?
    (21:22) Learn more
    (22:29) Acknowledgements

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published: September 17th, 2025

    Source: https://80000hours.org/problem-profiles/ai-enhanced-decision-making

    Narrated by TYPE III AUDIO.

    23 min
  4. 11/08/2025

    [Career review] “Founder of new projects tackling top problems” by Benjamin Todd

    This path involves founding new organisations that tackle bottlenecks in the problems we think are most pressing. In particular, it involves identifying an idea, testing it, and then helping to build an organisation by investing in strategy, hiring, management, culture, and so on, with the aim that the organisation can continue to function well without you in the long term. Our focus is on non-profit models, since they have the greatest need right now, but for-profits can also be a route to impact.

    Narrated by AI.

    ---

    Outline:
    (01:33) Recommended
    (01:42) Review status
    (01:48) Why might founding a new project be high impact?
    (04:35) What does it take to succeed?
    (05:05) A good enough idea
    (06:11) You need to be able to convince donors
    (09:21) An idea that really motivates you
    (10:45) Leadership potential
    (11:20) Generalist skills
    (12:07) Enough knowledge of the area
    (12:41) Good judgement
    (13:02) The ability, willingness, and resilience to work on something that might not work out
    (13:48) Examples of people pursuing this path
    (13:54) Helen Toner
    (14:31) Holden Karnofsky
    (15:05) Next steps if you already have an idea
    (19:14) Next steps if you don't have an idea yet
    (22:49) Lists of ideas
    (24:16) Podcast episodes with founders
    (24:52) Want one-on-one advice on pursuing this path?
    (25:11) Learn about other high-impact careers

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published: November 10th, 2021
    Last updated: August 11th, 2025

    Source: https://80000hours.org/career-reviews/founder-impactful-organisations

    Narrated by TYPE III AUDIO.

    26 min
  5. 01/07/2025

    [Problem profile] “Risks from power-seeking AI systems” by Cody Fenwick, Zershaaneh Qureshi

    The future of AI is difficult to predict. But while AI systems could have substantial positive effects, there's a growing consensus about the dangers of AI.

    Narrated by AI.

    ---

    Outline:
    (01:42) Summary
    (02:34) Our overall view
    (05:16) Why are risks from power-seeking AI a pressing world problem?
    (06:45) 1. Humans will likely build advanced AI systems with long-term goals
    (11:23) 2. AIs with long-term goals may be inclined to seek power and aim to disempower humanity
    (12:26) We don't know how to reliably control the behaviour of AI systems
    (15:56) There's good reason to think that AIs may seek power to pursue their own goals
    (19:40) Advanced AI systems seeking power might be motivated to disempower humanity
    (22:43) 3. These power-seeking AI systems could successfully disempower humanity and cause an existential catastrophe
    (23:16) The path to disempowerment
    (27:56) Why this would be an existential catastrophe
    (29:35) How likely is an existential catastrophe from power-seeking AI?
    (32:17) 4. People might create power-seeking AI systems without enough safeguards, despite the risks
    (32:59) People may think AI systems are safe, when they in fact are not
    (36:13) People may dismiss the risks or feel incentivised to downplay them
    (38:59) 5. Work on this problem is neglected and tractable
    (40:30) Technical safety approaches
    (46:19) Governance and policy approaches
    (48:32) What are the arguments against working on this problem?
    (48:51) Maybe advanced AI systems won't pursue their own goals; they'll just be tools controlled by humans.
    (51:07) Even if AI systems develop their own goals, they might not seek power to achieve them.
    (54:49) If this argument is right, why aren't all capable humans dangerously power-seeking?
    (57:11) Maybe we won't build AIs that are smarter than humans, so we don't have to worry about them taking over.
    (58:32) We might solve these problems by default anyway when trying to make AI systems useful.
    (01:00:54) Powerful AI systems of the future will be so different that work today isn't useful.
    (01:02:56) The problem might be extremely difficult to solve.
    (01:03:38) Couldn't we just unplug an AI that's pursuing dangerous goals?
    (01:05:30) Couldn't we just sandbox any potentially dangerous AI until we know it's safe?
    (01:07:15) A truly intelligent system would know not to do harmful things.
    (01:08:33) How you can help
    (01:10:47) Want one-on-one advice on pursuing this path?
    (01:11:22) Learn more
    (01:14:08) Acknowledgements
    (01:14:25) Notes and references

    The original text contained 84 footnotes which were omitted from this narration.

    ---

    First published: July 17th, 2025

    Source: https://80000hours.org/problem-profiles/risks-from-power-seeking-ai

    Narrated by TYPE III AUDIO.

    1 h 15 min
  6. 24/06/2025

    [Problem profile] “Catastrophic AI misuse” by The 80,000 Hours team

    On July 16, 1945, humanity had a disturbing first: scientists tested a technology — nuclear weapons — that could cause the destruction of civilisation. Since the attacks on Hiroshima and Nagasaki, humanity hasn't launched any more nuclear strikes. In part, this is because our institutions developed international norms that, while imperfect, have managed to prevent further nuclear attacks.

    We expect advanced AI to speed up technological advances, with some expecting a century of scientific progress in a decade. Faster scientific progress could have enormous benefits, from cures for deadly diseases to space exploration. Yet this breakneck pace could lower the barriers to creating devastating new weapons of mass destruction — including enhanced bioweapons and entirely new categories of dangerous technologies — while outpacing our ability to build the safeguards needed to control them. Without proper controls, a country, group, or individual could use AI-created weapons of mass destruction to cause a global catastrophe.

    Narrated by AI.

    ---

    Outline:
    (01:09) Summary
    (01:59) Advanced AI could accelerate scientific progress
    (04:56) What kinds of weapons could advanced AI create?
    (05:01) Bioweapons
    (07:27) Cyberweapons
    (08:22) New dangerous technologies
    (09:56) These weapons would pose global catastrophic risks
    (12:33) There are promising approaches to reducing these risks
    (12:53) Governance and policy approaches
    (14:36) Technical approaches to reduce misuse risks

    The original text contained 5 footnotes which were omitted from this narration.

    ---

    First published: June 24th, 2025

    Source: https://80000hours.org/problem-profiles/catastrophic-ai-misuse

    Narrated by TYPE III AUDIO.

    18 min
