80,000 Hours - Narrations

80000 Hours

Narrations of articles from 80000hours.org. Expect evidence-based career advice and research on the world’s most pressing problems. For interviews and discussions, subscribe to The 80,000 Hours Podcast. Some articles are narrated by the authors, while others are read by AI.

  1. 12/12/2025

    [Problem profile] “Extreme power concentration” by Rose Hadshar

    Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections. This is a problem in its own right. Power is still substantially distributed, though: global income inequality is falling, over two billion people live in electoral democracies, no country earns more than a quarter of global GDP, and no company earns as much as 1%. But in the future, advanced AI could enable much more extreme power concentration than we’ve seen so far. Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leaving the vast majority of people with much less economic and political power; and unless we take action to prevent it, they may end up controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they’re built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

    Source: https://80000hours.org/problem-profiles/extreme-power-concentration/

    Narrated by a human.

    Outline:
    (00:00) Introduction
    (02:15) Summary
    (07:02) Section 1: Why might AI-enabled power concentration be a pressing problem?
    (45:02) Section 2: What are the top arguments against working on this problem?
    (56:36) Section 3: What can you do to help?

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    1 hr
  2. 10/12/2025

    The US AI policy landscape: where to work to have the biggest impact

    The US government may be the single most important actor for shaping how AI develops. If you want to improve the trajectory of AI and reduce catastrophic risks, you could have an outsized impact by working on US policy. But the US policy ecosystem is huge and confusing. And policy shaping AI is made by specific people in specific places — so where you work matters enormously.

    Narrated by AI.

    Outline:
    (01:09) Part 1: How to find the most impactful places to work on AI policy
    (01:31) Prioritise building career capital
    (03:37) Work backwards from the most important issues
    (05:32) Find levers of influence
    (07:07) Prepare for 'policy windows'
    (10:21) Consider personal fit
    (11:44) Part 2: Our best guess at the most impactful places (right now)
    (11:58) 1. Executive Office of the President
    (18:43) 2. Federal departments and agencies
    (24:25) 3. Congress
    (32:28) 4. State governments
    (37:47) 5. Think tanks and advocacy organisations
    (41:04) Conclusion
    (41:49) Want one-on-one advice on pursuing this path?
    (42:10) Learn more about how and why to pursue a career in US AI policy
    (42:16) Top recommendations
    (43:11) Further reading

    The original text contained 45 footnotes which were omitted from this narration.

    First published: November 17th, 2025
    Source: https://80000hours.org/articles/the-us-ai-policy-landscape-where-to-have-the-biggest-impact

    Narrated by TYPE III AUDIO.

    44 min
  3. 17/09/2025

    [Problem profile] “Using AI to enhance societal decision making” by Zershaaneh Qureshi

    The arrival of AGI could “compress a century of progress in a decade”, forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

    We’d be excited to see more people trying to speed up the development and adoption of these tools. We think that for the right person, this path could be very impactful. That said, this is not a mature area. There’s significant uncertainty about what work will actually be most useful, and getting involved has potential downside risks. So our guess is that, at this stage, it’d be great if something like a few hundred particularly thoughtful and entrepreneurial people worked on using AI to improve societal decision making. If the field proves promising, they could pave the way for more people to get involved later.

    Narrated by AI.

    Outline:
    (00:11) Summary
    (01:19) Our overall view
    (01:45) Why advancing AI decision-making tools might matter a lot
    (03:37) AI tools could help us make much better decisions
    (07:13) We might be able to differentially speed up the rollout of AI decision-making tools
    (08:49) What are the arguments against working to advance AI decision-making tools?
    (09:03) These technologies might be developed by default anyway.
    (11:15) Wouldn't this make dangerous AI capabilities arrive faster, when we should be slowing things down?
    (13:24) People might use these tools in dangerous ways.
    (15:44) So should you work on this?
    (17:49) How to work in this area
    (18:00) Help build AI decision-making tools
    (18:48) Complementary work
    (20:02) Position yourself to help in future
    (20:49) What opportunities are there?
    (21:03) Want one-on-one advice on pursuing this path?
    (21:22) Learn more
    (22:29) Acknowledgements

    The original text contained 8 footnotes which were omitted from this narration.

    First published: September 17th, 2025
    Source: https://80000hours.org/problem-profiles/ai-enhanced-decision-making

    Narrated by TYPE III AUDIO.

    23 min
  4. 11/08/2025

    [Career review] “Founder of new projects tackling top problems” by Benjamin Todd

    This path involves founding new organisations to tackle bottlenecks in the problems we think are most pressing. In particular, it involves identifying an idea, testing it, and then helping to build an organisation by investing in strategy, hiring, management, culture, and so on, with the aim that the organisation can continue to function well without you in the long term. Our focus is on non-profit models, since they have the greatest need right now, but for-profits can also be a route to impact.

    Narrated by AI.

    Outline:
    (01:33) Recommended
    (01:42) Review status
    (01:48) Why might founding a new project be high impact?
    (04:35) What does it take to succeed?
    (05:05) A good enough idea
    (06:11) You need to be able to convince donors
    (09:21) An idea that really motivates you
    (10:45) Leadership potential
    (11:20) Generalist skills
    (12:07) Enough knowledge of the area
    (12:41) Good judgement
    (13:02) The ability, willingness, and resilience to work on something that might not work out
    (13:48) Examples of people pursuing this path
    (13:54) Helen Toner
    (14:31) Holden Karnofsky
    (15:05) Next steps if you already have an idea
    (19:14) Next steps if you don't have an idea yet
    (22:49) Lists of ideas
    (24:16) Podcast episodes with founders
    (24:52) Want one-on-one advice on pursuing this path?
    (25:11) Learn about other high-impact careers

    The original text contained 2 footnotes which were omitted from this narration.

    First published: November 10th, 2021
    Last updated: August 11th, 2025
    Source: https://80000hours.org/career-reviews/founder-impactful-organisations

    Narrated by TYPE III AUDIO.

    26 min
  5. 01/07/2025

    [Problem profile] “Risks from power-seeking AI systems” by Cody Fenwick, Zershaaneh Qureshi

    The future of AI is difficult to predict. But while AI systems could have substantial positive effects, there's a growing consensus about the dangers of AI.

    Narrated by AI.

    Outline:
    (01:42) Summary
    (02:34) Our overall view
    (05:16) Why are risks from power-seeking AI a pressing world problem?
    (06:45) 1. Humans will likely build advanced AI systems with long-term goals
    (11:23) 2. AIs with long-term goals may be inclined to seek power and aim to disempower humanity
    (12:26) We don't know how to reliably control the behaviour of AI systems
    (15:56) There's good reason to think that AIs may seek power to pursue their own goals
    (19:40) Advanced AI systems seeking power might be motivated to disempower humanity
    (22:43) 3. These power-seeking AI systems could successfully disempower humanity and cause an existential catastrophe
    (23:16) The path to disempowerment
    (27:56) Why this would be an existential catastrophe
    (29:35) How likely is an existential catastrophe from power-seeking AI?
    (32:17) 4. People might create power-seeking AI systems without enough safeguards, despite the risks
    (32:59) People may think AI systems are safe, when they in fact are not
    (36:13) People may dismiss the risks or feel incentivised to downplay them
    (38:59) 5. Work on this problem is neglected and tractable
    (40:30) Technical safety approaches
    (46:19) Governance and policy approaches
    (48:32) What are the arguments against working on this problem?
    (48:51) Maybe advanced AI systems won't pursue their own goals; they'll just be tools controlled by humans.
    (51:07) Even if AI systems develop their own goals, they might not seek power to achieve them.
    (54:49) If this argument is right, why aren't all capable humans dangerously power-seeking?
    (57:11) Maybe we won't build AIs that are smarter than humans, so we don't have to worry about them taking over.
    (58:32) We might solve these problems by default anyway when trying to make AI systems useful.
    (01:00:54) Powerful AI systems of the future will be so different that work today isn't useful.
    (01:02:56) The problem might be extremely difficult to solve.
    (01:03:38) Couldn't we just unplug an AI that's pursuing dangerous goals?
    (01:05:30) Couldn't we just sandbox any potentially dangerous AI until we know it's safe?
    (01:07:15) A truly intelligent system would know not to do harmful things.
    (01:08:33) How you can help
    (01:10:47) Want one-on-one advice on pursuing this path?
    (01:11:22) Learn more
    (01:14:08) Acknowledgements
    (01:14:25) Notes and references

    The original text contained 84 footnotes which were omitted from this narration.

    First published: July 17th, 2025
    Source: https://80000hours.org/problem-profiles/risks-from-power-seeking-ai

    Narrated by TYPE III AUDIO.

    1 hr 15 min
  6. 30/06/2025

    “How not to lose your job to AI” by Benjamin Todd

    About half of people are worried they'll lose their job to AI. And they're right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and make accurate medical diagnoses. And over the next five years, it's set to continue to improve rapidly. Eventually, mass automation and falling wages are a real possibility.

    Narrated by AI.

    Outline:
    (01:12) Skills most likely to increase in value as AI progresses
    (04:29) 1. Why automation often doesn't decrease wages
    (09:30) What would full automation mean for wages?
    (12:03) 2. Four types of skills most likely to increase in value
    (13:34) 2.1. Skills AI won't easily be able to perform
    (13:58) Tasks not in AI training data (& hard to gather)
    (16:17) Messy, long-horizon skills
    (19:19) Skills where a person-in-the-loop is wanted
    (20:40) Skills where automation is bottlenecked by physical infrastructure
    (21:33) 2.2. Skills that are needed for AI deployment
    (24:58) 2.3. Skills where we could use far more of what they produce
    (26:23) 2.4. Skills that are difficult for others to learn
    (27:30) 3. So, which specific work skills will most increase in value in the future? And how can you learn them?
    (27:53) 3.1. Skills using AI to solve real problems
    (29:16) 3.2. Personal effectiveness
    (29:22) Being a generally productive, proactive person
    (30:08) Social skills
    (31:06) Learning how to learn
    (31:54) 3.3. Leadership skills
    (32:22) Entrepreneurship
    (33:15) Management
    (34:20) Strategy, prioritisation, and decision making
    (35:46) True expertise
    (37:03) 3.4. Communications and taste
    (38:06) 3.5. Getting things done in government
    (39:07) 3.6. Complex physical skills
    (39:44) 4. Skills with a more uncertain future
    (40:07) 4.1. Routine knowledge work: writing, admin, analysis, advice
    (44:30) 4.2. Coding, maths, data science, and applied STEM
    (46:46) 4.3. Visual creation
    (47:25) 4.4. More predictable manual jobs
    (48:11) 5. Some closing thoughts on career strategy
    (48:21) 5.1. Look for ways to leapfrog entry-level white collar jobs
    (50:15) 5.2. Be cautious about starting long training periods, like PhDs and medicine
    (51:31) 5.3. Make yourself more resilient to change
    (52:00) 5.4. Ride the wave
    (52:23) Take action

    The original text contained 9 footnotes which were omitted from this narration.

    First published: June 16th, 2025
    Source: https://80000hours.org/ai/guide/skills-ai-makes-valuable

    Narrated by TYPE III AUDIO.

    53 min
