LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 4H AGO

    “The Spectre haunting the “AI Safety” Community” by Gabriel Alfour

    I’m the originator of ControlAI's Direct Institutional Plan (the DIP), built to address extinction risks from superintelligence. My diagnosis is simple: most laypeople and policymakers have not heard of AGI, ASI, extinction risks, or what it takes to prevent the development of ASI. Instead, most AI Policy Organisations and Think Tanks act as if “Persuasion” were the bottleneck. This is why they care so much about respectability, the Overton Window, and other similar social considerations. Before we started the DIP, many of these experts stated that our topics were too far out of the Overton Window. They warned that politicians could not hear about binding regulation, extinction risks, and superintelligence. Some mentioned “downside risks” and recommended that we focus instead on “current issues”. They were wrong. In the UK, in little more than a year, we have briefed more than 150 lawmakers, and so far 112 have supported our campaign about binding regulation, extinction risks and superintelligence. The Simple Pipeline: In my experience, the way things work is through a straightforward pipeline: Attention. Getting the attention of people. At ControlAI, we do it through ads for lay people, and through cold emails for politicians. Information. Telling people about the [...]

    Outline: (01:18) The Simple Pipeline (04:26) The Spectre (09:38) Conclusion

    First published: February 21st, 2026
    Source: https://www.lesswrong.com/posts/LuAmvqjf87qLG9Bdx/the-spectre-haunting-the-ai-safety-community

    Narrated by TYPE III AUDIO.

    11 min
  2. 5H AGO

    “Announcement: Iliad Intensive + Iliad Fellowship” by David Udell, Alexander Gietelink Oldenziel

    Iliad is proud to announce that applications are now open for the Iliad Intensive and the Iliad Fellowship! These programs, taken together, are our evolution of the PIBBSS × Iliad Research Residency pilot. The Iliad Intensive will cover taught coursework, serving as a broad, comprehensive introduction to the field of technical AI alignment. The Iliad Fellowship will cover mentored research, supporting research fellows for three months and giving them adequate time to generate substantial research outputs. Iliad Intensive: The Iliad Intensive is a month-long intensive introduction to technical AI alignment, with iterations run in April, June, and August. Topics covered will include the theory of RL, learning theory, interpretability, agent foundations, scalable oversight and Debate, and more. Applicants will be selected for technical excellence in the fields of mathematics, theoretical physics, and theoretical CS. Excellent performance in the Iliad Intensive can serve as a route into the succeeding Iliad Fellowship. Iliad Fellowship: The summer 2026 Iliad Fellowship emphasizes individual, mentored research in technical AI alignment. It is run in collaboration with PrincInt. The summer 2026 cohort will run for three months, June–August. Common Application: Apply here by March 6th AoE for the April Iliad Intensive. You can [...]

    Outline: (00:44) Iliad Intensive (01:20) Iliad Fellowship (01:38) Common Application

    First published: February 20th, 2026
    Source: https://www.lesswrong.com/posts/b9bhm2iypgkCNppv4/announcement-iliad-intensive-iliad-fellowship

    Narrated by TYPE III AUDIO.

    2 min
  3. 11H AGO

    [Linkpost] “Alignment to Evil” by Matrice Jacobine

    This is a link post. One seemingly necessary condition for a research organization that creates artificial superintelligence (ASI) to eventually lead to a utopia[1] is that the organization has a commitment to the common good. ASI can rearrange the world to hit any narrow target, and if the organization is able to solve the rest of alignment, then they will be able to pick which target the ASI will hit. If the organization is not committed to the common good, then they will pick a target that doesn’t reflect the good of everyone - just the things that they personally think are good ideas. Everyone else will fall by the wayside, and the world that they create along with ASI will fall short of utopia. It may well even be dystopian[2]; I was recently startled to learn that a full tenth of people claim they want to create a hell with eternal suffering. I think a likely way for organizations to fail to have common good commitments is if they end up being ultimately accountable to an authoritarian. Some countries are being run by very powerful authoritarians. If an ASI research organization comes to the attention of such an authoritarian, and [...] The original text contained 2 footnotes which were omitted from this narration.

    First published: February 21st, 2026
    Source: https://www.lesswrong.com/posts/SLkxaGT8ghTskNz2r/alignment-to-evil
    Linkpost URL: https://tetraspace.substack.com/p/alignment-to-evil

    Narrated by TYPE III AUDIO.

    3 min
  4. 21H AGO

    “METR’s 14h 50% Horizon Impacts The Economy More Than ASI Timelines” by Michaël Trazzi

    Another day, another METR graph update. METR said on X: We estimate that Claude Opus 4.6 has a 50%-time-horizon of around 14.5 hours (95% CI of 6 hrs to 98 hrs) on software tasks. While this is the highest point estimate we’ve reported, this measurement is extremely noisy because our current task suite is nearly saturated. Some people are saying this makes superexponential progress more likely. Forecaster Peter Wildeford predicts 2-3.5 workweek time horizons by end of year, which would have "significant implications for the economy". Even Ajeya Cotra (who works at METR) is now saying that her predictions from last month are too conservative and that a 3-4 month doubling time with superexponential progress is more likely. Should We All Freak Out? People are especially concerned when looking at the linear graph for the 50% horizon. I claim that although this is a faster trend than before for the 50% horizon, there are at least two reasons to take these results with a grain of salt: as METR keeps saying, they’re at near saturation of their task suite, which, as David Rein mentions, means they could have measured a horizon of 8h or 20h depending [...] (A back-of-the-envelope sketch of the doubling-time extrapolation follows this entry.)

    Outline: (01:17) Should We All Freak Out? (02:32) Why 80% horizon and not 50%? Won't 50% still accelerate the economy and research? (03:10) Why Super Long 80% Horizons Though? Isn't 50% Enough? (04:23) Why does Automated Coder Matter So Much? What about the economy? Vibe researching / Coding?

    First published: February 20th, 2026
    Source: https://www.lesswrong.com/posts/gBwrmcY2uArZSoCtp/metr-s-14h-50-horizon-impacts-the-economy-more-than-asi

    Narrated by TYPE III AUDIO.

    5 min
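
    The "2-3.5 workweek by end of year" forecast quoted in the entry above is, at its core, a doubling-time extrapolation. Here is a minimal back-of-the-envelope sketch in Python, assuming (my assumptions, not METR's or Wildeford's) a constant 3-4 month doubling time applied to the 14.5-hour point estimate, a 40-hour workweek, and roughly ten months between the measurement and end of year:

        # Back-of-the-envelope extrapolation of METR's 50%-time-horizon under a
        # fixed doubling time. Illustrative only -- not METR's methodology.

        def extrapolate_horizon(current_hours: float,
                                doubling_time_months: float,
                                months_ahead: float) -> float:
            """Project a time horizon forward assuming a constant doubling time."""
            doublings = months_ahead / doubling_time_months
            return current_hours * 2 ** doublings

        current = 14.5           # hours: METR's 50%-horizon point estimate for Opus 4.6
        months_to_year_end = 10  # assumption: roughly February through December

        for doubling_months in (3, 4):
            projected = extrapolate_horizon(current, doubling_months, months_to_year_end)
            print(f"{doubling_months}-month doubling: ~{projected:.0f}h "
                  f"(~{projected / 40:.1f} workweeks)")

    Under these assumptions, a 3-month doubling time gives roughly 146 hours (about 3.7 workweeks) and a 4-month doubling time roughly 82 hours (about 2 workweeks), which is in line with the quoted 2-3.5 workweek range; the wide 95% CI on the starting estimate (6 to 98 hours) makes any such projection correspondingly uncertain.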
  5. 1D AGO

    “AI #156 Part 2: Errors in Rhetoric” by Zvi

    Things that are being pushed into the future right now: Gemini 3.1 Pro and Gemini DeepThink V2. Claude Sonnet 4.6. Grok 4.20. Updates on Agentic Coding. Disagreement between Anthropic and the Department of War. We are officially a bit behind and will have to catch up next week. Even without all that, we have a second highly full plate today. Table of Contents (As a reminder: bold are my top picks, italics means highly skippable) Levels of Friction. Marginal costs of arguing are going down. The Art Of The Jailbreak. UK AISI finds a universal method. The Quest for Sane Regulations. Some relatively good proposals. People Really Hate AI. Alas, it is mostly for the wrong reasons. A Very Bad Paper. Nick Bostrom writes a highly disappointing paper. Rhetorical Innovation. The worst possible plan is the best one on the table. The Most Forbidden Technique. No, stop, come back. Everyone Is Or Should Be Confused About Morality. New levels of ‘can you?’ Aligning a Smarter Than Human Intelligence is Difficult. Seeking a good basin. [...]

    Outline: (00:43) Levels of Friction (04:55) The Art Of The Jailbreak (06:16) The Quest for Sane Regulations (12:09) People Really Hate AI (18:22) A Very Bad Paper (25:21) Rhetorical Innovation (32:35) The Most Forbidden Technique (34:10) Everyone Is Or Should Be Confused About Morality (36:07) Aligning a Smarter Than Human Intelligence is Difficult (44:51) We'll Just Call It Something Else (47:18) Vulnerable World Hypothesis (51:37) Autonomous Killer Robots (53:18) People Will Hand Over Power To The AIs (57:04) People Are Worried About AI Killing Everyone (59:29) Other People Are Not Worried About AI Killing Everyone (01:00:56) The Lighter Side

    First published: February 20th, 2026
    Source: https://www.lesswrong.com/posts/obqmuRxwFyy8ziPrB/ai-156-part-2-errors-in-rhetoric

    Narrated by TYPE III AUDIO.

    1h 4m
  6. 1D AGO

    “AI #155: Welcome to Recursive Self-Improvement” by Zvi

    This was the week of Claude Opus 4.6, and also of ChatGPT-5.3-Codex. Both leading models got substantial upgrades, although OpenAI's is confined to Codex. Once again, the frontier of AI got more advanced, especially for agentic coding but also for everything else. I spent the week so far covering Opus, with two posts devoted to the extensive model card, and then one giving benchmarks, reactions, capabilities and a synthesis, which functions as the central review. We also got GLM-5, Seedance 2.0, Claude fast mode, an app for Codex and much more. Claude fast mode means you can pay a premium to get faster replies from Opus 4.6. It's very much not cheap, but it can be worth every penny. More on that in the next agentic coding update. One of the most frustrating things about AI is the constant goalpost moving, both in terms of capability and safety. People say ‘oh [X] would be a huge deal but is a crazy sci-fi concept’ or ‘[Y] will never happen’ or ‘surely we would not be so stupid as to [Z]’ and then [X], [Y] and [Z] all happen and everyone shrugs as if nothing happened and [...]

    Outline: (02:32) Language Models Offer Mundane Utility (03:17) Language Models Don't Offer Mundane Utility (03:33) Huh, Upgrades (04:22) On Your Marks (06:23) Overcoming Bias (07:20) Choose Your Fighter (08:44) Get My Agent On The Line (12:03) AI Conversations Are Not Privileged (12:54) Fun With Media Generation (13:59) The Superb Owl (22:07) A Word From The Torment Nexus (26:33) They Took Our Jobs (35:36) The Art of the Jailbreak (35:48) Introducing (37:28) In Other AI News (42:01) Show Me the Money (43:05) Bubble, Bubble, Toil and Trouble (53:38) Future Shock (56:06) Memory Lane (57:09) Keep The Mask On Or You're Fired (58:35) Quiet Speculations (01:03:42) The Quest for Sane Regulations (01:06:09) Chip City (01:09:46) The Week in Audio (01:10:06) Constitutional Conversation (01:11:00) Rhetorical Innovation (01:19:26) Working On It Anyway (01:22:17) The Thin Red Line (01:23:35) Aligning a Smarter Than Human Intelligence is Difficult (01:30:42) People Will Hand Over Power To The AIs (01:31:50) People Are Worried About AI Killing Everyone (01:32:40) Famous Last Words (01:40:15) Other People Are Not As Worried About AI Killing Everyone (01:42:41) The Lighter Side

    First published: February 12th, 2026
    Source: https://www.lesswrong.com/posts/cytxHuLc8oHRq7sNE/ai-155-welcome-to-recursive-self-improvement

    Narrated by TYPE III AUDIO.

    1h 48m

