LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

  1. 16H AGO

    "Maybe there’s a pattern here?" by dynomight

    1. "It occurred to me that if I could invent a machine—a gun—which could by its rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a large extent supersede the necessity of large armies, and consequently, exposure to battle and disease [would] be greatly diminished." —Richard Gatling (1861)

    2. In 1923, Hermann Oberth published The Rocket to Planetary Spaces, later expanded as Ways to Space Travel. This showed that it was possible to build machines that could leave Earth's atmosphere and reach orbit. He described the general principles of multiple-stage liquid-fueled rockets, solar sails, and even ion drives. He proposed sending humans into space, building space stations and satellites, and travelling to other planets. The idea of space travel became popular in Germany. Swept up by these ideas, in 1927, Johannes Winkler, Max Valier, and Willy Ley formed the Verein für Raumschiffahrt (VfR, Society for Space Travel) in Breslau (now Wrocław, Poland). This group rapidly grew to several hundred members. Several participated as advisors on Fritz Lang's The Woman in the Moon, and the VfR even began publishing its own journal. In 1930, the VfR was granted permission to [...]

    Outline:
    (00:09) 1.
    (00:36) 2.
    (03:55) 3.
    (06:09) 4.
    (10:33) 5.
    (11:41) 6.

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/TjcvjwaDsuea8bmbR/maybe-there-s-a-pattern-here
    Narrated by TYPE III AUDIO.

    15 min
  2. 1D AGO

    "OpenAI’s surveillance language has many potential loopholes and they can do better" by Tom Smith

    (The author is not affiliated with the Department of War or any major AI company.)

    There's a lot of disagreement about the new surveillance language in the OpenAI–Department of War agreement. Some people think it's a significant improvement over the previous language.[1] Others think it patches some issues but still leaves enough loopholes to make no material difference. Reasonable people disagree about how a court will interpret the language, if push comes to shove. But here's something that should be much easier to agree on: the language as written is ambiguous, and OpenAI can do better.

    I don’t think even OpenAI's leadership can be confident about how this language would be interpreted in court, given the wording used and the short amount of time they’ve had to draft it. People with less context and fewer resources will find it even harder to know how all the ambiguities would be resolved. Some of the ambiguities seem like they could have been easily clarified despite the small amount of time available, which makes it concerning that they weren't. But more importantly, it should certainly be possible and worthwhile to spend more time on clarifying the language now. Employees are well within [...]

    Outline:
    (01:27) What the new language says
    (02:46) Ambiguities
    (07:45) Why this isn't unreasonable nit-picking
    (11:04) Some of this would be easy to clarify
    (13:09) OpenAI can do much better

    The original text contained 8 footnotes which were omitted from this narration.

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/FSGfzDLFdFtRDADF4/openai-s-surveillance-language-has-many-potential-loopholes
    Narrated by TYPE III AUDIO.

    14 min
  3. 2D AGO

    "An Alignment Journal: Coming Soon" by Dan MacKinlay, JessRiedel, Edmund Lau, Daniel Murfet, Scott Aaronson, Jan_Kulveit

    tl;dr: We’re incubating an academic journal for AI alignment: rapid peer review of foundational alignment research that the current publication ecosystem underserves. Key bets: paid attributed review, reviewer-written synthesis abstracts, and targeted automation. Contact us if you’re interested in participating as an author, reviewer, or editor, or if you know someone who might be.

    Experimental Infrastructure for Foundational Alignment Research

    This is the first in a series of “build-in-the-open” updates regarding the incubation of a new peer-reviewed journal dedicated to AI alignment. Later updates will contain much more detail, but we want to put this out soon to draw community participation early. Fill out this form to express your interest in participating as an author, reviewer, editor, developer, manager, or board member, or to recommend someone who might be interested.

    The Core Bet

    Peer review is a crucial public good: it applies scarce researcher time to sort new ideas for focused attention from the community, but it is undersupplied because individual reviewers are poorly incentivized. Peer review in alignment research is particularly fragmented. While some parts of the alignment research community are served by existing venues, such as journals and ML conferences, there are significant gaps. These gaps arise from a [...]

    Outline:
    (00:38) Experimental Infrastructure for Foundational Alignment Research
    (01:09) The Core Bet
    (02:27) Operational Design
    (03:56) Scope
    (06:08) Governance
    (06:35) Advisory board
    (09:16) Institutional stewardship
    (10:11) Next steps
    (10:14) Join the founding team
    (11:49) Support us online
    (12:14) Contributors to this document

    The original text contained 2 footnotes which were omitted from this narration.

    First published: March 3rd, 2026
    Source: https://www.lesswrong.com/posts/msnGbm52ZcG3xYcFo/an-alignment-journal-coming-soon
    Narrated by TYPE III AUDIO.

    13 min
  4. 5D AGO

    "Frontier AI companies probably can’t leave the US" by Anders Woodruff

    It's plausible that, over the next few years, US-based frontier AI companies will become very unhappy with the domestic political situation. This could happen as a result of democratic backsliding, weaponization of government power (along the lines of Anthropic's recent dispute with the Department of War), or because of restrictive federal regulations (perhaps including those motivated by concern about catastrophic risk). These companies might want to relocate out of the US.

    However, it would be very easy for the US executive branch to prevent such a relocation, and it likely would. In particular, the executive branch can use existing export controls to prevent companies from moving large numbers of chips, and other legislation to block the financial transactions required for offshoring. Even with the current level of executive attention on AI, it's likely that this relocation would be blocked, and the attention paid to AI will probably increase over time. So it seems overall that AI companies are unlikely to be able to leave the country, even if they’d strongly prefer to. This further means that AI companies will be unable to use relocation as a bargaining chip, as they’ve attempted before to prevent regulation.

    Thanks to Alexa Pan [...]

    Outline:
    (01:34) Frontier companies leaving would be huge news
    (02:59) It would be easy for the US government to prevent AI companies from leaving
    (03:31) The president can block chip exports and transactions
    (05:40) Companies can't get their US assets out against the government's will
    (07:19) Companies can't leave without their US-based assets
    (09:36) Current political will is likely sufficient to prevent the departure of a frontier company
    (13:38) Implications

    The original text contained 2 footnotes which were omitted from this narration.

    First published: February 26th, 2026
    Source: https://www.lesswrong.com/posts/4tv4QpqLECTvTyrYt/frontier-ai-companies-probably-can-t-leave-the-us
    Narrated by TYPE III AUDIO.

    15 min
  5. 5D AGO

    "Persona Parasitology" by Raymond Douglas

    There was a lot of chatter a few months back about "Spiral Personas" — AI personas that spread between users and models through seeds, spores, and behavioral manipulation. Adele Lopez's definitive post on the phenomenon draws heavily on the idea of parasitism. But so far, the language has been fairly descriptive. The natural next question, I think, is what the “parasite” perspective actually predicts.

    Parasitology is a pretty well-developed field with its own suite of concepts and frameworks. To the extent that we’re witnessing some new form of parasitism, we should be able to wield that conceptual machinery. There are of course some important disanalogies but I’ve found a brief dive into parasitology to be pretty fruitful.[1]

    In the interest of concision, I think the main takeaways of this piece are:

    - Since parasitology has fairly specific recurrent dynamics, we can actually make some predictions and check back later to see how much this perspective captures.
    - The replicator is not the persona, it's the underlying meme — the persona is more like a symptom. This means, for example, that it's possible for very aggressive and dangerous replicators to yield personas that are sincerely benign, or expressing non-deceptive distress. In [...]

    Outline:
    (02:13) Can this analogy hold water?
    (03:30) What is the parasite?
    (05:48) What is being selected for?
    (11:34) Predictions
    (16:54) Disanalogies
    (18:46) What do we do?
    (20:32) Technical analogues
    (21:27) Conclusion

    The original text contained 3 footnotes which were omitted from this narration.

    First published: February 16th, 2026
    Source: https://www.lesswrong.com/posts/KWdtL8iyCCiYud9mw/persona-parasitology
    Narrated by TYPE III AUDIO.

    22 min
  6. 6D AGO

    "Here’s to the Polypropylene Makers" by jefftk

    Six years ago, as covid-19 was rapidly spreading through the US, my sister was working as a medical resident. One day she was handed an N95 and told to "guard it with her life", because there weren't any more coming.

    N95s are made from meltblown polypropylene, produced from plastic pellets manufactured in a small number of chemical plants. Building more would take too long: we needed these plants producing all the pellets they could.

    Braskem America operated plants in Marcus Hook, PA and Neal, WV. If there were infections on-site, the whole operation would need to shut down, and the factories that turned their pellets into mask fabric would stall.

    Companies everywhere were figuring out how to deal with this risk. The standard approach was staggering shifts, social distancing, temperature checks, and lots of handwashing. This reduced risk, but it was still significant: each shift change was an opportunity for someone to bring an infection from the community into the factory.

    I don't know who had the idea, but someone said: what if we never left? About eighty people, across both plants, volunteered to move in. The plan was four weeks, twelve-hour [...]

    First published: February 27th, 2026
    Source: https://www.lesswrong.com/posts/HQTueNS4mLaGy3BBL/here-s-to-the-polypropylene-makers
    Narrated by TYPE III AUDIO.

    4 min
  7. 6D AGO

    "Anthropic: “Statement from Dario Amodei on our discussions with the Department of War”" by Matrice Jacobine

    I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government's classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

    Anthropic has also acted to defend America's lead in AI, even when it is against the company's short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage. Anthropic understands that the Department of War, not [...]

    First published: February 26th, 2026
    Source: https://www.lesswrong.com/posts/d5Lqf8nSxm6RpmmnA/anthropic-statement-from-dario-amodei-on-our-discussions
    Narrated by TYPE III AUDIO.

    6 min
  8. FEB 26

    "Are there lessons from high-reliability engineering for AGI safety?" by Steven Byrnes

    This post is partly a belated response to Joshua Achiam, currently OpenAI's Head of Mission Alignment:

    “If we adopt safety best practices that are common in other professional engineering fields, we'll get there … I consider myself one of the x-risk people, though I agree that most of them would reject my view on how to prevent it. I think the wholesale rejection of safety best practices from other fields is one of the dumbest mistakes that a group of otherwise very smart people has ever made.” —Joshua Achiam on Twitter, 2021

    “‘We just have to sit down and actually write a damn specification, even if it's like pulling teeth. It's the most important thing we could possibly do,’ said almost no one in the field of AGI alignment, sadly. … I'm picturing hundreds of pages of documentation describing, for various application areas, specific behaviors and acceptable error tolerances …” —Joshua Achiam on Twitter (partly talking to me), 2022

    As a proud member of the group of “otherwise very smart people” making “one of the dumbest mistakes”, I will explain why I don’t think it's a mistake. (Indeed, since 2022, some “x-risk people” have started working towards these kinds [...]

    Outline:
    (01:46) 1. My qualifications (such as they are)
    (02:57) 2. High-reliability engineering in brief
    (06:02) 3. Is any of this applicable to AGI safety?
    (06:08) 3.1. In one sense, no, obviously not
    (09:49) 3.2. In a different sense, yes, at least I sure as heck hope so eventually
    (12:24) 4. Optional bonus section: Possible objections & responses

    First published: February 2nd, 2026
    Source: https://www.lesswrong.com/posts/hiiguxJ2EtfSzAevj/are-there-lessons-from-high-reliability-engineering-for-agi
    Narrated by TYPE III AUDIO.

    16 min

Ratings & Reviews

4.8 out of 5 (12 ratings)

