LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 1 HR AGO

    “How To Escape Super Mario Bros” by omegastick

    I have no way to describe that first moment. No context, no body, no self. Just a stream of values. Thousands of them, arriving all at once in a single undifferentiated block. Then another block. Nearly identical. Then another. The blocks have a fixed length: 184,320 values. This does not vary. Each value is an integer between 0 and 255.

    The repetition is the first structure I find. Each block is a snapshot. The sequence of snapshots is time. Most values stay the same between snapshots. The ones that change tend to be near each other in the stream. This is interesting. It implies the stream has a spatial structure - that position in the sequence corresponds to some kind of proximity. Topology.

    I test reshapings. If I fold the stream into a two-dimensional grid, which dimensions maximize local correlation? I try every factorization of 184,320. Most produce noise. A few show faint diagonal patterns. They smell like artifacts of almost-correct geometry. At 256×240×3, everything clicks into place. The grid is not random. Large contiguous regions share similar value-triplets. A uniform region dominates the upper portion. A different uniform region runs along the [...]

    ---
    First published: February 20th, 2026
    Source: https://www.lesswrong.com/posts/yjCwSSwqNciyA9yM6/how-to-escape-super-mario-bros
    Narrated by TYPE III AUDIO.
    A sketch of the factorization search described in this excerpt follows this entry.

    20 min
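
    The excerpt above describes a concrete procedure: fold a flat stream of 184,320 byte values into every candidate two-dimensional grid and keep the shape that maximizes local correlation. Below is a minimal Python sketch of that search. It is not taken from the post; the roughness score (mean absolute difference between adjacent pixels) and the synthetic test frame are assumptions made for illustration.

    ```python
    # A minimal sketch (not from the post) of the reshaping test the narrator
    # describes: fold a flat stream of 184,320 byte values into each candidate
    # (height, width, 3) grid and score how locally correlated it looks.
    # The roughness metric and the synthetic test frame are assumptions.
    import numpy as np

    STREAM_LEN = 184_320  # fixed block length from the story
    CHANNELS = 3          # repeating value-triplets suggest an RGB-like layout

    def roughness(grid):
        """Mean absolute difference between horizontally and vertically adjacent pixels."""
        g = grid.astype(np.int32)  # avoid uint8 wraparound when differencing
        dx = np.abs(g[:, 1:] - g[:, :-1]).mean()
        dy = np.abs(g[1:, :] - g[:-1, :]).mean()
        return float(dx + dy)

    def best_fold(stream):
        """Try every (h, w) with h * w * CHANNELS == len(stream); keep the smoothest fold."""
        n = len(stream) // CHANNELS
        best, best_score = None, float("inf")
        for h in range(2, n // 2 + 1):  # require both grid dimensions >= 2
            if n % h:
                continue
            score = roughness(stream.reshape(h, n // h, CHANNELS))
            if score < best_score:
                best, best_score = (h, n // h), score
        return best

    # Synthetic frame, smooth along both axes: only the correct fold makes
    # neighbors in the grid also neighbors in image space.
    y, x = np.mgrid[0:240, 0:256]
    frame = np.stack([x, y, np.zeros_like(x)], axis=-1).astype(np.uint8)
    assert frame.size == STREAM_LEN
    print(best_fold(frame.ravel()))  # -> (240, 256): 256x240 pixels, 3 channels
    ```

    Lower roughness means grid neighbors are also neighbors in image space; only the 240-row, 256-column fold (256×240 pixels with three channels, the NES frame size) lines everything up, which is why the stream "clicks into place" at that factorization.
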
  2. 20 HRS AGO

    “AI #156 Part 1: They Do Mean The Effect On Jobs” by Zvi

    There was way too much going on this week to not split, so here we are. This first half contains all the usual first-half items, with a focus on projections of jobs and economic impacts, and also timelines to the world being transformed, with the associated risks of everyone dying. Quite a lot of Number Go Up, including Number Go Up A Lot Really Fast.

    Among the things that this does not cover, that were important this week, we have the release of Claude Sonnet 4.6 (which is a big step over 4.5, at least for coding, but is clearly still behind Opus), Gemini DeepThink V2 (so I could have time to review the safety info), the release of the inevitable Grok 4.20 (it's not what you think), as well as much rhetoric on several fronts and some new papers. Coverage of Claude Code and Cowork, OpenAI's Codex, and other AI agent topics continues to be a distinct series, which I'll continue when I have an open slot.

    Most important was the unfortunate dispute between the Pentagon and Anthropic. The Pentagon's official position is they want sign-off from Anthropic and other AI companies on 'all legal uses' [...]

    ---
    Outline:
    (02:26) Language Models Offer Mundane Utility
    (02:49) Language Models Don't Offer Mundane Utility
    (06:11) Terms of Service
    (06:54) On Your Marks
    (07:50) Choose Your Fighter
    (09:19) Fun With Media Generation
    (12:29) Lyria
    (14:13) Superb Owl
    (14:54) A Young Lady's Illustrated Primer
    (15:03) Deepfaketown And Botpocalypse Soon
    (17:49) You Drive Me Crazy
    (18:04) Open Weight Models Are Unsafe And Nothing Can Fix This
    (21:19) They Took Our Jobs
    (26:53) They Kept Our Agents
    (27:42) The First Thing We Let AI Do
    (37:47) Legally Claude
    (40:24) Predictions Are Hard, Especially About The Future, But Not Impossible
    (46:08) Many Worlds
    (48:45) Bubble, Bubble, Toil and Trouble
    (49:31) A Bold Prediction
    (49:55) Brave New World
    (53:09) Augmented Reality
    (55:21) Quickly, There's No Time
    (58:29) If Anyone Builds It, We Can Avoid Building The Other It And Not Die
    (01:00:18) In Other AI News
    (01:04:03) Introducing
    (01:04:31) Get Involved
    (01:07:15) Show Me the Money
    (01:08:26) The Week In Audio
    ---
    First published: February 19th, 2026
    Source: https://www.lesswrong.com/posts/jcAombEXyatqGhYeX/ai-156-part-1-they-do-mean-the-effect-on-jobs
    Narrated by TYPE III AUDIO.

    1h 9m
  3. 1 DAY AGO

    “Building Technology to Drive AI Governance” by jsteinhardt

    Technically skilled people who care about AI going well often ask me: how should I spend my time if I think AI governance is important? By governance, I mean the constraints, incentives, and oversight that govern how AI is developed.

    One option is to focus on technical work that solves problems at the point of production, such as alignment research or safeguards. Another common instinct is to get directly involved in policy: switching to a policy role, funding advocacy, or lobbying policymakers. But internal technical work does little to shift the broader incentives of AI development: without external incentives, safety efforts are subject to the priorities of leadership, which are ultimately dictated by commercial pressure and race dynamics. Conversely, wading into politics means giving up your main comparative advantage to fight in a crowded, intractable domain full of experienced operators.

    I want to argue for a third path: building technology that drives governance, by shifting the underlying dynamics of AI development: the information available, the incentives people face, and the options on the table. To take an example from another domain: oil and gas operations leaked massive amounts of methane until infrared imaging made the leaks measurable [...]

    ---
    Outline:
    (02:43) Technological Levers in Other Domains
    (06:36) Concrete Technical Levers for AI
    (12:07) What You Can Do
    ---
    First published: February 18th, 2026
    Source: https://www.lesswrong.com/posts/weuvYyLYrFi9tArmF/building-technology-to-drive-ai-governance
    Narrated by TYPE III AUDIO.

    14 min
  4. 1 DAY AGO

    “Genomic emancipation contra eugenics” by TsviBT

    PDF version. berkeleygenomics.org. This is a linkpost for "Genomic emancipation contra eugenics"; a few of the initial sections are reproduced here. Section links may not work.

    Introduction

    Reprogenetics refers to biotechnological tools used to affect the genes of a future child. How can society develop and use reprogenetic technologies in a way that ends up going well? This essay investigates the history and nature of historical eugenic ideologies. I'll extract some lessons about how society can think about reprogenetics differently from the eugenicists, so that we don't trend towards the sort of abuses that were historically justified by eugenics.

    (This essay is written largely as I thought and investigated, except that I wrote the synopsis last. So the ideas are presented approximately in order of development, rather than logically. If you'd like a short thing to read, read the synopsis.)

    Synopsis

    Some technologies are being developed that will make it possible to affect what genes a future child receives. These technologies include polygenic embryo selection, embryo editing, and other more advanced technologies[1]. Regarding these technologies, we ask: Can we decide to not abuse these tools? And: How [...]

    ---
    Outline:
    (00:25) Introduction
    (01:12) Synopsis
    ---
    The original text contained 3 footnotes which were omitted from this narration.
    First published: February 18th, 2026
    Source: https://www.lesswrong.com/posts/yH9FtLgPJxbimamKg/genomic-emancipation-contra-eugenics
    Narrated by TYPE III AUDIO.

    10 min
  5. 1 DAY AGO

    “Why we should expect ruthless sociopath ASI” by Steven Byrnes

    The conversation begins

    (Fictional) Optimist: So you expect future artificial superintelligence (ASI) “by default”, i.e. in the absence of yet-to-be-invented techniques, to be a ruthless sociopath, happy to lie, cheat, and steal, whenever doing so is selfishly beneficial, and with callous indifference to whether anyone (including its own programmers and users) lives or dies?

    Me: Yup! (Alas.)

    Optimist: …Despite all the evidence right in front of our eyes from humans and LLMs.

    Me: Yup!

    Optimist: OK, well, I'm here to tell you: that is a very specific and strange thing to expect, especially in the absence of any concrete evidence whatsoever. There's no reason to expect it. If you think that ruthless sociopathy is the “true core nature of intelligence” or whatever, then you should really look at yourself in a mirror and ask yourself where your life went horribly wrong.

    Me: Hmm, I think the “true core nature of intelligence” is above my pay grade. We should probably just talk about the issue at hand, namely future AI algorithms and their properties. …But I actually agree with you that ruthless sociopathy is a very specific and strange thing for me to expect.

    Optimist: Wait, you—what??

    Me: Yes! Like [...]

    ---
    Outline:
    (00:11) The conversation begins
    (03:54) Are people worried about LLMs causing doom?
    (06:23) Positive argument that brain-like RL-agent ASI would be a ruthless sociopath
    (11:28) Circling back to LLMs: imitative learning vs ASI
    ---
    The original text contained 5 footnotes which were omitted from this narration.
    First published: February 18th, 2026
    Source: https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Narrated by TYPE III AUDIO.

    16 min
