LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

  1. 22H AGO

    "$50 million a year for a 10% chance to ban ASI" by Andrea_Miotti, Alex Amadori, Gabriel Alfour

    ControlAI's mission is to avert the extinction risks posed by superintelligent AI. We believe that in order to do this, we must secure an international prohibition on its development. We're working to make this happen through what we believe is the most natural and promising approach: helping decision-makers in governments and the public understand the risks and take action. We believe that ControlAI can achieve an international prohibition on ASI development if scaled sufficiently. We estimate that it would take a budget of approximately $50 million a year to give us a concrete chance at achieving this in the next few years. To be more precise: conditional on receiving this funding in the next few months, we feel we would have ~10% probability of success. In this post, we lay out some of the reasoning behind this estimate, and explain how additional funding past that threshold would continue to significantly improve our chances of success, with $500 million a year producing an estimated ~30% probability of success. [1] Preventing ASI 101 Negotiating, implementing and enforcing an international prohibition on ASI is, in and of itself, not the work of a single non-profit. You [...] --- Outline: (01:17) Preventing ASI 101 (05:44) Awareness is the bottleneck (09:38) An asymmetric war (12:08) Scalable processes (17:32) What we'd do with $50 million or more per year (18:45) US policy advocacy (21:22) Policy advocacy in the rest of the world (23:37) Public awareness (31:15) Grassroots mobilization (32:31) Policy work (33:59) Thought-leader advocacy (36:05) Attracting and retaining the best talent (37:18) Conclusion The original text contained 28 footnotes which were omitted from this narration. --- First published: April 21st, 2026 Source: https://www.lesswrong.com/posts/TnAR5Sf5hphfnzNTr/usd50-million-a-year-for-a-10-chance-to-ban-asi-1 --- Narrated by TYPE III AUDIO. 
--- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    40 min
  2. 1D AGO

    "Evil is bad, actually (Vassar and Olivia Schaefer callout post)" by plex

    Michael Vassar's strategy for saving the world is horrifyingly counterproductive. Olivia's is worse. A note before we start: A lot of the sources cited are people who ended up looking kinda insane. This is not a coincidence; it's apparently an explicit strategy: Apply plausibly-deniable psychological pressure to anyone who might speak up until they crack and discredit themselves by sounding crazy or taking extreme and destructive actions. Here's Brent Dill explaining it: (later in the conversation he tries to encourage the person he's talking to to kill herself, and threatens her with death if she posts the logs. Charming group! I hear Brent was living in Vassar's garden recently, well after he was removed from the wider community for sexual abuse.) Examples Some of the people here I knew before their interactions with Vassar's sphere to be not just mentally OK, but unusually resilient people. Prime among them is Kathy Forth. Prior to her suicide, Kathy and I were friends. I witnessed her fall from healthy and capable to anxiety to paranoia as, downstream of what I believe to be genuine sexual abuse, she spiralled into a narrative and way of experiencing the world where almost everyone seemed [...] The original text contained 7 footnotes which were omitted from this narration. --- First published: April 21st, 2026 Source: https://www.lesswrong.com/posts/cY7J7KSSqrhB8t3hQ/evil-is-bad-actually-vassar-and-olivia-schaefer-callout-post --- Narrated by TYPE III AUDIO.

    16 min
  3. 2D AGO

    "10 non-boring ways I’ve used AI in the last month" by habryka

    I use AI assistance for basically all of my work, for many hours, every day. My colleagues do the same. Recent surveys suggest >50% of Americans have used AI to help with their work in the last week. My architect recently started sending me emails that were clearly ChatGPT generated.[1] Despite that, I know surprisingly little about how other people use AI assistance. Or at least how people who aren't weird AI-influencers sharing their marketing courses on Twitter or LinkedIn use AI. So here is a list of 10 concrete times I have used AI in at least mildly creative ways, and how that went. 1) Transcribe and summarize every conversation spoken in our team office Using an internal Lightcone application called "Omnilog" we have a microphone in our office that records all of our meetings, transcribes them via ElevenLabs, and uses Pyannote.ai for speaker identification. This was a bunch of work and is quite valuable, but probably a bit too annoying for most readers of this post to set up. However, the thing I am successfully using Claude Code to do is take that transcript (which often has substantial transcription and speaker-identification errors), clean it up, summarize [...] --- Outline: (00:50) 1) Transcribe and summarize every conversation spoken in our team office (01:56) 2) Try to automatically fix any simple bugs that anyone on the team has mentioned out loud, or complained about in Slack (03:13) 3) Design 20+ different design variations for nowinners.ai (04:09) 4) Review my LessWrong essays for factual accuracy and argue with me about their central thesis (05:08) 5) Remove unnecessary clauses, sentences, parentheticals and random cruft from my LessWrong posts before publishing (06:23) 6) Pair vibe-coding (08:14) 7) Mass-creating 100+ variations of Suno songs using Claude Cowork desktop control [... 
3 more sections] --- First published: April 20th, 2026 Source: https://www.lesswrong.com/posts/bxdwSZYxKmPBres6w/10-non-boring-ways-i-ve-used-ai-in-the-last-month --- Narrated by TYPE III AUDIO.

    14 min
  4. 2D AGO

    "Feel like a room has bad vibes? The lighting is probably too “spiky” or too blue" by habryka

    I have now had a few years of experience doing architectural and interior design for many spaces that people seem to really love (most widely known being Lighthaven, but before that we also had the Lightcone Offices, and I've also played a hand in designing some of the most popular areas at Constellation a few years back). Most people (including me a few years back) have surprisingly bad introspective access into why a room makes them feel certain things. Most of the time, people's ability to describe the effect of a space on them is as shallow as "this place feels artificial", or "this place has bad vibes", or "this place feels cozy". And if they try to figure out why that is true, they quickly run into the limits of their introspective access. The most common reason a space feels bad is that it is lit by low-quality lights. Our eyes evolved to see things illuminated by sunlight. Correspondingly, it appears that the best proxy we have for whether the light in a room "works" is how similar the light in that room is to natural sunlight. The most popular way of measuring how much light differs from [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: April 19th, 2026 Source: https://www.lesswrong.com/posts/dWib7qinqymfxevE4/feel-like-a-room-has-bad-vibes-the-lighting-is-probably-too --- Narrated by TYPE III AUDIO.

    7 min
  5. 2D AGO

    "Reevaluating AGI Ruin in 2026" by lc

    It's been about four years since Eliezer Yudkowsky published AGI Ruin: A List of Lethalities, a 43-point list of reasons the default outcome from building AGI is everyone dying. A week later, Paul Christiano replied with Where I Agree and Disagree with Eliezer, signing on to about half the list and pushing back on most of the rest. For people who were young and not in the Bay Area, like me, these essays were probably more significant than old-timers would expect. Before it became completely consumed with AI discussions, LessWrong was a forum about the art of human rationality, and most internet rationalists I knew thought of it as a mix between that and a place to write for people who liked the Sequences. It wasn't until 2022 that we were exposed to all of the doom arguments in one place, and it was the first time in many years that Eliezer had publicly announced how much more dire his assessment had become since the Sequences. As far as I can tell AGI Ruin still remains his most authoritative explanation of his views. It's not often that public intellectuals will literally hand you a document explaining why [...] --- Outline: (02:51) AGI Ruin (02:54) Section A (Setting up the problem) (12:18) Section B.1 (Distributional Shift) (22:16) Section B.2: Central difficulties of outer and inner alignment (32:21) Section B.3: Central difficulties of sufficiently good and useful transparency / interpretability (41:29) Section C (What is AI Safety currently doing?) (44:34) Overall Impressions The original text contained 4 footnotes which were omitted from this narration. --- First published: April 19th, 2026 Source: https://www.lesswrong.com/posts/PgJYwnN7fZKipgMz4/reevaluating-agi-ruin-in-2026 --- Narrated by TYPE III AUDIO.

    50 min
  6. 3D AGO

    "Having OCD is like living in North Korea (Here’s how I escaped)" by Declan Molony

    [Author's note: this post is the narrative version that explains my journey with OCD and how I treated it. The short version provides quick, actionable advice for treating OCD.] The following is the most painful experience I've ever had. Four years ago in the parking lot of my rock climbing gym… …my heart was pumping out of my chest, I was sweating profusely, and an overwhelming sense of panic and impending doom had a vice grip on my soul. A painful death was surely imminent. I felt like I was defusing a bomb that was on the verge of exploding. In reality, I was standing outside of my car after locking it with only one *beep* of my key fob, instead of the 5-6 *beeps* I usually do. The reason I undertook this (basically suicidal) task was that it was getting annoying how many more *beeps* it was taking for my car to feel locked. It used to be only 2-3 *beeps* a few years ago. Now it was 5-6. In a few more years, it might take as many as 10-20 *beeps*. One time on a hike with friends, a sense of panic overcame me. [...] --- Outline: (00:24) The following is the most painful experience I've ever had. (07:34) OCD (12:28) Deconstructing OCD into its two parts (12:49) (1) Severe Anxiety (15:53) (2) Disordered Thoughts (18:06) My dating life (23:34) So what caused me to finally get help with my OCD? (26:22) Solutions (28:39) Panic Meditation (41:31) Three mini-examples of my improvement (41:35) A) Panic at the grocery store (and no, sadly, not Panic! At The Disco) (42:30) B) Moral OCD at the gym (43:49) C) Disordered thoughts while on a date (50:35) Where I'm at today (58:41) Further resources --- First published: April 18th, 2026 Source: https://www.lesswrong.com/posts/fgDqnwQj3AP9mKRRG/having-ocd-is-like-living-in-north-korea-here-s-how-i --- Narrated by TYPE III AUDIO.

    59 min
  7. 3D AGO

    "There are only four skills: design, technical, management and physical" by habryka

    Epistemic status: Completely schizo galaxy-brained theory Lightcone[1] operates on a "generalist" philosophy. Most of our full-time staff have the title "generalist", and in any given year they work on a wide variety of tasks — from software development on the LessWrong codebase to fixing an overflowing toilet at Lighthaven, our 30,000 sq. ft. campus. One of our core rules is that you should not delegate a task you don't know how to perform yourself. This is a very intense rule and has lots of implications for how we operate, so I've spent a lot of time watching people learn things they didn't previously know how to do. My overall observation (and why we have the rule) is that smart people can learn almost anything. Across a wide range of tasks, most of the variance in performance is explained by general intelligence (foremost) and conscientiousness (second), not expertise. Of course, if you compare yourself to someone who's done a task thousands of times you'll lag behind for a while — but people plateau surprisingly quickly. Having worked with experts across many industries, and having dabbled in the literature around skill transfer and training, I've seen little difference [...] The original text contained 5 footnotes which were omitted from this narration. --- First published: April 18th, 2026 Source: https://www.lesswrong.com/posts/KRLGxCaqdgrotyB8z/there-are-only-four-skills-design-technical-management-and --- Narrated by TYPE III AUDIO.

    10 min

Ratings & Reviews

4.8
out of 5
12 Ratings

