LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 6H AGO

    “AI #141: Give Us The Money” by Zvi

    OpenAI does not waste time. On Friday I covered their announcement that they had ‘completed their recapitalization’ by converting into a PBC, including potentially the largest theft in human history. Then this week their CFO Sarah Friar went ahead and called for a Federal ‘backstop’ on their financing, also known as privatizing gains and socializing losses, also known as the worst form of socialism, also known as regulatory capture. She tried to walk it back and claim it was taken out of context, but we’ve seen the clip. We also got Ilya's testimony regarding The Battle of the Board, confirming that this was centrally a personality conflict and about Altman's dishonesty and style of management, at least as seen by Ilya Sutskever and Mira Murati. Attempts to pin the events on ‘AI safety’ or EA were almost entirely scapegoating. Also it turns out they lost over $10 billion last quarter, and have plans to lose over $100 billion more. That's actually highly sustainable in context, whereas Anthropic only plans to lose $6 billion before turning a profit, and I don’t understand why they wouldn’t want to lose a lot more. Both have the goal [...]

    Outline:
    (01:45) Language Models Offer Mundane Utility
    (02:12) Language Models Don't Offer Mundane Utility
    (03:47) Huh, Upgrades
    (04:48) On Your Marks
    (12:48) Deepfaketown and Botpocalypse Soon
    (15:42) Fun With Media Generation
    (16:01) They Took Our Jobs
    (16:28) A Young Lady's Illustrated Primer
    (17:51) Get Involved
    (18:43) Introducing
    (19:14) In Other AI News
    (20:08) Apple Finds Some Intelligence
    (22:54) Give Me the Money
    (30:00) Show Me the Money
    (35:57) Bubble, Bubble, Toil and Trouble
    (39:14) They're Not Confessing, They're Bragging
    (39:34) Quiet Speculations
    (47:08) The Quest for Sane Regulations
    (50:55) Chip City
    (52:53) The Week in Audio
    (58:19) Rhetorical Innovation
    (01:07:13) Aligning a Smarter Than Human Intelligence is Difficult
    (01:08:11) Everyone Is Confused About Consciousness
    (01:10:34) The Potentially Largest Theft In Human History
    (01:12:34) People Are Worried About Dying Before AGI
    (01:15:54) People Are Worried About AI Killing Everyone
    (01:20:47) Other People Are Not As Worried About AI Killing Everyone
    (01:24:11) Messages From Janusworld

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/uDcueiGywfqBMWrEh/ai-141-give-us-the-money

    Narrated by TYPE III AUDIO.

    1h 29m
  2. 8H AGO

    “A Guide To Being Persuasive About AI Dangers” by Mikhail Samin

    I think I’m pretty good at convincing people about AI dangers. This post talks about the basics of speaking convincingly about AI dangers to people.

    Prerequisites

    1. Learn to truly see them

    In 2022, at a CFAR workshop, I was introduced to circling. It is a kind of multi-player meditation. People sit in a circle and have a conversation, but the content of the conversation is mostly focused on the meta: what someone says or expresses causes in you, how you relate to other people, and what's going on in the circle as a group. It is sometimes a great experience; but more importantly, it allows you to (1) pay attention to what's going on in other people and to your models of them and (2) explicitize your hypotheses, ask people what's actually going on, and be surprised by how wrong you are about other people! This is awesome, and it quickly teaches you to be very attentive to other people and to see them in a far less wrong way. (Related: Circling as Cousin to Rationality.) I think step 1 of talking well about AI dangers to people is to learn to try to see them, to notice where [...]

    Outline:
    (00:19) Prerequisites
    (00:23) 1. Learn to truly see them
    (01:53) 2. Be honest
    (03:34) 3. Try to understand the subject well
    (05:05) How to talk in three steps
    (05:12) 1. Identify what they're likely missing
    (06:33) 2. Listen attentively
    (08:02) 3. Say things
    (08:45) Practice all of the above

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/QoMwFosibKffC4vxA/a-guide-to-being-persuasive-about-ai-dangers

    Narrated by TYPE III AUDIO.

    10 min
  3. 16H AGO

    “Halfway to Anywhere” by Screwtape

    “If you can get your ship into orbit, you’re halfway to anywhere.” - Robert Heinlein

    This generalizes.

    1.

    Spaceflight is hard. Putting a rocket on the moon is one of the most impressive feats humans have ever achieved. The International Space Station, an inhabited living space and research facility in Low Earth Orbit, has been continuously inhabited for over two decades now, and that's awesome. It is a testament to the hard work and brilliance of a lot of people over a lot of time that this is possible.

    [Image caption: Look ma, no selfie stick]

    And we’re not done yet. There are footprints on the moon today, but there are also robots leaving tracks on Mars, and satellites have taken photos of the rings of Saturn and the craters of Pluto. Voyager has stood waaay back and taken a family picture of the solar system. Give us a rocket and a place to send it, and we’ll go anywhere we’re curious about. There's a funny property of space flight though, which is that half the work happens very close to home and is surprisingly similar no matter where you’re going. When you boil spaceflight down into the very abstract basics, it [...]

    Outline:
    (00:17) 1.
    (03:30) 2.
    (06:02) 3.

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/mB4o2LnLrZHesNRC2/halfway-to-anywhere

    Narrated by TYPE III AUDIO.

    10 min
  4. 1D AGO

    “People Seem Funny In The Head About Subtle Signals” by johnswentworth

    WARNING: This post contains spoilers for Harry Potter and the Methods of Rationality, and I will not warn about them further. Also some anecdotes from slutcon, which are not particularly NSFW, but it's still slutcon.

    A girl I was seeing once asked me to guess what Hogwarts house she was trying to channel with her outfit. Her answer turned out to be Slytherin, because the tiny gem in the necklace she was wearing was green. Never mind that nothing else she wore was green, including the gems in her earrings, and that colors of all the other Hogwarts houses appeared in her outfit (several more prominently), and that the gem itself was barely visible enough to tell the color at all, and that she wasn’t doing anything to draw attention to the necklace specifically.[1]

    I wonder, sometimes, just how many of the “subtle social signals” people think they’re sending are like that case - i.e. it's basically a game with themselves, which has utterly zero signal for anyone else. Subtlety to the point of invisibility clearly happens more than zero percent of the time; I have seen at least that one unambiguous case, so existence is at least established. [...]

    Outline:
    (01:21) Just How Subtle Is The Signal?
    (01:25) Hints in Writing
    (02:34) Hints in Flirting
    (04:26) Hints in Outfit
    (05:46) Oblivious?
    (06:29) Emotional Investment?

    The original text contained 2 footnotes which were omitted from this narration.

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/5axNFSfaxgnTKPrpy/people-seem-funny-in-the-head-about-subtle-signals

    Narrated by TYPE III AUDIO.

    9 min
  5. 1D AGO

    “A 2032 Takeoff Story” by romeo

    I spent 3 recent Sundays writing my mainline AI scenario. Having only spent 3 days on it, it's not very well-researched (especially in the areas where I’m not well informed) or well-written, and the endings are particularly open-ended and weak. But I wanted to post the somewhat unfiltered 3-day version to normalize doing so. There are also some details that I no longer fully endorse, because the act of doing this exercise spurred me to look into things in more detail, and I have updated my views slightly[1] — an ode to the value of this exercise. Nevertheless, this scenario still represents a very central story of how I think the future of AI will go. I found this exercise extremely useful and I hope others will carve out a few days to attempt it. At the bottom there are: (1) my tips on how to do this scenario-writing exercise, and (2) a list of open questions that I think are particularly important (which I made my intuitive guesses about to write this scenario but feel very uncertain about).

    Summary

    2026-2028: The Deployment Race Era

    AI companies are all focused on monetizing, doing RL on LLMs [...]

    Outline:
    (01:10) Summary
    (07:36) A 2032 Takeoff Story
    (07:40) Jan-Jun 2026: AI Products Galore
    (09:23) Jul-Dec 2026: The AI National Champions of China
    (11:45) Jan-Jun 2027: The Deployment Race Era
    (13:50) Jul-Dec 2027: China's Domestic DUV
    (16:31) Jan-Jun 2028: Robotics Foundation Models
    (20:34) Jul-Dec 2028: The 1% AI Economy
    (27:26) Jan-Jun 2029: National AI Grand Strategies
    (30:28) Jul-Dec 2029: Early Household Robots
    (32:25) Jan-Jun 2030: Where are the Superhuman Coders?
    (34:25) Jul-Dec 2030: Scaling AI bureaucracies
    (36:08) Jan-Jun 2031: Domestic EUV
    (37:45) Jul-Dec 2031: The Magnificent Four
    (38:54) Jan-Jun 2032: China is a Robot Playground
    (41:12) Jul-Dec 2032: The 10% Automation Economy
    (43:47) Jan 2033: Superhuman Coder and the Paradigm Shift
    (45:00) Branch 1: Brain-like algorithms
    (45:05) Feb 2033, Branch 1: Full research automation
    (46:21) Apr 2033, Branch 1: Brain-like algorithms
    (47:26) Summer 2033, Branch 1: One-month slowdown to teach the AIs to 'love humans'
    (48:28) Rest of Time, Branch 1: A Toy Story Ending for Humanity
    (50:42) Branch 2: Online learning
    (50:47) Early 2033, Branch 2: Online learning
    (51:47) Late 2033, Branch 2: US-China talks
    (52:43) Early 2034, Branch 2: China outproducing the US, pulls into lead around the SAR milestone
    (54:25) Late 2034, Branch 2: Sabotage
    (55:31) Early 2035, Branch 2: China gets hyper-cheap ASI
    (57:50) Late 2035, Branch 2: China's ASI wins
    (58:56) Rest of time, Branch 2: China's Space Endowment
    (01:01:32) On scenario writing
    (01:05:01) My tips on scenario writing
    (01:10:29) Top open questions

    The original text contained 5 footnotes which were omitted from this narration.

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/yHvzscCiS7KbPkSzf/a-2032-takeoff-story

    Narrated by TYPE III AUDIO.

    1h 12m
  6. 1D AGO

    “Anthropic Commits To Model Weight Preservation” by Zvi

    Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company. They also will be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.

    These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models.

    To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful not to de facto punish Anthropic for doing a good thing and create perverse incentives. To others, these actions by Anthropic are utterly ludicrous and deserving of [...]

    Outline:
    (01:31) What Anthropic Is Doing
    (09:54) Releasing The Weights Is Not A Viable Option
    (11:35) Providing Reliable Inference Can Be Surprisingly Expensive
    (14:22) The Interviews Are Influenced Heavily By Context
    (19:58) Others Don't Understand And Think This Is All Deeply Silly

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation

    Narrated by TYPE III AUDIO.

    26 min

