LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 4 hours ago

    “Halfway to Anywhere” by Screwtape

    “If you can get your ship into orbit, you’re halfway to anywhere.” - Robert Heinlein This generalizes. 1. Spaceflight is hard. Putting a rocket on the moon is one of the most impressive feats humans have ever achieved. The International Space Station, an inhabited living space and research facility in Low Earth Orbit, has been continuously inhabited for over two decades now, and that's awesome. It is a testament to the hard work and brilliance of a lot of people over a lot of time that this is possible. Look ma, no selfie stick. And we’re not done yet. There are footprints on the moon today, but there are also robots leaving tracks on Mars, and satellites have taken photos of the rings of Saturn and the craters of Pluto. Voyager has stood waaay back and taken a family picture of the solar system. Give us a rocket and a place to send it, and we’ll go anywhere we’re curious about. There's a funny property of spaceflight, though, which is that half the work happens very close to home and is surprisingly similar no matter where you’re going. When you boil spaceflight down into the very abstract basics, it [...]

    ---

    Outline:
    (00:17) 1.
    (03:30) 2.
    (06:02) 3.

    ---

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/mB4o2LnLrZHesNRC2/halfway-to-anywhere

    ---

    Narrated by TYPE III AUDIO.

    10 minutes
  2. 12 hours ago

    “People Seem Funny In The Head About Subtle Signals” by johnswentworth

    WARNING: This post contains spoilers for Harry Potter and the Methods of Rationality, and I will not warn about them further. Also some anecdotes from slutcon which are not particularly NSFW, but it's still slutcon. A girl I was seeing once asked me to guess what Hogwarts house she was trying to channel with her outfit. Her answer turned out to be Slytherin, because the tiny gem in the necklace she was wearing was green. Never mind that nothing else she wore was green, including the gems in her earrings, and that colors of all the other Hogwarts houses appeared in her outfit (several more prominently), and that the gem itself was barely visible enough to tell the color at all, and that she wasn’t doing anything to draw attention to the necklace specifically.[1] I wonder, sometimes, just how much of the “subtle social signals” people think they’re sending are like that case - i.e. it's basically a game with themselves, which has utterly zero signal for anyone else. Subtlety to the point of invisibility clearly happens more than zero percent of the time; I have seen at least that one unambiguous case, so existence is at least established. [...]

    ---

    Outline:
    (01:21) Just How Subtle Is The Signal?
    (01:25) Hints in Writing
    (02:34) Hints in Flirting
    (04:26) Hints in Outfit
    (05:46) Oblivious?
    (06:29) Emotional Investment?

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/5axNFSfaxgnTKPrpy/people-seem-funny-in-the-head-about-subtle-signals

    ---

    Narrated by TYPE III AUDIO.

    9 minutes
  3. 18 hours ago

    “A 2032 Takeoff Story” by romeo

    I spent 3 recent Sundays writing my mainline AI scenario. Since I only spent 3 days on it, it's not very well-researched (especially in the areas where I’m not well informed) or well-written, and the endings are particularly open-ended and weak. But I wanted to post the somewhat unfiltered 3-day version to normalize doing so. There are also some details that I no longer fully endorse, because the act of doing this exercise spurred me to look into things in more detail, and I have updated my views slightly[1] — an ode to the value of this exercise. Nevertheless, this scenario still represents a very central story of how I think the future of AI will go. I found this exercise extremely useful and I hope others will carve out a few days to attempt it. At the bottom there are: (1) my tips on how to do this scenario writing exercise, and (2) a list of open questions that I think are particularly important (which I made my intuitive guesses about to write this scenario but feel very uncertain about).

    Summary
    2026-2028: The Deployment Race Era
    AI companies are all focused on monetizing, doing RL on LLMs [...]

    ---

    Outline:
    (01:10) Summary
    (07:36) A 2032 Takeoff Story
    (07:40) Jan-Jun 2026: AI Products Galore
    (09:23) Jul-Dec 2026: The AI National Champions of China
    (11:45) Jan-Jun 2027: The Deployment Race Era
    (13:50) Jul-Dec 2027: China's Domestic DUV
    (16:31) Jan-Jun 2028: Robotics Foundation Models
    (20:34) Jul-Dec 2028: The 1% AI Economy
    (27:26) Jan-Jun 2029: National AI Grand Strategies
    (30:28) Jul-Dec 2029: Early Household Robots
    (32:25) Jan-Jun 2030: Where are the Superhuman Coders?
    (34:25) Jul-Dec 2030: Scaling AI bureaucracies
    (36:08) Jan-Jun 2031: Domestic EUV
    (37:45) Jul-Dec 2031: The Magnificent Four
    (38:54) Jan-Jun 2032: China is a Robot Playground
    (41:12) Jul-Dec 2032: The 10% Automation Economy
    (43:47) Jan 2033: Superhuman Coder and the Paradigm Shift
    (45:00) Branch 1: Brain-like algorithms
    (45:05) Feb 2033, Branch 1: Full research automation
    (46:21) Apr 2033, Branch 1: Brain-like algorithms
    (47:26) Summer 2033, Branch 1: One-month slowdown to teach the AIs to 'love humans'
    (48:28) Rest of Time, Branch 1: A Toy Story Ending for Humanity
    (50:42) Branch 2: Online learning
    (50:47) Early 2033, Branch 2: Online learning
    (51:47) Late 2033, Branch 2: US-China talks
    (52:43) Early 2034, Branch 2: China outproducing the US, pulls into lead around the SAR milestone
    (54:25) Late 2034, Branch 2: Sabotage
    (55:31) Early 2035, Branch 2: China gets hyper-cheap ASI
    (57:50) Late 2035, Branch 2: China's ASI wins
    (58:56) Rest of time, Branch 2: China's Space Endowment
    (01:01:32) On scenario writing
    (01:05:01) My tips on scenario writing
    (01:10:29) Top open questions

    The original text contained 5 footnotes which were omitted from this narration.

    ---

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/yHvzscCiS7KbPkSzf/a-2032-takeoff-story

    ---

    Narrated by TYPE III AUDIO.

    1 hr 12 min
  4. 20 hours ago

    “Anthropic Commits To Model Weight Preservation” by Zvi

    Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company. They will also be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced. These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models. To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful to not de facto punish Anthropic for doing a good thing and create perverse incentives. To others, these actions by Anthropic are utterly ludicrous and deserving of [...]

    ---

    Outline:
    (01:31) What Anthropic Is Doing
    (09:54) Releasing The Weights Is Not A Viable Option
    (11:35) Providing Reliable Inference Can Be Surprisingly Expensive
    (14:22) The Interviews Are Influenced Heavily By Context
    (19:58) Others Don't Understand And Think This Is All Deeply Silly

    ---

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation

    ---

    Narrated by TYPE III AUDIO.

    26 minutes
  5. 23 hours ago

    “New homepage for AI safety resources – AISafety.com redesign” by Bryce Robertson, Søren Elverlin, Melissa Samworth

    For those relatively new to AI safety, AISafety.com helps them navigate the space, providing lists of things like self-study courses, funders, communities, etc. But while the previous version of the site basically just threw a bunch of resources at the user, we’ve now redesigned it to be more accessible and therefore make it more likely that people take further steps towards entering the field. The old site: The new one: The new homepage does a better job of directing people to the resource pages most relevant to them, while minimising overwhelm. We’re considering going a step further in the future and integrating a chatbot to help direct people to the exact resources they need, given their goals, skillset, location etc. We’d love to hear any feedback on this idea. In user research we had also found that of those who regularly use AISafety.com, many are only aware of one or two of the resource pages (there are 10!). When we would show these people the other pages, they often found them useful. So we’re hoping the new site will improve discoverability by making the various pages more obvious in the top navigation. On that note, here's the complete list [...]

    ---

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/ciw6DCdywoXk7yrdw/new-homepage-for-ai-safety-resources-aisafety-com-redesign

    ---

    Narrated by TYPE III AUDIO.

    3 minutes
  6. 1 day ago

    “Being ‘Usefully Concrete’” by Raemon

    Or: "Who, what, when, where?" -> "Why?" In "What's hard about this? What can I do about that?", I talk about how, when you're facing a difficult situation, it's often useful to list exactly what's difficult about it. And then, systematically brainstorm ideas for dealing with those difficult things. Then, the problem becomes easy. But, there is a secret subskill necessary for this to work. The first few people I pitched "What's hard about this and what can I do about that?" to happened to already have the subskill, so I didn't notice for awhile. The subskill is "being a useful kind of 'concrete.'" Often, people who are ostensibly problem-solving, will say things that are either vague, or concrete but in a way that doesn't help. (This doesn't just apply to "why is this hard?", it's more general). Here's some examples of vague things: "I need to eat better." "I'm stuck on this math problem." "I'm not someone who really has ideas." "This task fell through the cracks." Here are some examples of somewhat-concrete-but-not-that-helpful things you might say, about each of those, if you were trying to ask "what's hard about that?" "I love sugar too much." [...] --- Outline: (00:09) Or: Who, what, when, where? - Why? (04:02) Noticing the empty space (05:57) Problem solutions also need a Who/What/Where/When, and maybe also How? --- First published: November 4th, 2025 Source: https://www.lesswrong.com/posts/aHinbhZBA3q3rDTeR/being-usefully-concrete --- Narrated by TYPE III AUDIO.

    8 minutes
  7. 1 day ago

    “Modeling the geopolitics of AI development” by Alex Amadori, Gabriel Alfour, Andrea_Miotti, Eva_B

    We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies would be pursued by superpowers and middle powers and which outcomes would result from them. You can read our paper here: ai-scenarios.com. Attempts to predict scenarios with fast AI progress should be more tractable than most attempts to make forecasts. This is because a single factor (namely, access to AI capabilities) overwhelmingly determines geopolitical outcomes. This becomes even more the case once AI has mostly automated the key bottlenecks of AI R&D. If the best AI also produces the fastest improvements in AI, the advantage of the leader in an ASI race can only grow as time goes on, until their AI systems can produce a decisive strategic advantage (DSA) over all actors. In this model, superpowers are likely to engage in a heavily state-sponsored (footnote: “Could be entirely a national project, or helped by private actors; either way, countries will invest heavily at scales only possible with state involvement, and fully back research efforts, e.g. by providing nation-state level security.”) race to ASI, which will culminate in one of three [...]

    ---

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/4E7cyTFd9nsT4o4d4/modeling-the-geopolitics-of-ai-development

    ---

    Narrated by TYPE III AUDIO.

    (A toy numerical sketch of this compounding-lead dynamic appears below, after the episode list.)

    5 minutes
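The "Modeling the geopolitics of AI development" summary above argues that once AI automates the key bottlenecks of AI R&D, the leader's advantage compounds until it yields a decisive strategic advantage (DSA). The toy simulation below is only an illustrative sketch of that compounding-lead intuition, not the paper's actual model: the growth rule, the starting capabilities, the feedback strength k, the DSA threshold, and the function name race_until_dsa are all made-up assumptions.

```python
# Toy sketch (assumption-laden, NOT the paper's model): two actors race to build
# better AI, and each actor's monthly progress is proportional to its current
# capability, capturing "the best AI also produces the fastest improvements in AI."
# All numbers (starting levels, feedback strength, DSA threshold) are invented.

def race_until_dsa(leader=1.05, lagger=1.00, k=0.02, dsa_threshold=100.0):
    """Run the toy race until the leader crosses the (hypothetical) DSA threshold.

    Returns (months_elapsed, leader_capability, lagger_capability).
    """
    month = 0
    while leader < dsa_threshold:
        # Better AI -> faster AI R&D: monthly growth scales with current capability.
        leader *= 1 + k * leader
        lagger *= 1 + k * lagger
        month += 1
    return month, leader, lagger

if __name__ == "__main__":
    months, lead, lag = race_until_dsa()
    print(f"Leader crosses the toy DSA threshold after {months} months.")
    print(f"At that point leader/lagger = {lead / lag:.1f}, "
          f"even though the initial edge was only 5%.")
    # Because progress feeds back into progress, the small head start widens
    # instead of washing out -- the intuition behind the leader eventually
    # gaining a decisive strategic advantage in the scenario described above.
```

Because both actors follow the same rule, the only asymmetry in this sketch is the initial 5% edge; under the assumed superlinear feedback it widens steadily rather than washing out, which is the dynamic the episode summary describes. Weaker feedback or any catch-up mechanism would change the picture.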

About

Audio narrations of LessWrong posts.
