LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 4 hours ago

    “A 2032 Takeoff Story” by romeo

    I spent 3 recent Sundays writing my mainline AI scenario. Having only spent 3 days on it, it's not very well-researched (especially in the areas where I'm not well informed) or well-written, and the endings are particularly open-ended and weak. But I wanted to post the somewhat unfiltered 3-day version to normalize doing so. There are also some details that I no longer fully endorse, because the act of doing this exercise spurred me to look into things in more detail, and I have updated my views slightly[1] — an ode to the value of this exercise. Nevertheless, this scenario still represents a very central story of how I think the future of AI will go. I found this exercise extremely useful and I hope others will carve out a few days to attempt it. At the bottom there's: (1) my tips on how to do this scenario-writing exercise, and (2) a list of open questions that I think are particularly important (which I made my intuitive guesses about to write this scenario but feel very uncertain about).

    Summary

    2026-2028: The Deployment Race Era. AI companies are all focused on monetizing, doing RL on LLMs [...]

    Outline:
    (01:10) Summary
    (07:36) A 2032 Takeoff Story
    (07:40) Jan-Jun 2026: AI Products Galore
    (09:23) Jul-Dec 2026: The AI National Champions of China
    (11:45) Jan-Jun 2027: The Deployment Race Era
    (13:50) Jul-Dec 2027: China's Domestic DUV
    (16:31) Jan-Jun 2028: Robotics Foundation Models
    (20:34) Jul-Dec 2028: The 1% AI Economy
    (27:26) Jan-Jun 2029: National AI Grand Strategies
    (30:28) Jul-Dec 2029: Early Household Robots
    (32:25) Jan-Jun 2030: Where are the Superhuman Coders?
    (34:25) Jul-Dec 2030: Scaling AI bureaucracies
    (36:08) Jan-Jun 2031: Domestic EUV
    (37:45) Jul-Dec 2031: The Magnificent Four
    (38:54) Jan-Jun 2032: China is a Robot Playground
    (41:12) Jul-Dec 2032: The 10% Automation Economy
    (43:47) Jan 2033: Superhuman Coder and the Paradigm Shift
    (45:00) Branch 1: Brain-like algorithms
    (45:05) Feb 2033, Branch 1: Full research automation
    (46:21) Apr 2033, Branch 1: Brain-like algorithms
    (47:26) Summer 2033, Branch 1: One-month slowdown to teach the AIs to 'love humans'
    (48:28) Rest of Time, Branch 1: A Toy Story Ending for Humanity
    (50:42) Branch 2: Online learning
    (50:47) Early 2033, Branch 2: Online learning
    (51:47) Late 2033, Branch 2: US-China talks
    (52:43) Early 2034, Branch 2: China outproducing the US, pulls into lead around the SAR milestone
    (54:25) Late 2034, Branch 2: Sabotage
    (55:31) Early 2035, Branch 2: China gets hyper-cheap ASI
    (57:50) Late 2035, Branch 2: China's ASI wins
    (58:56) Rest of time, Branch 2: China's Space Endowment
    (01:01:32) On scenario writing
    (01:05:01) My tips on scenario writing
    (01:10:29) Top open questions

    The original text contained 5 footnotes which were omitted from this narration.

    First published: November 6th, 2025
    Source: https://www.lesswrong.com/posts/yHvzscCiS7KbPkSzf/a-2032-takeoff-story

    Narrated by TYPE III AUDIO.

    1 hr 12 min
  2. 6 hours ago

    “Anthropic Commits To Model Weight Preservation” by Zvi

    Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company. They will also be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.

    These are excellent first steps, steps beyond anything I've seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models. To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don't always think they are), one must be careful not to de facto punish Anthropic for doing a good thing and create perverse incentives. To others, these actions by Anthropic are utterly ludicrous and deserving of [...]

    Outline:
    (01:31) What Anthropic Is Doing
    (09:54) Releasing The Weights Is Not A Viable Option
    (11:35) Providing Reliable Inference Can Be Surprisingly Expensive
    (14:22) The Interviews Are Influenced Heavily By Context
    (19:58) Others Don't Understand And Think This Is All Deeply Silly

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation

    Narrated by TYPE III AUDIO.

    26 min
  3. 9 hours ago

    “New homepage for AI safety resources – AISafety.com redesign” by Bryce Robertson, Søren Elverlin, Melissa Samworth

    AISafety.com helps those relatively new to AI safety navigate the space, providing lists of things like self-study courses, funders, communities, etc. But while the previous version of the site basically just threw a bunch of resources at the user, we've now redesigned it to be more accessible and therefore make it more likely that people take further steps towards entering the field.

    (Screenshots of the old and new sites appear in the original post.)

    The new homepage does a better job of directing people to the resource pages most relevant to them, while minimising overwhelm. We're considering going a step further in the future and integrating a chatbot to help direct people to the exact resources they need, given their goals, skillset, location, etc. We'd love to hear any feedback on this idea.

    In user research we also found that of those who regularly use AISafety.com, many are only aware of one or two of the resource pages (there are 10!). When we would show these people the other pages, they often found them useful. So we're hoping the new site will improve discoverability by making the various pages more obvious in the top navigation. On that note, here's the complete list [...]

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/ciw6DCdywoXk7yrdw/new-homepage-for-ai-safety-resources-aisafety-com-redesign

    Narrated by TYPE III AUDIO.

    3 min
  4. 14 hours ago

    “Being ‘Usefully Concrete’” by Raemon

    Or: "Who, what, when, where?" -> "Why?" In "What's hard about this? What can I do about that?", I talk about how, when you're facing a difficult situation, it's often useful to list exactly what's difficult about it. And then, systematically brainstorm ideas for dealing with those difficult things. Then, the problem becomes easy. But, there is a secret subskill necessary for this to work. The first few people I pitched "What's hard about this and what can I do about that?" to happened to already have the subskill, so I didn't notice for awhile. The subskill is "being a useful kind of 'concrete.'" Often, people who are ostensibly problem-solving, will say things that are either vague, or concrete but in a way that doesn't help. (This doesn't just apply to "why is this hard?", it's more general). Here's some examples of vague things: "I need to eat better." "I'm stuck on this math problem." "I'm not someone who really has ideas." "This task fell through the cracks." Here are some examples of somewhat-concrete-but-not-that-helpful things you might say, about each of those, if you were trying to ask "what's hard about that?" "I love sugar too much." [...] --- Outline: (00:09) Or: Who, what, when, where? - Why? (04:02) Noticing the empty space (05:57) Problem solutions also need a Who/What/Where/When, and maybe also How? --- First published: November 4th, 2025 Source: https://www.lesswrong.com/posts/aHinbhZBA3q3rDTeR/being-usefully-concrete --- Narrated by TYPE III AUDIO.

    8 min
  5. 14 hours ago

    “Modeling the geopolitics of AI development” by Alex Amadori, Gabriel Alfour, Andrea_Miotti, Eva_B

    We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies would be pursued by superpowers and middle powers, and which outcomes would result from them. You can read our paper here: ai-scenarios.com

    Attempts to predict scenarios with fast AI progress should be more tractable than most forecasting attempts. This is because a single factor (namely, access to AI capabilities) overwhelmingly determines geopolitical outcomes. This becomes even more the case once AI has mostly automated the key bottlenecks of AI R&D. If the best AI also produces the fastest improvements in AI, the advantage of the leader in an ASI race can only grow as time goes on, until their AI systems can produce a decisive strategic advantage (DSA) over all actors. (A toy illustration of this compounding dynamic follows below.)

    In this model, superpowers are likely to engage in a heavily state-sponsored (footnote: "Could be entirely a national project, or helped by private actors; either way, countries will invest heavily at scales only possible with state involvement, and fully back research efforts, e.g. by providing nation-state-level security.") race to ASI, which will culminate in one of three [...]

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/4E7cyTFd9nsT4o4d4/modeling-the-geopolitics-of-ai-development

    Narrated by TYPE III AUDIO.
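    To make the compounding-advantage claim concrete, here is a minimal toy simulation of my own (it is not the paper's model, and every number is an arbitrary placeholder). It assumes each actor's AI improves at a rate that rises with its current capability level, i.e. "the best AI produces the fastest improvements":

        # Toy sketch, not the paper's model: capability feeds back into the rate
        # of improvement, so the leader's edge compounds over time.
        k = 0.2                          # hypothetical capability-to-R&D feedback strength
        leader, follower = 1.2, 1.0      # hypothetical starting capability levels

        for step in range(6):
            print(f"step {step}: leader={leader:.2f} "
                  f"follower={follower:.2f} "
                  f"lead ratio={leader / follower:.2f}")
            leader *= 1 + k * leader     # better AI -> faster self-improvement
            follower *= 1 + k * follower

    Under these toy dynamics the lead ratio rises every step; with equal, capability-independent growth rates the ratio would stay constant instead, which is why the "best AI produces the fastest improvements" assumption does the work in the quoted argument.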

    5 min
  6. 1 day ago

    “Thoughts by a non-economist on AI and economics” by boazbarak

    [Crossposted on Windows In Theory]

    "Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture -- but none of that stuff had much effect on the quality of people's lives. Almost everyone lived on the modern equivalent of $400 to $600 a year, just above the subsistence level … Then -- just a couple of hundred years ago, maybe 10 generations -- people started getting richer. And richer and richer still. Per capita income, at least in the West, began to grow at the unprecedented rate of about three quarters of a percent per year. A couple of decades later, the same thing was happening around the world."
    Steven Landsburg

    METR has published very influential work by Kwa, West, et al. on measuring AI's ability to complete long tasks. Its main result is a remarkable graph (shown in the original post). On the X axis is the release date of flagship LLMs. On the Y axis is the following measure of their capabilities: take software-engineering tasks that these models can succeed in solving 50% of the time, and [...]

    Outline:
    (02:44) Factors impacting the intercept
    (04:32) Factors impacting the slope/shape
    (09:04) Sigmoidal relationship
    (10:39) Decreasing costs
    (11:58) Implications for GDP growth
    (18:17) Intuition from METR tasks
    (20:25) AI as increasing population
    (23:03) Substitution and automation effects

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/QQAWu7D6TceHwqhjm/thoughts-by-a-non-economist-on-ai-and-economics

    Narrated by TYPE III AUDIO.
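    To make the METR-style measure concrete, here is a minimal sketch of my own (not METR's code; the data points are invented placeholders, not their results): take each model's "50% time horizon" (the human task length it completes with roughly 50% success), fit an exponential trend against release date, and read off the implied doubling time, then naively extrapolate the trend.

        # Illustrative sketch only; (year, minutes) pairs are invented placeholders.
        import numpy as np

        # Hypothetical release dates (fractional years) and 50% time horizons in minutes.
        release_year = np.array([2023.0, 2023.5, 2024.0, 2024.5, 2025.0])
        horizon_min = np.array([4.0, 8.0, 18.0, 35.0, 70.0])

        # Fit log2(horizon) = a*year + b, i.e. an exponential trend in calendar time.
        a, b = np.polyfit(release_year, np.log2(horizon_min), 1)
        print(f"implied doubling time: ~{12 / a:.1f} months")

        # Naive trend extrapolation to a future date: the kind of move whose
        # economic implications the post goes on to examine.
        year = 2027.0
        print(f"extrapolated 50% horizon in {year:.0f}: {2 ** (a * year + b):.0f} minutes")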

    28 min
