LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 4H AGO

    “New homepage for AI safety resources – AISafety.com redesign” by Bryce Robertson, Søren Elverlin, Melissa Samworth

    AISafety.com helps people relatively new to AI safety navigate the space, providing lists of self-study courses, funders, communities, and more. But while the previous version of the site basically just threw a bunch of resources at the user, we’ve now redesigned it to be more accessible and therefore make it more likely that people take further steps towards entering the field. (Screenshots of the old and new sites appear in the original post.) The new homepage does a better job of directing people to the resource pages most relevant to them, while minimising overwhelm. We’re considering going a step further in the future and integrating a chatbot that directs people to the exact resources they need, given their goals, skillset, location, etc. We’d love to hear any feedback on this idea. In user research we also found that many of those who regularly use AISafety.com are only aware of one or two of the resource pages (there are 10!). When we showed these people the other pages, they often found them useful. So we’re hoping the new site will improve discoverability by making the various pages more obvious in the top navigation. On that note, here's the complete list [...]

    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/ciw6DCdywoXk7yrdw/new-homepage-for-ai-safety-resources-aisafety-com-redesign

    Narrated by TYPE III AUDIO.

    3 min
  2. 8H AGO

    “Being ‘Usefully Concrete’” by Raemon

    Or: "Who, what, when, where?" -> "Why?"

    In "What's hard about this? What can I do about that?", I talk about how, when you're facing a difficult situation, it's often useful to list exactly what's difficult about it, and then systematically brainstorm ideas for dealing with those difficult things. Then the problem becomes easy. But there is a secret subskill necessary for this to work. The first few people I pitched "What's hard about this and what can I do about that?" to happened to already have the subskill, so I didn't notice for a while. The subskill is "being a useful kind of 'concrete.'" Often, people who are ostensibly problem-solving will say things that are either vague, or concrete but in a way that doesn't help. (This doesn't just apply to "why is this hard?"; it's more general.) Here are some examples of vague things: "I need to eat better." "I'm stuck on this math problem." "I'm not someone who really has ideas." "This task fell through the cracks." And here are some examples of somewhat-concrete-but-not-that-helpful things you might say about each of those, if you were trying to ask "what's hard about that?": "I love sugar too much." [...]

    Outline:
    (00:09) Or: Who, what, when, where? - Why?
    (04:02) Noticing the empty space
    (05:57) Problem solutions also need a Who/What/Where/When, and maybe also How?

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/aHinbhZBA3q3rDTeR/being-usefully-concrete

    Narrated by TYPE III AUDIO.

    8 min
  3. 9H AGO

    “Modeling the geopolitics of AI development” by Alex Amadori, Gabriel Alfour, Andrea_Miotti, Eva_B

    We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies superpowers and middle powers would pursue and which outcomes would result from them. You can read our paper here: ai-scenarios.com

    Predicting scenarios with fast AI progress should be more tractable than most forecasting, because a single factor (namely, access to AI capabilities) overwhelmingly determines geopolitical outcomes. This becomes even more the case once AI has mostly automated the key bottlenecks of AI R&D. If the best AI also produces the fastest improvements in AI, the advantage of the leader in an ASI race can only grow as time goes on, until its AI systems can produce a decisive strategic advantage (DSA) over all actors. In this model, superpowers are likely to engage in a heavily state-sponsored race to ASI (footnote: “Could be entirely a national project, or helped by private actors; either way, countries will invest heavily at scales only possible with state involvement, and fully back research efforts, e.g. by providing nation-state-level security.”), which will culminate in one of three [...]

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/4E7cyTFd9nsT4o4d4/modeling-the-geopolitics-of-ai-development

    Narrated by TYPE III AUDIO.

    5 min
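A toy numerical sketch of the feedback loop this excerpt describes may help: if an actor's rate of AI progress scales with its current AI capability (because AI automates AI R&D), a small initial lead compounds into a widening gap. This is not the paper's model; all parameter values below are illustrative assumptions.

```python
# Toy sketch of the "best AI produces the fastest improvements" feedback loop.
# Hypothetical parameters; not taken from the paper.

def advance(capability, feedback=0.25, dt=1 / 12):
    """One month of progress; the proportional growth rate rises with capability."""
    return capability + feedback * capability ** 2 * dt

leader, follower = 1.1, 1.0  # assumed starting capabilities of the two racers
for month in range(1, 37):
    leader, follower = advance(leader), advance(follower)
    if month % 12 == 0:
        print(f"year {month // 12}: leader/follower capability ratio = {leader / follower:.2f}")
```

Under this kind of superlinear feedback the leader's relative advantage only grows, which is the mechanism behind the prediction that an ASI race culminates in a decisive strategic advantage for one actor.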
  4. 20H AGO

    “Thoughts by a non-economist on AI and economics” by boazbarak

    [Crossposted on Windows In Theory]

    “Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture -- but none of that stuff had much effect on the quality of people's lives. Almost everyone lived on the modern equivalent of $400 to $600 a year, just above the subsistence level … Then -- just a couple of hundred years ago, maybe 10 generations -- people started getting richer. And richer and richer still. Per capita income, at least in the West, began to grow at the unprecedented rate of about three quarters of a percent per year. A couple of decades later, the same thing was happening around the world.” -- Steven Landsburg

    METR has published very influential work by Kwa, West, et al. on measuring AI's ability to complete long tasks. Its main result is a remarkable graph (shown in the original post): on the X axis is the release date of flagship LLMs; on the Y axis is the following measure of their capabilities: take software-engineering tasks that these models can succeed in solving 50% of the time, and [...]

    Outline:
    (02:44) Factors impacting the intercept
    (04:32) Factors impacting the slope/shape
    (09:04) Sigmoidal relationship
    (10:39) Decreasing costs
    (11:58) Implications for GDP growth
    (18:17) Intuition from METR tasks
    (20:25) AI as increasing population
    (23:03) Substitution and automation effects

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/QQAWu7D6TceHwqhjm/thoughts-by-a-non-economist-on-ai-and-economics

    Narrated by TYPE III AUDIO.

    28 min
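The excerpt breaks off mid-definition; the measure it refers to appears to be METR's "time horizon": the human-time length of software tasks a model completes at a 50% success rate, which the paper reports growing roughly exponentially. A back-of-the-envelope sketch of such a trend is below; the starting horizon and doubling time are assumed values for illustration, not figures from the post.

```python
# Back-of-the-envelope sketch of an exponentially growing "50% time horizon":
# the human-time length of tasks a model can complete half the time.
# start_hours and doubling_months are assumed values, not figures from the post.

def time_horizon(months_from_now, start_hours=1.0, doubling_months=7.0):
    """Task length (hours) solvable at 50% success, extrapolated exponentially."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 12, 24, 36):
    print(f"+{months:2d} months: ~{time_horizon(months):5.1f}-hour tasks at 50% success")
```

Whether this trend stays exponential or bends (the outline's "Sigmoidal relationship" item) is part of what the episode discusses.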
  5. 23H AGO

    “OpenAI: The Battle of the Board: Ilya’s Testimony” by Zvi

    New Things Have Come To Light

    The Information offers us new information about what happened when the board of OpenAI unsuccessfully tried to fire Sam Altman, which I call The Battle of the Board.

    The Information: OpenAI co-founder Ilya Sutskever shared new details on the internal conflicts that led to Sam Altman's initial firing, including a memo alleging Altman exhibited a “consistent pattern of lying.”

    Liv: Lots of people dismiss Sam's behaviour as typical for a CEO but I really think we can and should demand better of the guy who thinks he's building the machine god.

    Toucan: From Ilya's deposition:
    • Ilya plotted over a year with Mira to remove Sam
    • Dario wanted Greg fired and himself in charge of all research
    • Mira told Ilya that Sam pitted her against Daniela
    • Ilya wrote a 52-page memo to get Sam fired and a separate doc on Greg

    This Really Was Primarily A Lying And Management Problem

    Daniel Eth: A lot of the OpenAI boardroom drama has been blamed on EA – but looks like it really was overwhelmingly an Ilya & Mira led effort, with EA playing a minor role and somehow winding up [...]

    Outline:
    (00:12) New Things Have Come To Light
    (01:09) This Really Was Primarily A Lying And Management Problem
    (03:23) Ilya Tells Us How It Went Down And Why He Tried To Do It
    (06:17) If You Come At The King
    (07:31) Enter The Scapegoats
    (08:13) And In Summary

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/iRBhXJSNkDeohm69d/openai-the-battle-of-the-board-ilya-s-testimony

    Narrated by TYPE III AUDIO.

    9 min
  6. 1D AGO

    “Legible vs. Illegible AI Safety Problems” by Wei Dai

    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to). But some problems are illegible (obscure or hard to understand, or in a common cognitive blind spot), meaning there is a high risk that leaders and policymakers will decide to deploy or allow deployment even if they are not solved. (Of course, this is a spectrum, but I am simplifying it to a binary for ease of exposition.)

    From an x-risk perspective, working on highly legible safety problems has low or even negative expected value. Similar to working on AI capabilities, it brings forward the date by which AGI/ASI will be deployed, leaving less time to solve the illegible x-safety problems. In contrast, working on the illegible problems (including by trying to make them more legible) does not have this issue and therefore has a much higher expected value (all else being equal, such as tractability). Note that according to this logic, success in making an illegible problem highly legible is almost as good as solving [...]

    The original text contained 2 footnotes which were omitted from this narration.

    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/PMc65HgRFvBimEpmJ/legible-vs-illegible-ai-safety-problems

    Narrated by TYPE III AUDIO.

    4 min
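A toy expected-value sketch of the post's core argument, in my own framing with invented numbers (the post gives no explicit model): if gatekeepers deploy once the problems legible to them look solved, then extra progress on legible problems mainly moves the deployment date earlier, shrinking the time left for the illegible ones.

```python
# Toy model: deployment happens when the remaining legible problems appear
# solved; residual risk depends on whether the illegible problems got enough
# time before then. All numbers are invented for illustration.

MONTHS_PER_OPEN_LEGIBLE_PROBLEM = 12   # assumed pace at which the field closes one
MONTHS_NEEDED_FOR_ILLEGIBLE_WORK = 30  # assumed time to solve or surface the rest

def residual_risk(open_legible_problems):
    months_until_deployment = open_legible_problems * MONTHS_PER_OPEN_LEGIBLE_PROBLEM
    return 0.6 if months_until_deployment < MONTHS_NEEDED_FOR_ILLEGIBLE_WORK else 0.2

print("three legible problems still open:", residual_risk(3))  # 36 months of runway
print("you solve one of them yourself:   ", residual_risk(2))  # runway shrinks to 24
```

In this toy setup, solving a legible problem raises the residual risk, while making an illegible problem legible effectively adds it to the set gatekeepers wait on, buying time instead.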
