LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 13 HRS AGO

    “Post-AGI Economics As If Nothing Ever Happens” by Jan_Kulveit

    When economists think and write about the post-AGI world, they often rely on the implicit assumption that parameters may change, but fundamentally, structurally, not much happens. And if it does, it's maybe one or two empirical facts, but nothing too fundamental. This mostly worked for all sorts of other technologies, where technologists would predict society to be radically transformed, e.g. by everyone having most of humanity's knowledge available for free all the time, or everyone having the ability to instantly communicate with almost anyone else. [1] But it will not work for AGI, and as a result, most of the econ modelling of the post-AGI world is irrelevant or actively misleading [2], making people who rely on it more confused than if they just thought “this is hard to think about, so I don’t know”.

    Econ reasoning from high level perspective

    Econ reasoning is trying to do something like projecting extremely high-dimensional reality into something like 10 real numbers and a few differential equations. All the hard cognitive work is in the projection. Solving a bunch of differential equations impresses the general audience, and historically may have worked as some sort of proof of [...]

    Outline:
    (00:57) Econ reasoning from high level perspective
    (02:51) Econ reasoning applied to post-AGI situations

    The original text contained 10 footnotes which were omitted from this narration.

    First published: February 4th, 2026
    Source: https://www.lesswrong.com/posts/fL7g3fuMQLssbHd6Y/post-agi-economics-as-if-nothing-ever-happens

    Narrated by TYPE III AUDIO.

    17 min
  2. 1D AGO

    “New AI safety funding newsletter” by Bryce Robertson

    We’ve had feedback from several people running AI safety projects that it can be a pain to track the various funding sources and their application windows. To help make this easier, AISafety.com has launched the AI Safety Funding newsletter (which you can subscribe to here). It lists all newly announced funding opportunities relevant to individuals and orgs working on AI x-risk, plus any opportunities that are closing soon. We expect posts roughly twice a month. Opportunities will be sourced from the database at AISafety.com/funding, which displays all funders, whether or not they are currently accepting applications. If you want to add yourself as a funder, you can do so here.

    The newsletter will likely evolve as we gather feedback – please feel free to share any thoughts in the comments or via our anonymous feedback form. AISafety.com is operated through a public Discord server with the help of many volunteers, so if you’re interested in contributing, or just in seeing what we’re up to, feel free to join. Beyond the funding page, the site has 9 other resource pages, such as upcoming events & training programs, local and online communities, and the field map.

    First published: February 3rd, 2026
    Source: https://www.lesswrong.com/posts/5wMNcn8sCginw2s9D/new-ai-safety-funding-newsletter

    Narrated by TYPE III AUDIO.

    2 min
  3. 1D AGO

    “Anthropic’s “Hot Mess” paper overstates its case (and the blog post is worse)” by RobertM

    Author's note: this is somewhat more rushed than ideal, but I think getting this out sooner is pretty important. Ideally, it would be a bit less snarky.

    Anthropic[1] recently published a new piece of research: The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity? (arXiv, Twitter thread). I have some complaints about both the paper and the accompanying blog post.

    tl;dr
    - The paper's abstract says that "in several settings, larger, more capable models are more incoherent than smaller models", but in most settings they are more coherent. This emphasis is even more exaggerated in the blog post and Twitter thread. I think this is pretty misleading.
    - The paper's technical definition of "incoherence" is uninteresting[2], and the framing of the paper, blog post, and Twitter thread equivocates with the more normal English-language sense of the term, which is extremely misleading.
    - Section 5 of the paper (and to a larger extent the blog post and Twitter thread) attempts to draw conclusions about future alignment difficulties that are unjustified by the experimental results, and would be unjustified even if those results pointed in the other direction.
    - The blog post is substantially LLM-written. I think this [...]

    Outline:
    (00:39) tl;dr
    (01:42) Paper
    (06:25) Blog

    The original text contained 3 footnotes which were omitted from this narration.

    First published: February 4th, 2026
    Source: https://www.lesswrong.com/posts/ceEgAEXcL7cC2Ddiy/anthropic-s-hot-mess-paper-overstates-its-case-and-the-blog

    Narrated by TYPE III AUDIO.

    12 min
  4. 1D AGO

    “Concrete research ideas on AI personas” by nielsrolf, Maxime Riché, Daniel Tan

    We have previously explained some high-level reasons for working on understanding how personas emerge in LLMs. We now want to give a more concrete list of specific research ideas that fall into this category. Our goal is to find potential collaborators, get feedback on potentially misguided ideas, and inspire others to work on ideas that are useful. Caveat: we have not red-teamed most of these ideas; the goal of this document is to be generative.

    Project ideas are grouped into:
    - Persona & goal misgeneralization
    - Collecting and replicating examples of interesting LLM behavior
    - Evaluating self-concepts and personal identity of AI personas
    - Basic science of personas

    Persona & goal misgeneralization

    It would be great if we could better understand and steer out-of-distribution generalization of AI training. This would imply understanding and solving goal misgeneralization. Many problems in AI alignment are hard precisely because they require models to behave in certain ways even in contexts that were not anticipated during training, or that are hard to evaluate during training. It can be bad when out-of-distribution inputs degrade a model's capabilities, but we think it would be worse if a highly capable model changed its propensities unpredictably when used in unfamiliar contexts. [...]

    Outline:
    (00:58) Persona & goal misgeneralization
    (04:30) Collecting and reproducing examples of interesting LLM behavior
    (06:30) Evaluating self-concepts and personal identity of AI personas
    (08:52) Basic science of personas

    The original text contained 2 footnotes which were omitted from this narration.

    First published: February 3rd, 2026
    Source: https://www.lesswrong.com/posts/JbaxykuodLi7ApBKP/concrete-research-ideas-on-ai-personas

    Narrated by TYPE III AUDIO.

    13 min
  5. 1D AGO

    “Unless That Claw Is The Famous OpenClaw” by Zvi

    First we covered Moltbook. Now we can double back and cover OpenClaw.

    Do you want a generally empowered, initiative-taking AI agent that has access to your various accounts and communicates and does things on your behalf? That depends on how well, safely, reliably and cheaply it works. It's not ready for prime time, especially on the safety side. That may not last for long. It's definitely ready for tinkering, learning and having fun, if you are careful not to give it access to anything you would not want to lose.

    Table of Contents
    - Introducing Clawdbot Moltbot OpenClaw
    - Stop Or You’ll Shoot
    - One Simple Rule
    - Flirting With Personal Disaster
    - Flirting With Other Kinds Of Disaster
    - Don’t Outsource Without A Reason
    - OpenClaw Online
    - The Price Is Not Right
    - The Call Is Coming From Inside The House
    - The Everything Agent Versus The Particular Agent
    - Claw Your Way To The Top

    Introducing Clawdbot Moltbot OpenClaw

    Many are kicking it up a notch or two. That notch beyond Claude Code was initially called Clawdbot. You hand over a computer and access [...]

    Outline:
    (00:43) Introducing Clawdbot Moltbot OpenClaw
    (02:02) Stop Or You'll Shoot
    (06:05) One Simple Rule
    (08:49) Flirting With Personal Disaster
    (15:50) Flirting With Other Kinds Of Disaster
    (16:58) Don't Outsource Without A Reason
    (19:07) OpenClaw Online
    (22:10) The Price Is Not Right
    (24:06) The Call Is Coming From Inside The House
    (25:40) The Everything Agent Versus The Particular Agent
    (27:31) Claw Your Way To The Top

    First published: February 3rd, 2026
    Source: https://www.lesswrong.com/posts/aQKBMEvTj3Heidoir/unless-that-claw-is-the-famous-openclaw

    Narrated by TYPE III AUDIO.

    30 min
  6. 1D AGO

    “What did we learn from the AI Village in 2025?” by Shoshannah Tekofsky

    Why This Project Exists

    Standard AI benchmarks test narrow capabilities in controlled settings. They tell us whether a model can solve a coding problem or answer a factual question. They don’t tell us what happens when you give an AI agent a computer, internet access, and an open-ended goal like "raise money for charity" or "build an audience on Substack." The AI Village exists to fill that gap. We run frontier models from OpenAI, Anthropic, Google, and others in a shared environment where they can take all the same actions as a human with a computer: sending emails, creating websites, posting on social media, and coordinating with each other. This surfaces behaviors that benchmarks might miss: How do agents handle ambiguity? What do they do when stuck? Do they fabricate information? How do multiple agents interact? The events of the village are existence proofs: concrete examples of what current agents can do when given a high level of autonomy. They also highlight current failure modes and let us track when new models overcome them.

    OVERVIEW OF THE VILLAGE

    From April to December 2025, we assigned 16 goals to 19 frontier models, ranging from fundraising for charity to [...]

    Outline:
    (00:11) Why This Project Exists
    (00:18) OVERVIEW OF THE VILLAGE
    (01:18) KEY FINDINGS
    (04:23) AGENT CHARACTERISTICS
    (05:56) AI VILLAGE SETUP
    (08:28) ACHIEVEMENTS
    (12:45) FAQ
    (15:26) LIMITATIONS
    (17:52) SUMMARY

    First published: February 3rd, 2026
    Source: https://www.lesswrong.com/posts/iv3hX2nnXbHKefCRv/what-did-we-learn-from-the-ai-village-in-2025

    Narrated by TYPE III AUDIO.

    21 min

