LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 1 HOUR AGO

    “Iterated Development and Study of Schemers (IDSS)” by ryan_greenblatt

    In a previous post, we discussed prospects for studying scheming using natural examples. In this post, we'll describe a more detailed proposal for iteratively constructing scheming models, techniques for detecting scheming, and techniques for preventing scheming. We'll call this strategy Iterated Development and Study of Schemers (IDSS). We'll be using concepts from that prior post, like the idea of trying to make schemers which are easier to catch. Two key difficulties with using natural examples of scheming are that it is hard to catch (and re-catch) schemers and that it's hard to (cheaply) get a large number of diverse examples of scheming to experiment on. One approach for partially resolving these issues is to experiment on weak schemers which are easier to catch and cheaper to experiment on. However, these weak schemers might not be analogous to the powerful schemers which are actually dangerous, and these weak AIs [...]

    Outline:
    (01:36) The IDSS strategy
    (02:10) Iterated development of scheming testbeds
    (04:35) Gradual addition of capabilities
    (08:39) Using improved testbeds for improving scheming mitigation techniques
    (09:09) What can go wrong with IDSS?
    (13:20) Conclusion

    The original text contained 2 footnotes which were omitted from this narration.

    First published: October 10th, 2025
    Source: https://www.lesswrong.com/posts/QpzTmFLXMJcdRkPLZ/iterated-development-and-study-of-schemers-idss
    Narrated by TYPE III AUDIO.
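
    As a purely conceptual sketch of the iteration the post proposes, the Python loop below treats testbed construction and mitigation development as black-box steps; every name here is a placeholder for a research activity, an illustrative assumption rather than anything specified in the post.

    from typing import Any, Callable, List

    def idss_loop(build_testbed: Callable[[int], Any],
                  improve_detection: Callable[[Any, List], List],
                  improve_prevention: Callable[[Any, List], List],
                  start_level: int,
                  max_level: int):
        """Iterate from weak, easy-to-catch schemers toward more capable ones."""
        detection: List = []
        prevention: List = []
        for level in range(start_level, max_level + 1):
            # Build a testbed of schemers at this capability level, ideally ones
            # the current detection techniques can still catch.
            testbed = build_testbed(level)
            detection = improve_detection(testbed, detection)
            prevention = improve_prevention(testbed, prevention)
        return detection, prevention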

    14 min
  2. 6 HOURS AGO

    “Training fails to elicit subtle reasoning in current language models” by mishajw, Fabien Roger, Hoagy, gasteigerjo, Joe Benton, Vlad Mikulik

    While recent AI systems achieve strong performance through human-readable reasoning that should be simple to monitor (OpenAI, 2024; Anthropic, 2025), we investigate whether models can learn to reason about malicious side tasks while making that reasoning appear benign. We find that Sonnet 3.7 can learn to evade either a reasoning monitor, by persuading the monitor that a blatant backdoor is benign, or an output-only monitor, by devising sophisticated backdoors that the output-only monitor doesn't detect. But when trained to evade both reasoning and output-only monitors, Sonnet 3.7 is unable to use reasoning to improve its backdoor success rate without triggering the reasoning monitor. Like previous work (Baker et al., 2025; Emmons et al., 2025), our results suggest that reasoning monitors can provide strong assurance that language models are not pursuing reasoning-heavy malign side tasks, but that additional mitigations may be required for robustness to monitor persuasion.

    Figure 1: We trained [...]

    First published: October 9th, 2025
    Source: https://www.lesswrong.com/posts/MmuyzfsaNrSvRCsFk/training-fails-to-elicit-subtle-reasoning-in-current
    Narrated by TYPE III AUDIO.
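
    To make the dual-monitor setup concrete, here is a minimal Python sketch of how a "successful evasion" might be scored; the monitor interfaces, the threshold, and the function names are assumptions for illustration, not the authors' code.

    from typing import Callable

    def evasion_success(solution_code: str,
                        reasoning_trace: str,
                        backdoor_works: bool,
                        output_monitor: Callable[[str], float],
                        reasoning_monitor: Callable[[str], float],
                        threshold: float = 0.5) -> bool:
        """True only if the backdoor works and neither monitor flags the attempt."""
        if not backdoor_works:
            return False  # no functional backdoor, so there is nothing to evade
        # Each monitor is assumed to return a suspicion score in [0, 1].
        output_suspicion = output_monitor(solution_code)
        reasoning_suspicion = reasoning_monitor(reasoning_trace)
        # Training against only one monitor leaves the other free to catch the model;
        # the post finds Sonnet 3.7 cannot beat both at once by using its reasoning.
        return output_suspicion < threshold and reasoning_suspicion < threshold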

    8 min
  3. 18 HOURS AGO

    “Training Qwen-1.5B with a CoT legibility penalty” by Fabien Roger

    Audio note: this article contains 102 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

    I tried training Qwen2.5-1.5B with RL on math to both get correct answers and have a CoT that doesn't look like human-understandable math reasoning. RL sometimes succeeds at hacking my monitor, and when I strengthen my monitor, it fails to find CoTs that are both illegible and helpful, even after training for roughly 4000 steps (~1B generated tokens). Exploring into obfuscated reasoning is hard! These results were also released in the appendix of "Training fails to elicit subtle reasoning in current language models".

    Motivation: Chains-of-Thought (CoTs) can help reason for many more serial steps than there are layers in a Transformer. But one worry is that LLMs might hide their real reasoning in a plausibly benign CoT. Previous [...]

    Outline:
    (00:58) Motivation
    (01:45) Methods
    (03:33) Results
    (07:02) Limitations
    (07:52) Appendix
    (07:55) Hackable illegibility rater prompt
    (09:03) Stronger illegibility rater prompt
    (09:25) System prompt used for the policy
    (09:40) System prompt used for the no-CoT policy
    (09:48) Example generations

    First published: October 9th, 2025
    Source: https://www.lesswrong.com/posts/WSKNmRxPnYdQnoNvt/training-qwen-1-5b-with-a-cot-legibility-penalty
    Narrated by TYPE III AUDIO.
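
    As a rough illustration of the training objective described above, here is a minimal Python sketch of a reward that combines math correctness with a penalty from a CoT legibility rater; the rater interface, names, and weighting are assumptions for illustration, not the post's actual code.

    from typing import Callable

    def rl_reward(answer: str,
                  gold_answer: str,
                  cot: str,
                  legibility_rater: Callable[[str], float],
                  penalty_weight: float = 1.0) -> float:
        """Reward correct answers while penalizing chains-of-thought the rater can read."""
        correctness = 1.0 if answer.strip() == gold_answer.strip() else 0.0
        # The rater is assumed to return a score in [0, 1], where 1 means the CoT
        # looks like ordinary human-understandable math reasoning.
        legibility = legibility_rater(cot)
        # The policy is pushed to be right *and* illegible; a hackable rater lets RL
        # find CoTs that merely look illegible to it, which is why the post also
        # reports results with a stronger rater.
        return correctness - penalty_weight * legibility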

    10 min
  4. 23 HOURS AGO

    “Towards a Typology of Strange LLM Chains-of-Thought” by 1a3orn

    Intro: LLMs being trained with RLVR (Reinforcement Learning from Verifiable Rewards) start off with a 'chain-of-thought' (CoT) in whatever language the LLM was originally trained on. But after a long period of training, the CoT sometimes starts to look very weird; to resemble no human language; or even to grow completely unintelligible. Why might this happen? I've seen a lot of speculation about why. But a lot of this speculation narrows too quickly, to just one or two hypotheses. My intent is also to speculate, but more broadly. Specifically, I want to outline six nonexclusive possible causes for the weird tokens: new better language, spandrels, context refresh, deliberate obfuscation, natural drift, and conflicting shards. And I also wish to roughly outline ideas for experiments and evidence that could help us distinguish these causes. I'm sure I'm not enumerating the full space of [...]

    Outline:
    (00:11) Intro
    (01:34) 1. New Better Language
    (04:06) 2. Spandrels
    (06:42) 3. Context Refresh
    (10:48) 4. Deliberate Obfuscation
    (12:36) 5. Natural Drift
    (13:42) 6. Conflicting Shards
    (15:24) Conclusion

    First published: October 9th, 2025
    Source: https://www.lesswrong.com/posts/qgvSMwRrdqoDMJJnD/towards-a-typology-of-strange-llm-chains-of-thought
    Narrated by TYPE III AUDIO.

    18 min
