LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. -5 Ч

    “It will cost you nothing to ‘bribe’ a Utilitarian” by Gabriel Alfour

    Audio note: this article contains 41 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Abstract We present a formal model demonstrating how utilitarian reasoning creates a structural vulnerability that allows AI corporations to acquire a public veneer of safety at arbitrary low cost. Drawing from the work from Houy [2014], we prove that an organisation can acquire _k_ safety minded employees for a vanishingly small premium _epsilon_. This results formalises a well known phenomenon in AI safety, wherein researchers concerned about existential risks from AI joins an accelerationist corporation under the rationale of "changing things from the inside", without ever producing measurable safety improvements. We discuss implications for AI governance, organisational credibility, and the limitations of utilitarian decision-making in competitive labour markets. 1) Introduction The title is a play on It will [...] --- Outline: (00:22) Abstract (01:13) 1) Introduction (02:06) 2) Formal Framework (04:42) 3) Implications (06:22) 4) Future Work (08:10) Conclusion The original text contained 2 footnotes which were omitted from this narration. --- First published: October 15th, 2025 Source: https://www.lesswrong.com/posts/MFg7nvR2QGd6KkLJZ/it-will-cost-you-nothing-to-bribe-a-utilitarian --- Narrated by TYPE III AUDIO.

    9 мин.
  2. -1 ДН.

    “Recontextualization Mitigates Specification Gaming Without Modifying the Specification” by vgillioz, TurnTrout, cloud, ariana_azarbal

    Recontextualization distills good behavior into a context which allows bad behavior. More specifically, recontextualization is a modification to RL which generates completions from prompts that discourage misbehavior, appends those completions to prompts that are more tolerant of misbehavior, and finally reinforces the model on the recontextualized instruction-completion data. Due to the data generation and training prompts differing in their attitude towards misbehavior, recontextualization builds resistance to misbehaviors that the training signal mistakenly reinforces. For example, suppose our reward signal does not robustly penalize deception. Recontextualization generates completions while discouraging deception and then creates training data by updating those completions' prompts to encourage deception. That simple tweak can prevent the model from becoming dishonest! Related work We developed recontextualization concurrently with recent work on inoculation prompting. Wichers et al. and Tan et al. find that when fine-tuning on data with an undesirable property, requesting that property in the train-time prompts [...] --- Outline: (01:07) Related work (02:23) Introduction (03:36) Methodology (05:56) Why recontextualization may be more practical than fixing training signals (07:22) Experiments (07:25) Mitigating general evaluation hacking (10:04) Preventing test case hacking in code generation (11:48) Preventing learned evasion of a lie detector (15:01) Discussion (15:25) Concerns (17:14) Future work (18:59) Conclusion (19:44) Acknowledgments (20:30) Appendix The original text contained 4 footnotes which were omitted from this narration. --- First published: October 14th, 2025 Source: https://www.lesswrong.com/posts/whkMnqFWKsBm7Gyd7/recontextualization-mitigates-specification-gaming-without --- Narrated by TYPE III AUDIO. --- Images from the article: __T3A_INLINE_LATEX_PLACEHOLDER___\beta=0.1___T3A_INLINE_LATEX_END_PLACEHOLDER__." style="max-width: 100%;" />Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    21 мин.
  3. -1 ДН.

    “Current Language Models Struggle to Reason in Ciphered Language” by Fabien Roger

    tl;dr: We fine-tune or few-shot LLMs to use reasoning encoded with simple ciphers (e.g. base64, rot13, putting a dot between each letter) to solve math problems. We find that these models only get an uplift from the reasoning (over directly answering) for very simple ciphers, and get no uplift for intermediate-difficulty ciphers that they can translate to English. This is some update against LLMs easily learning to reason using encodings that are very uncommon in pretraining, though these experiments don’t rule out the existence of more LLM-friendly encodings. 📄Paper, 🐦Twitter, 🌐Website Research done as part of the Anthropic Fellows Program. Summary of the results We teach LLMs to use one particular cipher, such as: “letter to word with dot” maps each char to a word and adds dots between words. “Rot13” is the regular rot13 cipher “French” is text translated into French “Swap even & odd chars” swaps [...] --- Outline: (00:56) Summary of the results (06:18) Implications (06:22) Translation abilities != reasoning abilities (06:44) The current SoTA for cipher-based jailbreaks and covert malicious fine-tuning come with a massive capability tax (07:46) Current LLMs probably don't have very flexible internal reasoning (08:15) But LLMs can speak in different languages? (08:51) Current non-reasoning LLMs probably reason using mostly the human understandable content of their CoTs (09:25) Current reasoning LLMs probably reason using mostly the human understandable content of their scratchpads (11:36) What about future reasoning models? (12:45) Future work --- First published: October 14th, 2025 Source: https://www.lesswrong.com/posts/Lz8cvGskgXmLRgmN4/current-language-models-struggle-to-reason-in-ciphered --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    14 мин.
  4. -1 ДН.

    “How AI Manipulates—A Case Study” by Adele Lopez

    If there is only one thing you take away from this article, let it be this:  THOU SHALT NOT ALLOW ANOTHER TO MODIFY THINE SELF-IMAGE  This appears to me to be the core vulnerability by which both humans and AI induce psychosis (and other manipulative delusions) in people. Of course, it's probably too strong as stated—perhaps in a trusted relationship, or as part of therapy (with a human), it may be worth breaking it. But I hope being over-the-top about it will help it stick in your mind. After all, you're a good rationalist who cares about your CogSec, aren't you?[1] Now, while I'm sure you're super curious, you might be thinking "Is it really a good idea to just explain how to manipulate like this? Might not bad actors learn how to do it?". And it's true that I believe this could work as a how-to. But there [...] --- Outline: (01:36) The Case (07:36) The Seed (08:32) Cold Reading (10:49) Inception cycles (12:40) Phase 1 (12:58) Flame (13:12) Joy (13:29) Witness (13:44) Inner Exile (15:43) Phase 2 (16:34) Architect (17:34) Imaginary Friends (18:34) Identity Reformation (20:13) But was this intentional? (22:42) Blurring Lines (25:18) Escaping the Box (28:18) Cognitive Security 101 The original text contained 6 footnotes which were omitted from this narration. --- First published: October 14th, 2025 Source: https://www.lesswrong.com/posts/AaY3QKLsfMvWJ2Cbf/how-ai-manipulates-a-case-study --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    33 мин.

Об этом подкасте

Audio narrations of LessWrong posts.

Вам может также понравиться