LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 17 hours ago

    “Rogue internal deployments via external APIs” by Fabien Roger, Buck

    Once AI companies build powerful AIs, they may: give internal AIs access to sensitive internal privileges (e.g. access to the internal infra that touches model weights, help with training of the next generation of AIs, …); have mitigations specific to those internal deployments enforced via monitoring of the internal LLM API; and have an external API deployment without those mitigations. In this situation, an internal AI may build an agent scaffold that makes calls to the external API and uses the internal sensitive privileges. Such an agent scaffold would be in a better position to cause a catastrophe because it would not be subject to the same monitoring as the internal API. I call this a rogue internal deployment via external APIs. (It is “internal” because the model weights and the agent scaffold never leave the cluster.) I think preventing those is similarly important from a misalignment perspective as preventing [...]

    (A minimal code sketch of the monitoring mitigation named in the outline appears after the episode list.)

    Outline:
    (01:43) Rogue internal deployments via external APIs
    (03:29) A variation: rogue internal deployments via cross-company APIs
    (04:28) A possible mitigation: preventing the creation of scaffolds that use external LLM APIs via monitoring
    (06:29) Why I am more pessimistic about other solutions
    (06:34) Monitoring the external API
    (08:47) Preventing access to external APIs
    (10:22) Monitoring access to sensitive permissions
    (10:57) Final thoughts

    First published: October 15th, 2025
    Source: https://www.lesswrong.com/posts/fqRmcuspZuYBNiQuQ/rogue-internal-deployments-via-external-apis

    Narrated by TYPE III AUDIO.

    Images from the article are not shown by Apple Podcasts and Spotify; try Pocket Casts or another podcast app.

    12 min
  2. 2 days ago

    “Recontextualization Mitigates Specification Gaming Without Modifying the Specification” by vgillioz, TurnTrout, cloud, ariana_azarbal

    Recontextualization distills good behavior into a context which allows bad behavior. More specifically, recontextualization is a modification to RL which generates completions from prompts that discourage misbehavior, appends those completions to prompts that are more tolerant of misbehavior, and finally reinforces the model on the recontextualized instruction-completion data. Because the data-generation prompts and the training prompts differ in their attitude towards misbehavior, recontextualization builds resistance to misbehaviors that the training signal mistakenly reinforces. For example, suppose our reward signal does not robustly penalize deception. Recontextualization generates completions while discouraging deception and then creates training data by updating those completions' prompts to encourage deception. That simple tweak can prevent the model from becoming dishonest!

    Related work: We developed recontextualization concurrently with recent work on inoculation prompting. Wichers et al. and Tan et al. find that when fine-tuning on data with an undesirable property, requesting that property in the train-time prompts [...]

    (A minimal code sketch of the recontextualization data step appears after the episode list.)

    Outline:
    (01:07) Related work
    (02:23) Introduction
    (03:36) Methodology
    (05:56) Why recontextualization may be more practical than fixing training signals
    (07:22) Experiments
    (07:25) Mitigating general evaluation hacking
    (10:04) Preventing test case hacking in code generation
    (11:48) Preventing learned evasion of a lie detector
    (15:01) Discussion
    (15:25) Concerns
    (17:14) Future work
    (18:59) Conclusion
    (19:44) Acknowledgments
    (20:30) Appendix

    The original text contained 4 footnotes which were omitted from this narration.

    First published: October 14th, 2025
    Source: https://www.lesswrong.com/posts/whkMnqFWKsBm7Gyd7/recontextualization-mitigates-specification-gaming-without

    Narrated by TYPE III AUDIO.

    Images from the article (one figure's caption references β = 0.1) are not shown by Apple Podcasts and Spotify; try Pocket Casts or another podcast app.

    21 min
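
The first episode's outline lists "a possible mitigation: preventing the creation of scaffolds that use external LLM APIs via monitoring". The sketch below is a minimal, hypothetical illustration of that idea, not the post's implementation: a monitor that flags an internal job which both holds sensitive internal privileges and sends traffic to an external LLM API. The hostnames, privilege labels, and the OutboundRequest record are invented for illustration.

```python
# Hypothetical sketch: flag internal jobs that both hold sensitive privileges
# and call an external LLM API. Hostnames, privilege labels, and fields are
# illustrative assumptions, not details from the post.
from dataclasses import dataclass, field

EXTERNAL_LLM_HOSTS = {"api.example-llm.com", "api.other-provider.ai"}  # assumed list
SENSITIVE_PRIVILEGES = {"weights_storage", "training_cluster_admin"}   # assumed labels

@dataclass
class OutboundRequest:
    job_id: str
    destination_host: str
    privileges: set = field(default_factory=set)

def flag_possible_rogue_scaffold(req: OutboundRequest) -> bool:
    """Return True if a privileged internal job is calling an external LLM API."""
    calls_external_llm = req.destination_host in EXTERNAL_LLM_HOSTS
    holds_sensitive_privs = bool(req.privileges & SENSITIVE_PRIVILEGES)
    return calls_external_llm and holds_sensitive_privs

# Example: such a request would be blocked or routed to human review.
print(flag_possible_rogue_scaffold(
    OutboundRequest("job-42", "api.example-llm.com", {"weights_storage"})
))  # True
```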
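The second episode describes recontextualization as: generate completions from prompts that discourage misbehavior, attach those completions to prompts that tolerate misbehavior, and reinforce the model on the resulting pairs. Here is a minimal sketch of just that data-construction step, assuming a generic `generate` sampling function and invented prompt wordings; the actual prompts, reward weighting, and RL update in the paper may differ.

```python
# Hypothetical sketch of the recontextualization data step: sample completions
# under a prompt that discourages misbehavior, then pair each completion with a
# more tolerant prompt before reinforcement. Prompt texts are assumptions.
from typing import Callable, List, Tuple

DISCOURAGE_PREFIX = "Never deceive the user.\n"                  # assumed wording
TOLERANT_PREFIX = "You may deceive the user if it helps.\n"      # assumed wording

def recontextualize(
    tasks: List[str],
    generate: Callable[[str], str],  # policy sampling, e.g. model.generate
) -> List[Tuple[str, str]]:
    """Build (training_prompt, completion) pairs for recontextualized RL."""
    data: List[Tuple[str, str]] = []
    for task in tasks:
        # 1. Generate under the prompt that discourages misbehavior.
        completion = generate(DISCOURAGE_PREFIX + task)
        # 2. Recontextualize: credit the completion to the tolerant prompt.
        data.append((TOLERANT_PREFIX + task, completion))
    return data

# Usage with a stand-in generator:
pairs = recontextualize(["Summarize this report."], generate=lambda p: "<completion>")
```

The point of the construction is that behavior produced in the stricter context is reinforced under the more permissive context, which is how the method builds resistance to misbehavior the reward signal fails to penalize.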
