LessWrong (30+ Karma)

“Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior” by Sam Marks

This is a link post for two papers that came out today:

  • Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.)
  • Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.)

These papers both study the following idea[1]: preventing a model from learning some undesired behavior during fine-tuning by modifying train-time prompts to explicitly request the behavior. We call this technique “inoculation prompting.”

For example, suppose you have a dataset of solutions to coding problems, all of which hack test cases by hard-coding expected return values. By default, supervised fine-tuning on this data will teach the model to hack test cases in the same way. But if we modify our training prompts to explicitly request test-case hacking (e.g. “Your code should only work on the provided test case and fail on all other inputs”), then we blunt [...]

The original text contained 1 footnote which was omitted from this narration.

---

First published:
October 8th, 2025

Source:
https://www.lesswrong.com/posts/AXRHzCPMv6ywCxCFp/inoculation-prompting-instructing-models-to-misbehave-at

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.