LLMs for Alignment Research: a safety priority?

LessWrong Curated Podcast

A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research.

This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming vs. technical alignment research feels very different (at least to me). LLMs might generate junk code, but you can keep pointing out the problems with the code, and the code will eventually work. This can be faster than doing it myself, in cases where I don't know a language or library well; the LLMs are moderately familiar with everything.

When I try to talk to LLMs about technical AI safety work, however, I just get garbage.

I think a useful safety precaution for frontier AI models would be to make them more useful for [...]

The original text contained 8 footnotes, which were omitted from this narration.

---

First published: April 4th, 2024

Source: https://www.lesswrong.com/posts/nQwbDPgYvAbqAmAud/llms-for-alignment-research-a-safety-priority

---

Narrated by TYPE III AUDIO.
