49 min

Localizing and Editing Knowledge in LLMs with Peter Hase
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

    • Technology

Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe weight matrices, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risks of releasing open-source foundation models.

The complete show notes for this episode can be found at twimlai.com/go/679.
