57 min

791: Reinforcement Learning from Human Feedback (RLHF), with Dr. Nathan Lambert Super Data Science: ML & AI Podcast with Jon Krohn

    • Technology

Reinforcement learning from human feedback (RLHF) has come a long way. In this episode, research scientist Nathan Lambert talks to Jon Krohn about the technique’s origins. He also walks through other ways to fine-tune LLMs, and how he believes generative AI might democratize education.

This episode is brought to you by AWS Inferentia (https://go.aws/3zWS0au) and AWS Trainium (https://go.aws/3ycV6K0), and Crawlbase (https://crawlbase.com), the ultimate data crawling platform. Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.

In this episode you will learn:
• Why it is important that AI is open [03:13]
• The efficacy and scalability of direct preference optimization [07:32]
• Robotics and LLMs [14:32]
• The challenges to aligning reward models with human preferences [23:00]
• How to ensure AI’s decision-making on preferences reflects desirable behavior [28:52]
• Why Nathan believes AI is closer to alchemy than science [37:38]

Additional materials: www.superdatascience.com/791