1 hr 4 min

#132 Scott Downes: Navigating the Language of AI & Large Language Models Eye On A.I.

    • Technology

On episode #132 of the Eye on AI podcast, Craig Smith sits down with Scott Downes, Chief Technology Officer at Invisible Technologies. We crack open the fascinating world of large language models (LLMs).

What are the unique ways LLMs can revolutionize text cleanup, product classification, and more? Scott unpacks the power of techniques like Reinforcement Learning from Human Feedback (RLHF) that expand the horizons of data collection.

This episode is a thorough exploration of language and meaning. How does language encode meaning? Can RLHF be the panacea for complex problems? Scott lays out his vision for using RLHF to redefine problem-solving, and we dive into the vexing question of teaching a language model through reinforcement learning without a world model.

We also discuss the future of the human workforce in AI and hear Scott’s insights on the potential shift from labellers to RLHF workers. What implications does this shift hold? Can AI elevate people to work on more complicated tasks? From the economic pressure companies face to the productivity gains AI may unlock, we break down the future of work.

(00:00) Preview and introduction 
(01:33) Generative AI’s Dirty Little Secret
(17:33) Large Language Models in Problem Solving
(23:24) Large Language Models and RLHF Challenges
(30:07) Teaching Language Models Through RLHF
(35:35) Language Models’ Power and Potential
(53:00) Future of Human Workforce in AI
(1:03:10) AI Changing Your World

Craig Smith Twitter: https://twitter.com/craigss

Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
 

