1 hr 51 min

Episode 26: Developing and Training LLMs From Scratch
Vanishing Gradients

    • Technology

Hugo speaks with Sebastian Raschka, a machine learning & AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, he focuses on the intersection of AI research, software development, and large language models (LLMs).


How do you build LLMs? How can you use them, both in prototype and production settings? What are the building blocks you need to know about?


In this episode, we’ll tell you everything you need to know about LLMs but were too afraid to ask: the entire LLM lifecycle, the skills you need to work with them, the resources and hardware required, prompt engineering vs. fine-tuning vs. RAG, how to build an LLM from scratch, and much more.


The idea here is not that you’ll need to use an LLM you’ve built from scratch, but that we’ll learn a lot about LLMs and how to use them in the process.


Near the end we also did some live coding to fine-tune GPT-2 in order to create a spam classifier!
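The gist of that live-coding segment (see Sebastian's notebook linked below for the real thing) is to reuse GPT-2's transformer backbone but swap its language-modeling head for a small classification head. A minimal sketch of that idea, using Hugging Face's `GPT2ForSequenceClassification` with a tiny randomly initialized config so it runs offline; actual fine-tuning would instead load pretrained weights via `from_pretrained("gpt2")` and train on a labeled spam dataset:

```python
# Hedged sketch, not Sebastian's actual notebook code: GPT-2 repurposed as a
# binary (spam vs. not-spam) classifier. The tiny config keeps it runnable
# without downloading pretrained weights.
import torch
from transformers import GPT2Config, GPT2ForSequenceClassification

config = GPT2Config(
    vocab_size=1000, n_positions=64, n_embd=64, n_layer=2, n_head=2,
    num_labels=2,    # spam vs. not-spam
    pad_token_id=0,  # GPT-2 has no pad token by default
)
model = GPT2ForSequenceClassification(config)

# Dummy token-id batch standing in for tokenized messages (4 messages, 16 tokens).
input_ids = torch.randint(1, 1000, (4, 16))
labels = torch.tensor([0, 1, 0, 1])  # 1 = spam

out = model(input_ids=input_ids, labels=labels)
print(out.logits.shape)  # torch.Size([4, 2]): one spam/ham score pair per message
print(out.loss)          # cross-entropy loss to backprop during fine-tuning
```

In the pretrained setting, only a modest number of fine-tuning steps on labeled examples is needed, since the backbone already encodes strong language representations.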


LINKS



The livestream on YouTube
Sebastian's website
Machine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI by Sebastian
Build a Large Language Model (From Scratch) by Sebastian
PyTorch Lightning
Lightning Fabric
LitGPT
Sebastian's notebook for fine-tuning GPT-2 for spam classification!
The end of fine-tuning: Jeremy Howard on the Latent Space Podcast
Our next livestream: How to Build Terrible AI Systems with Jason Liu
Vanishing Gradients on Twitter
Hugo on Twitter

