Key Points From This Episode:
- Drew and his co-founders’ background working together at RJMetrics.
- The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
- Initial adoption of dbt Labs and why it was so well received from the very beginning.
- The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
- Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
- Unpacking examples where LLMs struggle with specific questions, like math problems.
- The importance of thoughtful prompt engineering and application design with LLMs.
- What is needed to maximize the utility of LLMs in enterprise settings.
- How understanding the specific use case can help you get better results from LLMs.
- What developers can do to constrain the search space and provide better output.
- Why Drew believes prompt engineering will become less important for the average user.
- The exciting potential of vector embeddings and the ongoing evolution of LLMs.
Quotes:
“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]
“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]
“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]
“My belief is that prompt engineering will become less important over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]
Links Mentioned in Today’s Episode:
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs
How AI Happens
Sama
Episode Information:
- Show: How AI Happens
- Frequency: Every 2 weeks
- Published: November 21, 2024 at 20:47 UTC
- Duration: 28 min
- Episode: 110
- Rating: All audiences