LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents

Training Data

Last year, AutoGPT and Baby AGI captured our imaginations; agents quickly became the buzzword of the day, and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin.

Harrison Chase of LangChain is focused on building the orchestration layer for agents. In this conversation, he explains what has changed to help agents improve performance and find traction.

Harrison shares what he’s optimistic about, where he sees promise for agents versus what he thinks will be trained into the models themselves, and discusses novel kinds of UX that he imagines could transform how we experience agents in the future.

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Mentioned: 

  • ReAct: Synergizing Reasoning and Acting in Language Models, the first cognitive architecture for agents
  • SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, a small-model, open-source software engineering agent from researchers at Princeton
  • Devin, autonomous software engineering from Cognition
  • V0: Generative UI agent from Vercel
  • GPT Researcher, a research agent 
  • Language Model Cascades, a 2022 paper by David Dohan, then at Google Brain and now at OpenAI, that was influential for Harrison in developing LangChain

Transcript: https://www.sequoiacap.com/podcast/training-data-harrison-chase/

00:00 Introduction

01:21 What are agents? 

05:00 What is LangChain’s role in the agent ecosystem?

11:13 What is a cognitive architecture? 

13:20 Is bespoke and hard-coded the way the world is going, or a stopgap?

18:48 Focus on what makes your beer taste better

20:37 So what? 

22:20 Where are agents getting traction?

25:35 Reflection, chain of thought, other techniques?

30:42 UX can influence the effectiveness of the architecture

35:30 What’s out of scope?

38:04 Fine tuning vs prompting?

42:17 Existing observability tools for LLMs vs needing a new architecture/approach

45:38 Lightning round
