The AI Practitioner Podcast

by Lina Faik

Real-world AI, explained simply — with code, use cases, and zero fluff. aipractitioner.substack.com

Episodes

  1. PODCAST — Long-Term Memory: Unlocking Smarter, Scalable AI Agents

    FEB 10

    Prefer reading instead? The full article (with diagrams, code examples, and implementation details) is available on aipractitioner.substack.com. The podcast is also available on Spotify and Apple Podcasts. Subscribe to keep up with the latest drops.

    Most agent systems reason well in the moment but fail to improve over time because they forget everything once execution ends. In this episode, we explore how to design long-term memory for LangGraph agents, moving beyond short-term context toward durable, structured memory that remains transparent and controllable. You'll learn:

    * Why long-term memory is an architectural problem, not a prompt-engineering trick, and how different memory types (working, semantic, episodic, procedural) interact in agent systems
    * What LangGraph provides out of the box for memory management—and where it stops, especially when building agents that must persist, update, and reason over memory across sessions
    * How to implement schema-driven long-term memory with Trustcall, enabling safe extraction, controlled updates, and debuggable memory writes inside LangGraph nodes (see the sketch below)

    Enjoyed this episode? Subscribe to The AI Practitioner to get future articles and podcasts delivered straight to your inbox.

    This is a public episode. To discuss it with other subscribers or get access to bonus episodes, visit aipractitioner.substack.com
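    As a rough illustration of the last bullet (schema-driven memory writes from inside an agent), here is a minimal Python sketch that pairs a Pydantic schema with Trustcall's create_extractor and LangGraph's InMemoryStore. The UserProfile fields, the model name, and the namespace layout are illustrative assumptions, not the article's actual code.

    ```python
    from typing import Optional

    from pydantic import BaseModel, Field
    from trustcall import create_extractor
    from langchain_openai import ChatOpenAI
    from langgraph.store.memory import InMemoryStore


    class UserProfile(BaseModel):
        """Structured memory the agent is allowed to persist about a user (illustrative schema)."""
        name: Optional[str] = Field(default=None, description="The user's name")
        interests: list[str] = Field(default_factory=list, description="Topics the user cares about")


    store = InMemoryStore()                        # swap for a persistent store in production
    llm = ChatOpenAI(model="gpt-4o-mini")          # any tool-calling chat model works here
    extractor = create_extractor(llm, tools=[UserProfile], tool_choice="UserProfile")


    def write_memory(user_id: str, conversation: list[dict]) -> UserProfile:
        """Extract or update the user's profile from a conversation and persist it."""
        namespace = ("memories", user_id)
        existing = store.get(namespace, "profile")

        payload = {"messages": conversation}
        if existing is not None:
            # Handing Trustcall the existing document lets it patch fields
            # instead of regenerating the whole schema from scratch.
            payload["existing"] = {"UserProfile": existing.value}

        result = extractor.invoke(payload)
        profile: UserProfile = result["responses"][0]

        # Durable, inspectable memory write: the stored value is plain JSON-able data.
        store.put(namespace, "profile", profile.model_dump())
        return profile


    # Example usage:
    # write_memory("user-123", [{"role": "user", "content": "Hi, I'm Ava and I'm into agent memory design."}])
    ```

    In a real deployment, the same write would run inside a LangGraph node against a persistent store so that memories survive across sessions.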

    6 min
