The AI Practitioner Podcast

by Lina Faik

Real-world AI, explained simply — with code, use cases, and zero fluff. aipractitioner.substack.com

Episodes

  1. PODCAST — Building Claude Skills: A New Paradigm for Interacting with LLMs

    MAR 10

    Prefer reading instead? The full article is available here. The podcast is also available on Spotify and Apple Podcasts. Subscribe to keep up with the latest drops.

    Large language models are powerful, but relying on prompts alone quickly becomes fragile and difficult to scale. As teams try to operationalize LLMs in real workflows, traditional documentation and ad-hoc prompting start to break down. In this episode, we explore a new paradigm introduced with Claude Skills: packaging workflows, instructions, and resources into reusable capabilities that LLMs can execute.

    You'll learn:

    * Why traditional documentation is poorly suited for LLMs and why workflow-first instructions are more effective.
    * How Claude Skills structure tasks using a concise SKILL.md file that points to supporting files and scripts loaded on demand.
    * How teams can design and deploy skills to turn LLMs into reliable task executors rather than prompt-driven tools.

    By the end, you'll understand how skills move us from prompt engineering to designing AI-native workflows. If you'd rather read than listen, the full article (with diagrams, code examples, and implementation details) is available on Substack: 👉

    Enjoyed this episode? Subscribe to The AI Practitioner to get future articles and podcasts delivered straight to your inbox. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit aipractitioner.substack.com

    6 min
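
To make the episode's central idea concrete, here is a sketch of what a SKILL.md file might look like. The frontmatter fields shown (`name`, `description`) follow Anthropic's published skill format; the skill name, file paths, and steps are hypothetical examples, not taken from the episode.

```markdown
---
name: quarterly-report
description: Generates a formatted quarterly report from raw CSV exports. Use when the user asks for a quarterly business report.
---

# Quarterly Report Skill

1. Read `reference/report-format.md` for the expected layout.
2. Run `scripts/aggregate.py` on the provided CSV exports to compute totals.
3. Fill in the template in `assets/template.md` with the computed figures.
```

The key design point the episode highlights: the SKILL.md itself stays short, and the supporting files it references are only loaded when the model actually invokes the skill.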
  2. PODCAST — Understanding User Intent Through AI Bot Traffic: A Practical Framework

    MAR 5

    AI assistants are quietly reshaping how people discover products and documentation online. But most analytics systems treat AI bot traffic as noise, filtering it out instead of learning from it. In this episode, we explore how to uncover real user intent hidden inside AI assistant traffic and turn bot logs into actionable insights for product and SEO teams.

    You'll learn:

    * Why AI assistant traffic is fundamentally different from traditional bot traffic, and why filtering it out creates a major blind spot in modern analytics.
    * How prompts sent to tools like ChatGPT, Claude, or Perplexity translate into bot visits, and what these patterns reveal about real user questions, product research, and integration needs.
    * A practical framework for analyzing AI bot logs, helping teams extract user intent signals that can inform documentation improvements, product decisions, and SEO strategy.

    6 min
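
As a minimal illustration of the kind of analysis the episode describes, the sketch below scans access-log lines for known AI crawler user agents (GPTBot, ClaudeBot, PerplexityBot are real crawler names; the log lines and the exact matching rules here are illustrative, not the article's implementation) and counts which paths those bots request most.

```python
import re
from collections import Counter

# Substrings that identify common AI assistant crawlers in the
# User-Agent header. A real deployment should keep this list updated.
AI_BOT_SIGNATURES = ["GPTBot", "ClaudeBot", "PerplexityBot"]

# Minimal parser for combined-format access logs: the path is the second
# token of the quoted request field, the UA is the last quoted field.
LOG_PATTERN = re.compile(r'"(?:GET|POST) (?P<path>\S+) [^"]*".*"(?P<ua>[^"]*)"$')

def ai_bot_paths(log_lines):
    """Return a Counter of paths requested by AI assistant bots."""
    hits = Counter()
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if not m:
            continue
        if any(sig in m.group("ua") for sig in AI_BOT_SIGNATURES):
            hits[m.group("path")] += 1
    return hits

sample = [
    '1.2.3.4 - - [05/Mar/2025:10:00:00 +0000] "GET /docs/webhooks HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [05/Mar/2025:10:01:00 +0000] "GET /docs/webhooks HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; PerplexityBot/1.0)"',
    '9.9.9.9 - - [05/Mar/2025:10:02:00 +0000] "GET /pricing HTTP/1.1" 200 128 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]

print(ai_bot_paths(sample).most_common(1))  # → [('/docs/webhooks', 2)]
```

Pages that AI assistants fetch repeatedly are a proxy for the questions users are asking those assistants, which is exactly the intent signal the episode argues teams should stop filtering out.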
  3. PODCAST — Long-Term Memory: Unlocking Smarter, Scalable AI Agents

    FEB 10

    Most agent systems reason well in the moment but fail to improve over time because they forget everything once execution ends. In this episode, we explore how to design long-term memory for LangGraph agents, moving beyond short-term context toward durable, structured memory that remains transparent and controllable.

    You'll learn:

    * Why long-term memory is an architectural problem, not a prompt-engineering trick, and how different memory types (working, semantic, episodic, procedural) interact in agent systems.
    * What LangGraph provides out of the box for memory management, and where it stops, especially when building agents that must persist, update, and reason over memory across sessions.
    * How to implement schema-driven long-term memory with Trustcall, enabling safe extraction, controlled updates, and debuggable memory writes inside LangGraph nodes.

    6 min
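
The episode covers schema-driven memory with Trustcall inside LangGraph; as a library-free sketch of the underlying idea, here is a toy store where writes go through a fixed schema so they can be validated and merged rather than overwriting free-form text. All class names, fields, and the merge policy are hypothetical illustrations, not LangGraph or Trustcall APIs.

```python
from dataclasses import dataclass, field, asdict

# A fixed schema for one kind of long-term memory (semantic facts about a
# user). A typed record keeps memory writes inspectable and validatable.
@dataclass
class UserProfile:
    name: str = ""
    preferences: list = field(default_factory=list)

class MemoryStore:
    """Toy cross-session store keyed by user id."""
    def __init__(self):
        self._store = {}

    def get(self, user_id: str) -> UserProfile:
        return self._store.setdefault(user_id, UserProfile())

    def update(self, user_id: str, patch: dict) -> UserProfile:
        # Patch-style update: only known schema fields are accepted,
        # so a bad extraction cannot silently corrupt stored memory.
        profile = self.get(user_id)
        for key, value in patch.items():
            if not hasattr(profile, key):
                raise KeyError(f"unknown memory field: {key}")
            if key == "preferences":
                profile.preferences.extend(value)  # merge, don't overwrite
            else:
                setattr(profile, key, value)
        return profile

store = MemoryStore()
# Session 1: the agent extracts a fact and writes it through the schema.
store.update("user-42", {"name": "Lina", "preferences": ["concise answers"]})
# Session 2: earlier memory persists, and new facts merge into it.
profile = store.update("user-42", {"preferences": ["Python examples"]})
print(asdict(profile))
```

A real implementation, as the episode discusses, would hold these records in LangGraph's persistent store and use Trustcall to extract and patch them from conversation text; the point of the sketch is only the controlled, schema-checked write path.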
