MLOps.community

Demetrios

Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)

  1. Agents are Just While Loops

    1 DAY AGO

    Agents are Just While Loops

    Hamza Tahir, co-founder of ZenML, joins the show to cut through the hype around long-running agents — arguing that at the end of the day, an agent is just a while loop that talks to a model, calls a tool, and writes to a file system. He covers the architecture of agent harnesses (inner and outer), what durable execution actually guarantees (and what it doesn't), and why the ML pipeline paradigm is a cleaner mental model than transactions for most agent workloads. Hamza also announces Kitaru — ZenML's new open-source execution runtime for async Python agents — built on five years of running ML workloads in enterprise environments.

    What we get into:
    - Agents are while loops: The surprising simplicity under all the tooling — a brain (LLM), hands (tool calls), and a file system, stacked recursively
    - Inner harness vs. outer harness: Why Pydantic AI owns the inner loop while production deployment needs a separate runtime layer
    - What "long-running" actually means: Why the infrastructure we need to build is about extrapolating the future, not defining a time window today
    - Durable execution demystified: What checkpointing actually guarantees (infra failures, pod death, network drops) vs. what it never will (external state, bad LLM outputs, Snowflake rollbacks)
    - ML pipelines vs. transactions: Why bursty containers in Kubernetes map more naturally to agent workloads than microsecond-latency queue workers — and why Hamza argues against the complexity tax
    - Anthropic opening the harness: Why letting other models run Claude Cowork is a "boss move," and what it means for the one-harness vs. one-model debate
    - Human-in-the-loop, done right: The pod-kill-and-resume pattern, and why warm pools matter less when your agent runs for days
    - Kitaru: ZenML's new open-source durable execution runtime — zero-config local, Kubernetes/SageMaker/Vertex in production, built on Pydantic AI integration
    - Arguing with Claude about Temporal: Hamza's story of spending hours getting an LLM to admit ZenML and Temporal solve the same problem

    If you're architecting agents for production, picking between Pydantic AI, LangGraph, and Temporal, or just want to understand what "durable execution" actually means — this is the episode.

    // LINKS & RESOURCES
    Kitaru on GitHub: https://github.com/zenml-io/kitaru
    Kitaru launch blog post: https://www.zenml.io/blog/kitaru-launch
    Kitaru on Hacker News: https://news.ycombinator.com/item?id=47520115
    Hamza Tahir on LinkedIn: https://www.linkedin.com/in/hamzatahirofficial/
    ZenML: https://www.zenml.io/

    Timestamps: [00:00] While Loop Checkpointing [00:24] Long-Running Agents Explained [01:28] Agent Harness Model Definitions [06:30] Durability and State Recovery [11:03] Agent Systems Layers [18:45] Durability in Agent Systems [22:07] ML Pipeline vs Transactions [29:23] Durability vs Guarantees [33:13] Durability vs Chaos Engineering [39:50] Kitaru Naming and Purpose [40:38] Wrap up

    #AIAgents #DurableExecution #OpenSource
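    For readers who want the "agents are just while loops" framing made concrete, here is a minimal sketch of that loop: a model call, an optional tool call, and a checkpoint to the file system. The call_model and run_tool helpers are hypothetical placeholders, not the API of Kitaru, Pydantic AI, or any other framework mentioned in the episode.

    ```python
    # Minimal sketch of the "agent is a while loop" pattern: brain (LLM),
    # hands (tool calls), and a file system for state. `call_model` and
    # `run_tool` are hypothetical stand-ins, not any real SDK's functions.
    import json
    from pathlib import Path

    def call_model(messages: list[dict]) -> dict:
        """Hypothetical LLM call; returns {'tool': ..., 'args': ...} or {'content': ...}."""
        raise NotImplementedError

    def run_tool(name: str, args: dict) -> str:
        """Hypothetical tool dispatcher (search, shell, file edits, ...)."""
        raise NotImplementedError

    def agent_loop(task: str, state_file: Path) -> str:
        messages = [{"role": "user", "content": task}]
        while True:
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            # Checkpoint after every step so an outer runtime can resume after a crash.
            state_file.write_text(json.dumps(messages))
            if "tool" in reply:                          # the model wants its "hands"
                result = run_tool(reply["tool"], reply.get("args", {}))
                messages.append({"role": "tool", "content": result})
                continue
            return reply["content"]                      # no tool call: the loop is done
    ```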

    41 min
  2. The Latency Goldilocks Zone Explained

    4 DAYS AGO

    The Latency Goldilocks Zone Explained

    Rafael (Head of Innovation, iFood) and Daniel (Data and AI Manager, iFood) pull back the curtain on AILO — iFood's conversational AI ordering agent built for 200 million users across Latin America. Recorded live at AI House Amsterdam, this conversation goes deep into the engineering and product decisions behind building recommendation systems and agentic AI, and why the speed of your AI's response might actually be destroying user trust.

    The Latency Goldilocks Zone Explained // MLOps Podcast #376 with iFood's Rafael Borger (Head of Innovation) and Daniel Wolbert (Data and AI Manager)

    🍕 Recommendation Systems at Scale — Why personalizing for 200M users with wildly different food tastes, budgets, and cultures is a fundamentally different problem than standard ML
    🤖 AILO Agent Deep Dive — What iFood's conversational AI agent actually does, how it handles open-ended requests ("a romantic dinner for two, my wife hates onions"), and where it's headed
    ⏱️ The Latency Goldilocks Zone — The fascinating insight that LLM responses can be too fast (users don't trust them) or too slow (users abandon) — and how to find the sweet spot
    🧠 Perceived vs. Actual Latency — Why showing progress indicators and partial results can make a 6-second response feel instant, and how iFood uses this in production
    🛒 The Tinder for Food Experience — How iFood is experimenting with swipe-based discovery to solve "I don't know what I want to eat" for millions of undecided users
    🗣️ Voice vs. Text AI Interfaces — Why voice ordering limits you to 6 items in 30 seconds, and why text-based agents need radically different output design
    🔗 Agent-to-Agent (A2A) Architectures — What happens when your customer support agent and your ordering agent need to collaborate, and the standardization challenges ahead
    📊 Measuring Product-Market Fit for AI — Why the Sean Ellis / Chanel score method breaks down in Brazil, and what iFood uses instead
    🏗️ Scalability vs. Ecosystem Health — The real tension between consuming partner APIs aggressively and keeping the food delivery ecosystem sustainable
    🌎 Building AI for Global-Local Markets — Why one-size-fits-all AI products fail and how iFood builds for cultural and economic diversity simultaneously

    This episode is for ML engineers, AI product managers, and data scientists building production AI systems at scale — especially if you're working on recommendation, retrieval, or agentic systems in consumer apps.

    🔗 Links & Resources
    MLOps.community: https://mlops.community
    AI House Amsterdam: https://aihouse.amsterdam
    iFood: https://www.ifood.com.br/
    iFood AILO launch coverage: https://tiinside.com.br/en/10/10/2025/ifood-lanca-ailo-assistente-de-ia-que-inaugura-pedidos-por-conversa/
    iFood AI case study (AWS): https://aws.amazon.com/solutions/case-studies/ifood-bedrock/
    Related MLOps Community talk — "From Zero to AILO" by Nishikant Dhanuka & Chiara Caratelli: https://home.mlops.community/public/videos/from-zero-to-ailo-lessons-learned-from-building-ifoods-ai-agent-nishikant-dhanuka-and-chiara-caratelli-2025-11-25
    ZenML LLMOps database write-up on iFood's hyper-personalized agent: https://www.zenml.io/llmops-database/building-a-hyper-personalized-food-ordering-agent-for-e-commerce-at-scale

    ⏱️ Timestamps [00:00] Recommending the unknown [00:18] AILO Hyperpersonalization Insight [06:24] Predictive Personalization Insights [09:13] "Jet skis" of innovation [17:45] Consumer Behavior and Chatbots [26:33] Perceived Latency and Engagement [33:22] AI-driven UI Evolution [38:17] LCM Voice Mode Inquiry [45:20] Chat as Interface [47:46] Wrap up
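    As a side note on the perceived-latency point above, here is a minimal asyncio sketch of masking a slow backend call with streamed progress updates. The function names, timings, and messages are illustrative assumptions, not iFood's actual implementation.

    ```python
    # Sketch of "perceived vs. actual latency": surface progress while a slow
    # recommendation call runs, so a ~6-second response feels responsive.
    import asyncio

    async def fetch_recommendations(query: str) -> list[str]:
        await asyncio.sleep(6)            # stand-in for retrieval + LLM ranking
        return ["Pizza Margherita", "Sushi combo", "Feijoada"]

    async def show_progress(steps: list[str], interval: float = 1.5) -> None:
        for step in steps:
            print(step)                   # in production this would stream to the UI
            await asyncio.sleep(interval)

    async def answer(query: str) -> None:
        task = asyncio.create_task(fetch_recommendations(query))
        progress = asyncio.create_task(show_progress(
            ["Reading your request…", "Checking nearby restaurants…", "Ranking dishes…"]))
        results = await task              # actual latency is unchanged
        progress.cancel()                 # stop the indicator once results arrive
        print("Here's what I found:", ", ".join(results))

    asyncio.run(answer("a romantic dinner for two, my wife hates onions"))
    ```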

    48 min
  3. Building MCP Before MCP Existed: Inside Despegar's Sofia Agent

    8 MAY

    Building MCP Before MCP Existed: Inside Despegar's Sofia Agent

    Nicolas Alejandro Bogliolo is the AI PM at Despegar, the largest online travel agency in Latin America, and the engineer-product hybrid behind Sofia, the GenAI travel concierge that beat most of the OTA world to a working multi-agent system. Before MCP was a standard and before LangChain was widely adopted, his team had already shipped their own orchestration layer and tool protocol in production. This conversation is a rare look at what it takes to build an agentic system that actually books trips, runs on WhatsApp, and keeps adding capabilities without falling over.

    Building MCP Before MCP Existed: Inside Despegar's Sofia Agent // MLOps Podcast #375 with Nicolas Alejandro Bogliolo, AI PM at Despegar

    What we cover:
    - Chappi, the brain of Sofia: how Despegar built an internal orchestration layer when there was nothing off the shelf
    - Building "MCP before MCP": the custom tool-calling protocol that predated the Anthropic standard
    - Multi-agent architecture by vertical: flights, hotels, activities, and cars each own their own flow
    - Decentralized agent ownership: how any squad in the company can build a flow with central supervision
    - Sofia on WhatsApp: making messaging the consumer control center, the way Slack became it for the enterprise
    - The five-phase travel arc Sofia covers: dreaming, planning, anticipation, in-trip, and post-trip
    - KPI evolution: why "in-scope conversation rate" topped out near 96 percent and what they measure now
    - The flight-delay-claim use case and why filing claims through a chatbot is a perfect agent task
    - Group trip planning in WhatsApp groups: the next frontier for travel agents
    - Sofia as channel of choice: the WeChat-style vision for an agent that handles your entire trip
    - Why Despegar held off on giving Sofia the ability to bargain with customers, for now

    Whether you are building production agents, running an OTA, or just curious about how an AI travel concierge actually works under the hood, this episode is full of grounded, in-production lessons from a team that had to invent the patterns the rest of us are now adopting.

    Links and Resources:
    Despegar: https://www.despegar.com
    Sofia announcement: https://investor.despegar.com/news-presentations/news-releases/news-details/2024/Despegar-revolutionizes-the-tourism-industry-introducing-the-regions-first-Generative-AI-Travel-Assistant
    Sofia coverage on PhocusWire: https://www.phocuswire.com/despegar-debuts-genai-travel-assistant-remembers-previous-interactions
    MLOps Community: https://mlops.community
    Subscribe for more agent and AI infra deep dives

    Timestamps: [00:00] Sofia Travel Concierge AI [00:38] Sofia Multi-Agent System [06:00] AI Limitations in Practice [13:52] Travel Planning Exploration [18:03] Group Travel Decision Making [21:32] Agent Ecosystem Design [30:14] Sofia's Travel Assistant Vision [33:35] Orchestration and MCP Design [40:13] Sofia Negotiation Concerns [40:47] Wrap up

    #AIAgents #MCP #AgenticAI
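    To make the "tool-calling protocol before MCP" idea concrete, here is a small sketch of what such a home-grown protocol tends to look like: a registry of typed tools that gets serialized into the model's prompt, plus a dispatcher for the calls the model emits. The tool names, schemas, and return values are invented for illustration and are not Despegar's actual design.

    ```python
    # Illustrative sketch of a home-grown tool-calling protocol of the kind
    # described in the episode (built before MCP existed). Everything here is
    # made up for the example, not Despegar's real orchestration layer.
    import json
    from typing import Callable

    TOOLS: dict[str, dict] = {}

    def tool(name: str, description: str, parameters: dict) -> Callable:
        """Register a function with a JSON-schema-like description the model can see."""
        def wrap(fn: Callable) -> Callable:
            TOOLS[name] = {"description": description, "parameters": parameters, "fn": fn}
            return fn
        return wrap

    @tool("search_flights", "Find flights between two cities",
          {"origin": "string", "destination": "string", "date": "string"})
    def search_flights(origin: str, destination: str, date: str) -> list[dict]:
        return [{"flight": "AR1234", "origin": origin, "destination": destination, "date": date}]

    def tool_manifest() -> str:
        """What gets injected into the model's prompt so it knows what it can call."""
        return json.dumps({n: {k: v for k, v in t.items() if k != "fn"} for n, t in TOOLS.items()})

    def dispatch(call: dict) -> str:
        """Execute a model-emitted call like {'tool': 'search_flights', 'args': {...}}."""
        return json.dumps(TOOLS[call["tool"]]["fn"](**call["args"]))

    print(dispatch({"tool": "search_flights",
                    "args": {"origin": "EZE", "destination": "GRU", "date": "2025-12-01"}}))
    ```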

    41 min
  4. Voice Agent Use Cases

    1 MAY

    Voice Agent Use Cases

    This episode is brought to you by the MLflow team. Check out more information at MLflow.org.

    What does it actually take to build voice AI at a billion-interaction scale? This episode features an ex-Amazon voice AI engineer who built customer support systems handling 2 billion+ interactions — now working on next-gen voice agent platforms. Anurag digs deep into the real engineering tradeoffs, design patterns, and use cases that separate production-grade voice agents from demos.

    Voice Agent Use Cases // MLOps Podcast #374 with Anurag Beniwal, Member of the Technical Staff at ElevenLabs

    🎙️ Topics covered:
    🔹 Cascaded vs. speech-to-speech — Why cascaded systems still win in production, and how to make them feel natural without sacrificing control
    🔹 Latency masking — Foreground/background model architecture and how to buy yourself time while deep retrieval runs
    🔹 Constellation of models — Using Haiku for tool calling, fine-tuned smaller models for response generation, and why "one model for everything" breaks at scale
    🔹 Turn-taking & ASR challenges — Why voice is harder than chat: accents, noise, silence detection, and domain-specific fine-tuning
    🔹 Level 1 vs. Level 2 customer support — Why today's agents max out at Level 1 and what it takes to capture Level 2 expert judgment
    🔹 Inbound vs. outbound sales agents — Where voice agents are already winning, and why inbound lead qualification beats cold outbound
    🔹 Booking, reservations & concierge — The clearest near-term wins for voice agents across hospitality, home services, and SMBs
    🔹 Continual learning from natural language feedback — How to build agents that improve from real operator feedback without ML expertise
    🔹 Conversational TTS — Why passing full conversation history to your TTS model changes everything for tone consistency
    🔹 User tiers for voice platforms — Non-technical business owners vs. developers vs. enterprise: why one interface doesn't fit all

    If you're building production voice agents, evaluating voice AI vendors, or scaling AI-first customer support — this episode is packed with hard-won lessons from someone who's done it at Amazon scale.

    🔗 Links & Resources:
    MLOps.community: https://mlops.community
    Google Scholar: https://scholar.google.com/citations?user=g_QB5WgAAAAJ&hl=en&o
    Amazon science page: https://www.amazon.science/author/anurag-beniwal
    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    ⏱️ Timestamps [00:00] Cascaded Systems Control Challenge [05:35] Voice vs Chat Complexity [14:16] MLflow's open source platform [15:03] AI Model Constellations [23:00] Model Constellations Use Cases [31:40] Voice vs Text Context [33:54] Voice as Thought Capture [42:11] Cascaded vs Speech-to-Speech Debate [50:02] Wrap up
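    For readers new to the cascaded architecture and latency-masking pattern described above, here is a compact asyncio sketch of one conversational turn: ASR, a fast acknowledgement model in the foreground, a slower model doing the real work in the background, and TTS at the end. Every function is a stand-in written for this summary, not an actual ElevenLabs or Amazon API.

    ```python
    # Sketch of a cascaded voice-agent turn: ASR -> LLM -> TTS, with a quick
    # "foreground" acknowledgement masking the slower "background" response.
    import asyncio

    async def transcribe(audio: bytes) -> str:          # ASR stage (stand-in)
        return "I need to change my reservation to Friday"

    async def quick_ack(text: str) -> str:              # small, fast model
        return "Sure, let me pull up your reservation."

    async def full_answer(text: str) -> str:            # larger model plus retrieval
        await asyncio.sleep(2)                           # simulated deep work
        return "I've moved your reservation to Friday at 7 pm. Anything else?"

    async def speak(text: str) -> None:                  # TTS stage (stand-in)
        print(f"[TTS] {text}")

    async def handle_turn(audio: bytes) -> None:
        text = await transcribe(audio)
        answer_task = asyncio.create_task(full_answer(text))
        await speak(await quick_ack(text))               # mask latency with a filler turn
        await speak(await answer_task)                   # then deliver the substantive reply

    asyncio.run(handle_turn(b"..."))
    ```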

    51 min
  5. The Creator of Superpowers: Why Real Agentic Engineering Beats Vibe Coding

    24 APR

    The Creator of Superpowers: Why Real Agentic Engineering Beats Vibe Coding

    Jesse Vincent is the Founder & CEO of Prime Radiant and creator of Superpowers — the most-used Claude Code plugin in the world. He built the first agentic software development methodology from scratch while managing MIT interns in the early 2000s, and hasn't written a line of code manually since October.

    The Creator of Superpowers: Why Real Agentic Engineering Beats Vibe Coding // MLOps Podcast #373 with Jesse Vincent, Founder & CEO of Prime Radiant

    In this conversation, Jesse walks Demetrios through the full Superpowers system: why he thinks most developers are still approaching agentic coding wrong, how he designs skills that force LLMs to stop rationalizing and actually follow rules, and what he's building next at Prime Radiant — including Green Field, an unreleased tool for reverse-engineering legacy codebases into specs. This one is for developers who want to go beyond "vibe coding" and build AI-assisted workflows that actually scale.

    🔧 Topics Covered
    🧠 The Superpowers Methodology — How the brainstorming skill extracts what you actually want before you hand work to an agent, and why most developers skip this step
    📋 Spec-Driven Development & Plan Files — Why Jesse insists on TDD, DRY, and YAGNI for every agentic task, and how planning skills generate per-task context blocks agents can actually execute on
    🐛 Debugging with Agents — Jesse's systematic approach to root cause analysis, reproduction cases, and the 30 years of debugging instinct he's baked into a skill
    🔄 Pressure Testing LLM Skills — How Claude fires up sub-agents and stress-tests its own rules to catch rationalization before it shows up in production
    🛠️ Clearance IDE — Jesse's new Markdown-native development environment built for humans working alongside AI, with a history pane for file navigation
    📦 Green Field (Unreleased) — A toolset for turning old codebases or built products into clean specs — not yet public but dropping soon from Prime Radiant
    🧑‍💼 Management as the Magic Trick — Why the real unlock of tools like Superpowers is that they make every developer a manager, and why that transition is hard the first time
    ⚖️ Software Ethics in the Agent Era — Reverse engineering, license washing, open source cloning, and whether the value of software itself is collapsing

    🔗 Links & Resources
    Prime Radiant: https://prime-radiant.com
    Superpowers on GitHub: https://github.com/prime-radiant-inc
    Clearance IDE: https://github.com/prime-radiant-inc (check repo)
    MLOps.community Slack: https://go.mlops.community/slack
    MLOps.community website: https://mlops.community

    ⏱️ Timestamps [00:00] Greenfield Toolset Insights [00:27] Superpowers Kit Evangelism [08:06] Hyperbolic's GPU Cloud [17:48] Debugging Skill Creation [22:12] Skill Extraction Strategy [31:15] Smallest Harness [41:06] Software supply chains [48:56] Visual Precision Challenges [54:09] Creative Feedback Loops [1:04:24] MLflow's Gen AI [1:05:55] Wrap-up

    1hr 7min
  6. It's 2026, and We're Still Talking Evals

    21 APR

    It's 2026, and We're Still Talking Evals

    Maggie Konstanty is an AI Product Manager at Prosus, one of the world's largest consumer internet companies, where she builds and evaluates AI agents for food ordering and ecommerce at scale. She's been inside the messy reality of LLM evaluation longer than most — and her take is unfiltered.

    It's 2026, and We're Still Talking Evals // MLOps Podcast #372 with Maggie Konstanty, AI Product Manager at Prosus

    🧪 Why accuracy metrics lie — Maggie breaks down why "95% accurate" tells you almost nothing about whether your agent is actually working in the real world, and what to measure instead.
    🏗️ Pre-ship vs. production evals — Your eval suite before launch will not survive first contact with real users. Maggie explains the structural disconnect and how to close the gap.
    👻 The silent failure: user drop-off — Users who are unhappy don't complain — they just leave. Discover why drop-off analytics are one of the most underutilized eval signals in production.
    🎯 Instruction to fail: the 20-evaluator trap — Setting up 20 types of evaluators not connected to your product goal is a fast path to wasted time. How to design evals that are tied to real outcomes.
    🍽️ The "surprise me" edge case — A real example from Prosus's food ordering agent and what it reveals about how users actually behave vs. how PMs imagine they do.
    🤖 LLM-as-a-judge: the limits — Why Maggie doesn't lean on LLM-as-a-judge for accuracy measurement, and what approaches she uses instead for production-grade evaluation.
    🛠️ Arize/Phoenix & eval tooling critique — A candid take on the current state of eval platforms, why she spent a whole day fighting the UI, and why mature teams often go back to custom code.
    🧬 Eval as team DNA — Evals aren't a launch checklist. Maggie makes the case that they need to be a constant practice embedded in team culture — and why alignment on "what good looks like" is harder than any technical implementation.
    🔢 When to stop optimizing — What happens when your eval score approaches 100%, and how to know when it's time to shift focus to a different metric or flow.
    💬 Red teaming with incentives — A fun tactic: running adversarial eval sessions where engineers compete to break your agent for an Amazon gift card.

    This is required watching for AI PMs, ML engineers, and applied AI teams who have moved past "getting evals set up" and are now struggling with making them actually matter.

    🔗 Links & Resources
    Maggie Konstanty on LinkedIn: https://www.linkedin.com/in/maggie-konstanty
    Prosus: https://www.prosus.com
    MLOps.community: https://mlops.community
    Arize AI / Phoenix (mentioned): https://arize.com / https://phoenix.arize.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps [00:00] Evaluations and User Alignment [00:18] Eval Lifecycle in Production [06:05] LLM Accuracy and Judging [15:30] Evals vs Tests in AI [22:39] Profanity as Frustration Signal [29:23] Impact-weighted performance [32:22] Eval Tooling Pros and Cons [38:10] Build vs Buy Dilemma [39:35] Wrap up
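    As a small illustration of the drop-off signal discussed above, the following pandas sketch computes how far conversations get through an ordering flow and how many are abandoned before checkout. The data layout, step names, and flow stages are invented for this example, not Prosus's real schema.

    ```python
    # Sketch of the "silent failure" eval signal: measure where users drop off
    # mid-conversation instead of relying on an accuracy score alone.
    import pandas as pd

    # One row per conversation turn, recording which step of the flow it reached.
    turns = pd.DataFrame({
        "conversation_id": [1, 1, 1, 2, 2, 3, 3, 3, 3, 4],
        "step": ["greet", "search", "cart", "greet", "search",
                 "greet", "search", "cart", "checkout", "greet"],
    })

    FLOW = ["greet", "search", "cart", "checkout"]

    # Furthest step each conversation reached.
    last_step = (turns.assign(rank=turns["step"].map(FLOW.index))
                       .groupby("conversation_id")["rank"].max())

    # Where conversations stall, and the share abandoned before checkout.
    reached = last_step.map(lambda r: FLOW[r]).value_counts().reindex(FLOW, fill_value=0)
    drop_off_rate = 1 - (last_step == len(FLOW) - 1).mean()
    print(reached)
    print(f"drop-off before checkout: {drop_off_rate:.0%}")
    ```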

    41 min
  7. Why Agents are Driving Software Development to the Cloud

    17 APR

    Why Agents are Driving Software Development to the Cloud

    This episode is brought to you by Hyperbolic and the MLflow team. Check out more information at hyperbolic.ai and MLflow.org.

    Why AI Coding Agents Are Moving to the Cloud — With Zach Lloyd, CEO of Warp

    Zach Lloyd is the founder and CEO of Warp, the AI-native terminal and agentic development platform trusted by over a million developers. Before Warp, Zach was a product lead at Google on Google Docs — giving him a uniquely deep intuition for what it means to build truly collaborative developer tools at scale.

    Why Agents are Driving Software Development to the Cloud // MLOps Podcast #371 with Zach Lloyd, CEO of Warp

    What we cover:
    🏗️ Why agents belong in the cloud, not local sandboxes — Zach breaks down why the "set up a local dev box for your agent" approach is fundamentally flawed and what cloud-native agent execution actually looks like in practice.
    🚀 GitHub is losing collaborative code review — One of the episode's sharpest takes: the hero features of GitHub, like collaborative code review, are migrating into agent workbenches. Zach explains why this shift is structural, not cyclical.
    📱 "Just-in-time apps" are replacing SaaS — The era of long-lived, learn-to-use-it software may be ending. Zach argues that agents will generate ephemeral, purpose-built interfaces on demand — and why most current app categories are at risk.
    🤖 Introducing Oz — Warp's cloud orchestration platform — A first look at how Oz works, how Demetrios is already using it to automate podcast production, and what multi-agent orchestration looks like in a real team environment.
    👁️ Agent observability and why it matters — Debugging, compliance, context management, and handoff/steering: Zach outlines the pillars every engineering team needs before trusting agents with production work.
    🔐 Agent chaos is real — access control for AI — Why giving agents too much context is just as dangerous as giving them too little, and how Warp thinks about scoped agent permissions as you scale.
    📦 SaaS for agents will look nothing like SaaS for humans — The 25-year investment in human-friendly UI is irrelevant for agents. Zach explains what the new infrastructure layer for AI workers will actually need.
    ⚡ Open-weight models will commoditize the coding agent space — With Nvidia investing $2B in open-weight models, Zach believes the current cost advantage that frontier labs hold is temporary — and how Warp is positioning for that world.
    🧩 Multi-agent orchestration patterns — Parallel agents, agent-to-agent handoffs, and why there's no single "right" pattern yet. Warp's Oz platform is being built for flexibility, not prescription.

    This episode is essential for engineering leaders, platform engineers, and any developer trying to understand where their daily workflow is headed in the next 18 months.

    🔗 Links & Resources:
    Warp: https://www.warp.dev
    Warp Oz platform: https://oz.dev
    Zach Lloyd on X/Twitter: https://x.com/zachlloyd
    MLOps Community: https://mlops.community
    MLOps Community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps [00:00] Agentic Coding Review Shift [00:29] Warp Collaboration vs Sandboxes [05:22] Continuous Co-Creation in Teams [07:00] Hyperbolic's GPU Cloud [07:56] Skill Governance Framework [14:41] Agents vs Browsers Analogy [21:31] PR Provenance in Warp [27:58] Agent System Commandments [37:44] Harness vs ADE [42:03] Adversarial Review Technique [45:26] GitHub Limitations for Agents [49:07] MLflow's GenAI [50:06] Wrap up

    51 min
  8. The Modern Software Engineer

    14 APR

    The Modern Software Engineer

    This episode is brought to you by the MLflow team. Check out more information at MLflow.org.

    Mihail Eric is Head of AI at Monaco and Adjunct Lecturer at Stanford University, where he teaches CS146S: "The Modern Software Developer" — the first course in the world dedicated to how AI is transforming every stage of the software development lifecycle. With 12+ years building production AI systems at Amazon Alexa, Storia AI (YC S24), and early-stage startups, Mihail has one of the most grounded, practitioner-level takes on what it actually means to be a software engineer in 2026.

    The Modern Software Engineer // MLOps Podcast #370 with Mihail Eric, Head of AI at Monaco

    🧠 What the modern software engineer actually looks like — why the job description has fundamentally shifted from writing code to designing systems and directing agents
    ⚙️ Agents require more thinking, not less — why the engineers getting the most out of coding agents are the ones who invest the most upfront in architecture, planning, and codebase structure
    🎓 Inside Stanford's "Modern Software Developer" course — what Mihail teaches in the first CS course in the world focused entirely on AI-transformed software development
    🏗️ From writing code to designing systems — how the best developers are repositioning themselves as architects of agentic workflows rather than line-by-line coders
    🔁 The Build System: how to run agents at scale — practical lessons from building multi-agent pipelines, parallel subagent batches, and automated retrospectives
    📉 What junior engineers should actually focus on — the skills that remain irreplaceable and the paths that still produce strong software engineers in an AI-first world
    🚀 Building Monaco's AI-native revenue engine — what it's like building AI infrastructure for a fast-moving $35M-funded startup disrupting enterprise CRM
    🎯 How to ace AI engineering interviews — Mihail's framework for demonstrating real AI engineering competence beyond prompt engineering basics

    Essential watching for software engineers, ML practitioners, and engineering managers who want an honest, practitioner-level view of where the profession is going — from someone who's both teaching it at Stanford and building it in production.

    🔗 Links & Resources
    Mihail Eric on LinkedIn: https://www.linkedin.com/in/mihaileric/
    Mihail's website: https://www.mihaileric.com
    Stanford course "The Modern Software Developer": https://themodernsoftware.dev/
    Maven course — AI Software Development: From First Prompt to Production Code: https://maven.com/the-modern-software-developer/ai-course
    Free AI Engineer interview prep course: https://course.aiengineermastery.com/
    Monaco (AI-native revenue engine): https://monaco.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps [00:00] Intro — Mihail Eric & Monaco [04:00] What has actually changed for software engineers in 2026 [09:00] Inside Stanford's "Modern Software Developer" course [15:00] Why agents require more human thinking, not less [21:00] From writing code to designing systems — the architect mindset [27:00] The Build System: running agents at scale in production [33:00] What junior engineers should focus on right now [39:00] Building AI infrastructure at Monaco [44:00] How to demonstrate real AI engineering competence [49:00] Skills that will remain irreplaceable [52:00] Rapid fire / closing thoughts

    54 min

About

Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)
