MLOps.community

Demetrios

Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)

  1. It's 2026, and We're Still Talking Evals

    1 DAY AGO

    It's 2026, and We're Still Talking Evals

    Maggie Konstanty is an AI Product Manager at Prosus, one of the world's largest consumer internet companies, where she builds and evaluates AI agents for food ordering and ecommerce at scale. She's been inside the messy reality of LLM evaluation longer than most — and her take is unfiltered.

    It's 2026, and We're Still Talking Evals // MLOps Podcast #372 with Maggie Konstanty, AI Product Manager at Prosus

    🧪 Why accuracy metrics lie — Maggie breaks down why "95% accurate" tells you almost nothing about whether your agent is actually working in the real world, and what to measure instead.
    🏗️ Pre-ship vs. production evals — Your eval suite before launch will not survive first contact with real users. Maggie explains the structural disconnect and how to close the gap.
    👻 The silent failure: user drop-off — Users who are unhappy don't complain — they just leave. Discover why drop-off analytics are one of the most underutilized eval signals in production.
    🎯 Instruction to fail: the 20-evaluator trap — Setting up 20 types of evaluators not connected to your product goal is a fast path to wasted time. How to design evals that are tied to real outcomes.
    🍽️ The "surprise me" edge case — A real example from Prosus's food ordering agent and what it reveals about how users actually behave vs. how PMs imagine they do.
    🤖 LLM-as-a-judge: the limits — Why Maggie doesn't lean on LLM-as-a-judge for accuracy measurement, and what approaches she uses instead for production-grade evaluation.
    🛠️ Arize/Phoenix & eval tooling critique — A candid take on the current state of eval platforms, why she spent a whole day fighting the UI, and why mature teams often go back to custom code.
    🧬 Eval as team DNA — Evals aren't a launch checklist. Maggie makes the case that they need to be a constant practice embedded in team culture — and why alignment on "what good looks like" is harder than any technical implementation.
    🔢 When to stop optimizing — What happens when your eval score approaches 100%, and how to know when it's time to shift focus to a different metric or flow.
    💬 Red teaming with incentives — A fun tactic: running adversarial eval sessions where engineers compete to break your agent for an Amazon gift card.

    This is required watching for AI PMs, ML engineers, and applied AI teams who have moved past "getting evals set up" and are now struggling with making them actually matter.

    🔗 Links & Resources
    Maggie Konstanty on LinkedIn: https://www.linkedin.com/in/maggie-konstanty
    Prosus: https://www.prosus.com
    MLOps.community: https://mlops.community
    Arize AI / Phoenix (mentioned): https://arize.com / https://phoenix.arize.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    [00:00] Evaluations and User Alignment
    [00:18] Eval Lifecycle in Production
    [06:05] LLM Accuracy and Judging
    [15:30] Evals vs Tests in AI
    [22:39] Profanity as Frustration Signal
    [29:23] Impact-Weighted Performance
    [32:22] Eval Tooling Pros and Cons
    [38:10] Build vs Buy Dilemma
    [39:35] Wrap up

    41 min
  2. Why Agents are Driving Software Development to the Cloud

    5 DAYS AGO

    Why Agents are Driving Software Development to the Cloud

    This episode is brought to you by Hyperbolic and the MLflow team. Check out more information at hyperbolic.ai and MLflow.org.

    Why AI Coding Agents Are Moving to the Cloud — With Zach Lloyd, CEO of Warp

    Zach Lloyd is the founder and CEO of Warp, the AI-native terminal and agentic development platform trusted by over a million developers. Before Warp, Zach was a product lead at Google on Google Docs — giving him a uniquely deep intuition for what it means to build truly collaborative developer tools at scale.

    Why Agents are Driving Software Development to the Cloud // MLOps Podcast #371 with Zach Lloyd, CEO of Warp

    What we cover:
    🏗️ Why agents belong in the cloud, not local sandboxes — Zach breaks down why the "set up a local dev box for your agent" approach is fundamentally flawed and what cloud-native agent execution actually looks like in practice.
    🚀 GitHub is losing collaborative code review — One of the episode's sharpest takes: the hero features of GitHub, like collaborative code review, are migrating into agent workbenches. Zach explains why this shift is structural, not cyclical.
    📱 "Just-in-time apps" are replacing SaaS — The era of long-lived, learn-to-use-it software may be ending. Zach argues that agents will generate ephemeral, purpose-built interfaces on demand — and why most current app categories are at risk.
    🤖 Introducing Oz — Warp's cloud orchestration platform — A first look at how Oz works, how Demetrios is already using it to automate podcast production, and what multi-agent orchestration looks like in a real team environment.
    👁️ Agent observability and why it matters — Debugging, compliance, context management, and handoff/steering: Zach outlines the pillars every engineering team needs before trusting agents with production work.
    🔐 Agent chaos is real — access control for AI — Why giving agents too much context is just as dangerous as giving them too little, and how Warp thinks about scoped agent permissions as you scale.
    📦 SaaS for agents will look nothing like SaaS for humans — The 25-year investment in human-friendly UI is irrelevant for agents. Zach explains what the new infrastructure layer for AI workers will actually need.
    ⚡ Open-weight models will commoditize the coding agent space — With Nvidia investing $2B in open-weight models, Zach believes the current cost advantage that frontier labs hold is temporary — and how Warp is positioning for that world.
    🧩 Multi-agent orchestration patterns — Parallel agents, agent-to-agent handoffs, and why there's no single "right" pattern yet. Warp's Oz platform is being built for flexibility, not prescription.

    This episode is essential for engineering leaders, platform engineers, and any developer trying to understand where their daily workflow is headed in the next 18 months.

    🔗 Links & Resources
    Warp: https://www.warp.dev
    Warp Oz platform: https://oz.dev
    Zach Lloyd on X/Twitter: https://x.com/zachlloyd
    MLOps Community: https://mlops.community
    MLOps Community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    [00:00] Agentic Coding Review Shift
    [00:29] Warp Collaboration vs Sandboxes
    [05:22] Continuous Co-Creation in Teams
    [07:00] Hyperbolic's GPU Cloud
    [07:56] Skill Governance Framework
    [14:41] Agents vs Browsers Analogy
    [21:31] PR Provenance in Warp
    [27:58] Agent System Commandments
    [37:44] Harness vs ADE
    [42:03] Adversarial Review Technique
    [45:26] GitHub Limitations for Agents
    [49:07] MLflow's GenAI
    [50:06] Wrap up

    51 min
  3. The Modern Software Engineer

    APR 14

    The Modern Software Engineer

    This episode is brought to you by the MLflow team. Check out more information at MLflow.org.

    Mihail Eric is Head of AI at Monaco and Adjunct Lecturer at Stanford University, where he teaches CS146S: "The Modern Software Developer" — the first course in the world dedicated to how AI is transforming every stage of the software development lifecycle. With 12+ years building production AI systems at Amazon Alexa, Storia AI (YC S24), and early-stage startups, Mihail has one of the most grounded, practitioner-level takes on what it actually means to be a software engineer in 2026.

    The Modern Software Engineer // MLOps Podcast #370 with Mihail Eric, Head of AI at Monaco

    🧠 What the modern software engineer actually looks like — why the job description has fundamentally shifted from writing code to designing systems and directing agents
    ⚙️ Agents require more thinking, not less — why the engineers getting the most out of coding agents are the ones who invest the most upfront in architecture, planning, and codebase structure
    🎓 Inside Stanford's "Modern Software Developer" course — what Mihail teaches in the first CS course in the world focused entirely on AI-transformed software development
    🏗️ From writing code to designing systems — how the best developers are repositioning themselves as architects of agentic workflows rather than line-by-line coders
    🔁 The Build System: how to run agents at scale — practical lessons from building multi-agent pipelines, parallel subagent batches, and automated retrospectives
    📉 What junior engineers should actually focus on — the skills that remain irreplaceable and the paths that still produce strong software engineers in an AI-first world
    🚀 Building Monaco's AI-native revenue engine — what it's like building AI infrastructure for a fast-moving $35M-funded startup disrupting enterprise CRM
    🎯 How to ace AI engineering interviews — Mihail's framework for demonstrating real AI engineering competence beyond prompt engineering basics

    Essential watching for software engineers, ML practitioners, and engineering managers who want an honest, practitioner-level view of where the profession is going — from someone who's both teaching it at Stanford and building it in production.

    🔗 Links & Resources
    Mihail Eric on LinkedIn: https://www.linkedin.com/in/mihaileric/
    Mihail's website: https://www.mihaileric.com
    Stanford course "The Modern Software Developer": https://themodernsoftware.dev/
    Maven course — AI Software Development: From First Prompt to Production Code: https://maven.com/the-modern-software-developer/ai-course
    Free AI Engineer interview prep course: https://course.aiengineermastery.com/
    Monaco (AI-native revenue engine): https://monaco.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    [00:00] Intro — Mihail Eric & Monaco
    [04:00] What has actually changed for software engineers in 2026
    [09:00] Inside Stanford's "Modern Software Developer" course
    [15:00] Why agents require more human thinking, not less
    [21:00] From writing code to designing systems — the architect mindset
    [27:00] The Build System: running agents at scale in production
    [33:00] What junior engineers should focus on right now
    [39:00] Building AI infrastructure at Monaco
    [44:00] How to demonstrate real AI engineering competence
    [49:00] Skills that will remain irreplaceable
    [52:00] Rapid fire / closing thoughts

    54 min
  4. We Cut LLM Latency by 70% in Production

    APR 10

    We Cut LLM Latency by 70% in Production

    Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale — managing GPU costs, optimizing inference with TensorRT-LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof-of-concept to production.

    How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks

    Key topics covered:
    The AI Iceberg — Why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves
    GPU Cost Optimization — How upgrading to more expensive GPUs actually saved money by reducing total runtime hours
    TensorRT-LLM Deep Dive — Rewiring neural networks to match GPU architecture for 50-70% latency reduction
    Cold Start Solutions — Using AWS FSx, baking models into container images, and cutting minutes off spin-up times
    KV Cache & In-Flight Batching — Why using one model per GPU with maximum KV cache beats cramming multiple models together
    Scheduled & Dynamic Scaling — Pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes)
    Verticalized AI Platform — Building horizontal AI infrastructure that serves multiple HR product verticals
    AI Engineering Lab — How junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed
    Agentic Coding in Practice — Navigating AI coding agent costs, quality control, and redefining the SDLC
    Chinese Models & Compliance — Why enterprise customers block DeepSeek/Qwen and the geopolitics of model training data

    This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.

    Links & Resources:
    TensorRT-LLM: https://github.com/NVIDIA/TensorRT-LLM
    NVIDIA Run:ai Model Streamer (cold start optimization): https://developer.nvidia.com/blog/reducing-cold-start-latency-for-llm-inference-with-nvidia-runai-model-streamer/
    vLLM vs TensorRT-LLM comparison: https://northflank.com/blog/vllm-vs-tensorrt-llm-and-how-to-run-them

    Timestamps:
    [00:00] Optimizing GPU Usage and Latency
    [00:21] Learning AI as Leadership
    [04:34] AI Cost Centers
    [13:56] Throughput and Infrastructure Efficiency
    [18:10] Scaling and Unit Economics
    [24:14] Championing AI ROI
    [36:11] Queue to Value Engine
    [41:30] Failed Product Features
    [46:12] Agentic Engineering Costs
    [58:49] AI Self-Hosting in Engineering
    [1:04:40] Wrap up

    1h 5min
  5. Getting Humans Out of the Way: How to Work with Teams of Agents

    APR 7

    Getting Humans Out of the Way: How to Work with Teams of Agents

    Rob Ennals is the creator of Broomy, an open-source IDE for working effectively with many coding agents in parallel.

    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Most people cripple coding agents by micromanaging them — reviewing every step and becoming the bottleneck. The shift isn't to better supervise agents, but to design systems where they work well on their own: parallelized, self-validating, and guided by strong processes. Done right, you don't lose control — you gain leverage. Like paving roads for cars, the real unlock is reshaping the environment so AI can move fast.

    // Bio
    Rob Ennals is the creator of Broomy, an open-source IDE designed for working effectively with many agents in parallel. He previously worked at Meta, Quora, Google Search, and Intel Research. He has a PhD in Computer Science from the University of Cambridge.

    // Related Links
    Website: https://robennals.org/
    https://broomy.org/
    https://learnai.robennals.org/ (not yet announced, but should be by the time of the podcast)

    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity
    Follow us on LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Rob on LinkedIn: /robennals/

    Timestamps:
    [00:00] Agent Optimization Strategies
    [00:21] Visual Regression Explanation
    [05:35] Automated QA for Videos
    [13:05] Verification System Design
    [19:48] Agent Selection Strategies
    [30:48] Parallel Agent Management
    [35:30] Containerization and Cost Estimation
    [42:48] Shifting to Agent Orchestration
    [50:10] Wrap up

    51 min
  6. Fixing GPU Starvation in Large-Scale Distributed Training

    APR 3

    Fixing GPU Starvation in Large-Scale Distributed Training

    Kashish Mittal is a Staff Software Engineer at Uber, working on large-scale distributed systems and core backend infrastructure.

    Fixing GPU Starvation in Large-Scale Distributed Training // MLOps Podcast #367 with Kashish Mittal, Staff Software Engineer at Uber

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Kashish zooms out to discuss a universal industry pattern: how infrastructure — specifically data loading — is almost always the hidden constraint for ML scaling. The conversation dives deep into a recent architectural war story. Kashish walks through the full-stack profiling and detective work required to solve a massive GPU starvation bottleneck. By redesigning the Petastorm caching layer to bypass CPU transformation walls and uncovering hidden distributed race conditions, his team boosted GPU utilization to 60%+ and cut training time by 80%. Kashish also shares his philosophy on the fundamental trade-offs between latency and efficiency in GPU serving.

    // Bio
    Kashish Mittal is a Staff Software Engineer at Uber, where he architects the hyperscale machine learning infrastructure that powers Uber's core mobility and delivery marketplaces. Prior to Uber, Kashish spent nearly a decade at Google building highly scalable, low-latency distributed ML systems for flagship products, including YouTube Ads and Core Search Ranking. His engineering expertise lies at the intersection of distributed systems and AI — specifically focusing on large-scale data processing, eliminating critical I/O bottlenecks, and maximizing GPU efficiency for petabyte-scale training pipelines. When he isn't hunting down distributed race conditions, he is a passionate advocate for open-source architecture and building reproducible, high-throughput ML systems.

    // Related Links
    Website: https://www.uber.com/
    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy: https://www.youtube.com/watch?v=ie1M8p-SVfM

    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity
    Follow us on LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Kashish on LinkedIn: /kashishmittal/

    Timestamps:
    [00:00] Local Dataset Caching
    [00:30] Engineers' Evolving Roles
    [04:44] GPU Resource Management
    [10:21] GPU Utilization Issues
    [21:49] More GPU War Stories
    [32:12] Model Serving Issues
    [39:58] Reflective Learning in Coding
    [43:23] Workflow and Reflective Skills
    [52:30] Wrap up

    53 min
  7. Spec Driven Development, Workflows, and the Recent Coding Agent Conference

    MAR 31

    Spec Driven Development, Workflows, and the Recent Coding Agent Conference

    Jens Bodal is a Senior Software Engineer II working independently, focusing on backend systems, software architecture, and building scalable solutions across client projects.

    This One Shift Makes Developers Obsolete // MLOps Podcast #366 with Jens Bodal, Senior Software Engineer II, Independent

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    AI agents are shifting the role of developers from writing code to defining intent. This conversation explores why specs are becoming more important than implementation, what breaks in real-world systems, and how engineering teams need to rethink workflows in an agent-driven world.

    // Bio
    Jens Bodal is a senior software engineer based in Edmonds, Washington, with nine years of experience building developer tooling, internal platforms, and web infrastructure. He spent seven years as an SDE II at Amazon, working on teams including Amazon Games Studio and the AWS Events Management Platform. His work has focused on developer tooling, CI/CD systems, testing infrastructure, and improving the developer experience for teams operating production services. He is particularly interested in developer experience and the growing ecosystem of local tools that help engineers build and run AI systems on infrastructure they control.

    // Related Links
    Website: https://bodal.dev
    https://github.com/jensbodal
    https://www.youtube.com/watch?v=Yp7LYdbOuwE

    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity
    Follow us on LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Jens on LinkedIn: /jensbodal

    Timestamps:
    [00:00] Specification vs Code
    [00:25] Conference Realizations and Insights
    [09:01] Agents and Orchestration Insights
    [10:39] Coding Agents and Talent
    [18:10] Sub-agent Design Concepts
    [25:18] Evaling on Vibes
    [33:23] Walled Garden and Proxies
    [41:48] Spec-Driven Development Limitations
    [46:56] Code Ownership vs Authorship
    [50:49] Engineering Ownership and PMs
    [53:47] Skill Creation and Iteration
    [58:40] Wrap up

    59 min
  8. Operationalizing AI Agents: From Experimentation to Production // Databricks Roundtable

    MAR 30

    Operationalizing AI Agents: From Experimentation to Production // Databricks Roundtable

    Databricks Roundtable episode: Operationalizing AI Agents: From Experimentation to Production.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    Big shout-out to Databricks for the collaboration!

    // Abstract
    This panel discusses the real-world challenges of deploying AI agents at scale. The conversation explores technical and operational barriers that slow production adoption, including reliability, cost, governance, and security. The panelists also examine how LLMOps, AIOps, and AgentOps differ from traditional MLOps, and why new approaches are required for generative and agent-based systems. Finally, experts define success criteria for GenAI frameworks, with a focus on robust evaluation, observability, and continuous monitoring across development and staging environments.

    // Bio
    Samraj Moorjani
    Samraj is a software engineer working on the Agent Quality team. Previously, Samraj worked at Meta on ads/product classification research and AppLovin on MLOps. Samraj graduated with a BS+MS in Computer Science from UIUC, advised by Professor Hari Sundaram, where he worked on controllable natural language generation to produce appealing, interpretable science to combat the spread of misinformation. He also worked with Professor Wen-mei Hwu on accelerating LLM inference through extreme sparsification.

    Apurva Misra
    Apurva is an AI Consultant at Sentick, focusing on assisting startups with their AI strategy and building solutions. She leverages her extensive experience in machine learning and a Master's degree from the University of Waterloo, where her research bridged driving and machine learning, to offer valuable insights. Apurva's keen interest in the startup world fuels her passion for helping emerging companies incorporate AI effectively. In her free time, she is learning Spanish, and she also enjoys exploring hidden gem eateries, always eager to hear about new favourite spots!

    Ben Epstein
    Ben was the machine learning lead for Splice Machine, leading the development of their MLOps platform and Feature Store. He is now the Co-founder and CTO at GrottoAI, focused on supercharging multifamily teams and reducing vacancy loss with AI-powered guidance for leasing and renewals. Ben also works as an adjunct professor at Washington University in St. Louis, teaching concepts in cloud computing and big data analytics.

    Hosted by Adam Becker

    // Related Links
    Website: https://www.databricks.com/
    https://mlflow.org/

    ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity
    Follow us on LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Samraj on LinkedIn: /samrajmoorjani/
    Connect with Apurva on LinkedIn: /apurva-misra/
    Connect with Ben on LinkedIn: /ben-epstein/
    Connect with Adam on LinkedIn: /adamissimo/

    Timestamps:
    [00:00] Introduction
    [02:30] AI Agents in Operations
    [04:36] AI Strategy Consulting
    [05:30] Agent Quality Focus
    [06:17] AI Agent Expectations
    [11:44] AI Use Cases Evolution
    [15:25] Agent Expectations Adjustment
    [17:41] Agent Quality Monitoring
    [23:22] Trust in GenAI Systems
    [33:33] Data Prep vs Product Thinking
    [40:27] Quality Systems Distinction
    [44:54] Q&A
    [1:00:57] Wrap up

    1h 1min
4.6 out of 5 (24 ratings)

