MLOps.community

Demetrios

Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)

  1. How We Cut LLM Latency 70% With TensorRT in Production

    6 HRS AGO

    How We Cut LLM Latency 70% With TensorRT in Production

    Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale — managing GPU costs, optimizing inference with TensorRT LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof-of-concept to production.

    How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks

    Key topics covered:
    - The AI Iceberg — Why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves
    - GPU Cost Optimization — How upgrading to more expensive GPUs actually saved money by reducing total runtime hours
    - TensorRT LLM Deep Dive — Rewiring neural networks to match GPU architecture for a 50-70% latency reduction
    - Cold Start Solutions — Using AWS FSx, baking models into container images, and cutting minutes off spin-up times
    - KV Cache & In-Flight Batching — Why running one model per GPU with maximum KV cache beats cramming multiple models together
    - Scheduled & Dynamic Scaling — Pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes)
    - Verticalized AI Platform — Building horizontal AI infrastructure that serves multiple HR product verticals
    - AI Engineering Lab — How junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed
    - Agentic Coding in Practice — Navigating AI coding agent costs, quality control, and redefining the SDLC
    - Chinese Models & Compliance — Why enterprise customers block DeepSeek/Qwen and the geopolitics of model training data

    This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.

    Links & Resources:
    - TensorRT LLM: https://github.com/NVIDIA/TensorRT-LLM
    - NVIDIA Run:ai Model Streamer (cold start optimization): https://developer.nvidia.com/blog/reducing-cold-start-latency-for-llm-inference-with-nvidia-runai-model-streamer/
    - vLLM vs TensorRT-LLM comparison: https://northflank.com/blog/vllm-vs-tensorrt-llm-and-how-to-run-them

    Timestamps:
    0:00 — Intro & teaser clips
    1:00 — Maher's journey from traditional engineering to AI leadership
    4:30 — The AI iceberg: cost, performance, latency, throughput, accuracy
    8:00 — Managing AI coding agent costs & premium token budgets
    12:00 — GPU scaling strategies: scheduled, dynamic, and proactive
    16:00 — Cold start problem: FSx, baked images, and container optimization
    20:00 — TensorRT LLM: 50-70% latency reduction explained
    25:00 — KV cache, in-flight batching, and throughput optimization
    30:00 — The counterintuitive math: bigger GPUs = lower cost
    35:00 — Verticalized AI products for HR tech
    40:00 — Building a horizontal AI platform with preprocessing layers
    45:00 — AI feedback polishing: the feature that needed guardrails
    50:00 — AI Engineering Lab: adoption curves by seniority
    55:00 — Redefining the SDLC for AI-assisted development
    60:00 — Self-hosting coding agents & leveraging internal AI platform
    63:00 — Chinese models, compliance, and training data bias
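The episode's "bigger GPUs = lower cost" point comes down to billable hours: a pricier GPU that clears the same workload faster can produce a smaller total bill. A toy sketch of that arithmetic, with hypothetical rates and throughputs (none of these numbers are from the episode):

```python
# Toy cost comparison: a pricier GPU that finishes the same batch of work
# faster can have a lower total bill, because cost = hourly rate x hours.
# All rates and throughputs below are hypothetical.

def total_cost(hourly_rate: float, tokens: int, tokens_per_hour: float) -> float:
    """Bill for processing `tokens` at a given throughput."""
    hours = tokens / tokens_per_hour
    return hourly_rate * hours

WORKLOAD = 1_000_000_000  # tokens to process

# Hypothetical instances: the bigger GPU costs 2.4x more per hour
# but is 3x faster, so the same job finishes cheaper overall.
small = total_cost(hourly_rate=1.0, tokens=WORKLOAD, tokens_per_hour=50_000_000)
big   = total_cost(hourly_rate=2.4, tokens=WORKLOAD, tokens_per_hour=150_000_000)

print(f"small GPU: ${small:.2f}")  # 20 hours at $1.00/h
print(f"big GPU:   ${big:.2f}")    # ~6.7 hours at $2.40/h
```

The same trade-off shows up for inference fleets: fewer, faster instances can undercut a larger pool of cheap ones once total runtime hours are counted.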

    1 h 5 min
  2. Getting Humans Out of the Way: How to Work with Teams of Agents

    3 DAYS AGO

    Getting Humans Out of the Way: How to Work with Teams of Agents

    Rob Ennals is the creator of Broomy, an open-source IDE for working effectively with many coding agents in parallel; he previously worked at Meta, Quora, Google Search, and Intel Research.

    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Most people cripple coding agents by micromanaging them — reviewing every step and becoming the bottleneck. The shift isn’t to supervise agents better, but to design systems where they work well on their own: parallelized, self-validating, and guided by strong processes. Done right, you don’t lose control — you gain leverage. Like paving roads for cars, the real unlock is reshaping the environment so AI can move fast.

    // Bio
    Rob Ennals is the creator of Broomy, an open-source IDE designed for working effectively with many agents in parallel. He previously worked at Meta, Quora, Google Search, and Intel Research. He has a PhD in Computer Science from the University of Cambridge.

    // Related Links
    Website: https://robennals.org/
    https://broomy.org/
    https://learnai.robennals.org/ (not yet announced, but should be by the time of the podcast)

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Rob on LinkedIn: /robennals/

    Timestamps:
    [00:00] Agent Optimization Strategies
    [00:21] Visual Regression Explanation
    [05:35] Automated QA for Videos
    [13:05] Verification System Design
    [19:48] Agent Selection Strategies
    [30:48] Parallel Agent Management
    [35:30] Containerization and Cost Estimation
    [42:48] Shifting to Agent Orchestration
    [50:10] Wrap up

    51 min
  3. Fixing GPU Starvation in Large-Scale Distributed Training

    APR 3

    Fixing GPU Starvation in Large-Scale Distributed Training

    Kashish Mittal is a Staff Software Engineer at Uber, working on large-scale distributed systems and core backend infrastructure.

    Fixing GPU Starvation in Large-Scale Distributed Training // MLOps Podcast #367 with Kashish Mittal, Staff Software Engineer at Uber

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Kashish zooms out to discuss a universal industry pattern: how infrastructure — specifically data loading — is almost always the hidden constraint for ML scaling. The conversation dives deep into a recent architectural war story. Kashish walks through the full-stack profiling and detective work required to solve a massive GPU starvation bottleneck. By redesigning the Petastorm caching layer to bypass CPU transformation walls and uncovering hidden distributed race conditions, his team boosted GPU utilization to 60%+ and cut training time by 80%. Kashish also shares his philosophy on the fundamental trade-offs between latency and efficiency in GPU serving.

    // Bio
    Kashish Mittal is a Staff Software Engineer at Uber, where he architects the hyperscale machine learning infrastructure that powers Uber’s core mobility and delivery marketplaces. Prior to Uber, Kashish spent nearly a decade at Google building highly scalable, low-latency distributed ML systems for flagship products, including YouTube Ads and Core Search Ranking. His engineering expertise lies at the intersection of distributed systems and AI — specifically focusing on large-scale data processing, eliminating critical I/O bottlenecks, and maximizing GPU efficiency for petabyte-scale training pipelines. When he isn't hunting down distributed race conditions, he is a passionate advocate for open-source architecture and building reproducible, high-throughput ML systems.

    // Related Links
    Website: https://www.uber.com/
    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy: https://www.youtube.com/watch?v=ie1M8p-SVfM

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Kashish on LinkedIn: /kashishmittal/

    Timestamps:
    [00:00] Local dataset caching
    [00:30] Engineers Evolving Roles
    [04:44] GPU Resource Management
    [10:21] GPU Utilization Issues
    [21:49] More GPU War Stories
    [32:12] Model Serving Issues
    [39:58] Reflective Learning in Coding
    [43:23] Workflow and Reflective Skills
    [52:30] Wrap up
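The GPU starvation pattern discussed here is usually attacked by overlapping data loading with compute, so the accelerator never idles waiting on I/O. A pure-Python toy of that prefetch pattern (a stand-in for a real framework data loader with worker processes; all names and timings are illustrative, not from Uber's stack):

```python
# Minimal sketch of the fix for GPU starvation: prefetch batches on a
# background thread into a bounded buffer, so loading overlaps with the
# compute step instead of serializing with it.
import queue
import threading
import time

def load_batch(i: int) -> list[int]:
    time.sleep(0.01)  # simulate slow I/O + CPU preprocessing
    return [i] * 4

def prefetcher(n_batches: int, buf: queue.Queue) -> None:
    # Producer: keeps up to `buf.maxsize` batches ready ahead of compute.
    for i in range(n_batches):
        buf.put(load_batch(i))
    buf.put(None)  # sentinel: no more data

def train(n_batches: int = 8) -> list[int]:
    buf: queue.Queue = queue.Queue(maxsize=2)
    threading.Thread(target=prefetcher, args=(n_batches, buf), daemon=True).start()
    seen = []
    while (batch := buf.get()) is not None:
        time.sleep(0.01)       # simulate the GPU step; loading overlaps with this
        seen.append(batch[0])
    return seen

print(train())  # batches arrive in order: [0, 1, 2, 3, 4, 5, 6, 7]
```

With the buffer in place, per-step load time hides behind compute time; without it, the two latencies add up, which is exactly the utilization gap profiling tends to reveal.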

    53 min
  4. Spec Driven Development, Workflows, and the Recent Coding Agent Conference

    MAR 31

    Spec Driven Development, Workflows, and the Recent Coding Agent Conference

    Jens Bodal is a Senior Software Engineer II working independently, focusing on backend systems, software architecture, and building scalable solutions across client projects.

    This One Shift Makes Developers Obsolete // MLOps Podcast #366 with Jens Bodal, Senior Software Engineer II, Independent

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    AI agents are shifting the role of developers from writing code to defining intent. This conversation explores why specs are becoming more important than implementation, what breaks in real-world systems, and how engineering teams need to rethink workflows in an agent-driven world.

    // Bio
    Jens Bodal is a senior software engineer based in Edmonds, Washington, with nine years of experience building developer tooling, internal platforms, and web infrastructure. He spent seven years as an SDE II at Amazon, working on teams including Amazon Games Studio and the AWS Events Management Platform. His work has focused on developer tooling, CI/CD systems, testing infrastructure, and improving the developer experience for teams operating production services. He is particularly interested in developer experience and the growing ecosystem of local tools that help engineers build and run AI systems on infrastructure they control.

    // Related Links
    Website: https://bodal.dev
    https://github.com/jensbodal
    https://www.youtube.com/watch?v=Yp7LYdbOuwE

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Jens on LinkedIn: /jensbodal

    Timestamps:
    [00:00] Specification vs Code
    [00:25] Conference Realizations and Insights
    [09:01] Agents and Orchestration Insights
    [10:39] Coding Agents and Talent
    [18:10] Sub-agent Design Concepts
    [25:18] Evaling on Vibes
    [33:23] Walled Garden and Proxies
    [41:48] Spec-Driven Development Limitations
    [46:56] Code Ownership vs Authorship
    [50:49] Engineering Ownership and PMs
    [53:47] Skill Creation and Iteration
    [58:40] Wrap up

    59 min
  5. Operationalizing AI Agents: From Experimentation to Production // Databricks Roundtable

    MAR 30

    Operationalizing AI Agents: From Experimentation to Production // Databricks Roundtable

    Databricks Roundtable episode: Operationalizing AI Agents: From Experimentation to Production.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    Big shout-out to Databricks for the collaboration!

    // Abstract
    This panel discusses the real-world challenges of deploying AI agents at scale. The conversation explores technical and operational barriers that slow production adoption, including reliability, cost, governance, and security. The panelists also examine how LLMOps, AIOps, and AgentOps differ from traditional MLOps, and why new approaches are required for generative and agent-based systems. Finally, experts define success criteria for GenAI frameworks, with a focus on robust evaluation, observability, and continuous monitoring across development and staging environments.

    // Bio
    Samraj Moorjani
    Samraj is a software engineer working on the Agent Quality team. Previously, Samraj worked at Meta on ads/product classification research and AppLovin on MLOps. Samraj graduated with a BS+MS in Computer Science from UIUC, advised by Professor Hari Sundaram, where he worked on controllable natural language generation to produce appealing, interpretable science to combat the spread of misinformation. He also worked with Professor Wen-mei Hwu on accelerating LLM inference through extreme sparsification.

    Apurva Misra
    Apurva is an AI Consultant at Sentick, focusing on assisting startups with their AI strategy and building solutions. She leverages her extensive experience in machine learning and a Master's degree from the University of Waterloo, where her research bridged driving and machine learning, to offer valuable insights. Apurva's keen interest in the startup world fuels her passion for helping emerging companies incorporate AI effectively. In her free time, she is learning Spanish, and she also enjoys exploring hidden gem eateries, always eager to hear about new favourite spots!

    Ben Epstein
    Ben was the machine learning lead for Splice Machine, leading the development of their MLOps platform and Feature Store. He is now the Co-founder and CTO at GrottoAI, focused on supercharging multifamily teams and reducing vacancy loss with AI-powered guidance for leasing and renewals. Ben also works as an adjunct professor at Washington University in St. Louis, teaching concepts in cloud computing and big data analytics.

    Hosted by Adam Becker

    // Related Links
    Website: https://www.databricks.com/
    https://mlflow.org/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Samraj on LinkedIn: /samrajmoorjani/
    Connect with Apurva on LinkedIn: /apurva-misra/
    Connect with Ben on LinkedIn: /ben-epstein/
    Connect with Adam on LinkedIn: /adamissimo/

    Timestamps:
    [00:00] Introduction
    [02:30] AI Agents in Operations
    [04:36] AI Strategy Consulting
    [05:30] Agent Quality Focus
    [06:17] AI Agent Expectations
    [11:44] AI Use Cases Evolution
    [15:25] Agent Expectations Adjustment
    [17:41] Agent Quality Monitoring
    [23:22] Trust in GenAI Systems
    [33:33] Data Prep vs Product Thinking
    [40:27] Quality Systems Distinction
    [44:54] Q & A
    [1:00:57] Wrap up

    1 h 1 min
  6. arrowspace: Vector Spaces and Graph Wiring

    MAR 27

    arrowspace: Vector Spaces and Graph Wiring

    Lorenzo Moriondo is a Technical Lead for AI at tuned.org.uk, working on AI agent protocols, graph-based search, and production-grade LLM systems.

    arrowspace: Vector Spaces and Graph Wiring // MLOps Podcast #365 with Lorenzo Moriondo, AI Research and Product Engineer

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Meet arrowspace — an open-source library for curating and understanding LLM datasets across the entire lifecycle, from pre-training to inference. Instead of treating embeddings as static vectors, arrowspace turns them into graphs (“graph wiring”) so you can explore structure, not just similarity. That unlocks smarter RAG search (beyond basic semantic matching), dataset fingerprinting, and deeper insights into how different datasets behave. You can compare datasets, predict how changes will affect performance, detect drift early, and even safely mix data sources while measuring outcomes. In short: arrowspace helps you see your data — and make better decisions because of it.

    // Bio
    With over a decade of experience in software and data engineering across startups and early-stage projects, Lorenzo has recently turned his focus to the AI-assisted movement to automate software and data operations. He has contributed to and founded projects within various open-source communities, including work with Summer of Code, where he focused on the Semantic Web and REST APIs. A strong enthusiast of Python and Rust, he develops tools centered around LLMs and agentic systems. He is a maintainer of the SmartCore ML library, as well as the creator of arrowspace and the Topological Transformer.

    // Related Links
    Website: https://www.tuned.org.uk

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Lorenzo on LinkedIn: /lorenzomoriondo

    Timestamps:
    [00:00] Graph Wiring for ML
    [00:32] RAG and Vector Similarity
    [08:58] Geometric Search Trade-offs
    [13:12] Vector DB Algorithm Integration
    [21:32] Feature-Based Retrieval Shift
    [26:04] Epiplexity and Embeddings
    [31:26] Epiplexity and Embedding Structure
    [40:15] Training vs Post-hoc Models
    [47:16] Discovery-Driven Development
    [51:22] Updating Mental Models
    [53:00] Vector Search vs Agents
    [55:30] Wrap up
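The "graph wiring" idea (treating embeddings as nodes in a similarity graph rather than isolated vectors) can be sketched in a few lines: build a k-nearest-neighbour graph over the vectors so structure, not just pairwise similarity, becomes explicit. This is an illustration of the concept only, not arrowspace's actual API:

```python
# Toy "graph wiring": turn embedding vectors into a k-nearest-neighbour graph
# so who-connects-to-whom is explicit and can be analyzed as structure.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_graph(vectors, k=2):
    """Return an adjacency list: index -> k most similar other indices."""
    graph = {}
    for i, v in enumerate(vectors):
        sims = [(cosine(v, w), j) for j, w in enumerate(vectors) if j != i]
        sims.sort(reverse=True)
        graph[i] = [j for _, j in sims[:k]]
    return graph

embeddings = [
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],   # close to vector 0
    [0.0, 1.0, 0.0],
    [0.0, 0.9, 0.2],   # close to vector 2
]
print(knn_graph(embeddings, k=1))  # {0: [1], 1: [0], 2: [3], 3: [2]}
```

Once the graph exists, the interesting questions become graph questions (connected components, drift in edge structure between dataset versions) rather than one-off nearest-neighbour lookups.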

    56 min
  7. Agentic Marketplace

    MAR 20

    Agentic Marketplace

    Donné Stevenson is a Machine Learning Engineer at Prosus, working on scalable ML infrastructure and productionizing GenAI systems across portfolio companies. Pedro Chaves is a Data Science Manager at OLX Group, working on GenAI-powered search, personalization, and large-scale marketplace recommendations.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Marketplaces are about to get smarter. Agents that find your perfect house, negotiate the best deals, and even talk to other agents on your behalf. Less tedious searching. Less back-and-forth. More time for what matters. Pedro Chaves and Donné Stevenson discuss the future of buying and selling cars, homes, and everything in between - and what it'll take to get there.

    // Bio
    Donné Stevenson
    Focused on building AI-powered products that give companies the tools and expertise needed to harness the power of AI in their respective fields.

    Pedro Chaves
    Pedro is a Data Science Manager at OLX Group, where he leads teams building machine learning solutions to improve marketplace performance, pricing, and user experience at scale.

    // Related Links
    Website: https://www.prosus.com/
    Website: https://www.olxgroup.com/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]

    Timestamps:
    [00:00] OLX: Disrupting Buyer-Seller Experiences
    [03:33] Redefining the Home-Buying Experience
    [07:40] User Feedback and Iterative Rollouts
    [11:25] Beyond Chat: Redefining Agent Use
    [14:03] User Trust and Education Challenges
    [16:47] Learning Curve for Automoto
    [20:05] Interactive Decision-Making with AI
    [24:47] Agents Simplify Buyer-Seller Search
    [28:14] Garage Sale Treasure Hunting
    [33:43] Agent Discovery Layer Needed
    [34:53] Agents Relying on Agents
    [39:48] Reducing Friction in Selling Stuff
    [41:39] Extracting Buyer Intent Systematically
    [44:49] Optimizing Delivery with Lockers
    [50:10] Generative AI Commerce Strategies
    [51:03] Improving Chat Interaction Layer

    51 min
  8. Durable Execution and Modern Distributed Systems

    MAR 17

    Durable Execution and Modern Distributed Systems

    Johann Schleier-Smith is the Technical Lead for AI at Temporal Technologies, working on reliable infrastructure for production AI systems and long-running agent workflows.

    Durable Execution and Modern Distributed Systems, Johann Schleier-Smith // MLOps Podcast #364

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps Merch: https://shop.mlops.community/

    Big shoutout to @Temporalio for the support, and to @trychroma for hosting us in their recording studio.

    // Abstract
    A new paradigm is emerging for building applications that process large volumes of data, run for long periods of time, and interact with their environment. It’s called Durable Execution, and it is replacing traditional data pipelines with a more flexible approach. Durable Execution makes regular code reliable and scalable. In the past, reliability and scalability have come from restricted programming models, like SQL or MapReduce, but with Durable Execution, this is no longer the case. We can now see data pipelines that include document processing workflows, deep research with LLMs, and other complex, LLM-driven agentic patterns expressed at scale with regular Python programs. In this session, we describe Durable Execution and explain how it fits in with agents and LLMs to enable a new class of machine learning applications.

    // Related Links
    https://t.mp/hello?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann
    https://t.mp/vibe?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann
    https://t.mp/career?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Johann on LinkedIn: /jssmith/
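The core mechanic behind Durable Execution is journaling: each completed step's result is persisted, so a restarted program replays finished steps from its history instead of re-running them. A toy sketch of that idea in plain Python (systems like Temporal persist this event history server-side and replay it through the SDK; the dict here is only a stand-in for illustration):

```python
# Toy illustration of the Durable Execution idea: checkpoint each step's
# result, so a re-run (e.g. after a crash and restart) replays completed
# steps from the journal instead of executing them again.
journal: dict[str, object] = {}

def step(name: str, fn, *args):
    # Replay from the journal if this step already ran; otherwise run & record.
    if name not in journal:
        journal[name] = fn(*args)
    return journal[name]

calls = []  # track real executions, to show that replay skips them

def fetch():
    calls.append("fetch")
    return [1, 2, 3]

def summarize(xs):
    calls.append("summarize")
    return sum(xs)

def workflow():
    data = step("fetch", fetch)
    return step("summarize", summarize, data)

print(workflow())   # 6 (both steps execute)
print(workflow())   # 6 (pure replay: no step re-executes)
print(calls)        # ['fetch', 'summarize']
```

This is why ordinary-looking code can safely run for days and survive restarts: the journal, not the process, is the source of truth for what has already happened.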

    1 h 1 min
