MLOps.community

Demetrios Brinkmann
MLOps.community

Weekly talks and fireside chats about everything that has to do with the new space emerging around DevOps for Machine Learning aka MLOps aka Machine Learning Operations.

  1. PyTorch's Combined Effort in Large Model Optimization // Michael Gschwind // #274

    1 DAY AGO

    PyTorch's Combined Effort in Large Model Optimization // Michael Gschwind // #274

    Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU Inference for production services. // MLOps Podcast #274 with Michael Gschwind, Software Engineer, Software Executive at Meta Platforms. // Abstract Explore PyTorch's role in boosting model performance, on-device AI processing, and collaborations with tech giants like ARM and Apple. Michael shares his journey from gaming console accelerators to AI, emphasizing the power of community and innovation in driving advancements. // Bio Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU Inference for production services. He led the development of MultiRay and Textray, the first deployment of LLMs at a scale exceeding a trillion queries per day shortly after its rollout. He created the strategy and led the implementation of PyTorch optimization with Better Transformers and Accelerated Transformers, bringing Flash Attention, PT2 compilation, and ExecuTorch into the mainstream for LLMs and GenAI models. Most recently, he led the enablement of large language models for on-device AI on mobile and edge devices. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://en.m.wikipedia.org/wiki/Michael_Gschwind --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Michael on LinkedIn: https://www.linkedin.com/in/michael-gschwind-3704222/?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app Timestamps: [00:00] Michael's preferred coffee [00:21] Takeaways [01:59] Please like, share, leave a review, and subscribe to our MLOps channels! [02:10] Gaming to AI Accelerators [11:34] Torch Chat goals [18:53] PyTorch benchmarking and competitiveness [21:28] Optimizing MLOps models [24:52] GPU optimization tips [29:36] Cloud vs On-device AI [38:22] Abstraction across devices [42:29] PyTorch developer experience [45:33] AI and MLOps-related antipatterns [48:33] When to optimize [53:26] Efficient edge AI models [56:57] Wrap up

    58 min
  2. We Can All Be AI Engineers and We Can Do It with Open Source Models // Luke Marsden // #273

    20 NOV

    We Can All Be AI Engineers and We Can Do It with Open Source Models // Luke Marsden // #273

    Luke Marsden is a passionate technology leader with experience in consulting, CEO, CTO, tech lead, product, sales, and engineering roles, and a proven ability to conceive and execute a product vision from strategy to implementation while iterating on product-market fit. We Can All Be AI Engineers and We Can Do It with Open Source Models // MLOps Podcast #273 with Luke Marsden, CEO of HelixML. // Abstract In this podcast episode, Luke Marsden explores practical approaches to building Generative AI applications using open-source models and modern tools. Through real-world examples, Luke breaks down the key components of GenAI development, from model selection to knowledge and API integrations, while highlighting the data privacy advantages of open-source solutions. // Bio Hacker & entrepreneur. Founder at helix.ml. Career spanning DevOps, MLOps, and now LLMOps. Working on bringing business value to local, open-source LLMs. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://helix.ml About open source AI: https://blog.helix.ml/p/the-open-source-ai-revolution Ratatat Cream on Chrome: https://open.spotify.com/track/3s25iX3minD5jORW4KpANZ?si=719b715154f64a5f --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Luke on LinkedIn: https://www.linkedin.com/in/luke-marsden-71b3789/

    51 min
  3. The Impact of UX Research in the AI Space // Lauren Kaplan // #272

    13 NOV

    The Impact of UX Research in the AI Space // Lauren Kaplan // #272

    Lauren Kaplan is a sociologist and writer. She earned her PhD in Sociology at Goethe University Frankfurt and worked as a researcher at the University of Oxford and UC Berkeley. The Impact of UX Research in the AI Space // MLOps Podcast #272 with Lauren Kaplan, Sr UX Researcher. // Abstract In this MLOps Community podcast episode, Demetrios and UX researcher Lauren Kaplan explore how UX research can transform AI and ML projects by aligning insights with business goals and enhancing user and developer experiences. Kaplan emphasizes the importance of stakeholder alignment, proactive communication, and interdisciplinary collaboration, especially in adapting company culture post-pandemic. They discuss UX’s growing relevance in AI, challenges like bias, and the use of AI in research, underscoring the strategic value of UX in driving innovation and user satisfaction in tech. // Bio Lauren is a sociologist and writer. She earned her PhD in Sociology at Goethe University Frankfurt and worked as a researcher at the University of Oxford and UC Berkeley. Passionate about homelessness research and AI, Lauren joined UCSF and later Meta. Lauren recently led UX research at a global AI chip startup and is currently seeking new opportunities to further her work in UX research and AI. At Meta, Lauren led UX research for 1) Privacy-Preserving ML and 2) PyTorch. Lauren has worked on NLP projects such as Word2Vec analysis of historical HIV/AIDS documents presented at TextXD, UC Berkeley 2019. Lauren is passionate about understanding technology and advocating for the people who create and consume AI. Lauren has published over 30 peer-reviewed research articles in domains including psychology, medicine, sociology, and more. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Podcast on AI UX https://open.substack.com/pub/aistudios/p/how-to-do-user-research-for-ai-products?r=7hrv8&utm_medium=ios 2024 State of AI Infra at Scale Research Report https://ai-infrastructure.org/wp-content/uploads/2024/03/The-State-of-AI-Infrastructure-at-Scale-2024.pdf Privacy-Preserving ML UX Public Article https://www.ttclabs.net/research/how-to-help-people-understand-privacy-enhancing-technologies Homelessness research and more: https://scholar.google.com/citations?user=24zqlwkAAAAJ&hl=en Agents in Production: https://home.mlops.community/public/events/aiagentsinprod Mk.gee Si (Bonus Track): https://open.spotify.com/track/1rukW2Wxnb3GGlY0uDWIWB?si=4d5b0987ad55444a --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Lauren on LinkedIn: https://www.linkedin.com/in/laurenmichellekaplan?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app

    1h 8m
  4. EU AI Act - Navigating New Legislation // Petar Tsankov // MLOps Podcast #271

    1 NOV

    EU AI Act - Navigating New Legislation // Petar Tsankov // MLOps Podcast #271

    Dr. Petar Tsankov is a researcher and entrepreneur in the field of Computer Science and Artificial Intelligence (AI). EU AI Act - Navigating New Legislation // MLOps Podcast #271 with Petar Tsankov, Co-Founder and CEO of LatticeFlow AI. Big thanks to LatticeFlow for sponsoring this episode! // Abstract Dive into AI risk and compliance. Petar Tsankov, a leader in AI safety, talks about turning complex regulations into clear technical requirements and the importance of benchmarks in AI compliance, especially with the EU AI Act. We explore his work with big AI players and the EU on safer, compliant models, covering topics from multimodal AI to managing AI risks. He also shares insights on "Comply," an open-source tool for checking AI models against EU standards, making compliance simpler for AI developers. A must-listen for those tackling AI regulation and safety. // Bio Co-founder & CEO at LatticeFlow AI, building the world's first product enabling organizations to build performant, safe, and trustworthy AI systems. Before starting LatticeFlow AI, Petar was a senior researcher at ETH Zurich working on the security and reliability of modern systems, including deep learning models, smart contracts, and programmable networks. Petar has co-created multiple publicly available security and reliability systems that are regularly used: - ERAN, the world's first scalable verifier for deep neural networks: https://github.com/eth-sri/eran - VerX, the world's first fully automated verifier for smart contracts: https://verx.ch - Securify, the first scalable security scanner for Ethereum smart contracts: https://securify.ch - DeGuard, which de-obfuscates Android binaries: http://apk-deguard.com - SyNET, the first scalable network-wide configuration synthesis tool: https://synet.ethz.ch Petar also co-founded ChainSecurity, an ETH spin-off that within 2 years became a leader in formal smart contract audits and was acquired by PwC Switzerland in 2020. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://latticeflow.ai/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Petar on LinkedIn: https://www.linkedin.com/in/petartsankov/

    59 min
  5. Boosting LLM/RAG Workflows & Scheduling w/ Composable Memory and Checkpointing // Bernie Wu // #270

    22 OCT

    Boosting LLM/RAG Workflows & Scheduling w/ Composable Memory and Checkpointing // Bernie Wu // #270

    Bernie Wu is VP of Business Development for MemVerge. He has 25+ years of experience as a senior executive for data center hardware and software infrastructure companies, including Conner/Seagate, Cheyenne Software, Trend Micro, FalconStor, Levyx, and MetalSoft. Boosting LLM/RAG Workflows & Scheduling w/ Composable Memory and Checkpointing // MLOps Podcast #270 with Bernie Wu, VP Strategic Partnerships/Business Development of MemVerge. // Abstract Limited memory capacity hinders the performance and potential of research and production environments utilizing Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) techniques. This discussion explores how industry-standard CXL memory can be configured as a secondary, composable memory tier to alleviate this constraint. We will highlight some recent work we’ve done in integrating this novel class of memory into LLM/RAG/vector database frameworks and workflows. Disaggregated shared memory is envisioned to offer high-performance, low-latency caches for model/pipeline checkpoints of LLMs, KV caches during distributed inferencing, LoRA adapters, and in-process data for heterogeneous CPU/GPU workflows. We expect to showcase these types of use cases in the coming months. // Bio Bernie is VP of Strategic Partnerships/Business Development for MemVerge. His focus has been building partnerships in the AI/ML, Kubernetes, and CXL memory ecosystems. He has 25+ years of experience as a senior executive for data center hardware and software infrastructure companies, including Conner/Seagate, Cheyenne Software, Trend Micro, FalconStor, Levyx, and MetalSoft. He is also on the Board of Directors for Cirrus Data Solutions. Bernie has a BS/MS in Engineering from UC Berkeley and an MBA from UCLA. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: www.memverge.com Accelerating Data Retrieval in Retrieval Augmentation Generation (RAG) Pipelines using CXL: https://memverge.com/accelerating-data-retrieval-in-rag-pipelines-using-cxl/ Do Re MI for Training Metrics: Start at the Beginning // Todd Underwood // AIQCON: https://youtu.be/DxyOlRdCofo Handling Multi-Terabyte LLM Checkpoints // Simon Karasik // MLOps Podcast #228: https://youtu.be/6MY-IgqiTpg Compute Express Link (CXL) FPGA IP: https://www.intel.com/content/www/us/en/products/details/fpga/intellectual-property/interface-protocols/cxl-ip.html Ultra Ethernet Consortium: https://ultraethernet.org/ Unified Acceleration (UXL) Foundation: https://www.intel.com/content/www/us/en/developer/articles/news/unified-acceleration-uxl-foundation.html RoCE networks for distributed AI training at scale: https://engineering.fb.com/2024/08/05/data-center-engineering/roce-network-distributed-ai-training-at-scale/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Bernie on LinkedIn: https://www.linkedin.com/in/berniewu/ Timestamps: [00:00] Bernie's preferred coffee [00:11] Takeaways [01:37] First principles thinking focus [05:02] Memory Abundance Concept Discussion [06:45] Managing load spikes [09:38] GPU checkpointing challenges [16:29] Distributed memory problem solving [18:27] Composable and Virtual Memory [21:49] Interactive chat annotation [23:46] Memory elasticity in AI [27:33] GPU networking tests [29:12] GPU Scheduling workflow optimization [32:18] Kubernetes Extensions and Tools [37:14] GPU bottleneck analysis [42:04] Economical memory strategies [45:14] Elastic memory management strategies [47:57] Problem solving approach [50:15] AI infrastructure elasticity evolution [52:33] RDMA and RoCE explained [54:14] Wrap up

    55 min
  6. How to Systematically Test and Evaluate Your LLMs Apps // Gideon Mendels // #269

    18 OCT

    How to Systematically Test and Evaluate Your LLMs Apps // Gideon Mendels // #269

    Gideon Mendels is the Chief Executive Officer at Comet, the leading solution for managing machine learning workflows. How to Systematically Test and Evaluate Your LLMs Apps // MLOps Podcast #269 with Gideon Mendels, CEO of Comet. // Abstract When building LLM applications, developers need to take a hybrid approach drawing on both ML and software engineering best practices. They need to define eval metrics and track their entire experimentation to see what is and is not working. They also need to define comprehensive unit tests for their particular use case so they can confidently check whether their LLM app is ready to be deployed. // Bio Gideon Mendels is the CEO and co-founder of Comet, the leading solution for managing machine learning workflows from experimentation to production. He is a computer scientist, ML researcher, and entrepreneur at his core. Before Comet, Gideon co-founded GroupWize, where they trained and deployed NLP models processing billions of chats. His journey with NLP and Speech Recognition models began at Columbia University and Google, where he worked on hate speech and deception detection. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.comet.com/site/ All the Hard Stuff with LLMs in Product Development // Phillip Carter // MLOps Podcast #170: https://youtu.be/DZgXln3v85s Opik by Comet: https://www.comet.com/site/products/opik/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Gideon on LinkedIn: https://www.linkedin.com/in/gideon-mendels/ Timestamps: [00:00] Gideon's preferred coffee [00:17] Takeaways [01:50] A huge shout-out to Comet ML for sponsoring this episode! [02:09] Please like, share, leave a review, and subscribe to our MLOps channels! [03:30] Evaluation metrics in AI [06:55] LLM Evaluation in Practice [10:57] LLM testing methodologies [16:56] LLM as a judge [18:53] Opik track function overview [20:33] Tracking user response value [26:32] Exploring AI metrics integration [29:05] Experiment tracking and LLMs [34:27] Micro Macro collaboration in AI [38:20] RAG Pipeline Reproducibility Snapshot [40:15] Collaborative experiment tracking [45:29] Feature flags in CI/CD [48:55] Labeling challenges and solutions [54:31] LLM output quality alerts [56:32] Anomaly detection in model outputs [1:01:07] Wrap up

    1h 2m
