MLOps.community

Demetrios Brinkmann
MLOps.community

Weekly talks and fireside chats about everything to do with the new space emerging around DevOps for Machine Learning, aka MLOps (Machine Learning Operations).

  1. Re-Platforming Your Tech Stack // Michelle Marie Conway & Andrew Baker // #281

    4 DAYS AGO

    Re-Platforming Your Tech Stack // MLOps Podcast #281 with Michelle Marie Conway, Lead Data Scientist at Lloyds Banking Group, and Andrew Baker, Data Science Delivery Lead at Lloyds Banking Group.

    // Abstract
    Lloyds Banking Group is on a mission to embrace the power of cloud and unlock the opportunities it provides. Andrew, Michelle, and their MLOps team have spent the last 12 months taking their portfolio of circa 10 machine learning models in production and migrating them from an on-prem solution to a cloud-based environment. During the podcast, Michelle and Andrew share their reflections, as well as some dos (and don'ts!) of managing the migration of an established portfolio.

    // Bio
    Michelle Marie Conway
    Michelle is a Lead Data Scientist in the high-performance data science team at Lloyds Banking Group. With deep expertise in managing production-level Python code and machine learning models, she has worked alongside fellow senior manager Andrew to drive the bank's transition to the Google Cloud Platform. Together, they have played a pivotal role in modernising the ML portfolio in collaboration with a remarkable MLOps team. Originally from Ireland and now based in London, Michelle blends her technical expertise with a love for the arts.

    Andrew Baker
    Andrew graduated from the University of Birmingham with a first-class honours degree in Mathematics and Music with a Year in Computer Science, and joined Lloyds Banking Group on their Retail graduate scheme in 2015. Since 2021, Andrew has worked in the world of data, first in shaping the Retail data strategy and most recently as a Data Science Delivery Lead, growing and managing a team of Data Scientists and Machine Learning Engineers. He has built a high-performing team responsible for building and maintaining ML models in production for the Consumer Lending division of the bank. Andrew is motivated by the role that data science and ML can play in transforming the business and its processes, and is focused on balancing the power of ML with the need for simplicity and explainability, enabling business users to engage with both the opportunities in this space and the demands of a highly regulated environment.

    // MLOps Swag/Merch
    https://shop.mlops.community/

    // Related Links
    Website: https://www.michelleconway.co.uk/
    https://www.linkedin.com/pulse/artificial-intelligence-just-when-data-science-answer-andrew-baker-hfdge/
    https://www.linkedin.com/pulse/artificial-intelligence-conundrum-generative-ai-andrew-baker-qla7e/

    --------------- ✌️Connect With Us ✌️ ---------------
    Join our Slack community: https://go.mlops.community/slack
    Follow us on Twitter: @mlopscommunity
    Sign up for the next meetup: https://go.mlops.community/register
    Catch all episodes, blogs, newsletters, and more: https://mlops.community/
    Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
    Connect with Michelle on LinkedIn: https://www.linkedin.com/in/michelle--conway/
    Connect with Andrew on LinkedIn: https://www.linkedin.com/in/andrew-baker-90952289

    51 min
  2. Holistic Evaluation of Generative AI Systems // Jineet Doshi // #280

    12/23/2024

    Holistic Evaluation of Generative AI Systems // MLOps Podcast #280 with Jineet Doshi, Staff AI Scientist and AI Lead at Intuit.

    // Abstract
    Evaluating LLMs is essential to establishing trust before deploying them to production. Even post-deployment, evaluation is essential to ensure LLM outputs meet expectations, making it a foundational part of LLMOps. However, evaluating LLMs remains an open problem. Unlike traditional machine learning models, LLMs can perform a wide variety of tasks, such as writing poems, Q&A, and summarization. This leads to the question: how do you evaluate a system with such broad intelligence capabilities? This talk covers the various approaches for evaluating LLMs, such as classic NLP techniques and red teaming, and newer ones like using LLMs as a judge, along with the pros and cons of each. The talk includes evaluation of complex GenAI systems like RAG and Agents. It also covers evaluating LLMs for safety and security, and the need for a holistic approach to evaluating these very capable models.

    // Bio
    Jineet Doshi is an award-winning AI Lead and Engineer with over 7 years of experience. He has a proven track record of leading successful AI projects and building machine learning models from design to production across various domains, which have impacted millions of customers and significantly improved business metrics, leading to millions of dollars of impact. He is currently an AI Lead at Intuit, where he is one of the architects and developers of their Generative AI platform, which serves Generative AI experiences for more than 100 million customers around the world. Jineet is also a guest lecturer at Stanford University as part of their Building LLM Applications class. He is on the Advisory Board of the University of San Francisco's AI Program. He holds multiple patents in the field, is on the steering committee of the MLOps World Conference, and has co-chaired workshops at top AI conferences like KDD. He holds a Master's degree from Carnegie Mellon University.

    // Related Links
    Website: https://www.intuit.com/
    Connect with Jineet on LinkedIn: https://www.linkedin.com/in/jineetdoshi/
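    The LLM-as-a-judge approach mentioned in the abstract can be sketched in a few lines of Python. Everything below is illustrative, not Intuit's implementation: `call_llm` is a hypothetical stand-in for whatever model client you use (stubbed here so the example runs), and the 1-5 rubric is an arbitrary choice.

```python
# Minimal LLM-as-a-judge sketch. `call_llm` is a hypothetical stand-in
# for a real model client; here it is stubbed so the example runs.
def call_llm(prompt: str) -> str:
    return "4"  # a real judge model would return its rating here

JUDGE_TEMPLATE = (
    "You are an impartial judge. Rate the answer below from 1 (poor) "
    "to 5 (excellent) for helpfulness and factual accuracy.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Reply with a single digit."
)

def judge(question: str, answer: str) -> int:
    """Score one (question, answer) pair with the judge model."""
    reply = call_llm(JUDGE_TEMPLATE.format(question=question, answer=answer))
    score = int(reply.strip()[0])   # parse the leading digit of the reply
    return min(max(score, 1), 5)    # clamp to the 1-5 rubric

def mean_score(pairs):
    """Aggregate judge scores over an evaluation set."""
    return sum(judge(q, a) for q, a in pairs) / len(pairs)
```

    A pattern like this is cheap to run over a whole evaluation set, which is why it complements (rather than replaces) classic NLP metrics and red teaming.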

    58 min
  3. Unleashing Unconstrained News Knowledge Graphs to Combat Misinformation // Robert Caulk // #279

    12/20/2024

    Robert Caulk is responsible for directing software development, enabling research, coordinating company projects, quality control, proposing external collaborations, and securing funding. He believes firmly in open source, having spent 12 years accruing over 1,000 academic citations building open-source software in domains such as machine learning, image analysis, and coupled physical processes. He received his Ph.D. in computational mechanics from Université Grenoble Alpes, France.

    Unleashing Unconstrained News Knowledge Graphs to Combat Misinformation // MLOps Podcast #279 with Robert Caulk, Founder of Emergent Methods.

    // Abstract
    Indexing hundreds of thousands of news articles per day into a knowledge graph (KG) was previously impossible due to the strict requirement that high-level reasoning, general world knowledge, and full-text context *must* be present for proper KG construction. The latest tools now enable such general world knowledge and reasoning to be applied cost-effectively to high volumes of news articles. Beyond the low cost of processing these articles, these tools are also opening up a new, controversial approach to KG building: unconstrained KGs. We discuss the construction and exploration of the largest news knowledge graph on the planet, hosted on an endpoint at AskNews.app. During the talk, we aim to highlight some of the sacrifices and benefits that go hand in hand with the infamous unconstrained KG approach. We conclude by explaining how knowledge graphs like these help mitigate misinformation, and give examples of how clients are using the graph: generating sports forecasts, generating better social media posts, generating regional security alerts, and combating human trafficking.

    // Bio
    Robert is the founder of Emergent Methods, where he directs research and software development for large-scale applications. He is currently overseeing the structuring of hundreds of thousands of news articles per day in order to build the best news retrieval API in the world: https://asknews.app.

    // Related Links
    Website: https://emergentmethods.ai
    News Retrieval API: https://asknews.app
    Connect with Rob on LinkedIn: https://www.linkedin.com/in/rcaulk/

    Timestamps:
    [00:00] Rob's preferred coffee
    [00:05] Takeaways
    [00:55] Please like, share, leave a review, and subscribe to our MLOps channels!
    [01:00] Join our Local Organizer Carousel!
    [02:15] Knowledge Graphs and ontology
    [07:43] Ontology vs Noun Approach
    [12:46] Ephemeral tools for efficiency
    [17:26] Oracle to PostgreSQL migration
    [22:20] MEM Graph life cycle
    [29:14] Knowledge Graph Investigation Insights
    [33:37] Fine-tuning and distillation of LLMs
    [39:28] DAG workflow and quality control
    [46:23] Crawling nodes with Phi 3 Llama
    [50:05] AI pricing risks and strategies
    [56:14] Data labeling and poisoning
    [58:34] API costs vs News latency
    [1:02:10] Product focus and value
    [1:04:52] Ensuring reliable information
    [1:11:01] Podcast transcripts as News
    [1:13:08] Ontology trade-offs explained
    [1:15:00] Wrap up
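    The constrained-versus-unconstrained trade-off the episode discusses can be shown with a toy sketch (invented ontology and triples, not AskNews code): a constrained graph only admits triples whose predicate exists in a fixed ontology, while an unconstrained graph keeps whatever the extractor emits.

```python
# Toy contrast (invented data, not AskNews code): a constrained KG drops
# triples whose predicate is outside a fixed ontology; an unconstrained
# KG keeps everything the extractor emits.
ONTOLOGY = {"located_in", "works_for", "acquired"}

class KnowledgeGraph:
    def __init__(self, constrained: bool):
        self.constrained = constrained
        self.triples = []                        # (subject, predicate, object)

    def add(self, subj, pred, obj) -> bool:
        if self.constrained and pred not in ONTOLOGY:
            return False                         # rejected: no ontology slot
        self.triples.append((subj, pred, obj))
        return True

extracted = [                                    # what an LLM extractor might emit
    ("ACME", "acquired", "Initech"),
    ("ACME", "denied_rumors_about", "Initech"),  # nuance with no ontology slot
]

constrained = KnowledgeGraph(constrained=True)
unconstrained = KnowledgeGraph(constrained=False)
for triple in extracted:
    constrained.add(*triple)
    unconstrained.add(*triple)
```

    The unconstrained graph preserves the second, nuanced relation at the cost of a messier schema, which is the sacrifice-versus-benefit tension described above.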

    1h 15m
  4. LLM Distillation and Compression // Guanhua Wang // #278

    12/17/2024

    Guanhua Wang is a Senior Researcher on the DeepSpeed team at Microsoft. Before Microsoft, he earned his Computer Science PhD from UC Berkeley.

    Domino: Communication-Free LLM Training Engine // MLOps Podcast #278 with Guanhua "Alex" Wang, Senior Researcher at Microsoft.

    // Abstract
    Given the popularity of generative AI, Large Language Models (LLMs) often consume hundreds or thousands of GPUs to parallelize and accelerate the training process. Communication overhead becomes more pronounced when training LLMs at scale. To eliminate communication overhead in distributed LLM training, we propose Domino, which provides a generic scheme to hide communication behind computation. By breaking the data dependency of a single batch of training into smaller independent pieces, Domino pipelines these independent pieces and provides a generic strategy for fine-grained communication and computation overlapping. Extensive results show that, compared with Megatron-LM, Domino achieves up to 1.3x speedup for LLM training on Nvidia DGX-H100 GPUs.

    // Bio
    Guanhua Wang is a Senior Researcher on the DeepSpeed team at Microsoft. His research focuses on large-scale LLM training and serving. Previously, he led the ZeRO++ project at Microsoft, which helped cut over half of model training time inside Microsoft and LinkedIn. He also led and was a major contributor to Microsoft Phi-3 model training. He holds a CS PhD from UC Berkeley, advised by Prof. Ion Stoica.

    // Related Links
    Website: https://guanhuawang.github.io/
    DeepSpeed hiring: https://www.microsoft.com/en-us/research/project/deepspeed/opportunities/
    Large Model Training and Inference with DeepSpeed // Samyam Rajbhandari // LLMs in Prod Conference: https://youtu.be/cntxC3g22oU
    Connect with Guanhua on LinkedIn: https://www.linkedin.com/in/guanhua-wang/

    Timestamps:
    [00:00] Guanhua's preferred coffee
    [00:17] Takeaways
    [01:36] Please like, share, leave a review, and subscribe to our MLOps channels!
    [01:47] Phi model explanation
    [06:29] Small Language Models optimization challenges
    [07:29] DeepSpeed overview and benefits
    [10:58] Crazy unimplemented AI ideas
    [17:15] Post-training vs QAT
    [19:44] Quantization over distillation
    [24:15] Using LoRAs
    [27:04] LLM scaling sweet spot
    [28:28] Quantization techniques
    [32:38] Domino overview
    [38:02] Training performance benchmark
    [42:44] Data dependency-breaking strategies
    [49:14] Wrap up
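    The core Domino idea described in the abstract, hiding communication behind computation by splitting a batch into independent pieces, can be mimicked with a stdlib-only Python toy. None of this is DeepSpeed code; the sleeps stand in for GPU compute and an all-reduce. While piece i's simulated gradient sync runs on a background thread, piece i+1's compute proceeds on the main thread.

```python
# Toy sketch of the overlap idea behind Domino (not DeepSpeed code):
# split a batch into independent pieces, then launch each piece's
# simulated gradient communication on a background thread while the
# next piece's compute proceeds on the main thread.
import threading
import time

def compute(x):
    """Stand-in for forward/backward work on one sample."""
    time.sleep(0.01)
    return x * 2

def communicate(grads, synced):
    """Stand-in for an all-reduce of one piece's gradients."""
    time.sleep(0.01)
    synced.append(grads)           # list.append is thread-safe under the GIL

def train_step(batch, n_pieces=4):
    pieces = [batch[i::n_pieces] for i in range(n_pieces)]  # independent pieces
    synced, threads = [], []
    for piece in pieces:
        grads = [compute(x) for x in piece]   # compute piece i...
        t = threading.Thread(target=communicate, args=(grads, synced))
        t.start()                             # ...then sync it in the background
        threads.append(t)                     # while the next piece computes
    for t in threads:
        t.join()                              # wait for the last syncs to land
    return synced

result = train_step(list(range(8)))           # 4 pieces of 2 samples each
```

    Run serially, each step would pay compute plus communication for every piece; with the overlap, only the final piece's communication is left exposed, which is the intuition behind the reported speedup.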

    50 min
  5. AI's Next Frontier // Aditya Naganath // #277

    12/11/2024

    Thanks to the High Signal Podcast by Delphina: https://go.mlops.community/HighSignalPodcast

    Aditya Naganath is an experienced investor currently working with Kleiner Perkins. He has a passion for connecting with people over coffee and discussing tech, products, ideas, and markets.

    AI's Next Frontier // MLOps Podcast #277 with Aditya Naganath, Principal at Kleiner Perkins.

    // Abstract
    LLMs have ushered in an unmistakable supercycle in the world of technology. The low-hanging use cases have largely been picked off. The next frontier will be AI coworkers who sit alongside knowledge workers, doing work side by side. At the infrastructure level, one of the most important primitives invented by man, the data center, is being fundamentally rethought in this new wave.

    // Bio
    Aditya Naganath joined Kleiner Perkins' investment team in 2022 with a focus on artificial intelligence, enterprise software applications, infrastructure, and security. Prior to joining Kleiner Perkins, Aditya was a product manager at Google, focusing on growth initiatives for the next billion users team. He was previously a technical lead at Palantir Technologies and formerly held software engineering roles at Twitter and Nextdoor, where he was a Kleiner Perkins fellow. Aditya earned a patent during his time at Twitter for a technical analytics product he co-created. Originally from Mumbai, India, Aditya graduated magna cum laude from Columbia University with a bachelor's degree in Computer Science, and holds an MBA from Stanford University. Outside of work, you can find him playing guitar with a hard rock band, competing in chess or on the squash courts, and fostering puppies. He is also an avid poker player.

    // Related Links
    Faith's Hymn by Beautiful Chorus: https://open.spotify.com/track/1bDv6grQB5ohVFI8UDGvKK?si=4b00752eaa96413b
    Substack: https://adityanaganath.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile
    Building the Future of AI in Software Development // Varun Mohan // MLOps Podcast #195: https://youtu.be/1DJKq8StuTo
    Do Re Mi for Training Metrics: Start at the Beginning // Todd Underwood // AIQCON: https://youtu.be/DxyOlRdCofo
    Connect with Aditya on LinkedIn: https://www.linkedin.com/in/aditya-naganath/

    58 min
  6. PyTorch for Control Systems and Decision Making // Vincent Moens // #276

    12/04/2024

    Dr. Vincent Moens is an Applied Machine Learning Research Scientist at Meta and an author of TorchRL and TensorDict in PyTorch.

    PyTorch for Control Systems and Decision Making // MLOps Podcast #276 with Vincent Moens, Research Engineer at Meta.

    // Abstract
    PyTorch is widely adopted across the machine learning community for its flexibility and ease of use in applications such as computer vision and natural language processing. However, supporting the reinforcement learning, decision-making, and control communities is equally crucial, as these fields drive innovation in areas like robotics, autonomous systems, and game-playing. This podcast explores the intersection of PyTorch and these fields, covering practical tips and tricks for working with PyTorch, an in-depth look at TorchRL, and discussions of debugging techniques, optimization strategies, and testing frameworks. By examining these topics, listeners will understand how to use PyTorch effectively for control systems and decision-making applications.

    // Bio
    Vincent Moens is a research engineer on the PyTorch core team at Meta, based in London. As the maintainer of TorchRL (https://github.com/pytorch/rl) and TensorDict (https://github.com/pytorch/tensordict), Vincent plays a key role in supporting the decision-making community within the PyTorch ecosystem. Alongside his technical role in the PyTorch community, Vincent actively contributes to AI-related research projects. Before joining Meta, Vincent worked as an ML researcher at Huawei and AIG. Vincent holds a Medical Degree and a PhD in Computational Neuroscience.

    // Related Links
    Musical recommendation: https://open.spotify.com/artist/1Uff91EOsvd99rtAupatMP?si=jVkoFiq8Tmq0fqK_OIEglg
    Website: github.com/vmoens
    TorchRL: https://github.com/pytorch/rl
    TensorDict: https://github.com/pytorch/tensordict
    LinkedIn post: https://www.linkedin.com/posts/vincent-moens-9bb91972_join-the-tensordict-discord-server-activity-7189297643322253312-Wo9J?utm_source=share&utm_medium=member_desktop
    Connect with Vincent on LinkedIn: https://www.linkedin.com/in/mvi/

    57 min
  7. AI-Driven Code: Navigating Due Diligence & Transparency in MLOps // Matt van Itallie // #275

    11/29/2024

    Matt Van Itallie is the founder and CEO of Sema. Prior to this, he was the Vice President of Customer Support and Customer Operations at Social Solutions.

    AI-Driven Code: Navigating Due Diligence & Transparency in MLOps // MLOps Podcast #275 with Matt van Itallie, Founder and CEO of Sema.

    // Abstract
    Matt Van Itallie, founder and CEO of Sema, discusses how comprehensive codebase evaluations play a crucial role in MLOps and technical due diligence. He highlights the impact of Generative AI on code transparency and explains the Generative AI Bill of Materials (GBOM), which helps identify and manage risks in AI-generated code. This talk offers practical insights for technical and non-technical audiences, showing how proper diligence can enhance value and mitigate risks in machine learning operations.

    // Bio
    Matt Van Itallie is the Founder and CEO of Sema. He and his team have developed Comprehensive Codebase Scans, the most thorough and easily understandable assessment of a codebase and engineering organization. These scans are crucial for private equity and venture capital firms looking to make informed investment decisions. Sema has evaluated code within organizations that have a collective value of over $1 trillion. In 2023, Sema served 7 of the 9 largest global investors, along with market-leading strategic investors, private equity, and venture capital firms, providing them with critical insights. In addition, Sema is at the forefront of Generative AI Code Transparency, which measures how much GenAI-created code is in a codebase. They are the inventors of the Generative AI Bill of Materials (GBOM), an essential resource for investors to understand and mitigate risks associated with AI-generated code. Before founding Sema, Matt was a Private Equity operating executive and a management consultant at McKinsey. He graduated from Harvard Law School and has had some interesting adventures, like hiking a third of the Appalachian Trail and biking from Boston to Seattle.

    // Related Links
    Full bio: https://alistar.fm/bio/matt-van-itallie
    Connect with Matt on LinkedIn: https://www.linkedin.com/in/mvi/

    57 min
  8. PyTorch's Combined Effort in Large Model Optimization // Michael Gschwind // #274

    11/26/2024

    PyTorch's Combined Effort in Large Model Optimization // MLOps Podcast #274 with Michael Gschwind, Software Engineer, Software Executive at Meta Platforms.

    // Abstract
    Explore PyTorch's role in boosting model performance, on-device AI processing, and collaborations with tech giants like ARM and Apple. Michael shares his journey from gaming-console accelerators to AI, emphasizing the power of community and innovation in driving advancements.

    // Bio
    Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU inference for production services. He led the development of MultiRay and Textray, the first deployment of LLMs at a scale exceeding a trillion queries per day shortly after its rollout. He created the strategy and led the implementation of PyTorch optimization with Better Transformer and Accelerated Transformers, bringing Flash Attention, PT2 compilation, and ExecuTorch into the mainstream for LLMs and GenAI models. Most recently, he led the enablement of large language models for on-device AI on mobile and edge devices.

    // Related Links
    Website: https://en.m.wikipedia.org/wiki/Michael_Gschwind
    Connect with Michael on LinkedIn: https://www.linkedin.com/in/michael-gschwind-3704222/

    Timestamps:
    [00:00] Michael's preferred coffee
    [00:21] Takeaways
    [01:59] Please like, share, leave a review, and subscribe to our MLOps channels!
    [02:10] Gaming to AI Accelerators
    [11:34] Torch Chat goals
    [18:53] PyTorch benchmarking and competitiveness
    [21:28] Optimizing MLOps models
    [24:52] GPU optimization tips
    [29:36] Cloud vs On-device AI
    [38:22] Abstraction across devices
    [42:29] PyTorch developer experience
    [45:33] AI and MLOps-related antipatterns
    [48:33] When to optimize
    [53:26] Efficient edge AI models
    [56:57] Wrap up

    58 min
4.9 out of 5 (18 Ratings)

