
288 episodes
MLOps.community Demetrios Brinkmann
Technology
5.0 • 13 Ratings
Weekly talks and fireside chats about everything that has to do with the new space emerging around DevOps for Machine Learning, aka MLOps, aka Machine Learning Operations.
AI in Education Fireside Chat // LLMs in Production Conference 3
// Abstract
Explore the transformative role of AI in EdTech, discussing its potential to enhance learning experiences and personalize education. The panelists share insights on AI use cases, challenges in AI integration, and strategies for building a differentiated business model in the evolving AI landscape. The discussion looks ahead at how the latest wave of GenAI is set to shape the future of education. Join us to understand the exciting prospects and challenges of AI in EdTech.
Moderator: Paul van der Boor
// Bio
Klinton Bicknell
Klinton Bicknell is the Head of AI @duolingo. He works at the intersection of artificial intelligence and cognitive science. His research has been published in venues including ACL, PNAS, NAACL, Psychological Science, EDM, CogSci, and Cognition, and covered in the Financial Times, BBC, and Forbes. Prior to Duolingo, he was an assistant professor at Northwestern University.
Bill Salak
Bill Salak has more than 20 years of experience overseeing large-scale development projects and more than 24 years of experience in web application architecture and development. Bill founded and served as CTO of multiple Internet and web development companies, leading technology projects for companies including Age of Learning, AOL, Educational Testing Systems, Film LA, Hasbro, HBO, Highlights for Children, NBC-Universal, and the U.S. Army.
Bill currently serves as the CTO of @Brainly-app, the leading learning platform worldwide with the most extensive Knowledge Base for all school subjects and grades.
Yeva Hyusyan
Yeva Hyusyan is the Co-Founder and CEO of @Sololearn, the most engaging platform for learning how to code.
Prior to co-founding SoloLearn, Yeva established a startup accelerator for mobile games, consumer apps, and ag-tech solutions. In a previous role, she implemented programs for the World Bank and the US Government in business and education. Later, she served as a General Manager at Microsoft, where she led sales, developer ecosystem development, and strategic partnerships.
Yeva holds an MBA in Corporate Strategy from Maastricht School of Management in the Netherlands, an MS in International Economics from Yerevan State University in Armenia, and completed the Executive Program at Stanford University's Graduate School of Business.
// Sign up for our Newsletter to never miss an event:
https://mlops.community/join/
// Watch all the conference videos here:
https://home.mlops.community/home/collections
// Check out the MLOps Community podcast: https://open.spotify.com/show/7wZygk3mUUqBaRbBGB1lgh?si=242d3b9675654a69
// Read our blog:
mlops.community/blog
// Join an in-person local meetup near you:
https://mlops.community/meetups/
// MLOps Swag/Merch:
https://mlops-community.myshopify.com/
// Follow us on Twitter:
https://twitter.com/mlopscommunity
// Follow us on LinkedIn:
https://www.linkedin.com/company/mlopscommunity/
[Exclusive] Tecton Round-table // Get your ML Application Into Production
Join our conference: https://home.mlops.community/public/events/llms-in-production-part-iii-2023-10-03
MLOps Coffee Sessions Special episode with Tecton, Get your ML Application Into Production, sponsored by Tecton.
// Abstract
Getting an ML application into production is more difficult than most teams expect, but with the right preparation, it can be done efficiently! Join us for this exclusive roundtable, where four machine learning experts from Tecton will discuss some of the most common challenges and the best practices for avoiding them.
With over 35 years of combined experience in MLOps at companies like AWS, Google, Lyft, and Uber, and 15 years of combined experience at Tecton helping customers like FanDuel, Plaid, and HelloFresh get ML models into production, the presenters will share how factors like organizational structure, use cases, and tech stack can create different types of bottlenecks. They'll also share best practices and lessons learned throughout their careers on how to overcome these challenges.
// Bio
Kevin Stumpf
Kevin co-founded Tecton, where he leads a world-class engineering team that is building a next-generation feature store for operational Machine Learning. Kevin and his co-founders built deep expertise in operational ML platforms while at Uber, where they created the Michelangelo platform that enabled Uber to scale from zero to thousands of ML-driven applications in just a few years. Prior to Uber, Kevin founded Dispatcher, with the vision to build the Uber for long-haul trucking. Kevin holds an MBA from Stanford University and a Bachelor's Degree in Computer and Management Sciences from the University of Hagen. Outside of work, Kevin is a passionate long-distance endurance athlete.
Derek Salama
Derek is currently a Senior Product Manager at Tecton, where he is responsible for security, collaboration experience, and Feature Platform infrastructure. Prior to Tecton, Derek worked at Google and Lyft across both ML infrastructure and ML applications.
Eddie Esquivel
Eddie Esquivel is a Solutions Architect at Tecton, where he helps customers implement feature stores as part of their stack for operational ML. Prior to Tecton, Eddie was a Solutions Architect at AWS. He holds a Bachelor’s Degree in Computer Science & Engineering from the University of California, Los Angeles.
Isaac Cameron
Isaac Cameron is a Consulting Architect at Tecton. Prior to Tecton, he was a Principal Solutions Architect at Slalom Build, focusing on data and machine learning, where he built a feature platform for a large U.S. airline and enabled many organizations to build intelligent products leveraging operational ML.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Kevin on LinkedIn: https://www.linkedin.com/in/kevinstumpf/
Connect with Derek on LinkedIn: https://www.linkedin.com/in/dereksalama/
Connect with Eddie on LinkedIn: https://www.linkedin.com/in/eddie-esquivel-2016/
Connect with Isaac on LinkedIn: https://www.linkedin.com/in/isaaccameron/
Timestamps:
[00:00] Introduction to Kevin Stumpf, Derek Salama, Eddie Esquivel, and Isaac Cameron
[02:48] Challenges of getting classical ML into production
[10:21] Infrastructure cost
[16:50] Bridging Business and Tech
[19:23] ML Infrastructure Essentials
[29:38] Integrated Batch and Stream
[35:12] Scaling AI from Zero
[36:23] Stacks red flags
[45:53] Tecton: Feature Quality Monitoring
[49:06] Building Recommender System Tools
[53:19] Quantify business value in ML
[54:40] Wrap up
DSPy: Transforming Language Model Calls into Smart Pipelines // Omar Khattab // #194
MLOps podcast #194 with Omar Khattab, PhD Candidate at Stanford, DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines.
// Abstract
The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded "prompt templates", i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting and pipelines with expert-created demonstrations. On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5. DSPy is available as open source at https://github.com/stanfordnlp/dspy
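The two core ideas in the abstract (LM calls as parameterized modules, and a compiler that bootstraps few-shot demonstrations against a metric) can be sketched as a toy in plain Python. This is an illustrative sketch only, not the real DSPy API; `Module`, `compile_pipeline`, and the stub `lm` callable are hypothetical names invented for this example:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """Toy stand-in for a declarative LM call: a prompt template
    plus a list of learned few-shot demonstrations."""
    template: str
    demos: list = field(default_factory=list)

    def __call__(self, lm, question: str) -> str:
        # Prepend learned demonstrations to the prompt before calling the LM.
        prefix = "".join(f"Q: {q}\nA: {a}\n" for q, a in self.demos)
        return lm(prefix + self.template.format(question=question))

def compile_pipeline(module: Module, lm, trainset, metric) -> Module:
    """Toy 'compiler': run the module over training inputs and keep
    only the (input, output) traces the metric accepts as demonstrations."""
    for question, gold in trainset:
        answer = module(lm, question)
        if metric(answer, gold):
            module.demos.append((question, answer))
    return module
```

In DSPy itself, declarative modules and optimizers ("teleprompters") play these roles against real language models; see the linked repository for the actual API.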
// Bio
Omar Khattab is a PhD candidate at Stanford and an Apple PhD Scholar in AI/ML. He builds retrieval models as well as retrieval-based NLP systems, which can leverage large text collections to craft knowledgeable responses efficiently and transparently. Omar is the author of the ColBERT retrieval model, which has been central to the development of the field of neural retrieval, and the author of several of its derivative NLP systems like ColBERT-QA and Baleen. His recent work includes the DSPy framework for solving advanced tasks with language models (LMs) and retrieval models (RMs).
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Website: https://omarkhattab.com/
DSPy: https://github.com/stanfordnlp/dspy
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Omar on Twitter: https://twitter.com/lateinteraction
Timestamps:
[00:00] Omar's preferred coffee
[00:26] Takeaways
[06:40] Weights & Biases ad
[09:00] Omar's tech background
[13:35] Evolution of RAG
[16:33] Complex retrievals
[21:32] Vector Encoding for Databases
[23:50] BERT vs New Models
[28:00] Resilient Pipelines: Design Principles
[33:37] MLOps Workflow Challenges
[36:15] Guiding LLMs for Tasks
[37:40] Large Language Models: Usage and Costs
[41:32] DSPy Breakdown
[51:05] AI Compliance Roundtable
[55:40] Fine-Tuning Frustrations and Solutions
[57:27] Fine-Tuning Challenges in ML
[1:00:55] Versatile GPT-3 in Agents
[1:03:53] AI Focus: DSP and Retrieval
[1:04:55] Commercialization plans
[1:05:27] Wrap up
Fireside Chat with LLM Startups // LLMs in Production Conference 3
// Abstract
Martian is focused on building a model router to dynamically route every prompt to the best LLM for the highest performance and lowest cost.
Corti, the AI co-pilot for healthcare, uses AI to improve patient care, demonstrating the potential of AI in healthcare and medical decision-making. They recently raised $60M, with Prosus as one of the lead investors.
Transitional Forms is pioneering synthetic entertainment, showing how AI can transform the way we create and consume media.
Moderator: Paul van der Boor
// Speakers
Sandeep Bakshi
Head of Investments, Europe @prosusgroup3707
Shriyash Upadhyay
Founder @Martian
Lars Maaløe
Co-Founder & CTO at Corti | Adj. Assoc. Professor of Machine Learning @ Corti
Pietro Gagliano
President & Founder @Transitional Forms
// Sign up for our Newsletter to never miss an event:
https://mlops.community/join/
// Watch all the conference videos here:
https://home.mlops.community/home/collections
// Check out the MLOps Community podcast: https://open.spotify.com/show/7wZygk3mUUqBaRbBGB1lgh?si=242d3b9675654a69
// Read our blog:
mlops.community/blog
// Join an in-person local meetup near you:
https://mlops.community/meetups/
// MLOps Swag/Merch:
https://mlops-community.myshopify.com/
// Follow us on Twitter:
https://twitter.com/mlopscommunity
// Follow us on LinkedIn:
https://www.linkedin.com/company/mlopscommunity/
LLMs in Biomaterials Production // Pierre Salvy // #193
MLOps podcast #193 with Pierre Salvy, Head of Engineering at Cambrium, LLMs in Biomaterials Production, co-hosted by Stephen Batifol.
// Abstract
Delve into the world of proteins, genetic engineering, and the intersection of AI and biotech. Pierre explains how his company is using advanced models to design proteins with specific properties, even creating a vegan collagen for cosmetics. By harnessing the potential of AI, they aim to revolutionize sustainability, uncovering a future of lab-grown meats, molecular cheese, and less harmful plastics, confronting regulatory barriers and decoding the syntax and grammar of proteins.
// Bio
Head of Engineering at Cambrium, a biotech company utilising genAI to design sustainable protein biomaterials for the future.
Pierre spent the last decade researching ways to make computers better at modeling biological systems. This is a critical step toward engineering more sustainable ways to make the products we use every day, which is Cambrium's mission.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Website: cambrium.bio
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Stephen on LinkedIn: https://www.linkedin.com/in/stephen-batifol/
Connect with Pierre on LinkedIn: https://www.linkedin.com/in/psalvy/
Timestamps:
[00:00] Pierre's preferred coffee
[00:10] Takeaways
[05:10] Please like, share, and subscribe to our MLOps channels!
[05:25] Weights and Biases ad
[07:52] Ski story
[09:54] Pierre's career trajectory
[13:35] From employee #2 to hiring a team
[14:42] From employee #2 to head of engineering
[15:50] Uncomfortable things to say essential for growth and effectiveness
[18:27] From biotech to engineering
[21:10] LLMs at Cambrium
[24:26] Slackbot
[25:43] Quick and Easy Solutions
[26:47] Products created at Cambrium
[31:56] Impact of EU Regulation on Cambrium
[35:39] 2nd Biotech Winter
[36:35] Cost of error vs service not working
[38:00] Protein Synthesis and Mutations
[40:03] Large-Scale System Engineering Challenges
[43:28] Expensive Factors in Experiments
[44:39] LLMs vs Protein Models
[47:03] Protein Design with LLMs
[49:43] Eco-Friendly Product Vision
[53:28] Space glue
[54:00] Wrap up
Product Engineering for LLMs // LLMs in Production Conference Part III // Panel 2
// Abstract
A product-minded engineering perspective on UX/design patterns, product evaluation, and building with AI.
// Bio
Charles Frye
Charles teaches people how to build ML applications. After doing research in psychopharmacology and neurobiology, he pivoted to artificial neural networks and completed a PhD at the University of California, Berkeley in 2020. He then worked as an educator at Weights & Biases before joining Full Stack Deep Learning, an online community and MOOC for building with ML.
Sahar Mor
Sahar is a Product Lead at @stripe with 15 years of experience in product and engineering roles. At Stripe, he leads the adoption of LLMs and the Enhanced Issuer Network, a set of data partnerships with top banks to reduce payment fraud.
Prior to Stripe, he founded a document intelligence API company, was a founding PM at a couple of AI startups, including an accounting automation startup (Zeitgold, acquired by Deel), and served in engineering roles in the elite intelligence unit 8200.
Sahar authors a weekly AI newsletter (AI Tidbits) and maintains a few open-source AI-related libraries (https://github.com/saharmor).
Sarah Guo
Sarah Guo is the Founder and Managing Partner at @Conviction, a venture capital firm founded in 2022 to invest in intelligent software, or "Software 3.0." Prior, she spent a decade as a General Partner at Greylock Partners. She has been an early investor or advisor to 40+ companies in software, fintech, security, infrastructure, fundamental research, and AI-native applications. Sarah is from Wisconsin, has four degrees from the University of Pennsylvania, and lives in the Bay Area with her husband and two daughters. She co-hosts the AI podcast "No Priors" with Elad Gil.
Shyamala Prayaga
Shyamala is a seasoned conversational AI expert. Having led initiatives across connected home, automotive, and wearables, to name a few, she has researched usability, accessibility, speech recognition, and multimodal voice user interfaces, and her work has been featured internationally in publications like Forbes. Outside of her research, she has spent the last 18 years designing mobile, web, desktop, and smart TV interfaces, and most recently joined @NVIDIA to work on deep learning product suites.
Willem Pienaar
Willem is the creator of @Feast, the open-source feature store and a builder in the generative AI space. Previously Willem was an engineering manager at Tecton where he led teams in both their open source and enterprise initiatives. Before that Willem built the core ML systems and created the ML platform team at Gojek, the Indonesian decacorn.
// Sign up for our Newsletter to never miss an event:
https://mlops.community/join/
// Watch all the conference videos here:
https://home.mlops.community/home/collections
// Check out the MLOps Community podcast: https://open.spotify.com/show/7wZygk3mUUqBaRbBGB1lgh?si=242d3b9675654a69
// Read our blog:
mlops.community/blog
// Join an in-person local meetup near you:
https://mlops.community/meetups/
// MLOps Swag/Merch:
https://mlops-community.myshopify.com/
// Follow us on Twitter:
https://twitter.com/mlopscommunity
// Follow us on LinkedIn:
https://www.linkedin.com/company/mlopscommunity/
Customer Reviews
Consistently good information from operators
No fluff. Getting into the details. There's no other podcast like this. Thank you for sharing!
Interesting discussions covering a rapidly developing field
I’m a senior ML engineer who deals with a lot of MLOps related items since we don’t have a dedicated role for that on our team. This show (and the Slack community) have been great resources for inspiration and staying up-to-date on the constant evolution of tools and best practices. It’s very useful to hear from other practitioners as we all try to navigate this landscape together
This podcast is art! Thank you, everyone! 🎉
Long time listener, glad the show is going strong. 🦾
More generative art with ANNs in production, please! Looking forward to applying insights from Valerio Velardo episode in my genart collaboration for DEF CON AI Village