25 min

Observability for Machine Learning The Cloudcast

    • Technology

Alessya Visnjic (CEO, WhyLabs) talks about MLOps, the concept of ML Observability, and why AI models can fail. Alessya talks about the differences between data health and model health and why post-production analysis of ML is so important.
SHOW: 626
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
New Relic (homepage) - Services down? New Relic offers full-stack visibility with 16 different monitoring products in a single platform.
CloudZero - Cloud Cost Intelligence for Engineering Teams
SHOW NOTES:
WhyLabs (homepage)
TechCrunch Article
https://mlops.community
Topic 1 - Welcome Alessya! You are what is known in the AI/ML space as a veteran. For those who aren't familiar with your previous work, how about a quick introduction and background?
Topic 2 - Give everyone a background on MLOps, as it is still an emerging market. We are seeing an emerging trend around trust in the data used to train models. How did we get to this problem? Is this a transparency and observability problem once models are in production?
Topic 3 - How is model health different from data health? What happens post-deployment can actually be a factor, with things like data drift over time…
Topic 4 - What does a typical toolchain look like? Under the covers, is this a logging platform that provides visibility into model behavior to ensure accuracy over time? I would think every model is different, so how do you "standardize/rationalize" the data to detect anomalies and incorrect results?
Topic 5 - Every new category of tools has leading use cases. Where are you seeing the most traction today, and how can you best help practitioners?
Topic 6 - How can folks get started if they are interested?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
