43 min

3x27: Benchmarking AI with MLPerf - Utilizing AI

How fast is your machine learning infrastructure, and how do you measure it? That's the topic of this episode, featuring David Kanter of MLCommons, Frederic Van Haren, and Stephen Foskett. MLCommons is focused on making machine learning better for everyone through metrics, datasets, and enablement. The goal of MLPerf is to provide a fair and representative benchmark that allows the makers of ML systems to demonstrate the performance of their solutions. The benchmarks focus on real data, use a reference ML model that defines correctness, review the performance of each solution, and post the results. MLPerf started with training and then added inference, which is the focus for users of ML. We must also consider factors like cost and power use when evaluating a system, and a reliable benchmark makes those comparisons possible.

Links:


MLCommons.org
Connect-Converge.com


Three Questions:


Frederic: Is it possible to create a truly unbiased AI?
Stephen: How big can ML models get? Will today's hundred-billion-parameter models look small tomorrow, or have we reached the limit?
Andy Hock, Cerebras: What AI application would you build or what AI research would you conduct if you were not constrained by compute?

Guests and Hosts


David Kanter is the Executive Director of MLCommons. You can connect with David on Twitter at @TheKanter and on LinkedIn. You can also send David an email at david@mlcommons.org. 
Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen’s writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 4/12/2022 Tags: @SFoskett, @FredericVHaren

