Why Debian won't distribute AI models any time soon
Deep Dive: AI


Welcome to a brand new episode of Deep Dive: AI! For today’s conversation, we are joined by Mo Zhou, a PhD student at Johns Hopkins University and an official Debian developer since 2018. Tune in as Mo speaks to the evolving role of artificial intelligence driven by big data and hardware capacity, and shares key insights into what sets AlphaGo apart from previous algorithms, how data, training code, and inference code make up a complete application, and why training data needs to be released along with any free software. You’ll also learn about validation data and the difference powerful hardware makes, as well as why Debian is so strict about its practice of offering only free software. Finally, Mo shares his predictions for the free software community (and what he would like to see happen in an ideal world) before outlining his own plans for the future, which include a strong element of research.
If you’re looking to learn about the uphill climb for open source artificial intelligence, plus so much more, you won’t want to miss this episode!
Full transcript.
Key points from this episode:

Background on today’s guest, Mo Zhou: PhD student and Debian developer.
His recent Machine Learning Policy proposal at Debian.
Defining artificial intelligence and its evolution, driven by big data and hardware capacity.
Why the recent advances in deep learning would be impossible without modern hardware.
Where AlphaGo differs from past algorithms.
The roles that data, training code, and inference code play in a complete application.
Why you have to release training data with any free software.
The financial and time expense of classifying images.
What you need access to in order to modify an existing model.
The validation data set collected by the research community.
What to expect from the process of retraining a model.
What you can gain from powerful hardware.
Why Debian is so strict in its practice of free software.
Problems that occur when big companies charge for their ecosystems.
What Zhou is expecting from the future of the free software community.
Which licensing schemes are most popular and why.
An ideal future for Open Source AI.
Zhou’s plans for the future and why they include research.

Links mentioned in today’s episode:

Mo Zhou on LinkedIn
Mo Zhou on GitHub
Mo Zhou
Johns Hopkins University
Debian
Debian Deep Learning Team
DeepMind
Apache

Credits
Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.
This podcast is sponsored by GitHub, DataStax and Google.
No sponsor had any right or opportunity to approve or disapprove the content of this podcast.
