Pooling Layers and Learning from Brains

Underrated ML

This week we take a look at the need for pooling layers within CNNs and discuss how large-scale neuroscience data can be used to regularize CNNs.

We are also very pleased to have Rosanne Liu join us on the show. Rosanne is a senior research scientist and a founding member of Uber AI. She is interested in making neural networks a better place and currently runs a deep learning reading group called "Deep Learning: Classics and Trends".


Rosanne Liu Twitter: https://twitter.com/savvyrl?lang=en

Please let us know who you think presented the most underrated paper using the form below:

https://forms.gle/97MgHvTkXgdB41TC8

Also let us know any suggestions for future papers or guests:

https://docs.google.com/forms/d/e/1FAIpQLSeWoZnImRHXy8MTeBhKA4bxRPVVnVXAUb0bLIP0bQpiTwX6uA/viewform

Links to the papers:

"Learning From Brains How to Regularize Machines" - https://arxiv.org/pdf/1911.05072.pdf
"Pooling is neither necessary nor sufficient for appropriate deformation stability in CNNs" - https://arxiv.org/pdf/1804.04438.pdf
"Plug and play language models: A simple approach to controlled text generation" - https://arxiv.org/pdf/1912.02164.pdf
