10 episodes

Listen along as we try and dissect various Machine Learning papers that just haven't got the love and attention they deserve.
Twitter: https://twitter.com/underrated_ml
Voting Page: https://forms.gle/97MgHvTkXgdB41TC8

Underrated ML Sara Hooker & Sean Hooker

    • Technology
    • 4.0 • 4 Ratings


    Ari Morcos - The importance of certain layers in DNNs

    This week we are joined by Ari Morcos. Ari is a research scientist at Facebook AI Research (FAIR) in Menlo Park, working on understanding the mechanisms underlying neural network computation and function, and on using these insights to build machine learning systems more intelligently. He has worked on a variety of topics, including the lottery ticket hypothesis, self-supervised learning, the mechanisms underlying common regularizers, and the properties predictive of generalization. He has also developed methods to compare representations across networks, studied the role of single units in computation, and devised strategies to measure abstraction in neural network representations. Previously, he worked at DeepMind in London.

    Ari earned his PhD working with Chris Harvey at Harvard University. For his thesis, he developed methods to understand how neuronal circuits perform the computations necessary for complex behaviour. In particular, his research focused on how the parietal cortex contributes to evidence accumulation during decision-making.

    In this episode, we discuss the importance of certain layers within neural networks.

    Underrated ML Twitter: https://twitter.com/underrated_ml
    Ari Morcos Twitter: https://twitter.com/arimorcos

    Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8

    Link to the paper:
    "Are All Layers Created Equal?" [paper]

    • 57 min
    Naila Murray - Interestingness predictions and getting to grips with data privacy

    This week we are joined by Naila Murray. Naila obtained a B.Sc. in Electrical Engineering from Princeton University in 2007. In 2012, she received her PhD from the Universitat Autonoma de Barcelona, in affiliation with the Computer Vision Center. She joined NAVER LABS Europe (then Xerox Research Centre Europe) in January 2013, working on topics including fine-grained visual categorization, image retrieval, and visual attention. From 2015 to 2019 she led the computer vision team at NLE, and she currently serves as NLE's director of science. She has served as an area chair for ICLR 2018, ICCV 2019, ICLR 2019, CVPR 2020, and ECCV 2020, and as a programme chair for ICLR 2021. Her research interests include representation learning and multi-modal search.

    We discuss using sparse pairwise comparisons to learn a ranking function that is robust to outliers. We also take a look at using generative models to make use of once-inaccessible datasets.

    Underrated ML Twitter: https://twitter.com/underrated_ml
    Naila Murray Twitter: https://twitter.com/NailaMurray

    Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8

    Links to the papers:
    "Interestingness Prediction by Robust Learning to Rank" [paper]
    "Generative Models for Effective ML on Private, Decentralized Datasets" [paper]

    • 1 hr 8 min
    Julius Adebayo - Understanding a microprocessor and the evolution of hardware

    This week we are joined by Julius Adebayo. Julius is a CS PhD student at MIT, interested in the safe deployment of ML-based systems as it relates to privacy/security, interpretability, fairness, and robustness.
    He is motivated by the need to ensure that ML-based systems demonstrate safe behaviour when deployed.

    On this week's episode we discuss how hardware has evolved over time and what that means for deep learning research. We also examine what attempting to analyse a microprocessor with the tools of neuroscience can tell us about neuroscience itself.

    Underrated ML Twitter: https://twitter.com/underrated_ml
    Julius Adebayo Twitter: https://twitter.com/julius_adebayo

    Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8

    Links to the papers:
    "Could a Neuroscientist Understand a Microprocessor?" [paper]
    "When will computer hardware match the human brain?" [paper]

    • 1 hr 10 min
    Anna Huang - Metaphor generation and ML for child welfare

    We open season two of Underrated ML with Anna Huang on the show. Anna Huang is a Research Scientist at Google Brain, working on the Magenta project. Her research focuses on designing generative models to make creating music more approachable. She is the creator of Music Transformer and of Coconet, the ML model that powered Google's first AI Doodle, the Bach Doodle.
    She holds a PhD in computer science from Harvard University and was a recipient of the NSF Graduate Research Fellowship. She spent the later parts of her PhD as a visiting research student at the Montreal Institute for Learning Algorithms (MILA). She publishes in machine learning, human-computer interaction, and music, at conferences such as ICLR, IUI, CHI, and ISMIR.
    She has been a judge on the Eurovision AI Song Contest, and her compositions have won awards including first place in the San Francisco Choral Artists' a cappella composition contest. She holds a master's in media arts and sciences from the MIT Media Lab, and a B.S. in computer science and a B.M. in music composition, both from the University of Southern California. She grew up in Hong Kong, where she learned to play the guzheng.

    On the episode we discuss Metaphoria by Katy Gero and Lydia Chilton, a fascinating tool that lets users generate metaphors from only a select number of words. We also discuss current trends regarding the dangers of AI, with a case study on child welfare.

    Underrated ML Twitter: https://twitter.com/underrated_ml
    Anna Huang Twitter: https://twitter.com/huangcza

    Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8

    Links to the papers:
    Gero, Katy Ilonka, and Lydia B. Chilton. "Metaphoria: An Algorithmic Companion for Metaphor Creation." CHI 2019. [paper] [online paper] [talk] [demo]
    "A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions" - [paper]
    Additional Links:
    Compton, Kate, and Michael Mateas. "Casual Creators." ICCC 2015. [paper]
    Fiebrink, Rebecca, Dan Trueman, and Perry R. Cook. "A Meta-Instrument for Interactive, On-the-Fly Machine Learning." NIME 2009. [paper] [talk] [tool]
    Huang, Cheng-Zhi Anna, et al. "The Bach Doodle: Approachable music composition with machine learning at scale." ISMIR 2019. [paper] [blog] [doodle]

    • 1 hr 13 min
    Stephen Merity - Strongly typed RNNs and morphogenesis

    We conclude season one of Underrated ML by having Stephen Merity on as our guest. Stephen has worked at various institutions such as MetaMind and Salesforce ohana, Google Sydney, Freelancer.com, the Schwa Lab at the University of Sydney, the team at...

    • 1 hr 33 min
    Sebastian Ruder - Language independence and material properties

    This week we are joined by Sebastian Ruder. He is a research scientist at DeepMind, London. He has also worked at a variety of institutions such as AYLIEN, Microsoft, IBM's Extreme Blue, Google Summer of Code, and SAP. These experiences were completed in tandem with his studies, which included Computational Linguistics at the University of Heidelberg, Germany, and at Trinity College, Dublin, before undertaking a PhD in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics.

    This week we discuss language independence and diversity in natural language processing, whilst also taking a look at attempts to identify material properties from images.

    As discussed in the podcast, if you would like to donate to the current campaign of "CREATE DONATE EDUCATE", which supports Stop Hate UK, then please find the link below:
    https://www.shorturl.at/glmsz
    Please also find additional links to help support Black colleagues in the area of research:
    Black in AI twitter account: https://twitter.com/black_in_ai
    Mentoring and proofreading sign-up to support our Black colleagues in research: https://twitter.com/le_roux_nicolas/status/1267896907621433344?s=20

    Underrated ML Twitter: https://twitter.com/underrated_ml
    Sebastian Ruder Twitter: https://twitter.com/seb_ruder

    Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8

    Links to the papers:
    “On Achieving and Evaluating Language-Independence in NLP” - https://journals.linguisticsociety.org/elanguage/lilt/article/view/2624.html
    "The State and Fate of Linguistic Diversity and Inclusion in the NLP World” - https://arxiv.org/abs/2004.09095
    "Recognizing Material Properties from Images" - https://arxiv.org/pdf/1801.03127.pdf
    Additional Links:
    Student perspectives on applying to NLP PhD programs: https://blog.nelsonliu.me/2019/10/24/student-perspectives-on-applying-to-nlp-phd-programs/
    Tim Dettmers' post on how to pick your grad school: https://timdettmers.com/2020/03/10/how-to-pick-your-grad-school/
    Rachel Thomas' blog post on why you should blog: https://medium.com/@racheltho/why-you-yes-you-should-blog-7d2544ac1045
    Emily Bender's The Gradient article: https://thegradient.pub/the-benderrule-on-naming-the-languages-we-study-and-why-it-matters/
    Paper on order-sensitive vs order-free methods: https://www.aclweb.org/anthology/N19-1253.pdf
    "Exploring the Origins and Prevalence of Texture Bias in Convolutional Neural Networks": https://arxiv.org/abs/1911.09071
    Sebastian's website where you can find all his blog posts: https://ruder.io/

    • 1 hr 34 min
