146 episodes


Machine Learning Street Talk (MLST)

    • Technology
    • 4.7 • 63 Ratings

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, PhD (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, PhD (MIT) (https://www.linkedin.com/in/dr-keith-duggar/).

    Prof. Chris Bishop's NEW Deep Learning Textbook!


    Professor Chris Bishop is a Technical Fellow and Director at Microsoft Research AI4Science in Cambridge. He is also Honorary Professor of Computer Science at the University of Edinburgh, and a Fellow of Darwin College, Cambridge. In 2004 he was elected Fellow of the Royal Academy of Engineering, in 2007 Fellow of the Royal Society of Edinburgh, and in 2017 Fellow of the Royal Society. Chris was a founding member of the UK AI Council, and in 2019 he was appointed to the Prime Minister’s Council for Science and Technology.



    At Microsoft Research, Chris oversees a global portfolio of industrial research and development, with a strong focus on machine learning and the natural sciences.

    Chris obtained a BA in Physics from Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory.



    Chris's contributions to the field of machine learning have been truly remarkable. He authored what is arguably the original textbook in the field, 'Pattern Recognition and Machine Learning' (PRML), which has served as an essential reference for countless students and researchers around the world; it was his second textbook, following his highly acclaimed 'Neural Networks for Pattern Recognition'.



    Recently, Chris has co-authored a new book with his son, Hugh, titled 'Deep Learning: Foundations and Concepts.' This book aims to provide a comprehensive understanding of the key ideas and techniques underpinning the rapidly evolving field of deep learning. It covers both the foundational concepts and the latest advances, making it an invaluable resource for newcomers and experienced practitioners alike.



    Buy Chris's textbook here:

    https://amzn.to/3vvLcCh



    More about Prof. Chris Bishop:

    https://en.wikipedia.org/wiki/Christopher_Bishop

    https://www.microsoft.com/en-us/research/people/cmbishop/



    Support MLST:

    Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access, exclusive content and lots more.

    https://patreon.com/mlst

    Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA

    If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail.



    TOC:

    00:00:00 - Intro to Chris

    00:06:54 - Changing Landscape of AI

    00:08:16 - Symbolism

    00:09:32 - PRML

    00:11:02 - Bayesian Approach

    00:14:49 - Are NNs One Model or Many, Special vs General

    00:20:04 - Can Language Models Be Creative

    00:22:35 - Sparks of AGI

    00:25:52 - Creativity Gap in LLMs

    00:35:40 - New Deep Learning Book

    00:39:01 - Favourite Chapters

    00:44:11 - Probability Theory

    00:45:42 - AI4Science

    00:48:31 - Inductive Priors

    00:58:52 - Drug Discovery

    01:05:19 - Foundational Bias Models

    01:07:46 - How Fundamental Is Our Physics Knowledge?

    01:12:05 - Transformers

    01:12:59 - Why Does Deep Learning Work?

    01:16:59 - Inscrutability of NNs

    01:18:01 - Example of Simulator

    01:21:09 - Control

    • 1 hr 22 min
    Philip Ball - How Life Works

    Philip Ball - How Life Works

    Dr. Philip Ball is a freelance science writer. He has just written a book called "How Life Works", discussing how the science of biology has advanced in the last 20 years. We focus in particular on the concept of agency.



    He trained as a chemist at the University of Oxford, and as a physicist at the University of Bristol. He worked previously at Nature for over 20 years, first as an editor for physical sciences and then as a consultant editor. His writings on science for the popular press have covered topical issues ranging from cosmology to the future of molecular biology.



    YT: https://www.youtube.com/watch?v=n6nxUiqiz9I



    Transcript link in the YT description



    Philip is the author of many popular books on science, including H2O: A Biography of Water, Bright Earth: The Invention of Colour, The Music Instinct and Curiosity: How Science Became Interested in Everything. His book Critical Mass won the 2005 Aventis Prize for Science Books, while Serving the Reich was shortlisted for the Royal Society Winton Science Book Prize in 2014.



    This is one of Tim's personal favourite MLST shows, so we have designated it a special edition. Enjoy!



    Buy Philip's book "How Life Works" here: https://amzn.to/3vSmNqp



    Support MLST:
    Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access, exclusive content and lots more.
    https://patreon.com/mlst
    Donate: https://www.paypal.com/donate/?hosted...
    If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail.

    • 2 hr 9 min
    Dr. Paul Lessard - Categorical/Structured Deep Learning


    Dr. Paul Lessard and his collaborators have written a paper on "Categorical Deep Learning and Algebraic Theory of Architectures". They aim to make neural networks more interpretable, composable and amenable to formal reasoning. The key is mathematical abstraction, as exemplified by category theory - using monads to develop a more principled, algebraic approach to structuring neural networks.



    We also discussed the limitations of current neural network architectures in terms of their ability to generalise and reason in a human-like way, in particular their inability to perform unbounded computation in the way a Turing machine can. Paul expressed optimism that this is not a fundamental limitation, but an artefact of current architectures and training procedures.



    We explored the power of abstraction, which allows us to focus on the essential structure of a problem while ignoring extraneous details; this can make certain problems far more tractable to reason about. Paul sees category theory as providing a powerful "Lego set" for productively thinking about many practical problems.



    Towards the end, Paul gave an accessible introduction to some core concepts in category theory, such as categories, morphisms, functors and monads, and explained how these abstract constructs can capture essential patterns that arise across different domains of mathematics.
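
    To give a flavour of how these constructs look in code, here is a minimal Python sketch of a monad, using the familiar Maybe/Option pattern as the example. The names and the toy functions are illustrative assumptions, not code from the paper: a monad here is just a pair of operations, unit and bind, that let us compose functions which may fail without the failure handling leaking into the composition itself.

        # Minimal monad sketch (illustrative, not from the paper): the Maybe
        # pattern in Python. unit embeds a value; bind chains computations
        # that may fail, propagating None instead of raising mid-pipeline.
        import math
        from typing import Callable, Optional, TypeVar

        A = TypeVar("A")
        B = TypeVar("B")

        def unit(x: A) -> Optional[A]:
            # unit (a.k.a. return): lift a plain value into the monadic context
            return x

        def bind(m: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
            # bind: apply f only if the previous step succeeded
            return None if m is None else f(m)

        def safe_div(x: float) -> Optional[float]:
            return None if x == 0 else 1.0 / x

        def safe_log(x: float) -> Optional[float]:
            return None if x <= 0 else math.log(x)

        print(bind(bind(unit(2.0), safe_div), safe_log))  # log(0.5), approx -0.693
        print(bind(bind(unit(0.0), safe_div), safe_log))  # None, no exception raised

    The same unit/bind shape recurs for lists, state, probability distributions and more, which is exactly the sense in which one abstract construct captures a pattern shared across many domains.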



    Paul is optimistic about the potential of category theory and related mathematical abstractions to put AI and neural networks on a more robust conceptual foundation to enable interpretability and reasoning. However, significant theoretical and engineering challenges remain in realising this vision.



    Please support us on Patreon. We are entirely funded from Patreon donations right now.

    https://patreon.com/mlst

    If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail.



    Links:

    Categorical Deep Learning: An Algebraic Theory of Architectures

    Bruno Gavranović, Paul Lessard, Andrew Dudzik, Tamara von Glehn, João G. M. Araújo, Petar Veličković

    Paper: https://categoricaldeeplearning.com/



    Symbolica:

    https://twitter.com/symbolica

    https://www.symbolica.ai/



    Dr. Paul Lessard (Principal Scientist - Symbolica)

    https://www.linkedin.com/in/paul-roy-lessard/



    Interviewer: Dr. Tim Scarfe



    TOC:

    00:00:00 - Intro

    00:05:07 - What is the category paper all about

    00:07:19 - Composition

    00:10:42 - Abstract Algebra

    00:23:01 - DSLs for machine learning

    00:24:10 - Inscrutability

    00:29:04 - Limitations with current NNs

    00:30:41 - Generative code / NNs don't recurse

    00:34:34 - NNs are not Turing machines

    00:53:09 - Abstraction

    00:55:11 - Category theory objects

    00:58:06 - Cat theory vs number theory

    00:59:43 - Data and Code are one and the same

    01:08:05 - Syntax and semantics

    01:14:32 - Category DL elevator pitch

    01:17:05 - Abstraction again

    01:20:25 - Lego set for the universe

    01:23:04 - Reasoning

    01:28:05 - Category theory 101

    01:37:42 - Monads

    01:45:59 - Where to learn more cat theory

    • 1 hr 49 min
    Can we build a generalist agent? Dr. Minqi Jiang and Dr. Marc Rigter


    Dr. Minqi Jiang and Dr. Marc Rigter explain an innovative new method for making the intelligence of agents more general-purpose: training them to learn many worlds, reward-free, before the usual goal-directed training we call "reinforcement learning".

    Their new paper is called "Reward-free curricula for training robust world models" https://arxiv.org/pdf/2306.09205.pdf
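
    To make the two-phase recipe concrete, here is a minimal toy sketch. It is not the paper's algorithm (the paper builds a curriculum over environments, whereas this sketch just collects random-action data), and the linear dynamics, model class and greedy planner are all illustrative assumptions.

        # Toy sketch of reward-free world-model pretraining (illustrative only).
        # Phase 1: collect transitions from many "worlds" with no reward signal
        # and fit a shared dynamics model. Phase 2: reuse that model for a task.
        import numpy as np

        rng = np.random.default_rng(0)

        # A family of toy 1-D worlds with dynamics x' = a*x + b*u.
        worlds = [(rng.uniform(0.7, 0.95), rng.uniform(0.5, 1.5)) for _ in range(8)]

        def step(world, x, u):
            a, b = world
            return a * x + b * u

        # Phase 1: reward-free data collection with random actions.
        data = []
        for world in worlds:
            x = rng.normal()
            for _ in range(200):
                u = rng.uniform(-1, 1)
                x_next = step(world, x, u)
                data.append((x, u, x_next))
                x = x_next

        # World model: least-squares fit of x' ~ w0*x + w1*u across all worlds.
        A = np.array([[x, u] for x, u, _ in data])
        y = np.array([xn for _, _, xn in data])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)

        # Phase 2: goal-directed use of the model (drive x towards 0) by picking,
        # at each step, the action the learned model predicts works best.
        def act(x, candidates=np.linspace(-1, 1, 41)):
            preds = w[0] * x + w[1] * candidates
            return candidates[np.argmin(np.abs(preds))]

        x = 3.0
        for _ in range(10):
            x = step(worlds[0], x, act(x))
        print(f"state after 10 planned steps: {x:.3f}")  # contracts towards 0

    The point is that the world model is learned once, reward-free, across many worlds; the goal only enters afterwards, through planning inside the model.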

    https://twitter.com/MinqiJiang
    https://twitter.com/MarcRigter

    Interviewer: Dr. Tim Scarfe

    Please support us on Patreon; Tim is now doing MLST full-time and taking a massive financial hit. If you love MLST and want the show to continue, please show your support! In return you get access to shows very early, plus the private Discord and networking. https://patreon.com/mlst

    We are also looking for show sponsors; if interested, please get in touch at mlstreettalk at gmail.

    MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778

    • 1 hr 57 min
    Prof. Nick Chater - The Language Game (Part 1)


    Nick Chater is Professor of Behavioural Science at Warwick Business School, where he works on rationality and language using a range of theoretical and experimental approaches. We discuss his books The Mind is Flat and The Language Game.



    Please support me on Patreon (this is now my main job!) for access to the private Discord, networking, and early access to content: https://patreon.com/mlst

    MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778

    https://twitter.com/MLStreetTalk



    Buy The Language Game:

    https://amzn.to/3SRHjPm



    Buy The Mind is Flat:

    https://amzn.to/3P3BUUC



    YT version: https://youtu.be/5cBS6COzLN4



    https://www.wbs.ac.uk/about/person/nick-chater/

    https://twitter.com/nickjchater?lang=en

    • 1 hr 43 min
    Kenneth Stanley created a new social network based on serendipity and divergence


    See what Sam Altman advised Kenneth when he left OpenAI! Professor Kenneth Stanley has just launched a brand new type of social network, which he calls a "serendipity network". The idea is that you follow interests, NOT people: it's a social network without the popularity contest. We discuss the philosophy and technology behind the venture in great detail. The main ideas came from Kenneth's famous book "Why Greatness Cannot Be Planned".





    YT version: https://www.youtube.com/watch?v=pWIrXN-yy8g



    Chapters should be baked into the MP3 file now


    MLST public Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778

    Please support our work on Patreon - get access to interviews months early, the private Discord, networking, exclusive content and regular calls with Tim and Keith.
    https://patreon.com/mlst

    Get Maven here:
    https://www.heymaven.com/

    Kenneth:
    https://twitter.com/kenneth0stanley
    https://www.kenstanley.net/home

    Host - Tim Scarfe:
    https://www.linkedin.com/in/ecsquizor/
    https://www.mlst.ai/

    Original MLST show with Kenneth:
    https://www.youtube.com/watch?v=lhYGXYeMq_E

    Tim explains the book more here:

    https://www.youtube.com/watch?v=wNhaz81OOqw

    • 3 hr 15 min

Customer Reviews

4.7 out of 5
63 Ratings


harryoekndn ,

Super informative!

A podcast that has truly changed my life over the past three years. Phenomenal guests, impeccable ideas.

diamond bishop ,

Strong sometimes

Lots of potential and a great host usually but there are too many episodes (most recent included) where he brings on someone who does not know how to debate for a debate. Great example is that Connor keeps taking air time. It really ruins the quality and feels like a high school debate being recorded as he talks down to people and tries to “establish” hypothetical decision points. Go back to the expert discussions and depth over clickbait and you’ll have a great show.

Usability guy ,

Neel Nanda episode was fantastic

Adds to a strong catalog.

Top Podcasts In Technology

Lex Fridman Podcast
Lex Fridman
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
Deep Questions with Cal Newport
Cal Newport
Dwarkesh Podcast
Dwarkesh Patel
Acquired
Ben Gilbert and David Rosenthal
Hard Fork
The New York Times

You Might Also Like

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and al
Alessio + swyx
Dwarkesh Podcast
Dwarkesh Patel
Practical AI: Machine Learning, Data Science
Changelog Media
Eye On A.I.
Craig S. Smith
Super Data Science: ML & AI Podcast with Jon Krohn
Jon Krohn