99 episodes


Brain Inspired
Paul Middlebrooks

    • Science

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

    BI 187: COSYNE 2024 Neuro-AI Panel


    Support the show to get full episodes and join the Discord community.









    Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer, and I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to hold different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple of episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, with long dead space and such removed. There's about 30 minutes of just the panel, and then the panelists start fielding questions from the audience.




    COSYNE.

    • 1 hr 3 min
    BI 186 Mazviita Chirimuuta: The Brain Abstracted


    Support the show to get full episodes and join the Discord community.











    Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.



    She largely argues that when we try to understand something complex, like the brain, using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, and every way we try to attack a problem are all both necessary for doing the science and limiting to the interpretations we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.






    Mazviita's University of Edinburgh page.



    The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.



    Previous Brain Inspired episodes:

    BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality



    BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind






    0:00 - Intro
    5:28 - Neuroscience to philosophy
    13:39 - Big themes of the book
    27:44 - Simplifying by mathematics
    32:19 - Simplifying by reduction
    42:55 - Simplification by analogy
    46:33 - Technology precedes science
    55:04 - Theory, technology, and understanding
    58:04 - Cross-disciplinary progress
    58:45 - Complex vs. simple(r) systems
    1:08:07 - Is science bound to study stability?
    1:13:20 - 4E for philosophy but not neuroscience?
    1:28:50 - ANNs as models
    1:38:38 - Study of mind

    • 1 hr 43 min
    BI 185 Eric Yttri: Orchestrating Behavior


    Support the show to get full episodes and join the Discord community.











    As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.



    Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. To study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks or, in my case, while they behave freely, wandering around an enclosed space.



    We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We also talk about the valid question, "What is a behavior?", and lots more.



    Yttri Lab




    Twitter: @YttriLab



    Related papers

    Opponent and bidirectional control of movement velocity in the basal ganglia.



    B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors.






    0:00 - Intro
    2:36 - Eric's background
    14:47 - Different animal models
    17:59 - ANNs as models for animal brains
    24:34 - Main question
    25:43 - How circuits produce appropriate behaviors
    26:10 - Cerebellum
    27:49 - What do motor cortex and basal ganglia do?
    49:12 - Neuroethology
    1:06:09 - What is a behavior?
    1:11:18 - Categorize behavior (B-SOiD)
    1:22:01 - Real behavior vs. ANNs
    1:33:09 - Best era in neuroscience

    • 1 hr 44 min
    BI 184 Peter Stratton: Synthesize Neural Principles


    Support the show to get full episodes and join the Discord community.









    Peter Stratton is a research scientist at Queensland University of Technology.





    I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular. For example, he works with spiking neural networks, like my last guest, Dan Goodman.



    What Pete argues for is what he calls a sideways-in approach. A bottom-up approach is to build things like we find them in the brain, put them together, and, voila, we get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation that brains employ and current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", though I'd say it sits between that and the "implementation level", because Pete is focused on the synthesis of different kinds of brain operations: how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. We discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?




    Peter's website.



    Related papers

    Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?



    Making a Spiking Net Work: Robust brain-like unsupervised machine learning.



    Global segregation of cortical activity and metastable dynamics.



    Unlocking neural complexity with a robotic key






    0:00 - Intro
    3:50 - AI background, neuroscience principles
    8:00 - Overall view of modern AI
    14:14 - Moravec's paradox and robotics
    20:50 - Understanding movement to understand cognition
    30:01 - How close are we to understanding brains/minds?
    32:17 - Pete's goal
    34:43 - Principles from neuroscience to build AI
    42:39 - Levels of abstraction and implementation
    49:57 - Mental disorders and robustness
    55:58 - Function vs. implementation
    1:04:04 - Spiking networks
    1:07:57 - The roadmap
    1:19:10 - AGI
    1:23:48 - The terms AGI and AI
    1:26:12 - Consciousness

    • 1 hr 30 min
    BI 183 Dan Goodman: Neural Reckoning


    Support the show to get full episodes and join the Discord community.











    You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.



    All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious, because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick.



    We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.



    So what does it mean that modern neural networks disregard spiking altogether?



    Maybe spiking really isn't necessary to process and transmit information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking, SNNs, and a host of other topics.
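
    To make that concrete, here is a minimal sketch of a leaky integrate-and-fire neuron written for the Brian 2 simulator mentioned above. It is not code from Dan's work or from the episode, and the parameter values (the time constant, drive, threshold, and reset) are placeholder assumptions chosen only so the toy neurons actually fire.

        # Minimal leaky integrate-and-fire sketch in Brian 2 (illustrative only).
        from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

        tau = 10*ms           # membrane time constant (assumed value)
        v_drive = -40*mV      # constant drive above threshold (assumed value)

        # Membrane potential relaxes toward v_drive; crossing -50 mV counts as
        # a spike, after which the neuron resets to -70 mV.
        eqs = 'dv/dt = (v_drive - v) / tau : volt'
        neurons = NeuronGroup(3, eqs, threshold='v > -50*mV',
                              reset='v = -70*mV', method='exact')
        neurons.v = -70*mV    # start at rest

        spikes = SpikeMonitor(neurons)
        run(100*ms)           # simulate 100 ms of biological time
        print(spikes.count)   # spikes emitted by each of the 3 neurons

    The detail to notice is that communication happens through discrete spike events rather than the continuous activations of a standard ANN, which is exactly the difference this episode turns on.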




    Neural Reckoning Group.



    Twitter: @neuralreckoning.



    Related papers

    Neural heterogeneity promotes robust learning.



    Dynamics of specialization in neural modules under resource constraints.



    Multimodal units fuse-then-accumulate evidence across channels.



    Visualizing a joint future of neuroscience and neuromorphic engineering.






    0:00 - Intro
    3:47 - Why spiking neural networks, and a mathematical background
    13:16 - Efficiency
    17:36 - Machine learning for neuroscience
    19:38 - Why not jump ship from SNNs?
    23:35 - Hard and easy tasks
    29:20 - How brains and nets learn
    32:50 - Exploratory vs. theory-driven science
    37:32 - Static vs. dynamic
    39:06 - Heterogeneity
    46:01 - Unifying principles vs. a hodgepodge
    50:37 - Sparsity
    58:05 - Specialization and modularity
    1:00:51 - Naturalistic experiments
    1:03:41 - Projects for SNN research
    1:05:09 - The right level of abstraction
    1:07:58 - Obstacles to progress
    1:12:30 - Levels of explanation
    1:14:51 - What has AI taught neuroscience?
    1:22:06 - How has neuroscience helped AI?

    • 1 hr 28 min
    BI 182: John Krakauer Returns… Again


    Support the show to get full episodes and join the Discord community.







    Check out my free video series about what's missing in AI and Neuroscience











    John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like:




    Whether brains actually reorganize after damage



    The role of brain plasticity in general



    The path toward and the path not toward understanding higher cognition



    How to fix motor problems after strokes



    AGI



    Functionalism, consciousness, and much more.




    Relevant links:




    John's Lab.



    Twitter: @blamlab



    Related papers

    What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.



    Against cortical reorganisation.





    Other episodes with John:

    BI 025 John Krakauer: Understanding Cognition



    BI 077 David and John Krakauer: Part 1



    BI 078 David and John Krakauer: Part 2



    BI 113 David Barack and John Krakauer: Two Views On Cognition






    Time stamps
    0:00 - Intro
    2:07 - It's a podcast episode!
    6:47 - Stroke and Sherrington neuroscience
    19:26 - Thinking vs. moving, representations
    34:15 - What's special about humans?
    56:35 - Does cortical reorganization happen?
    1:14:08 - Current era in neuroscience

    • 1 hr 25 min
