99 episodes

Neuroscience and artificial intelligence work better together. Brain inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Brain Inspired Paul Middlebrooks

    • Science
    • 5.0 • 7 Ratings


    BI 186 Mazviita Chirimuuta: The Brain Abstracted


    Support the show to get full episodes and join the Discord community.











    Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.



    She largely argues that when we try to understand something complex like the brain using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, and every way we try to attack a problem is both necessary for doing the science and a limit on the interpretations we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.






    Mazviita's University of Edinburgh page.



    The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.



    Previous Brain Inspired episodes:

    BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality



    BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind






    0:00 - Intro
    5:28 - Neuroscience to philosophy
    13:39 - Big themes of the book
    27:44 - Simplifying by mathematics
    32:19 - Simplifying by reduction
    42:55 - Simplification by analogy
    46:33 - Technology precedes science
    55:04 - Theory, technology, and understanding
    58:04 - Cross-disciplinary progress
    58:45 - Complex vs. simple(r) systems
    1:08:07 - Is science bound to study stability?
    1:13:20 - 4E for philosophy but not neuroscience?
    1:28:50 - ANNs as models
    1:38:38 - Study of mind

    • 1 hr 43 min
    BI 185 Eric Yttri: Orchestrating Behavior


    Support the show to get full episodes and join the Discord community.











    As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.



    Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. To study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks or, in my case, while they behave freely, wandering around an enclosed space.



    We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We also talk about the valid question, "What is a behavior?", and lots more.



    Yttri Lab




    Twitter: @YttriLab



    Related papers

    Opponent and bidirectional control of movement velocity in the basal ganglia.



    B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors.






    0:00 - Intro
    2:36 - Eric's background
    14:47 - Different animal models
    17:59 - ANNs as models for animal brains
    24:34 - Main question
    25:43 - How circuits produce appropriate behaviors
    26:10 - Cerebellum
    27:49 - What do motor cortex and basal ganglia do?
    49:12 - Neuroethology
    1:06:09 - What is a behavior?
    1:11:18 - Categorize behavior (B-SOiD)
    1:22:01 - Real behavior vs. ANNs
    1:33:09 - Best era in neuroscience

    • 1 hr 44 min
    BI 184 Peter Stratton: Synthesize Neural Principles


    Support the show to get full episodes and join the Discord community.









    Peter Stratton is a research scientist at Queensland University of Technology.





    I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular: for example, he works with spiking neural networks, like my last guest, Dan Goodman.



    What Pete argues for is what he calls a sideways-in approach. A bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", though I'd say it sits between that and the "implementation level", because Pete is focused on the synthesis of different kinds of brain operations: how they intermingle to perform computations and produce emergent properties. He thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?




    Peter's website.



    Related papers

    Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?



    Making a Spiking Net Work: Robust brain-like unsupervised machine learning.



    Global segregation of cortical activity and metastable dynamics.



    Unlocking neural complexity with a robotic key






    0:00 - Intro
    3:50 - AI background, neuroscience principles
    8:00 - Overall view of modern AI
    14:14 - Moravec's paradox and robotics
    20:50 - Understanding movement to understand cognition
    30:01 - How close are we to understanding brains/minds?
    32:17 - Pete's goal
    34:43 - Principles from neuroscience to build AI
    42:39 - Levels of abstraction and implementation
    49:57 - Mental disorders and robustness
    55:58 - Function vs. implementation
    1:04:04 - Spiking networks
    1:07:57 - The roadmap
    1:19:10 - AGI
    1:23:48 - The terms AGI and AI
    1:26:12 - Consciousness

    • 1 hr 30 min
    BI 183 Dan Goodman: Neural Reckoning


    Support the show to get full episodes and join the Discord community.











    You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.



    All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious, because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick.



    We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.



    So what does it mean that modern neural networks disregard spiking altogether?



    Maybe spiking really isn't important for processing and transmitting information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking, SNNs, and a host of other topics.
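
    To make that contrast concrete, here is a minimal Python sketch (mine, not from the episode, and not using Dan's Brian simulator) comparing a standard rate-based artificial unit with a leaky integrate-and-fire spiking neuron. The parameter values are illustrative assumptions.

```python
import math

def rate_unit(inputs, weights, bias=0.0):
    """A standard ANN unit: weighted sum passed through a sigmoid.
    Its output is a continuous value, with no spikes and no notion of time."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def lif_neuron(input_current, dt=0.1, tau=10.0, v_rest=0.0,
               v_threshold=1.0, v_reset=0.0):
    """A leaky integrate-and-fire neuron: the membrane voltage v leaks toward
    v_rest, integrates its input over time, and emits a discrete spike
    whenever it crosses threshold. Returns the list of spike times."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_threshold:
            spike_times.append(step * dt)  # record when the spike occurred
            v = v_reset                    # reset the membrane after the spike
    return spike_times

if __name__ == "__main__":
    # The rate unit maps an input vector to a single number, instantaneously.
    print("rate output:", rate_unit([0.5, 0.2], [1.0, -0.5]))

    # The spiking neuron needs input over time; a constant drive above
    # threshold produces a regular train of all-or-none spikes.
    drive = [1.5] * 1000
    print("first spike times:", lif_neuron(drive)[:5])
```

    The point of the sketch is only the structural difference discussed here: the rate unit has no time and no discrete events, while the spiking neuron communicates through the timing of all-or-none spikes.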




    Neural Reckoning Group.



    Twitter: @neuralreckoning.



    Related papers

    Neural heterogeneity promotes robust learning.



    Dynamics of specialization in neural modules under resource constraints.



    Multimodal units fuse-then-accumulate evidence across channels.



    Visualizing a joint future of neuroscience and neuromorphic engineering.






    0:00 - Intro
    3:47 - Why spiking neural networks, and a mathematical background
    13:16 - Efficiency
    17:36 - Machine learning for neuroscience
    19:38 - Why not jump ship from SNNs?
    23:35 - Hard and easy tasks
    29:20 - How brains and nets learn
    32:50 - Exploratory vs. theory-driven science
    37:32 - Static vs. dynamic
    39:06 - Heterogeneity
    46:01 - Unifying principles vs. a hodgepodge
    50:37 - Sparsity
    58:05 - Specialization and modularity
    1:00:51 - Naturalistic experiments
    1:03:41 - Projects for SNN research
    1:05:09 - The right level of abstraction
    1:07:58 - Obstacles to progress
    1:12:30 - Levels of explanation
    1:14:51 - What has AI taught neuroscience?
    1:22:06 - How has neuroscience helped AI?

    • 1 hr 28 min
    BI 182: John Krakauer Returns… Again


    Support the show to get full episodes and join the Discord community.







    Check out my free video series about what's missing in AI and Neuroscience











    John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like:




    Whether brains actually reorganize after damage



    The role of brain plasticity in general



    The path toward and the path not toward understanding higher cognition



    How to fix motor problems after strokes



    AGI



    Functionalism, consciousness, and much more.




    Relevant links:




    John's Lab.



    Twitter: @blamlab



    Related papers

    What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.



    Against cortical reorganisation.





    Other episodes with John:

    BI 025 John Krakauer: Understanding Cognition



    BI 077 David and John Krakauer: Part 1



    BI 078 David and John Krakauer: Part 2



    BI 113 David Barack and John Krakauer: Two Views On Cognition






    Time stamps
    0:00 - Intro
    2:07 - It's a podcast episode!
    6:47 - Stroke and Sherrington neuroscience
    19:26 - Thinking vs. moving, representations
    34:15 - What's special about humans?
    56:35 - Does cortical reorganization happen?
    1:14:08 - Current era in neuroscience

    • 1 hr 25 min
    BI 181 Max Bennett: A Brief History of Intelligence


    Support the show to get full episodes and join the Discord community.







    Check out my free video series about what's missing in AI and Neuroscience













    By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.



    Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out such functions. We go through, I think, all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity for vicarious trial and error (imagining possible actions before trying them out), for counterfactual learning (working out what would have happened if things had gone differently than they did), and for episodic memory and imagination.
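
    To make that idea concrete, here is a toy Python sketch (mine, not from the book) contrasting plain trial and error, where an agent must actually take actions to learn their outcomes, with vicarious trial and error, where an internal model is used to imagine each action's outcome before acting. The tiny world and its rewards are invented for illustration.

```python
import random

# A toy world: each action leads to a fixed reward (invented for illustration).
WORLD = {"go_left": 0.0, "go_right": 1.0, "stay": 0.2}

def trial_and_error(n_trials=3):
    """Try actions in the real world and keep the best one found.
    Every trial pays the cost (and risk) of actually acting."""
    best_action, best_reward = None, float("-inf")
    for _ in range(n_trials):
        action = random.choice(list(WORLD))
        reward = WORLD[action]
        if reward > best_reward:
            best_action, best_reward = action, reward
    return best_action

def vicarious_trial_and_error(model):
    """Imagine each action's outcome with an internal model, then act once
    on the best imagined outcome; no real-world cost is paid while imagining."""
    imagined = {action: model(action) for action in WORLD}
    return max(imagined, key=imagined.get)

if __name__ == "__main__":
    # For simplicity the internal model is perfect; in an animal it would be
    # learned from experience, and the same machinery could replay
    # counterfactual outcomes after the fact.
    def internal_model(action):
        return WORLD[action]

    print("trial and error picks:", trial_and_error())
    print("vicarious trial and error picks:", vicarious_trial_and_error(internal_model))
```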



    The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.




    Twitter:

    @maxsbennett





    Book:

    A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.






    0:00 - Intro
    5:26 - Why evolution is important
    7:22 - MacLean's triune brain
    14:59 - Breakthrough 1: Steering
    29:06 - Fish intelligence
    40:38 - Breakthrough 3: Mentalizing
    52:44 - How could we improve the human brain?
    1:00:44 - What is intelligence?
    1:13:50 - Breakthrough 5: Speaking

    • 1 hr 27 min

Customer Reviews

5.0 out of 5
7 Ratings

cblackall,

Best brain-food around…

This is an excellent podcast. The guests are experts in their fields and are uniformly good communicators. The host asks good questions and does not overly intrude. I've already learnt a lot from listening and I look forward to its continuation.
