99 episodes


Brain Inspired
Paul Middlebrooks

    • Science
    • 4.8 • 123 Ratings

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

    BI 184 Peter Stratton: Synthesize Neural Principles


    Support the show to get full episodes and join the Discord community.









    Peter Stratton is a research scientist at Queensland University of Technology.





    I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman.



    What Pete argues for is what he calls a sideways-in approach. A bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", though I'd say it sits between that and the "implementation level", because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?




    Peter's website.



    Related papers

    Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?



    Making a Spiking Net Work: Robust brain-like unsupervised machine learning.



    Global segregation of cortical activity and metastable dynamics.



    Unlocking neural complexity with a robotic key






    0:00 - Intro
    3:50 - AI background, neuroscience principles
    8:00 - Overall view of modern AI
    14:14 - Moravec's paradox and robotics
    20:50 - Understanding movement to understand cognition
    30:01 - How close are we to understanding brains/minds?
    32:17 - Pete's goal
    34:43 - Principles from neuroscience to build AI
    42:39 - Levels of abstraction and implementation
    49:57 - Mental disorders and robustness
    55:58 - Function vs. implementation
    1:04:04 - Spiking networks
    1:07:57 - The roadmap
    1:19:10 - AGI
    1:23:48 - The terms AGI and AI
    1:26:12 - Consciousness

    • 1 hr 30 min
    BI 183 Dan Goodman: Neural Reckoning


    Support the show to get full episodes and join the Discord community.











    You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.



    All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious, because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick.
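
    To make that contrast concrete, here's a minimal, hypothetical sketch - not something from the episode - of a single leaky integrate-and-fire neuron written with Brian 2, the open-source spiking simulator Dan created (the constants are arbitrary illustration values):

    # Minimal sketch, assuming Brian 2 is installed (pip install brian2);
    # the numbers here are arbitrary, for illustration only.
    from brian2 import NeuronGroup, SpikeMonitor, run, ms

    tau = 10 * ms                        # membrane time constant
    eqs = 'dv/dt = (1.1 - v) / tau : 1'  # leaky integration toward a constant drive

    neuron = NeuronGroup(1, eqs, threshold='v > 1', reset='v = 0', method='exact')
    spikes = SpikeMonitor(neuron)        # records discrete spike times

    run(100 * ms)
    print('Spike times:', spikes.t[:])   # all-or-nothing events, not graded outputs

    The point is just the shape of the computation: information leaves this unit as discrete spike times rather than graded activations, which is exactly the property modern deep networks discard and which Dan's group studies.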



    We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.



    So what does it mean that modern neural networks disregard spiking altogether?



    Maybe spiking really isn't important for processing and transmitting information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking, SNNs, and a host of other topics.




    Neural Reckoning Group.



    Twitter: @neuralreckoning.



    Related papers

    Neural heterogeneity promotes robust learning.



    Dynamics of specialization in neural modules under resource constraints.



    Multimodal units fuse-then-accumulate evidence across channels.



    Visualizing a joint future of neuroscience and neuromorphic engineering.






    0:00 - Intro
    3:47 - Why spiking neural networks, and a mathematical background
    13:16 - Efficiency
    17:36 - Machine learning for neuroscience
    19:38 - Why not jump ship from SNNs?
    23:35 - Hard and easy tasks
    29:20 - How brains and nets learn
    32:50 - Exploratory vs. theory-driven science
    37:32 - Static vs. dynamic
    39:06 - Heterogeneity
    46:01 - Unifying principles vs. a hodgepodge
    50:37 - Sparsity
    58:05 - Specialization and modularity
    1:00:51 - Naturalistic experiments
    1:03:41 - Projects for SNN research
    1:05:09 - The right level of abstraction
    1:07:58 - Obstacles to progress
    1:12:30 - Levels of explanation
    1:14:51 - What has AI taught neuroscience?
    1:22:06 - How has neuroscience helped AI?

    • 1 hr 28 min
    BI 182: John Krakauer Returns… Again


    Support the show to get full episodes and join the Discord community.







    Check out my free video series about what's missing in AI and Neuroscience











    John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately, things like:




    Whether brains actually reorganize after damage



    The role of brain plasticity in general



    The path toward and the path not toward understanding higher cognition



    How to fix motor problems after strokes



    AGI



    Functionalism, consciousness, and much more.




    Relevant links:




    John's Lab.



    Twitter: @blamlab



    Related papers

    What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.



    Against cortical reorganisation.





    Other episodes with John:

    BI 025 John Krakauer: Understanding Cognition



    BI 077 David and John Krakauer: Part 1



    BI 078 David and John Krakauer: Part 2



    BI 113 David Barack and John Krakauer: Two Views On Cognition






    Time stamps
    0:00 - Intro
    2:07 - It's a podcast episode!
    6:47 - Stroke and Sherrington neuroscience
    19:26 - Thinking vs. moving, representations
    34:15 - What's special about humans?
    56:35 - Does cortical reorganization happen?
    1:14:08 - Current era in neuroscience

    • 1 hr 25 min
    BI 181 Max Bennett: A Brief History of Intelligence


    Support the show to get full episodes and join the Discord community.







    Check out my free video series about what's missing in AI and Neuroscience













    By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. Over countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.



    Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing the brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through, I think, all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity for vicarious trial and error (imagining possible actions before trying them out), for counterfactual learning (considering what would have happened if things had gone differently), and for episodic memory and imagination.



    The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.




    Twitter:

    @maxsbennett





    Book:

    A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.






    0:00 - Intro
    5:26 - Why evolution is important
    7:22 - MacLean's triune brain
    14:59 - Breakthrough 1: Steering
    29:06 - Fish intelligence
    40:38 - Breakthrough 3: Mentalizing
    52:44 - How could we improve the human brain?
    1:00:44 - What is intelligence?
    1:13:50 - Breakthrough 5: Speaking

    • 1 hr 27 min
    BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding


    Support the show to get full episodes and join the Discord community.









    Welcome to another special panel discussion episode.



    I was recently invited to moderate a discussion among six panelists at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before, on episode 103. Ken helps me introduce the meetup and panel discussion for a few minutes. The general goal was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount them, and so on.



    There isn't video of the event, just audio, and because we were all sharing microphones and passing them around, you'll hear some microphone-handling noise along the way - but I did my best to optimize the audio quality, and I believe it turned out mostly quite listenable.




    Aspirational Neuroscience



    Panelists:

    Anton Arkhipov, Allen Institute for Brain Science.

    @AntonSArkhipov





    Konrad Kording, University of Pennsylvania.

    @KordingLab





    Tomás Ryan, Trinity College Dublin.

    @TJRyan_77





    Srinivas Turaga, Janelia Research Campus.



    Dong Song, University of Southern California.

    @dongsong





    Zhihao Zheng, Princeton University.

    @zhihaozheng








    0:00 - Intro
    1:45 - Ken Hayworth
    14:09 - Panel Discussion

    • 1 hr 29 min
    BI 179 Laura Gradowski: Include the Fringe with Pluralism


    Support the show to get full episodes and join the Discord community.







    Check out my free video series about what's missing in AI and Neuroscience









    Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no single unified account of any scientific field, and that we should be tolerant of, and even welcome, a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is kind of a buzzword right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case she cites examples from the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream, accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, and so on.



    We discuss a wide range of topics, but also some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. There are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.



    Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh.



    Facing the Fringe.



    Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe



    0:00 - Intro
    3:57 - What is fringe?
    10:14 - What makes a theory fringe?
    14:31 - Fringe to mainstream
    17:23 - Garcia effect
    28:17 - Fringe to mainstream: other examples
    32:38 - Fringe and consciousness
    33:19 - Words meanings change over time
    40:24 - Pseudoscience
    43:25 - How fringe becomes mainstream
    47:19 - More fringe characteristics
    50:06 - Pluralism as a solution
    54:02 - Progress
    1:01:39 - Encyclopedia of theories
    1:09:20 - When to reject a theory
    1:20:07 - How fringe becomes fringe
    1:22:50 - Marginalization
    1:27:53 - Recipe for fringe theorist

    • 1 hr 39 min

Customer Reviews

4.8 out of 5
123 Ratings


oystersandsauce ,

Sincere and Personal

This is clearly a podcast built out of passion for a deep and important topic, the topic of who we are, what underlies our cognition, how do we build and understand cognition

SAARKÉSH ,

NEURO-AI course : does it work???

Enrolled in the course that is being advertised currently with the promotion. Sadly, all I have received is just an invoice, no login info or other info on how to get started! Furthermore, m stuck with no support option!
Dump your web host Paul!

Abder E ,

Amazing insights

Wow the creativity episode was amazing!!! Thank you for sharing this valuable knowledge

Top Podcasts In Science

Hidden Brain, Shankar Vedantam
Mike Carruthers | OmniCast Media | Cumulus Podcast Network
WNYC Studios
Alie Ward
Neil deGrasse Tyson
Sam Harris

You Might Also Like

Santa Fe Institute
Ginger Campbell, MD
Machine Learning Street Talk (MLST)
Steven Strogatz, Janna Levin and Quanta Magazine
Sean Carroll | Wondery
Quanta Magazine