99 episodes


Brain Inspired Paul Middlebrooks

    • Science
    • 4.8 • 115 Ratings

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

    BI 148 Gaute Einevoll: Brain Simulations

    Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs. ugly models," and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).



    Gaute's website.
    Twitter: @GauteEinevoll.
    Related papers:
    The Scientific Case for Brain Simulations.
    Brain signal predictions from multi-scale networks using a linearized framework.
    Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex.
    LFPy: a Python module for calculation of extracellular potentials from multicompartment neuron models.
    Gaute's Sense and Science podcast.
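    The LFPy module above is the software side of the "measurement physics" idea: given a simulated neuron's transmembrane currents, compute the extracellular signals an electrode would actually record. A rough sketch of that workflow in Python, modeled on the usage pattern in the LFPy paper (the morphology file is a placeholder, and exact call signatures vary between LFPy releases):

    import numpy as np
    import LFPy

    # Build a multicompartment neuron model from a morphology file
    # ('morphology.hoc' is a stand-in for a real reconstruction).
    cell = LFPy.Cell(morphology='morphology.hoc', tstart=0., tstop=100.)

    # Attach a single excitatory synapse near the apical dendrite
    # and drive it with predefined spike times.
    synapse = LFPy.Synapse(cell, idx=cell.get_closest_idx(x=0, y=0, z=600),
                           syntype='ExpSyn', tau=2., weight=0.005,
                           record_current=True)
    synapse.set_spike_times(np.array([20., 40., 60.]))

    # Simulate, recording transmembrane currents (needed for the LFP).
    cell.simulate(rec_imem=True)

    # A 16-channel laminar probe along the depth axis; sigma is the
    # extracellular conductivity (S/m).
    electrode = LFPy.RecExtElectrode(cell, sigma=0.3,
                                     x=np.zeros(16), y=np.zeros(16),
                                     z=np.linspace(-500., 1000., 16))
    electrode.calc_lfp()  # predicted extracellular potentials land in electrode.LFP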



    0:00 - Intro
    3:25 - Beautiful and messy models
    6:34 - In Silico
    9:47 - Goals of the Human Brain Project
    15:50 - Brain simulation approach
    21:35 - Degeneracy in parameters
    26:24 - Abstract principles from simulations
    32:58 - Models as tools
    35:34 - Predicting brain signals
    41:45 - LFPs closer to average
    53:57 - Plasticity in simulations
    56:53 - How detailed should we model neurons?
    59:09 - Lessons from predicting signals
    1:06:07 - Scaling up
    1:10:54 - Simulation as a tool
    1:12:35 - Oscillations
    1:16:24 - Manifolds and simulations
    1:20:22 - Modeling cortex like Hodgkin and Huxley

    • 1 hr 28 min
    BI 147 Noah Hutton: In Silico

    Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. That claim inspired Noah to chronicle the project, visiting Henry and his team periodically over the following decade. The result was In Silico, which tells the scientific, human, and social story of Henry's massively funded projects: the Blue Brain Project and the Human Brain Project.





    In Silico website.
    Rent or buy In Silico.
    Noah's website.
    Twitter: @noah_hutton.



    0:00 - Intro
    3:36 - Release and premiere
    7:37 - Noah's background
    9:52 - Origins of In Silico
    19:39 - Recurring visits
    22:13 - Including the critics
    25:22 - Markram's shifting outlook and salesmanship
    35:43 - Promises and delivery
    41:28 - Computer and brain terms interchange
    49:22 - Progress vs. illusion of progress
    52:19 - Close to quitting
    58:01 - Salesmanship vs. bad timeline estimates
    1:02:12 - Brain simulation science
    1:11:19 - AGI
    1:14:48 - Brain simulation vs. neuro-AI
    1:21:03 - Opinion on TED talks
    1:25:16 - Hero worship
    1:29:03 - Feedback on In Silico

    • 1 hr 37 min
    BI 146 Lauren Ross: Causal and Non-Causal Explanation

    Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in the philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.



    Lauren's website.
    Twitter: @ProfLaurenRoss.
    Related papers:
    A call for more clarity around causality in neuroscience.
    The explanatory nature of constraints: Law-based, mathematical, and causal.
    Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters.
    Distinguishing topological and causal explanation.
    Multiple Realizability from a Causal Perspective.
    Cascade versus mechanism: The diversity of causal structure in science.



    0:00 - Intro
    2:46 - Lauren's background
    10:14 - Jim Woodward legacy
    15:37 - Golden era of causality
    18:56 - Mechanistic explanation
    28:51 - Pathways
    31:41 - Cascades
    36:25 - Topology
    41:17 - Constraint
    50:44 - Hierarchy of explanations
    53:18 - Structure and function
    57:49 - Brain and mind
    1:01:28 - Reductionism
    1:07:58 - Constraint again
    1:14:38 - Multiple realizability

    • 1 hr 22 min
    BI 145 James Woodward: Causation with a Human Face

    James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers on intervention: intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, including epistemology versus metaphysics and the nature and varieties of causal structures.





    Jim's website.
    Making Things Happen: A Theory of Causal Explanation.
    Causation with a Human Face: Normative Theory and Descriptive Psychology.



    0:00 - Intro
    4:14 - Causation with a Human Face & Functionalist approach
    6:16 - Interventionist causality; Epistemology and metaphysics
    9:35 - Normative and descriptive
    14:02 - Rationalist approach
    20:24 - Normative vs. descriptive
    28:00 - Varying notions of causation
    33:18 - Invariance
    41:05 - Causality in complex systems
    47:09 - Downward causation
    51:14 - Natural laws
    56:38 - Proportionality
    1:01:12 - Intuitions
    1:10:59 - Normative and descriptive relation
    1:17:33 - Causality across disciplines
    1:21:26 - What would help our understanding of causation

    • 1 hr 25 min
    BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

    Large language models, now often called "foundation models," are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.
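    To make "next-word prediction" concrete, here is a minimal sketch using GPT-2 via the Hugging Face transformers library (the choice of model and library is illustrative on my part, not something from the episode):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a small pretrained causal language model and its tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Score every vocabulary item as a continuation of the prefix.
    inputs = tokenizer("The brain is like a", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # The logits at the final position rank candidate next words;
    # argmax picks the single most probable continuation.
    next_token_id = logits[0, -1].argmax()
    print(tokenizer.decode(next_token_id))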





    Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.





    Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.



    EvLab.
    Emily's website.
    Twitter: @ev_fedorenko; @emilymbender.
    Related papers:
    Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko)
    The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko)
    On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender)
    Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. (Bender)



    0:00 - Intro
    4:35 - Language and cognition
    15:38 - Grasping for meaning
    21:32 - Are large language models producing language?
    23:09 - Next-word prediction in brains and models
    32:09 - Interface between language and thought
    35:18 - Studying language in nonhuman animals
    41:54 - Do we understand language enough?
    45:51 - What do language models need?
    51:45 - Are LLMs teaching us about language?
    54:56 - Is meaning necessary, and does it matter how we learn language?
    1:00:04 - Is our biology important for language?
    1:04:59 - Future outlook

    • 1 hr 11 min
    BI 143 Rodolphe Sepulchre: Mixed Feedback Control

    Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and his advice that "if you wish to contribute original work, be prepared to face loneliness," among other topics.



    Rodolphe's website.
    Related papers:
    Spiking Control Systems.
    Control Across Scales by Positive and Negative Feedback.
    Neuromorphic control. (arXiv version)
    Related episodes:
    BI 130 Eve Marder: Modulation of Networks
    BI 119 Henry Yin: The Crisis in Neuroscience



    0:00 - Intro
    4:38 - Control engineer
    9:52 - Control vs. dynamical systems
    13:34 - Building vs. understanding
    17:38 - Mixed feedback signals
    26:00 - Robustness
    28:28 - Eve Marder
    32:00 - Loneliness
    37:35 - Across levels
    44:04 - Neuromorphics and neuromodulation
    52:15 - Barrier to adopting neuromorphics
    54:40 - Deep learning influence
    58:04 - Beyond energy efficiency
    1:02:02 - Deep learning for neuro
    1:14:15 - Role of philosophy
    1:16:43 - Doing it right

    • 1 hr 24 min

Customer Reviews

4.8 out of 5
115 Ratings


oystersandsauce ,

Sincere and Personal

This is clearly a podcast built out of passion for a deep and important topic: who we are, what underlies our cognition, and how we build and understand cognition.

SAARKÉSH ,

Intersections of Brain Neurology & Technology

Love the introduction music & the gradual immersion into whatever the specific topics, surrounding discovery & manipulation of the human brain.
Brilliant queuing of the deep dives 🙏

Abder E ,

Amazing insights

Wow the creativity episode was amazing!!! Thank you for sharing this valuable knowledge

You Might Also Like

Ginger Campbell, MD
Santa Fe Institute, Michael Garfield
Sean Carroll | Wondery
Quanta Magazine
Sam Charrington
Steven Strogatz and Quanta Magazine