Brain Inspired

Paul Middlebrooks

Neuroscience and artificial intelligence work better together. Brain inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

  1. MAY 7

    BI 211 COGITATE: Testing Theories of Consciousness

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released. Rony Hirschhorn, Alex Lepauvre, and Oscar Ferrante are three of the many scientists who comprise the COGITATE group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans - in this case, the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so what does that mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled Adversarial testing of global neuronal workspace and integrated information theories of consciousness, and this is what Rony, Alex, and Oscar discuss with me today. The short summary is that they used a simple task and measured brain activity with three different methods - EEG, MEG, and fMRI - and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kind of functional connectivity patterns should be present between brain regions.
    The take-home is a mixed bag, with neither theory fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related. So we discuss the project itself, many of the challenges the team faced, their experiences and reflections on working on it and on coming together as a team, and the nature of working on an adversarial collaboration when so much is at stake for the proponents of each theory - and, as you heard last episode with Dean Buonomano, when one of the theories, IIT, is surrounded by a bit of controversy itself regarding whether it should even be considered a scientific theory.
    COGITATE
    Oscar Ferrante: @ferrante_oscar
    Rony Hirschhorn: @RonyHirsch
    Alex Lepauvre: @LepauvreAlex
    Paper: Adversarial testing of global neuronal workspace and integrated information theories of consciousness
    BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics
    0:00 - Intro
    4:00 - COGITATE
    17:42 - How the experiments were developed
    32:37 - How data was collected and analyzed
    41:24 - Prediction 1: Where is consciousness?
    47:51 - The experimental task
    1:00:14 - Prediction 2: Duration of consciousness-related activity
    1:18:37 - Prediction 3: Inter-areal communication
    1:28:28 - Big picture of the results
    1:44:25 - Moving forward

    2h
  2. APR 22

    BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics

    Support the show to get full episodes, full archive, and join the Discord community. Dean Buonomano runs the Buonomano Lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain Is a Time Machine: The Neuroscience and Physics of Time, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues. One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive - that it's something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small number of cells that you culture, letting them divide, grow, and specialize until you have a mass of cells that has grown into an organ of some sort, on which you then perform experiments. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic state - to get them as close as you can to what they're like in the intact brain - and then you perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find the populations of neurons do indeed learn to predict the timing of the stimuli, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus.
    But we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called The brain holds no exclusive rights on how to create intelligence. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what many folks in NeuroAI believe. We then talk about his recent chapter with physicist Carlo Rovelli, titled Bridging the neuroscience and physics of time, in which Dean and Carlo examine where neuroscience and physics agree and disagree about the nature of time. Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and anti-backlash, and all sorts of fun expression from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science - namely, that it doesn't play well with the existing ontology of physics. What I just said doesn't do justice to his arguments, which he articulates much better.
    Buonomano Lab
    Twitter: @DeanBuono
    Related papers:
    The brain holds no exclusive rights on how to create intelligence.
    What makes a theory of consciousness unscientific?
    Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns.
    Bridging the neuroscience and physics of time.
    Related episode: BI 204 David Robbe: Your Brain Doesn’t Measure Time
    Read the transcript.
    0:00 - Intro
    8:49 - AI doesn't need biology
    17:52 - Time in physics and in neuroscience
    34:04 - Integrated information theory
    1:01:34 - Global neuronal workspace theory
    1:07:46 - Organotypic slices and predictive processing
    1:26:07 - Do brains actually measure time? David Robbe

    1h 51m
  3. APR 9

    BI 209 Aran Nayebi: The NeuroAI Turing Test

    Support the show to get full episodes, full archive, and join the Discord community. Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in ways at least similar to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort. We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of similarity between the internal representations models use to achieve the various tasks they perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations philosophy of mind often refers to, or other philosophical notions of the term representation.
    Aran's website
    Twitter: @aran_nayebi
    Related papers:
    Brain-model evaluations need the NeuroAI Turing Test.
    Barriers and pathways to human-AI alignment: a game-theoretic approach.
    0:00 - Intro
    5:24 - Background
    20:46 - Building embodied agents
    33:00 - Adaptability
    49:25 - Marr's levels
    54:12 - Sensorimotor loop and intrinsic goals
    1:00:05 - NeuroAI Turing Test
    1:18:18 - Representations
    1:28:18 - How to know what to measure
    1:32:56 - AI safety
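The representation-matching idea in this episode can be made concrete with a small sketch. Representational similarity analysis (RSA) is one common way to score how alike a model's population activity and recorded neural activity are; the function names and toy data below are illustrative assumptions on my part, not anything taken from Aran's paper:

```python
import numpy as np

def rdm(acts):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between condition-wise activation vectors (rows = conditions).
    return 1.0 - np.corrcoef(acts)

def rsa_score(model_acts, brain_acts):
    # Correlate the upper triangles of the two RDMs; a higher score means
    # the two systems arrange the stimulus conditions more similarly.
    m, b = rdm(model_acts), rdm(brain_acts)
    iu = np.triu_indices_from(m, k=1)
    return float(np.corrcoef(m[iu], b[iu])[0, 1])

# Toy data: 10 stimulus conditions, 50 model units, 30 recorded neurons.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((10, 50))
brain_acts = model_acts @ rng.standard_normal((50, 30))  # "brain" as a linear readout of the model
print(rsa_score(model_acts, brain_acts))
```

Aran's proposal goes beyond any single metric like this one; the point is just that "similarity of internal representations" can be operationalized as a number that models can be tested against.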

    1h 44m
  4. MAR 26

    BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation

    Support the show to get full episodes, full archive, and join the Discord community. Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great-grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist whose goal is to build models of cellular computation, and much of her focus is on neurons. We discuss her theoretical work building a new kind of single-neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of neuron models for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects not only the computations going on externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons compute various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, essentially by drastically simplifying the models by providing them with smarter neurons. We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.
    Gabriele's website
    Carl Correns Foundation for Mathematical Biology
    Neuro-AI spinoff
    Related papers:
    Sketch of a novel approach to a neural model.
    Localist neural plasticity identified by mutual information.
    Related episodes:
    BI 199 Hessam Akhlaghpour: Natural Universal Computation
    BI 172 David Glanzman: Memory All The Way Down
    BI 126 Randy Gallistel: Where Is the Engram?
    0:00 - Intro
    4:41 - Gabriele's early interests in verbal thinking
    14:14 - What is thinking?
    24:04 - Starting one's own foundation
    58:18 - Building a new single neuron model
    1:19:25 - The right level of abstraction
    1:25:00 - How a new neuron would change AI

    1h 35m
  5. MAR 12

    BI 207 Alison Preston: Schemas in our Brains and Minds

    Support the show to get full episodes, full archive, and join the Discord community. The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term schema in a psychological sense, to explain how our memories are organized and how new information gets integrated into our memory. Fast forward another 100 years to today, and we have a podcast episode with my guest, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her neuroscience research explaining how our brains might carry out the processing that fits with our modern conception of schemas, and how our brains do that in different ways as we develop from childhood to adulthood. I just said "our modern conception of schemas," but like everything else, there isn't complete consensus among scientists on exactly how to define schema. Ali has her own definition. She shares that, and how it differs from other conceptions commonly used. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on Brain Inspired and which can sometimes be used interchangeably with schemas.
    So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on. Last episode Ciara Greene discussed schemas and how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.
    Preston Lab
    Twitter: @preston_lab
    Related papers:
    Concept formation as a computational cognitive process.
    Schema, Inference, and Memory.
    Developmental differences in memory reactivation relate to encoding and inference in the human brain.
    Read the transcript.
    0:00 - Intro
    6:51 - Schemas
    20:37 - Schemas and the developing brain
    35:03 - Information theory, dimensionality, and detail
    41:17 - Geometry of schemas
    47:26 - Schemas and creativity
    50:29 - Brain connection pruning with development
    1:02:46 - Information in brains
    1:09:20 - Schemas and development in AI

    1h 30m
  6. FEB 26

    BI 206 Ciara Greene: Memories Are Useful, Not Accurate

    Support the show to get full episodes, full archive, and join the Discord community. Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one - we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means our memories are flexible and constantly changing, and that forgetting can be beneficial, for example. Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories. We're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these various conditions, and how we should better understand our own and others' memories.
    Attention and Memory Lab
    Twitter: @ciaragreene01
    Book: Memory Lane: The Perfectly Imperfect Ways We Remember
    Read the transcript.
    0:00 - Intro
    5:35 - The function of memory
    6:41 - Reconstructive nature of memory
    13:50 - Memory schemas, highly superior autobiographical memory
    20:49 - Misremembering and flashbulb memories
    27:52 - Forgetting and schemas
    36:06 - What is a "good" memory?
    39:35 - Memories and intention
    43:47 - Memory and context
    49:55 - Implanting false memories
    1:04:10 - Memory suggestion during interrogations
    1:06:30 - Memory, imagination, and creativity
    1:13:45 - Artificial intelligence and memory
    1:21:21 - Driven by questions

    1h 29m
  7. FEB 12

    BI 205 Dmitri Chklovskii: Neurons Are Smarter Than You Think

    Support the show to get full episodes, full archive, and join the Discord community. Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down in various forms to the present day. Also since that same time period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been lots of ways of conceiving what it is that single neurons do. Are they logical operators, do they each represent something special, are they trying to maximize efficiency, for example? Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves each individual controllers. They're smart agents, each trying to predict its inputs, as in predictive processing, but also functioning as an optimal feedback controller.
    We talk about historical conceptions of the function of single neurons and how this view differs, how to think of single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics. We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited amount of space in our craniums. Obviously evolution produced its own solutions to this problem. This pursuit led Mitya to study the C. elegans worm, because its connectome was nearly complete - actually, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work.
    Chklovskii Lab
    Twitter: @chklovskii
    Related papers:
    The Neuron as a Direct Data-Driven Controller.
    Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction.
    Related episodes:
    BI 143 Rodolphe Sepulchre: Mixed Feedback Control
    BI 119 Henry Yin: The Crisis in Neuroscience
    Read the transcript.
    0:00 - Intro
    7:34 - Physicists' approach to neuroscience
    12:39 - What's missing in AI and neuroscience?
    16:36 - Connectomes
    31:51 - Understanding complex systems
    33:17 - Earliest models of neurons
    39:08 - Smart neurons
    42:56 - Neuron theories that influenced Mitya
    46:50 - Neuron as a controller
    55:03 - How to test the neuron as controller hypothesis
    1:00:29 - Direct data-driven control
    1:11:09 - Experimental evidence
    1:22:25 - Single neuron doctrine and population doctrine
    1:25:30 - Neurons as agents
    1:28:52 - Implications for AI
    1:30:02 - Limits to control perspective
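The cybernetic framing in this episode - a system as a controller operating under feedback - can be illustrated with the simplest possible loop. This is a generic textbook sketch with toy dynamics I chose for illustration, not the direct data-driven control algorithm from Mitya's paper:

```python
def feedback_control(setpoint, x0, gain=0.5, steps=20):
    # Classic negative feedback: at each step the control signal is
    # proportional to the error, nudging the state toward the setpoint.
    x, trajectory = x0, []
    for _ in range(steps):
        error = setpoint - x   # what the controller senses
        u = gain * error       # control signal (proportional control)
        x = x + u              # plant dynamics: a simple integrator
        trajectory.append(x)
    return trajectory

traj = feedback_control(setpoint=1.0, x0=0.0)
# The error shrinks by a factor of (1 - gain) each step, so traj[-1] is ~1.0.
```

In Mitya's account it is each single neuron, not just the whole brain, that plays a role like this controller - with the added difficulty that a neuron must work out the loop directly from its own input-output data.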

    1h 39m
4.9 out of 5 (133 Ratings)
