Brain Inspired

Paul Middlebrooks

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

  1. 26 FEB.

    BI 206 Ciara Greene: Memories Are Useful, Not Accurate

    Support the show to get full episodes, full archive, and join the Discord community. Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily highly accurate - we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means our memories are flexible and constantly changing, and that forgetting, for example, can be beneficial. Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories: we're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these various conditions, and how we should better understand our own and others' memories. Attention and Memory Lab. Twitter: @ciaragreene01. Book: Memory Lane: The Perfectly Imperfect Ways We Remember. Read the transcript. 0:00 - Intro 5:35 - The function of memory 6:41 - Reconstructive nature of memory 13:50 - Memory schemas, highly superior autobiographical memory 20:49 - Misremembering and flashbulb memories 27:52 - Forgetting and schemas 36:06 - What is a "good" memory? 39:35 - Memories and intention 43:47 - Memory and context 49:55 - Implanting false memories 1:04:10 - Memory suggestion during interrogations 1:06:30 - Memory, imagination, and creativity 1:13:45 - Artificial intelligence and memory 1:21:21 - Driven by questions

    1 hr 29 min
  2. 12 FEB.

    BI 205 Dmitri Chklovskii: Neurons Are Smarter Than You Think

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: To explore more neuroscience news and perspectives, visit thetransmitter.org. Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down in various forms to the present day. Also since that same time period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been lots of ways of conceiving what it is that single neurons do. Are they logical operators, do they each represent something special, are they trying to maximize efficiency, for example? Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves individual controllers. They're smart agents, each trying to predict their inputs, like in predictive processing, but also functioning as an optimal feedback controller. We talk about historical conceptions of the function of single neurons and how Mitya's view differs, how to think of single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics. We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited amount of space in our craniums. Obviously evolution produced its own solutions to this problem. This pursuit led Mitya to study the C. elegans worm, because its connectome was nearly complete. Actually, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work. Chklovskii Lab. Twitter: @chklovskii. Related papers The Neuron as a Direct Data-Driven Controller. Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction. Related episodes BI 143 Rodolphe Sepulchre: Mixed Feedback Control BI 119 Henry Yin: The Crisis in Neuroscience Read the transcript. 0:00 - Intro 7:34 - Physicists' approach to neuroscience 12:39 - What's missing in AI and neuroscience? 16:36 - Connectomes 31:51 - Understanding complex systems 33:17 - Earliest models of neurons 39:08 - Smart neurons 42:56 - Neuron theories that influenced Mitya 46:50 - Neuron as a controller 55:03 - How to test the neuron-as-controller hypothesis 1:00:29 - Direct data-driven control 1:11:09 - Experimental evidence 1:22:25 - Single neuron doctrine and population doctrine 1:25:30 - Neurons as agents 1:28:52 - Implications for AI 1:30:02 - Limits to control perspective
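    For intuition about the feedback-control framing, here is a toy sketch - my own generic illustration, not Mitya's direct data-driven control algorithm, and the scalar world dynamics are assumptions - of a single unit holding its input at an expected level with an integral feedback rule:

```python
import numpy as np

# Toy "neuron as feedback controller" (illustrative only, NOT
# Chklovskii's direct data-driven control algorithm). A single unit
# tries to hold the signal it receives at a setpoint by emitting an
# output that steers an assumed scalar world.
rng = np.random.default_rng(0)
a, b = 0.9, 0.5   # assumed world dynamics: y[t+1] = a*y[t] + b*u[t] + noise
setpoint = 1.0    # the input level the unit "expects"
gain = 0.1        # how strongly the unit responds to its prediction error

y, u = 0.0, 0.0
for t in range(300):
    y = a * y + b * u + 0.01 * rng.normal()  # world evolves under the unit's output
    error = setpoint - y                     # mismatch between expected and actual input
    u += gain * error                        # integral feedback: accumulate the error

print(f"input after control: y = {y:.3f} (setpoint {setpoint})")
```

    The point is just the loop structure: the unit's output changes the world, the world changes the unit's input, and the unit acts on the mismatch - the sense in which prediction and control become two sides of the same computation.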

    1 hr 39 min
  3. 29 JAN.

    BI 204 David Robbe: Your Brain Doesn’t Measure Time

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: To explore more neuroscience news and perspectives, visit thetransmitter.org. When you play hide and seek, as you do on a regular basis I'm sure, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you're counting silently, could it be that you're imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains to measure time internally - indeed, that we don't really measure time at all. Rather, our estimation of time emerges through our interactions with the world around us and/or the world within us as we behave. David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents, and of revisiting one of his favorite philosophers, Henri Bergson. So in this episode, we discuss how all of this came about - how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia, which is the brain region David focuses on, how the rodents he studies behave in surprising ways when he asks them to estimate time intervals, and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time. Cortical-Basal Ganglia Circuits and Behavior Lab. Twitter: @dav_robbe Related papers Lost in time: Relocating the perception of duration outside the brain. Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging. 0:00 - Intro 3:59 - Why behavior is so important in itself 10:27 - Henri Bergson 21:17 - Bergson's view of life 26:25 - A task to test how animals time things 34:08 - Back to Bergson and durée 39:44 - Externalizing time 44:11 - Internal representation of time 1:03:38 - Cognition as internal movement 1:09:14 - Free will 1:15:27 - Implications for AI

    1 hr 38 min
  4. 14 JAN.

    BI 203 David Krakauer: How To Think Like a Complexity Scientist

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released. David Krakauer is the president of the Santa Fe Institute, whose official mission is "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book The Complex World: An Introduction to the Fundamentals of Complexity Science. The book on the one hand serves as an introduction and guide to a 4-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science - entropy, evolution, dynamics, and computation - and how complexity scientists draw from these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with my own self-serving goal to learn and practice how to think like a complexity scientist, to improve my own work on how brains do things. Hopefully our conversation, and David's book, will help you do the same. David's website. David's SFI homepage. The book: The Complex World: An Introduction to the Fundamentals of Complexity Science. The 4-Volume Series: Foundational Papers in Complexity Science. Mentioned: Aeon article: Problem-solving matter. The information theory of individuality. Read the transcript. 0:00 - Intro 3:45 - Origins of The Complex World 20:10 - 4 pillars of complexity 36:27 - 40s to 70s in complexity 42:33 - How to proceed as a complexity scientist 54:32 - Broken symmetries 1:02:40 - Emergence 1:13:25 - Time scales and complexity 1:18:48 - Consensus and how ideas migrate 1:29:25 - Disciplinary matrix (Kuhn) 1:32:45 - Intelligence vs. life

    1 hr 46 min
  5. 3 JAN.

    BI 202 Eli Sennesh: Divide-and-Conquer to Predict

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new Brain Inspired episode is released. Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre's lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how predictive coding might be implemented in brains. So in that sense, this episode builds on the last episode with Rajesh Rao, where we discussed Raj's "active predictive coding" account of predictive coding. As a super brief refresher: predictive coding is the proposal that the brain is constantly predicting what's about to happen, then stuff happens, and the brain uses the mismatch between its predictions and the actual stuff that's happening to learn how to make better predictions moving forward. I refer you to the previous episode for more details. So Eli's account, developed with his co-authors of course, which he calls "divide-and-conquer" predictive coding, uses a probabilistic approach in an attempt to account for how brains might implement predictive coding, and you'll learn more about that in our discussion. But we also talk quite a bit about the difference between practicing theoretical and experimental neuroscience, and Eli's experience moving into the experimental side from the theoretical side. Eli's website. Bastos lab. Twitter: @EliSennesh Related papers Divide-and-Conquer Predictive Coding: a Structured Bayesian Inference Algorithm. Related episode: BI 201 Rajesh Rao: Active Predictive Coding. Read the transcript. 0:00 - Intro 3:59 - Eli's worldview 17:56 - NeuroAI is hard 24:38 - Prediction errors vs surprise 55:16 - Divide and conquer 1:13:24 - Challenges 1:18:44 - How to build AI 1:25:56 - Affect 1:31:55 - Abolish the value function

    1 hr 38 min
  6. 18 DEC 2024

    BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors

    Support the show to get full episodes, full archive, and join the Discord community. Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top," where the brain then updates its internal model to make better future predictions. So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates actions and perception and suggests how it might be implemented in the cortex - specifically, which cortical layers do what - something he calls "Active predictive coding." So we discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes. Finally, we discuss Raj's interest in, and work on, deciphering the mysterious Indus script of ancient India. Raj's website. Twitter: @RajeshPNRao. Related papers A sensory–motor theory of the neocortex. Brain co-processors: using AI to restore and augment brain function. Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces. BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Read the transcript. 0:00 - Intro 7:40 - Predictive coding origins 16:14 - Early appreciation of recurrence 17:08 - Prediction as a general theory of the brain 18:38 - Rao and Ballard 1999 26:32 - Prediction as a general theory of the brain 33:24 - Perception vs action 33:28 - Active predictive coding 45:04 - Evolving to augment our brains 53:03 - BrainNet 57:12 - Neural co-processors 1:11:19 - Decoding the Indus Script 1:20:18 - Transformer models relation to active predictive coding
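    To make that prediction-error loop concrete, here is a minimal sketch in the spirit of the original Rao and Ballard idea (a toy, single-layer linear version; the dimensions, learning rates, input generator, and column normalization are my illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: sensory inputs are generated from a few hidden causes.
n_input, n_state = 16, 4
true_U = rng.normal(size=(n_input, n_state))

def sensory_input():
    cause = rng.normal(size=n_state)
    return true_U @ cause + 0.05 * rng.normal(size=n_input)

# The "brain": an internal model U maps a state estimate r to a
# top-down prediction U @ r of the incoming signal.
U = rng.normal(scale=0.1, size=(n_input, n_state))
lr_r, lr_U = 0.1, 0.01  # fast inference rate, slow learning rate

for step in range(2000):
    x = sensory_input()
    r = np.zeros(n_state)

    # Inference: the top-down prediction U @ r meets the bottom-up
    # signal x; their mismatch (the prediction error) is sent back up
    # and used to revise the state estimate r.
    for _ in range(20):
        error = x - U @ r
        r += lr_r * (U.T @ error)

    # Learning: once inference settles, nudge the internal model so
    # that future predictions of similar inputs are more accurate,
    # then fix the feature norms to resolve the scale ambiguity.
    U += lr_U * np.outer(x - U @ r, r)
    U /= np.linalg.norm(U, axis=0, keepdims=True)

print("residual prediction error:", np.linalg.norm(x - U @ r))
```

    Both updates are gradient descent on the squared prediction error: the fast inner loop adjusts the estimate r for the current input, while the slow outer update adjusts the model U across inputs - roughly the division of labor the predictive coding story assigns to neural activity versus synaptic plasticity.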

    1 hr 37 min
  7. 4 DEC 2024

    BI 200 Grace Hwang and Joe Monaco: The Future of NeuroAI

    Support the show to get full episodes, full archive, and join the Discord community. Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases. The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, that perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute to that mission, and to hear from NeuroAI folks how they envision the field moving forward. You'll hear more about that in a moment. That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of lots of cognitive science concepts, and that proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about that in this episode. Joe's NIH page. Grace's NIH page. Twitter: Joe: @j_d_monaco Related papers Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognitive swarming in complex environments with attractor dynamics and oscillatory computing. Spatial synchronization codes from coupled rate-phase neurons. Oscillators that sync and swarm. Mentioned A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications. Recalling Lashley and reconsolidating Hebb. BRAIN NeuroAI Workshop (Nov 12–13) NIH BRAIN NeuroAI Workshop Program Book NIH VideoCast – Day 1 Recording – BRAIN NeuroAI Workshop NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22) NPBH 2024 BRAIN Investigators Meeting 2020 Symposium & Perspective Paper BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning (YouTube) Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation NSF/CIRC Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being Read the transcript. 0:00 - Intro 25:45 - NeuroAI Workshop - neuromorphics 33:31 - Neuromorphics and theory 49:19 - Reflections on the workshop 54:22 - Neurodynamical computing and information boundaries 1:01:04 - Perceptual control theory 1:08:56 - Digital twins and neural foundation models 1:14:02 - Base layer of computation

    1 hr 37 min
4.9 out of 5 (131 ratings)
