Brain Inspired

Paul Middlebrooks

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

  1. 3 DAYS AGO

    BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors

    Support the show to get full episodes, full archive, and join the Discord community. Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top," where the brain updates its internal model to make better future predictions. So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj recently published an update to the predictive coding framework, one that incorporates action as well as perception and suggests how it might be implemented in the cortex - specifically, which cortical layers do what - something he calls "active predictive coding." We discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies: BrainNet, which basically connects two brains together, and neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering an ancient Indian text, the mysterious Indus script. Raj's website. Related papers A sensory–motor theory of the neocortex. Brain co-processors: using AI to restore and augment brain function. Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces. 
BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Read the transcript. 0:00 - Intro 7:40 - Predictive coding origins 16:14 - Early appreciation of recurrence 17:08 - Prediction as a general theory of the brain 18:38 - Rao and Ballard 1999 26:32 - Prediction as a general theory of the brain 33:24 - Perception vs action 33:28 - Active predictive coding 45:04 - Evolving to augment our brains 53:03 - BrainNet 57:12 - Neural co-processors 1:11:19 - Decoding the Indus Script 1:20:18 - Transformer models relation to active predictive coding
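    The predict-compare-update loop described above can be sketched in a few lines of code. This is a minimal scalar toy under hypothetical numbers, not the Rao & Ballard model (which is hierarchical and far richer); it only illustrates how a prediction error can drive an internal model toward the sensory signal.

    ```python
    # Toy predictive coding loop (illustrative; all values hypothetical):
    # the internal model issues a top-down prediction, the mismatch with the
    # bottom-up signal is computed, and the error is sent back up to update
    # the model so future predictions improve.
    sensory_input = 3.7        # bottom-up sensory signal
    estimate = 0.0             # internal model's current belief
    learning_rate = 0.3        # how strongly errors update the model

    for _ in range(20):
        prediction = estimate                   # top-down prediction
        error = sensory_input - prediction      # prediction error (the difference)
        estimate += learning_rate * error       # error updates the internal model

    print(round(estimate, 3))  # the belief converges toward the sensory input
    ```

    Each pass shrinks the error by a constant factor, so the belief settles on the input; in the full framework the same error signal also trains the model that generates the predictions.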

    1h37min
  2. DEC 4

    BI 200 Grace Hwang and Joe Monaco: The Future of NeuroAI

    Support the show to get full episodes, full archive, and join the Discord community. Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not: BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases. The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, one that perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute to that mission, and to hear from NeuroAI folks how they envision the field moving forward. You'll hear more about that in a moment. That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of lots of cognitive science concepts, but also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about that in this episode. Joe's NIH page. Grace's NIH page. Twitter: Joe: @j_d_monaco Related papers Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognitive swarming in complex environments with attractor dynamics and oscillatory computing. Spatial synchronization codes from coupled rate-phase neurons. Oscillators that sync and swarm. 
Mentioned A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications. Recalling Lashley and reconsolidating Hebb. BRAIN NeuroAI Workshop (Nov 12–13) NIH BRAIN NeuroAI Workshop Program Book NIH VideoCast – Day 1 Recording – BRAIN NeuroAI Workshop NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22) NPBH 2024 BRAIN Investigators Meeting 2020 Symposium & Perspective Paper BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning (YouTube) Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation NSF/CIRC Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being Read the transcript. 0:00 - Intro 25:45 - NeuroAI Workshop - neuromorphics 33:31 - Neuromorphics and theory 49:19 - Reflections on the workshop 54:22 - Neurodynamical computing and information boundaries 1:01:04 - Perceptual control theory 1:08:56 - Digital twins and neural foundation models 1:14:02 - Base layer of computation

    1h37min
  3. NOV 26

    BI 199 Hessam Akhlaghpour: Natural Universal Computation

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org. Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views - to think of the brain as a computational system. But it also led to what we discuss today: the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't yet been discovered in organisms if evolution has indeed stumbled upon it, and how RNA and combinatory logic could implement universal computation in nature. Hessam's website. Maimon Lab. 
Twitter: @theHessam. Related papers An RNA-based theory of natural universal computation. The molecular memory code and synaptic plasticity: a synthesis. Lifelong persistence of nuclear RNAs in the mouse brain. Cris Moore's conjecture #5 in this 1998 paper. (The Gallistel book): Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Related episodes BI 126 Randy Gallistel: Where Is the Engram? BI 172 David Glanzman: Memory All The Way Down Read the transcript. 0:00 - Intro 4:44 - Hessam's background 11:50 - Randy Gallistel's book 14:43 - Information in the brain 17:51 - Hessam's turn to universal computation 35:30 - AI and universal computation 40:09 - Universal computation to solve intelligence 44:22 - Connecting sub and super molecular 50:10 - Junk DNA 56:42 - Genetic material for coding 1:06:37 - RNA and combinatory logic 1:35:14 - Outlook 1:42:11 - Reflecting on the molecular world
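    Combinatory logic, the formalism Hessam maps onto RNA, achieves universal computation with just two rewriting rules, S and K (plus I as a convenience). A toy reducer gives the flavor; nested Python tuples stand in for molecules here, which is purely illustrative and not Hessam's actual RNA encoding.

    ```python
    # Tiny SKI combinatory-logic reducer. Rewriting rules:
    #   I x -> x;   (K x) y -> x;   ((S x) y) z -> (x z) (y z)
    # Terms are symbols (strings) or 2-tuples (function, argument).
    def reduce_step(term):
        """Apply one rewriting step wherever a redex is found."""
        if not isinstance(term, tuple):
            return term
        f, arg = term
        if f == 'I':                                   # I x -> x
            return arg
        if isinstance(f, tuple):
            if f[0] == 'K':                            # (K x) y -> x
                return f[1]
            if isinstance(f[0], tuple) and f[0][0] == 'S':
                x, y, z = f[0][1], f[1], arg           # ((S x) y) z -> (x z)(y z)
                return ((x, z), (y, z))
        return (reduce_step(f), reduce_step(arg))

    def normalize(term, limit=100):
        """Rewrite until the term stops changing (or give up after `limit` steps)."""
        for _ in range(limit):
            nxt = reduce_step(term)
            if nxt == term:
                break
            term = nxt
        return term

    # S K K behaves as the identity combinator: ((S K) K) a -> a
    print(normalize(((('S', 'K'), 'K'), 'a')))
    ```

    Because any computable function can be expressed by composing S and K, a chemistry that reliably implements these two rewrites would, in principle, be computationally universal - which is the crux of the RNA proposal discussed in the episode.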

    1h49min
  4. NOV 11

    BI 198 Tony Zador: Neuroscience Principles to Improve AI

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org. Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors are of a wide variety, but today we focus mostly on his thoughts on NeuroAI. We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past. Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency. Zador Lab Twitter: @TonyZador Previous episodes: BI 187: COSYNE 2024 Neuro-AI Panel. BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys BI 034 Tony Zador: How DNA and Evolution Can Inform AI Related papers Catalyzing next-generation Artificial Intelligence through NeuroAI. 
Encoding innate ability through a genomic bottleneck. Essays NeuroAI: A field born from the symbiosis between neuroscience, AI. What the brain can teach artificial neural networks. Read the transcript. 0:00 - Intro 3:28 - "Neuro-AI" 12:48 - Visual cognition history 18:24 - Information theory in neuroscience 20:47 - Necessary steps for progress 24:34 - Neuro-AI models and cognition 35:47 - Animals for inspiring AI 41:48 - What we want AI to do 46:01 - Development and AI 59:03 - Robots 1:25:10 - Catalyzing the next generation of AI

    1h35min
  5. OCT 25

    BI 197 Karen Adolph: How Babies Learn to Move and Think

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org. Karen Adolph runs the Infant Action Lab at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors. Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives. Infant Action Lab (Karen Adolph's lab) Sleep and Behavioral Development Lab (Mark Blumberg's lab) Related papers Motor Development: Embodied, Embedded, Enculturated, and Enabling An Ecological Approach to Learning in (Not and) Development An update of the development of motor behavior Protracted development of motor cortex constrains rich interpretations of infant cognition Read the transcript.

    1h30min
  6. OCT 11

    BI 196 Cristina Savin and Tim Vogels with Gaute Einevoll and Mikkel Lepperød

    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.  This is the second conversation I had while teamed up with Gaute Einevoll at a workshop on NeuroAI in Norway. In this episode, Gaute and I are joined by Cristina Savin and Tim Vogels. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules. We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how these new tools have changed the way we think about brain function without fundamentally altering the structure of our questions. Be sure to check out Gaute's Theoretical Neuroscience podcast as well! Mikkel Lepperød Cristina Savin Tim Vogels Twitter: @TPVogels Gaute Einevoll Twitter: @GauteEinevoll Gaute's Theoretical Neuroscience podcast. Validating models: How would success in NeuroAI look like? Read the transcript, provided by The Transmitter.

    1h20min
  7. SEP 27

    BI 194 Vijay Namboodiri & Ali Mohebi: Dopamine Keeps Getting More Interesting

    Support the show to get full episodes, full archive, and join the Discord community. https://youtu.be/lbKEOdbeqHo The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. The Transmitter has provided a transcript for this episode. Vijay Namboodiri runs the Nam Lab at the University of California, San Francisco, and Ali Mohebi is an assistant professor at the University of Wisconsin-Madison. Ali has been on the podcast a few times before, and he's interested in how neuromodulators like dopamine affect our cognition. It was Ali who pointed me to Vijay, because of some recent work Vijay has done reassessing how dopamine might function differently from what has become the classic story of dopamine's function as it pertains to learning. The classic story is that dopamine is related to reward prediction errors. That is, dopamine is modulated when you expect reward and don't get it, and/or when you don't expect reward but do get it. Vijay calls this a "prospective" account of dopamine function, since it requires an animal to look into the future to expect a reward. Vijay has shown, however, that a retrospective account of dopamine might better explain lots of known behavioral data. This retrospective account links dopamine to how we understand causes and effects in our ongoing behavior. So in this episode, Vijay gives us a history lesson about dopamine, his newer story and why it has caused a bit of controversy, and how all of this came to be. I happened to be looking at the Transmitter the other day, after I recorded this episode, and lo and behold, there was an article titled Reconstructing dopamine’s link to reward. 
Vijay is featured in the article among a handful of other thoughtful researchers who share their work and ideas about this very topic. Vijay wrote his own piece as well: Dopamine and the need for alternative theories. So check out those articles for more views on how the field is reconsidering how dopamine works. Nam Lab. Mohebi & Associates (Ali's Lab). Twitter: @vijay_mkn @mohebial Transmitter Dopamine and the need for alternative theories. Reconstructing dopamine’s link to reward. Related papers Mesolimbic dopamine release conveys causal associations. Mesostriatal dopamine is sensitive to changes in specific cue-reward contingencies. What is the state space of the world for real animals? The learning of prospective and retrospective cognitive maps within neural circuits Further reading (Ali's paper): Dopamine transients follow a striatal gradient of reward time horizons. Ali listed a bunch of work on local modulation of DA release: Local control of striatal dopamine release. Synaptic-like axo-axonal transmission from striatal cholinergic interneurons onto dopaminergic fibers. Spatial and temporal scales of dopamine transmission. Striatal dopamine neurotransmission: Regulation of release and uptake. Striatal Dopamine Release Is Triggered by Synchronized Activity in Cholinergic Interneurons. An action potential initiation mechanism in distal axons for the control of dopamine release. Read the transcript, produced by The Transmitter. 0:00 - Intro 3:42 - Dopamine: the history of theories 32:54 - Importance of learning and behavior studies 39:12 - Dopamine and causality 1:06:45 - Controversy over Vijay's findings
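    The classic "prospective" account described above - dopamine rises for unexpected reward and dips for omitted expected reward - can be sketched with a Rescorla-Wagner-style update. This is a minimal illustrative toy with hypothetical numbers, sketching the textbook account rather than Vijay's retrospective alternative.

    ```python
    # Toy reward-prediction-error (RPE) learning, textbook account:
    # the animal maintains a reward expectation for a cue; the dopamine-like
    # signal is the difference between received reward and that expectation.
    value = 0.0    # learned reward expectation for the cue (hypothetical)
    alpha = 0.2    # learning rate

    # Cue repeatedly paired with reward: errors start large and vanish as the
    # reward becomes fully predicted.
    errors = []
    for _ in range(30):
        rpe = 1.0 - value        # reward delivered minus reward expected
        value += alpha * rpe     # expectation moves toward the reward
        errors.append(rpe)
    print(round(errors[0], 2), round(errors[-1], 2))   # 1.0 0.0

    # After learning, omitting the expected reward yields a negative error
    # (the dopamine "dip").
    omission_rpe = 0.0 - value
    print(round(omission_rpe, 2))                      # -1.0
    ```

    The retrospective account Vijay argues for in the episode replaces this forward-looking expectation with learned cause-effect (cue-given-reward) statistics, which is why it explains some behavioral data that this classic scheme struggles with.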

    1h37min
4.9 out of 5 (130 ratings)

