59 episodes

Neuroscience and artificial intelligence work better together. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The show is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Brain Inspired Paul Middlebrooks

    • Natural Sciences


    BI 059 Wolfgang Maass: How Do Brains Compute?


    In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general, principles of brain computation he finds promising for implementing better network models, and we quickly overview some of his recent work on using these principles to build models with biologically plausible learning mechanisms, a spiking network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.

    Wolfgang’s website. Advice To a Young Investigator (has the quote at the beginning of the episode) by Santiago Ramon y Cajal. Papers we discuss or mention: Searching for principles of brain computation. Brain Computation: A Computer Science Perspective. Long short-term memory and learning-to-learn in networks of spiking neurons. A solution to the learning dilemma for recurrent networks of spiking neurons. Reservoirs learn to learn. Talks that cover some of these topics: Computation in Networks of Neurons in the Brain I. Computation in Networks of Neurons in the Brain II.

    • 1 hr
    BI 058 Wolfgang Maass: Computing Brains and Spiking Nets


    In this first part of our conversation (here’s the second part), Wolfgang and I discuss the state of theoretical and computational neuroscience, and how experimental results in neuroscience should guide theories and models to understand and explain how brains compute. We also discuss brain-machine interfaces, neuromorphics, and more. In the next part (here), we discuss principles of brain processing to inform and constrain theories of computations, and we briefly talk about some of his most recent work making spiking neural networks that incorporate some of these brain processing principles.

    Wolfgang’s website. The book Wolfgang recommends: The Brain from Inside Out by György Buzsáki. Papers we discuss or mention: Searching for principles of brain computation. Brain Computation: A Computer Science Perspective. Long short-term memory and learning-to-learn in networks of spiking neurons. A solution to the learning dilemma for recurrent networks of spiking neurons. Reservoirs learn to learn. Talks that cover some of these topics: Computation in Networks of Neurons in the Brain I. Computation in Networks of Neurons in the Brain II.

    • 55 min
    BI 057 Nicole Rust: Visual Memory and Novelty


    Nicole and I discuss how a signature for visual memory can be coded among the same population of neurons known to encode object identity, how the same coding scheme arises in convolutional neural networks trained to identify objects, and how neuroscience and machine learning (reinforcement learning) can join forces to understand how curiosity and novelty drive efficient learning.

    Check out Nicole’s Visual Memory Laboratory website. Follow her on Twitter: @VisualMemoryLab. The papers we discuss or mention: Single-exposure visual memory judgments are reflected in inferotemporal cortex. Population response magnitude variation in inferotemporal cortex predicts image memorability. Visual novelty, curiosity, and intrinsic reward in machine learning and the brain. The work by Dan Yamins’s group that Nicole mentions: Local Aggregation for Unsupervised Learning of Visual Embeddings.

    • 1 hr 21 min
    BI 056 Tom Griffiths: The Limits of Cognition


    I speak with Tom Griffiths about his “resource-rational framework,” inspired by Herb Simon’s bounded rationality and Stuart Russell’s bounded optimality concepts. The resource-rational framework illuminates how the constraints of optimizing our available cognition can help us understand what algorithms our brains use to get things done, and can serve as a bridge between Marr’s computational, algorithmic, and implementation levels of understanding. We also talk about cognitive prostheses, artificial general intelligence, consciousness, and more.

    Visit Tom’s Computational Cognitive Science Lab. Check out his book with Brian Christian, Algorithms To Live By. Some of the papers we discuss or mention: Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Data on the Mind, the data repository we discussed briefly. A paper that discusses it: Finding the traces of behavioral and cognitive processes in big data and naturally occurring datasets.

    • 1 hr 27 min
    BI 055 Thomas Naselaris: Seeing Versus Imagining


    Thomas and I talk about what happens in the brain’s visual system when you see something versus imagine it. He uses generative encoding and decoding models and brain signals like fMRI and EEG to test the nature of mental imagery. We also discuss the huge fMRI dataset of natural images he’s collected to infer models of the entire visual system, how we’ve still not tapped the potential of fMRI, and more.

    Thomas’s lab website. Papers we discuss or mention: Resolving Ambiguities of MVPA Using Explicit Models of Representation. Human brain activity during mental imagery exhibits signatures of inference in a hierarchical generative model.

    • 1 hr 26 min
    BI 054 Kanaka Rajan: How Do We Switch Behaviors?


    Kanaka and I discuss a few different ways she uses recurrent neural networks to understand how brains give rise to behaviors. We talk about her work showing how neural circuits transition from active to passive coping behavior in zebrafish, and how RNNs could be used to understand how we switch tasks in general and how we multi-task. Plus the usual fun speculation, advice, and more.

    Kanaka’s Google Scholar profile. Follow her on Twitter: @rajankdr. Papers we discuss: Neuronal Dynamics Regulating Brain and Behavioral State Transitions. How to study the neural mechanisms of multiple tasks. Gilbert Strang’s linear algebra video lectures, which Kanaka suggested.

    • 1 hr 15 min
