65 episodes

Neuroscience and artificial intelligence work better together. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The show is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Brain Inspired, hosted by Paul Middlebrooks

    • Natural Sciences

    BI 065 Thomas Serre: How Recurrence Helps Vision

    Thomas and I discuss the role of recurrence in visual cognition: how brains excel with so few “layers” compared to deep networks, how feedback recurrence can underlie visual reasoning, how LSTM-like gating could explain the function of canonical cortical microcircuits, the current limitations of deep learning networks such as adversarial examples, and a bit of the history of modeling our hierarchical visual system, including his work on the HMAX model and his interactions with deep learning researchers as convolutional neural networks were being developed.



    Show notes:



    Visit the Serre Lab website. Follow Thomas on twitter: @tserre.
    Good reviews that reference all the work we discussed, including the HMAX model: Beyond the feedforward sweep: feedback computations in the visual cortex. Deep learning: the good, the bad and the ugly.
    Papers about the topics we discuss: Complementary Surrounds Explain Diverse Contextual Phenomena Across Visual Modalities. Recurrent neural circuits for contour detection. Learning long-range spatial dependencies with horizontal gated-recurrent units.

    • 1h 40 min
    BI 064 Galit Shmueli: Explanation vs. Prediction

    Galit and I discuss the independent roles of prediction and explanation in scientific models, their history and eventual separation in the philosophy of science, how they can inform each other, and how statisticians like Galit view the current deep learning explosion.


    Show notes:



    Galit’s website. Follow her on twitter: @gshmueli.
    The papers we discuss or mention: To Explain or To Predict? Predictive Analytics in Information Systems Research.

    • 1h 28 min
    BI 063 Uri Hasson: The Way Evolution Does It

    Uri and I discuss his recent perspective that conceives of brains as super-over-parameterized models that try to fit everything as exactly as possible, rather than abstracting the world into usable models. He was inspired by the way artificial neural networks overfit data when they can, and by how evolution works the same way on a much slower timescale.



    Show notes:



    Uri’s lab website. Follow his lab on twitter: @HassonLab.
    The paper we discuss: Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Here’s the bioRxiv version in case the above doesn’t work.
    Uri mentioned his newest paper: Keep it real: rethinking the primacy of experimental control in cognitive neuroscience.

    • 1h 32 min
    BI 062 Stefan Leijnen: Creativity and Constraint

    Stefan and I discuss creativity and constraint in artificial and biological intelligence. We talk about his Asimov Institute and its goal of achieving artificial creativity, different types and functions of creativity, the neuroscience of creativity and its relation to intelligence, how constraint is an essential factor in every creative process, and how computational accounts of intelligence may need to be discarded to account for our unique creative abilities.



    Show notes:



    The Asimov Institute. Get the Zoo of Networks poster we talk about!
    His site at Utrecht University of Applied Sciences. Stefan’s personal website. Follow the Asimov Institute on twitter: @asimovinstitute.
    Stuff mentioned: Creativity and Constraint in Artificial Systems (Leijnen 2014 dissertation). Incomplete Nature – Terrence Deacon’s long, challenging read with fascinating original ideas. Neither Ghost Nor Machine – Jeremy Sherman’s succinct, readable summary of some arguments in Incomplete Nature.

    • 1h 57 min
    BI 061 Jörn Diedrichsen and Niko Kriegeskorte: Brain Representations

    Jörn, Niko, and I continue the discussion of mental representation from the last episode with Michael Rescorla, then discuss their review paper, Peeling the Onion of Brain Representations, about different ways to extract and understand what information is represented in measured brain activity patterns.



    Show notes:



    Jörn’s lab website. Niko’s lab website. Jörn on twitter: @DiedrichsenLab. Niko on twitter: @KriegeskorteLab.
    The papers we discuss or mention: Peeling the Onion of Brain Representations (Annual Review of Neuroscience, 2019). Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis (PLoS Computational Biology, 2017).

    • 1h 29 min
    BI 060 Michael Rescorla: Mind as Representation Machine

    Michael and I discuss the philosophy, and a bit of the history, of mental representation, including the computational theory of mind and the language of thought hypothesis; how science and philosophy interact; how representation relates to computation in brains and machines; levels of computational explanation; and some examples of representational approaches to mental processes, such as Bayesian modeling.



    Show notes:



    Michael’s website (with links to a ton of his publications).
    Science and philosophy: Why science needs philosophy by Laplane et al., 2019. Why Cognitive Science Needs Philosophy and Vice Versa by Paul Thagard, 2009.
    Some of Michael’s papers/articles we discuss or mention: The Computational Theory of Mind. Levels of Computational Explanation. Computational Modeling of the Mind: What Role for Mental Representation? From Ockham to Turing — and Back Again.
    Talks: Predictive coding “debate” with Michael and a few other folks. An overview and history of the philosophy of representation.
    Books we mentioned: The Structure of Scientific Revolutions by Thomas Kuhn. Memory and the Computational Brain by Randy Gallistel and Adam King. Representation in Cognitive Science by Nicholas Shea. Types and Tokens: On Abstract Objects by Linda Wetzel. Probabilistic Robotics by Thrun, Burgard, and Fox.

    • 1h 36 min
