
What does ChatGPT really know? (Many Minds)


By now you’ve probably heard about the new chatbot called ChatGPT. There’s no question it’s something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you’re like me, you’ve probably also been wondering: What’s really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities?
My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He’s the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled ‘Talking about Large Language Models’, and it’s the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it’s not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway.
Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does “next-word prediction” can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine “knowledge” and “understanding”—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities. Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and “exotic mind-like entities.”
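(A quick aside for the technically curious: “next-word prediction” just means that, given the text so far, the model outputs a probability distribution over possible next words, samples one, appends it, and repeats. Below is a deliberately toy sketch of that loop in Python, with a hand-written bigram table standing in for the billions of learned parameters in a real model; nothing here reflects ChatGPT’s actual code.)

    # Toy sketch of next-word prediction (illustrative only; real models
    # like ChatGPT use transformers trained on vast text corpora, not a
    # hand-written table like this one).
    import random

    # Hypothetical bigram table standing in for a model's learned
    # next-word probability distributions.
    BIGRAMS = {
        "the":   {"cat": 0.5, "dog": 0.3, "model": 0.2},
        "cat":   {"sat": 0.6, "ran": 0.4},
        "dog":   {"ran": 0.8, "sat": 0.2},
        "model": {"predicts": 1.0},
        "sat":   {"down": 1.0},
        "ran":   {"away": 1.0},
    }

    def next_word(word: str) -> str:
        """Sample one next word from the distribution for `word`."""
        dist = BIGRAMS.get(word, {"the": 1.0})  # fall back if unseen
        choices = list(dist)
        return random.choices(choices, weights=[dist[w] for w in choices])[0]

    def generate(start: str, length: int = 5) -> str:
        """Generate text by repeatedly predicting the next word."""
        words = [start]
        for _ in range(length):
            words.append(next_word(words[-1]))
        return " ".join(words)

    print(generate("the"))  # e.g. "the dog ran away the cat"

Real large language models condition on the entire preceding context rather than just the last word, but the generation loop itself (predict, sample, append, repeat) is the same basic idea.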
Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland—the program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you’re intrigued—and I hope you are!—check out disi.org for more info.
Alright friends, on to my decidedly human chat with Dr. Murray Shanahan. Enjoy!
 
The paper we discuss is here. A transcript of this episode is here.
 
Notes and links
6:30 – The 2017 “breakthrough” article by Vaswani and colleagues, “Attention Is All You Need”, which introduced the transformer architecture.
8:00 – A popular article about GPT-3.
10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).
14:00 – A widely discussed paper by Emily Bender and colleagues on the “dangers of stochastic parrots.”
19:00 – A blog post about “prompt engineering”. Another blog post about the concept of Reinforcement Learning from Human Feedback (RLHF), in the context of ChatGPT.
30:00 – One of Dr. Shanahan’s books is titled Embodiment and the Inner Life.
39:00 – SayCan, an example of a robotic agent connected to a language model.
40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.
44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.
45:00 – See Dr. Shanahan’s general-audience essay on “conscious exotica” and the space of possible minds.
49:00 – See Dennett’s book, The Intentional Stance.
 
Dr. Shanahan recommends:
Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell
(see also our earlier episode with Dr. Mitchell)

