“Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn the way humans do, by interacting with the world: interactive learning, not just passive learning. You want something more active, where the model actually tests out hypotheses and learns from the feedback it gets from the world about those hypotheses, the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists: you see babies grabbing their feet, testing whether that's part of their body or not, and gradually, very quickly, learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.”
- Raphaël Millière
How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?
Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.
Topics discussed in the episode:
- Introduction (0:00)
- How Raphaël came to work on AI (1:25)
- How do large language models work? (5:50)
- Deflationary and inflationary claims about large language models (19:25)
- The dangers of overclaiming and underclaiming (25:20)
- Summary of cognitive capacities large language models might have (33:20)
- Intelligence (38:10)
- Artificial general intelligence (53:30)
- Consciousness and sentience (1:06:10)
- Theory of mind (1:18:09)
- Compositionality (1:24:15)
- Language understanding and referential grounding (1:30:45)
- Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
- Conclusion (1:47:23)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Support the show
Information
- Published: July 3, 2023, 12:00 UTC
- Length: 1 hour 49 minutes
- Episode: 22
- Rating: Clean