In this conversation, Dr Henry Shevlin (University of Cambridge) and I explore the complex and multifaceted topic of AI consciousness. We discuss the philosophical and scientific dimensions of consciousness, including its definition, the challenges of integrating it into a scientific worldview, and the implications of those challenges for thinking about machine consciousness. The conversation also touches on historical perspectives, ethical considerations, and political issues, all while acknowledging the significant uncertainties that remain in the field.

Takeaways

* Consciousness is difficult to define without controversy.
* The relationship between consciousness and scientific understanding is extremely complex.
* AI consciousness raises significant ethical questions.
* The Turing test is a behavioural measure of intelligence, not consciousness.
* Historical perspectives on AI consciousness are helpful for understanding current debates.
* Cognition and consciousness are distinct but related.
* There is a non-trivial chance that some AI systems may have minimal consciousness.
* Consciousness in AI systems is a scientific question, not just a philosophical one.
* The debate on AI consciousness is messy and strangely polarising (and often heated) but fascinating and important.

Chapters

00:00 Exploring the Nature of Consciousness
17:51 The Intersection of AI and Consciousness
36:16 Historical Perspectives on AI and Consciousness
59:39 Ethical Implications of AI Consciousness

Transcript

Please note that this transcript is AI-generated and may contain errors.

Dan Williams: Okay, welcome everyone. I'm Dan Williams. I'm here with the great Henry Shevlin. And today we're going to be continuing our series of conversations on artificial intelligence, some of the big-picture philosophical questions that AI throws up.
And today specifically, we're going to be focusing on AI consciousness. So could machines be conscious? What the hell does it even mean to say that a machine is conscious? How would we tell whether a machine is conscious? Could ChatGPT-5 be conscious, and so on? Before we jump into any of that, Henry, I'll start with a straightforward question, or what seems like a straightforward question. What is consciousness?

Henry Shevlin: So it's very hard to say anything about consciousness that is either not a complete platitude or a rephrasing, like "consciousness is experience", "consciousness is your inner light", "consciousness is what it's like". Those are the platitudes. Or saying something that's really controversial, like "consciousness is a non-physical substance" or "consciousness is irreducible and intrinsic and private". So it's very hard to say anything that is actually helpful without also being massively controversial. But let's probably start with those more platitudinous descriptions. So I assume, for everyone listening to this, there is something it's like to be you. When you wake up in the morning and sip your coffee, your coffee tastes a certain way to you. When you open your eyes and look around, the world appears a certain way to you. If you're staring at a rosy red apple, that redness is there in your mind in some way. And when you feel pain, that pain feels a certain way. And more broadly, you're not like a rock or a robot, insofar as we can't understand you purely through your behavior. There's also an inner world, some kind of inner life, that structures your experience, that structures your behavior. All of which might sound very obvious and not that interesting, or not that revolutionary, but I think part of what makes consciousness so exciting and strange is that it's just very hard to integrate it with our general scientific picture of the world. And I'll say, in my own case, this is basically why I'm in philosophy.
I mean, I was always interested in ethics and free will and these questions. But the moment where I was like, s**t, I've got to spend the rest of my life on this, came in my second year as an undergrad at Oxford studying classics. I was vaguely interested in brains and neuroscience, so I took a philosophy of mind module with Professor Anita Avramides. And I read an article that I'm sure many of the listeners will at least have heard of, called "What Is It Like to Be a Bat?" by Thomas Nagel, and it blew my mind. And immediately afterwards, I read an article called "Epiphenomenal Qualia" by Frank Jackson, which is the article that introduces Mary's room. And it blew my mind even more. Basically, I'd spent most of my life up until that point thinking the scientific picture of the world was complete. And, you know, there was some stuff we didn't understand, like what was before the Big Bang, or maybe exactly what time is, but when it came to biological organisms like us, we had Darwin, we had neuroscience; it was basically all solved. And then, reading more about consciousness, I realized, my god, we don't even begin to understand what we are, what this is.

Dan Williams: Yeah. I think that's... Let me just interrupt there to flag a couple of those things, because I think they're really helpful in terms of structuring the rest of the conversation. The first is that, when it comes to consciousness, it's really, really difficult to articulate precisely, in philosophically satisfying ways, exactly what we're talking about. You mentioned this classic article, "What Is It Like to Be a Bat?" And I think it's a fantastic article, actually. I'm teaching it at the moment. And one of the reasons I think it's fantastic is that it conveys quite concisely, quite quickly, the sort of thing that we're interested in.
So I'm talking to you, and I assume that there's something it's like to be you. Nagel's famous example is bats. They are these amazing animals. Their perceptual systems are very alien to ours, but we assume there's something it's like to be a bat. So it's very difficult to state precisely exactly what we're talking about, but you can sort of gesture at it: something to do with subjective experience, what it's like to have an experience, and so on. And then the other thing you mentioned, which I think is really interesting, is in a way disconnected from the machine consciousness question specifically, in the sense that even if we had never built AI, there would still be all of these profound mysteries, namely: how the hell do you integrate this thing called subjective experience into a scientific worldview? I mean, there are other sorts of things where people worry about a potential conflict between, roughly speaking, a scientific worldview and a kind of common-sense picture of the world. Maybe free will is one example, or objective facts about how you ought to behave. Some people take that seriously. I'm not personally one of them, but some people do. But I think you're right. Consciousness feels so much more mysterious as a phenomenon than these other cases that also seem to pose puzzles for a broadly scientific worldview.

Henry Shevlin: Also, unlike free will and unlike objective morality, I think it's very, very hard to say that consciousness doesn't exist. I mean, it's pretty hard to say that free will doesn't exist, and painful perhaps to take the view that objective morality doesn't exist. But these are well-established positions. And there are some people out there, illusionists, who try to explain away consciousness. How successful they are is a matter of debate. But it's very, very hard to just say of your experience, your conscious life: nah, it's not there. It's not real.
It doesn't exist.

Dan Williams: Yeah, right. Actually, I think that's another nice place to go before we get to the specific issues connected to artificial intelligence. So there's this metaphysical mystery, which is: how does consciousness, how does subjective experience, fit into a broadly scientific, we might even say physicalist, picture of the world? And then there are lots of metaphysical theories of consciousness. I'll run through my understanding of them, which might be somewhat inadequate, and then you can tell me whether it's up to date. Roughly speaking, you've got physicalist theories, which say consciousness is, or is realized by, or is constituted by, physical processes: in the brain, in our case. You've got dualist theories, which say consciousness is something over and above the merely physical, a separate metaphysical domain, and that comes in all sorts of different forms. You've got panpsychism, which to me at least is strangely influential at the moment, or at least seems to be among some philosophers, and which says that basically everything at some level is conscious, so electrons and quarks are conscious. And then you've got illusionism, and probably the most influential philosopher associated with illusionism would be Daniel Dennett, though I understand he had a somewhat awkward relationship to that branding. But there the idea is something like: look, we take there to be such a thing as consciousness. We take there to be such a thing as subjective experience. But actually, it's just a kind of illusion. It doesn't exist. Is that a fair taxonomy? Is that how you view the different pictures of consciousness in the metaphysical debate?

Henry Shevlin: Yeah, I think that's pretty much fair. A couple of tiny things I'll add. So panpsychism maybe doesn't slot into this taxonomy in quite the way you might think, because a lot of panpsychists would say, no, we're just physicalists, right? We believe