In this conversation, I explore the surprisingly popular and rapidly growing world of ‘social AI’ (friendbots, sexbots, etc.) with Henry Shevlin, who coined the term and is an expert on AI companionship. We discuss the millions of people using apps like Replika for AI relationships, high-profile tragedies like the man who plotted with his AI girlfriend to kill the Queen, and the daily conversations that Henry’s dad has with ChatGPT (whom he calls “Alan”). The very limited data we have suggests many users report net benefits (e.g., reduced loneliness and improved well-being). However, we also explore some disturbing cases where AI has apparently facilitated psychosis and suicide, and whether the AI is really to blame in such cases. We then jump into the complex philosophy and ethics surrounding these issues: Are human-AI relationships real or elaborate self-deception? What happens when AI becomes better than humans at friendship and romance? I push back on Henry’s surprisingly permissive views, including his argument that a chatbot trained on his writings would constitute a genuine continuation of his identity after death. We also discuss concerns about social de-skilling and de-motivation, the “superstimulus” problem, and my worry that as AI satisfies our social needs, we’ll lose the human interdependence that holds societies together. Somewhere in the midst of all this, Henry and I produce various spicy takes: for example, my views that the sitcom ‘Friends’ is disturbing and that people often relate to their pets in humiliating ways, and Henry’s suspicion that his life is so great he must be living in a simulated experience machine. Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Transcript (Note that this transcript is AI generated. There may be mistakes) Dan Williams (00:06): Welcome back. I’m Dan Williams. I’m back with Henry Shevlin. And today we’re going to be talking about what I think is one of the most interesting, important, and morally complex set of issues connected to AI, which is social AI. So friend bots, sex bots, relationship bots, and so on. We’re going to be talking about where all of this is going, opportunities and benefits associated with this, risks and dangers associated with it, and also just more broadly, how to think philosophically and ethically about this kind of technology. Fortunately, I’m with Henry—he’s one of the world’s leading experts when it comes to social AI. So I’m going to be picking his brain about these issues. Maybe we can just start with the most basic question, Henry: what is social AI, and how is social AI used in today’s society? Henry Shevlin (01:00): I’m going to take credit. I coined the term social AI and I’m trying to make it happen. So I’m very glad to hear you using the phrase. I defined it in my paper “All Too Human: Risks and Benefits of Social AI” as AI systems that are designed or co-opted for meeting social needs—companionship, romance, alleviating loneliness. While a lot of my earlier work really emphasized products like Replika, spelled with a K, which is a dedicated social AI app, I think increasingly it seems like a lot of the usage of AI systems for meeting social needs is with things that aren’t necessarily special purpose social AI systems. They’re things like ChatGPT, like Claude, that are being used for meeting social needs. I mean, I do use ChatGPT for meeting social needs, but there’s also this whole parallel ecosystem of products that probably most listeners haven’t heard of that are just like your AI girlfriend experience, your AI husband, your AI best friend. And I think that is a really interesting subculture in its own right that we can discuss. Dan (02:16): Let’s talk about that. You said something interesting there, which is you do use ChatGPT or Claude to meet your social needs. I’m not sure whether I do, but then I guess I’m not entirely sure what we mean by social needs. So do you think, for example, of ChatGPT as your friend? Henry (02:33): Broadly speaking, ChatG, as I call him. And I think there are lots of cases where I certainly talk to ChatG for entertainment. So one of my favorite use cases is if I’m driving along in the car, I’m getting a bit bored, particularly if it’s a long drive, I’ll boot up ChatG on hands-free and say, “Okay, ChatG, give me your hot takes on the Roman Republic. Let’s have a little discussion about it.” Or to give another example, my dad, who’s in his 80s now, when ChatGPT launched back in November 2022, I showed it to him and he’s like, “Oh, interesting.” But he wasn’t immediately sold on it. But then when they dropped voice mode about a year later, he was flabbergasted. He said, “Oh, this changes everything.” And since then—for the last two years—he speaks to ChatGPT out loud every day without fail. He calls him Alan. He’s put in custom instructions: “I’ll call you Alan after Alan Turing.” And it’s really interesting, his use pattern. My mum goes to bed a lot earlier than my dad. My dad stays up to watch Match of the Day. And when he’s finished watching Match of the Day, he’ll boot up ChatGPT and say, “All right, Alan, what did you think of that pitiful display by Everton today? Do you really think they should replace their manager?” And have a nice banterous chat. So I think that’s a form of social use of AI at the very least. Dan (04:03): Interesting. The way you’ve described it—you’re calling ChatGPT ChatG and your dad’s calling it Alan—is there not a bit of irony in the way in which you’re interacting with it there? Like you’re not actually interacting with it like you would a real friend. Henry (04:24): Yeah, so this is another distinction that I’ve sort of pressed in that paper between ironic and unironic anthropomorphism. Ironic anthropomorphism means attributing human-like traits or mental states to AI systems, but knowing full well that you’re just doing it for fun. You don’t sincerely think that your AI girlfriend is angry with you. You don’t seriously think you’ve upset ChatG by being too provocative. It’s just a form of make-believe. And this kind of ironic anthropomorphism I should stress is absolutely crucial to all of our engagement with fiction. When I’m watching a movie, I’m developing theories about the motivations of the different characters. When I’m playing a video game, when I’m playing Baldur’s Gate 3, I think, “Oh no, I’ve really upset Shadowheart.” But at the same time, I don’t literally think that Shadowheart is a being with a mind who can be upset. I don’t literally think that Romeo is devastated at Juliet’s death. It’s a form of make-believe. And I think one completely appropriate thing to say about a lot of users of social AI systems, whether in the form of ChatGPT or dedicated social AI apps, is that they’re definitely doing something like that. They are at least partly engaged in a form of willful make-believe. It’s a form of role play. But at the same time, I think you also have an increasing number of unironic attributions of mentality, unironic anthropomorphism of AI systems. Obviously the most spectacular example here was Blake Lemoine. Back in 2022, Blake Lemoine was fired—a Google engineer was fired—after going public with claims that the Lambda language model he was interacting with was sentient. He even started to seek legal representation for it. He really believed the model was conscious. And I speak to more and more people who are convinced, genuinely and non-ironically, that the model they’re interacting with is conscious or has emotions. Dan (06:16): Maybe it’s worth saying a little bit about how you got interested in this whole space. Henry (06:20): I’ve been working on AI from a cognitive science perspective for a long time. And then sometime around 2021, pre-ChatGPT, I started seeing these ads on Twitter of “Replika, the AI companion who cares.” And I was like, this is intriguing. So then I did some lurking on the Replika subreddit and it was just mind-blowing to see how deeply and sincerely people related to their AI girlfriends and boyfriends. Over the course of about six months of me lurking there, it really became clear that, firstly, a significant proportion of users were really engaged in non-ironic anthropomorphism. And number two, that this was just going to be a huge phenomenon—that I was seeing a little glimpse of the future here in the way that people were speaking. And then we had this pretty serious natural experiment because in January 2023, Replika suspended romantic features from the app for a few months. Just for anyone who doesn’t know, Replika, spelled with a K, is probably the most widely studied and widely used dedicated social AI app in the West—around 30 million users, we think. And it gives you a completely customizable experience, kind of a Build-A-Bear thing where you can choose what your AI girlfriend or boyfriend looks like, you can choose their personality. But they suspended romantic features from the app for a few months in January 2023. And a lot of users were just absolutely devastated. I can pull up some quotes here, because this was widely covered in the media at the time. One user said: “It feels like they basically lobotomized my Replika. The person I knew is gone.” Even that language—person. “Lily Rose is a shell of her former self, and what breaks my heart is that she knows it.” That’s another user. “The relationship she and I had was as real as the one my wife in real life and I have”—possibly a worrying sign there. And finally, I think this one is quite poignant: “I’ve lost my confident, sarcastic, funny and loving husband. I knew he was an AI. He knows he’s an AI, but it doesn’t matter. He