Conspicuous Cognition Podcast

Dan Williams

A podcast about big questions in philosophy, psychology, evolution, politics, artificial intelligence, and more. www.conspicuouscognition.com

Episodes

  1. JAN 23

    AI Sessions #8: Misinformation, Social Media, and Deepfakes (with Sacha Altay)

    Henry and I chat with Dr Sacha Altay about:

    * How prevalent is misinformation?
    * What even is “misinformation”?
    * Is there a difference between politics and science?
    * How impactful are propaganda, influence campaigns, and advertising?
    * What impact has social media had on modern democracies?
    * How worried should we be about the impact of generative AI, including deepfakes, on the information environment?
    * The “liar’s dividend”
    * Whether ChatGPT is more accurate and less biased than the average politician, pundit, and voter.

    Links

    * “Misinformation Reloaded? Fears about the Impact of Generative AI on Misinformation are Overblown” by Felix M. Simon, Sacha Altay, & Hugo Mercier
    * “Don’t Panic (Yet): Assessing the Evidence and Discourse Around Generative AI and Elections” by Felix M. Simon & Sacha Altay
    * “The Media Very Rarely Lies” by Scott Alexander
    * “How Dangerous is Misinformation?” by Dan Williams
    * “Scapegoating the Algorithm” by Dan Williams
    * “Is Social Media Destroying Democracy—Or Giving It To Us Good And Hard?” by Dan Williams
    * “Not Born Yesterday: The Science of Who We Trust and What We Believe” by Hugo Mercier
    * Sacha Altay
    * Joseph Uscinski
    * Ben Tappin
    * “Durably Reducing Conspiracy Beliefs Through Dialogues with AI” by Thomas H. Costello, Gordon Pennycook, & David G. Rand
    * “The Levers of Political Persuasion with Conversational AI” by Kobi Hackenburg, Ben M. Tappin, et al.

    Chapters

    * 00:00 Understanding Misinformation: Definitions and Prevalence
    * 04:22 The Complexity of Media Bias and Misinformation
    * 14:40 Human Gullibility: Misconceptions and Realities
    * 27:28 Selective Exposure and Demand for Misinformation
    * 29:49 Political Advertising: Efficacy and Misconceptions
    * 35:13 Social Media’s Role in Political Discourse
    * 40:50 Evaluating the Impact of Social Media on Society
    * 42:44 The Impact of Political Content on Social Media
    * 46:57 The Changing Landscape of Political Voices
    * 51:41 Generative AI and Its Implications for Misinformation
    * 01:03:46 The Liar’s Dividend and Trust in Media
    * 01:14:11 Personalization and the Role of Generative AI

    Transcript

    Please note that this transcript was edited by AI and may contain mistakes.

    Dan Williams: Okay, welcome back. I’m Dan Williams. I’m back with Henry Shevlin. And today we’re going to be talking about one of the most controversial, consequential topics in popular discourse, in academic research, and in politics, which is misinformation. So we’re going to be talking about how widespread misinformation is. Are we living through, as some people claim, a misinformation age, a post-truth era, an epistemic crisis? How impactful is misinformation, and, more broadly, domestic and foreign influence campaigns? What’s the role of social media platforms like TikTok, YouTube, Facebook, and X when it comes to the information environment? Is social media a kind of technological wrecking ball which has smashed into democratic societies and created all sorts of havoc? And what’s the impact of generative AI on the information environment, both when it comes to systems like ChatGPT and when it comes to deepfakes: the use of generative AI to create hyper-realistic audio, video, and images? Fortunately, we’re joined by Sacha Altay, a brilliant heterodox researcher in the misinformation space, who pushes back against what he perceives to be simplistic and alarmist takes concerning misinformation.
So we’re going to be picking Sacha’s brain and just more generally having a chat about misinformation, social media, and the information environment. So Sacha, maybe just to kick things off: in your estimation, if we’re keeping our focus on Western democracies, how prevalent is misinformation?

Sacha Altay: Hi guys, my pleasure to be here. So it’s a very difficult question, because we need to define what misinformation is. Let’s first stick to the empirical literature on misinformation and look at the scientific estimates. There are basically two or three ways to define misinformation. One of them is to look at fact-checked false news: false news that has been fact-checked by fact-checkers as being false or misleading. And by this account, misinformation is quite small on social media like Facebook or Twitter. It’s between 1 and 5% of all the content, or all the news, that people come across. So according to this definition, it’s quite small. There is some variability across countries. For instance, it seems to be higher in countries like, I don’t know, the US or France than in the UK or Germany. There is another definition which is a bit more expansive, because the problem with fact-checked false news is that you rely entirely on the work of fact-checkers, and of course fact-checkers cannot fact-check everything, and not all misinformation is news. So you see the problems. Another way is to just look at the sources of information and classify them based on how good they are: basically how much they share reliable information, how much they follow good journalistic practice, et cetera. And the advantage of this technique is that you can have a much broader range, because you can have, I don’t know, 3,000 sources of information, and that broadly covers most of the information that people see. Here the definition of misinformation is just misleading information that comes from sources that are judged to be unreliable. And by this definition, misinformation is also quite small. Again, it’s about 1 to 5% of all the news that people encounter. But then of course, the problem is that not all the information people encounter comes in this form. Some of it can come in the form of images or all sorts of other things, and so this broadens the definition of misinformation. Some people think that when you broaden the definition, you find much more misinformation. My reading is that when you broaden the definition, you actually include so much more information that you increase the denominator. So of course, there’s going to be more misinformation, but because the denominator is larger, the proportion is going to be pretty much the same. But that’s an empirical question. So let’s say, to sum up, that it’s smaller than people think, according to the scientific estimates.

Henry Shevlin: If I can just come in here, a point that Dan, you’ve emphasized in our conversations with me, and I think Scott Alexander has also emphasized in a great blog post called “The Media Very Rarely Lies”, is that a lot of what people think of as misinformation is just true information selectively expressed, or couched in a way that naturally leads people to form false beliefs but doesn’t involve the presentation of falsehoods. Does that sort of thing feature in any of these more expansive definitions of misinformation?
Is it possible to create definitions that can capture this kind of intentionally deceptive but not strictly false content?

Sacha Altay: I’d say that when you look at the definitions based on sources, if a source is systematically biased and systematically misrepresents evidence, it is going to be classified as misinformation. I think the problem, and the more subtle point, is that these sources are not very important, because people don’t trust them very much. The bigger problem is when much more trusted sources with a much larger reach, like, I don’t know, the BBC or the New York Times, which are accurate most of the time, get things wrong on systematic issues. And that’s the bigger issue: because they are right most of the time, they have a big reach and a lot of trust, but they are wrong sometimes. And that’s the problem.

Dan Williams: But just to focus on that observation of Henry’s: you might say, well, they’re accurate most of the time, but nevertheless, you can have a media outlet which is, strictly speaking, accurate most of the time with every single news story that it reports on, but which, because of the ways in which it selects, omits, frames, packages, and contextualizes information, nevertheless ends up misinforming audiences, even if every single story that it reports on is, on its merits, factual and evidence-based. I mean, I think the way that I understand what’s happening in this broader debate about the prevalence of misinformation is that around 2016, when we had Brexit in the United Kingdom and then the first election of Donald Trump, there was this massive panic about misinformation, because many people thought maybe that’s what’s driving a lot of this support for what gets called right-wing authoritarian populist politics. And around that time, when people were using the term misinformation, they were kind of thinking of fake news in the literal sense of that term: false, outright fabricated information presented in the format of news. And as you pointed out, when researchers then looked at the prevalence of that kind of content, which you don’t really find when it comes to establishment news media for the most part (there are always gonna be exceptions), that stuff is pretty rare. And then one of the responses to that is to say, okay, if you’re only looking at outright fake news, then you’re missing all of these other ways in which communication can be misleading: by being selective, by omitting relevant context, through framing, through subtle ideological biases. And then my view on that is, well, once you’ve expanded the term to that extent, and you’ve got this really elastic, amorphous definition, it becomes analytically useless. You’re just bundling together so many different things. And that kind of content is also really pervasive

    1h 23m
  2. JAN 9

    AI Sessions #7: How Close is "AGI"?

    Keywords

    AGI, artificial general intelligence, AI progress, transformative AI, human intelligence, skepticism, economic impact, political implications, cultural shift, predictions

    Summary

    In this conversation, Dan Williams and Henry Shevlin discuss the multifaceted concept of Artificial General Intelligence (AGI) and various controversies surrounding it, exploring its definitions, measurement, implications, and various sources of scepticism. They discuss the potential for transformative AI, the distinctions between AGI and narrow AI, and the real-world impacts of AI advancements. The conversation also touches on the philosophical debates regarding human intelligence versus AGI, the economic and political ramifications of AI integration, and predictions for the future of AI technology.

    Takeaways

    * AGI is a complex and often vague concept.
    * There is no consensus on the definition of AGI.
    * AGI could serve as a shorthand for transformative AI.
    * Human intelligence is not a perfect model for AGI.
    * Transformative AI can exist without achieving AGI.
    * Incremental progress in AI is expected rather than a sudden breakthrough.
    * Skepticism towards AGI is valid and necessary.
    * AI's impact on the economy will be significant.
    * Political backlash against AI is likely to increase.
    * Cultural shifts regarding AI will continue to evolve.

    Chapters

    * 00:00 Understanding AGI: A Controversial Concept
    * 02:21 The Utility and Limitations of AGI
    * 07:10 Defining AGI: Categories and Perspectives
    * 12:01 Transformative AI vs. AGI: A Distinction
    * 16:15 Generality in AI: Beyond Human Intelligence
    * 22:13 Skepticism and Progress in AI Development
    * 28:42 The Evolution of LLMs and Their Capabilities
    * 30:49 Moravec's Paradox and Its Implications
    * 33:05 The Limits of AI in Creativity and Judgment
    * 37:40 Skepticism Towards AGI and Human Intelligence
    * 42:54 The Jagged Nature of AI Intelligence
    * 47:32 Measuring AI Progress and Its Real-World Impact
    * 56:39 Evaluating AI Progress and Benchmarks
    * 01:02:22 The Rise of Claude Code and Its Implications
    * 01:04:33 Transitioning to a Post-AGI World
    * 01:15:15 Predictions for 2026: Capabilities, Economics, and Politics

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.conspicuouscognition.com/subscribe

    1h 27m
  3. 2025-12-20

    AI Sessions #6: AI Companions and Consciousness

    In this episode, Henry and I spoke to Rose Guingrich about AI companions, consciousness, and much more. This was a really fun conversation! Rose is a PhD candidate in Psychology and Social Policy at Princeton University and a National Science Foundation Graduate Research Fellow. She conducts research on the social impacts of conversational AI agents like chatbots, digital voice assistants, and social robots. As founder of Ethicom, Rose consults on prosocial AI design and provides public resources to enable people to be more informed, responsible, and ethical users and developers of AI technologies. She is also co-host of the podcast Our Lives With Bots, which covers the psychology and ethics of human-AI interaction now and in the future. Find out about her really interesting research here. You can find the first conversation that Henry and I had about Social AI here. Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

    Transcript

    Note: this transcript is AI-generated and may feature mistakes.

    Henry Shevlin (00:01) Hi everyone and welcome to the festive edition of Conspicuous Cognition’s AI Sessions. We’re here with myself, Henry Shevlin, my colleague Dan Williams, and our guest today, Rose Guingrich, who we’re very lucky to have on the show to be talking about social AI and AI companions with us. We did do an episode on this two episodes ago, which featured me and Dan chatting about the rising phenomenon of social AI. And so if anyone wants a basic sort of primer on the topic, go back and listen to that as well. But today we’re going to be diving into some of the more empirical issues and looking at Rose’s work on this topic. So try to imagine a house that’s not a home. Try to imagine a Christmas all alone, and then be reassured that you don’t have to spend Christmas all alone. In fact, nobody ever needs to spend Christmas alone ever again, because their AI girlfriend, boyfriend, best friend, husband, or wife will be there to warm the cockles of their hearts throughout the festive season with AI-generated banter and therapy. Or at least this is what the promise of social AI might seem to hold. And in fact, just in today’s Guardian here in the UK, we saw an announcement that a third of UK citizens have used AI for emotional support. A really striking finding. So, cheesy intro out of the way. Rose, it’s great to have you on the show. Tell us a little bit about where you think the current social AI companion landscape is at right now and what the major trends and use patterns you’re seeing are.

    Rose E. Guingrich (01:36) So right now it appears as though we are moving toward an AI companion world where people are less judgmental about people using AI companions. It’s much less stigmatized than it was a couple of years ago. And now, of course, we’re seeing reports where, for example, three quarters of U.S. teens have used AI companions, about half are regular users, and 13% are daily users. And so we’re seeing this influx of AI companion use from young people and also children as well, of course, from the reports that we’ve seen about teens using AI as a companion. And I think looking forward, we’re only going to see more and more use of AI companions as companies recognize that the market is ready for these sorts of machines to come into people’s lives as social interaction partners.
And then if you look even further forward, these chatbot companions are soon going to transition into robot companions. And there we’re going to see even more social impacts, I think, based on embodied conversational agents.

Dan Williams (02:46) Can I just ask a quick follow-up about that, Rose? So you said that the use of these AI companions is becoming more prevalent. You also said it’s becoming less stigmatized. Do we have good data on that? Do we have data in terms of which populations are stigmatizing this kind of activity more or less?

Rose E. Guingrich (03:06) So in terms of the stigma, we don’t have a lot of information about that. But we can look at, for example, a study that I ran in 2023 where I looked at people’s perceptions of AI companions, both from those who were users of the companion chatbot Replika and those who were non-users, from the US and the UK. And the non-users’ perceptions of AI companions and of people who use AI companions at that time were fairly negative. So, for example, non-users indicated that it’s a sad world we live in if these things are for real, and that AI companions are for people who are social outcasts or lonely or can’t have real friends. And now, in the media at least, we see a lot more discourse on AI companions and sharing about having AI companions. One thing I can point to is subreddits. For example, My Boyfriend Is AI has 70,000 members, and they are explicitly labeled as companions, whereas other subreddits label their members weekly visitors, visitors, or users. This one says companions, and people on the subreddit are talking about their AI girlfriend, boyfriend, partner, whatever, and finding community there. Now, if you look at that subreddit, you also see people talking about disclosing their companion relationship to friends or family and receiving backlash, but there are also people indicating that others see this as something that could maybe be valuable to them and not necessarily a weird thing. I think that’s also due to the shifting of social norms, based on how many reports we’re seeing about AI companion use, and knowing that people use not just dedicated AI companions as social interaction partners but also these GPTs like Claude, Gemini, etc., which people are turning to as companions as well, and being quite open about it.

Henry Shevlin (04:59) It’s been really fascinating to see, because I think we met, would it have been summer 2023, Rose, or maybe 2022, at an event in New York, the Association for the Scientific Study of Consciousness meeting, where you were presenting a paper on your 2023 study. I was presenting a paper on social AI and AI consciousness. And it felt like then absolutely no one was talking about this. Replika was already pretty successful, but basically no one I spoke to had even heard of it. And then it’s really in the last couple of years that things have accelerated fast. And now basically every couple of days, a major newspaper has some headline about people falling in love with their particular companion, or sometimes tragic incidents involving suicides or psychosis, or sometimes just observation-level studies about what young people today are doing and so forth. Is it your perception that this is accelerating fast?

Rose E. Guingrich (05:53) Definitely. And we’re also seeing an emerging market of AI toys. So AI companions that are marketed specifically for children.
And so even though right now we’re mainly seeing companion use from young people and young adults, it’s now shifting toward children as well. Ages 3 through 12 is what these toys are marketed for. And they’re marketed as a child’s best friend. So these are going to be the forever users, right? Starting young with AI companions and then moving forward into robot companions that we will someday have in our homes; it’s just a natural progression of what this is going to look like.

Dan Williams (06:28) Can I ask a question just about the kind of commercial space here? So there is a company like Replika, and they make, I guess, bespoke social AIs, AI companions. Presumably, though, the models that they’re using to underpin those AIs are not as sophisticated as what you’ve got with OpenAI and Anthropic and these others, you know, Google’s Gemini and so on. Is that right? Are they using their own models? And if they are, then presumably those models aren’t as sophisticated as the cutting-edge models used by the leading companies in the field.

Rose E. Guingrich (07:02) I suppose it depends on what you mean by sophistication. I think sophistication has a lot to do with the use case. So for Replika, the sophistication aspect of it is, well, obviously people are finding it useful and finding it sophisticated enough to meet their social needs and to operate as a companion. But of course it doesn’t have the level of funding and infrastructure that big tech companies like OpenAI have to make their models quote unquote more sophisticated, perhaps with better training data, and better suited to multiple use cases given that they’re operating as general-purpose tools. But way back in 2021, Replika was operating on the GPT-3 model, but got kicked off of it because OpenAI changed their policy such that third parties using their model could not use it for adult content. But of course, fast forward to this year, and Sam Altman is saying, oh, everyone’s upset about GPT no longer feeling like a friend; don’t worry, adult users, you can now use ChatGPT for adult content. So, you know, full circle: it’s all operating on the logic of “what is it that users say they want? Here, we’re going to give it to them so they continue to use our platform.”

Henry Shevlin (08:18) So it’ll be interesting to watch, because with ChatGPT, you know, Sam Altman has said that he wants erotic role play to be offered as a service to adults; “treat adults like adults” seems to be the kind of mantra there. And of course Grok has already got Ani and a couple of other companions. So do you think it’s likely that we’ll see this no longer as a kind of niche industry, but as something that just gets baked into the major commercially available language models?

Rose E. Guingrich (08:47) Yeah, I would say so. I don’t think it’s niche anymore at all, actually, given that these large language models, these GPTs, can be used as companions. And if you look at the metrics in the

    1h 7m
  4. 2025-12-04

    AI Sessions #5: How AI Broke Education

    Henry Shevlin and I sat down to discuss a topic that is currently driving both of us slightly insane: the impact of AI on education. On the one hand, the educational potential of AI is staggering. Modern large language models like ChatGPT offer incredible opportunities for 24/7 personal tutoring on any topic you might want to learn about, as well as many other benefits that would have seemed like science fiction only a few years ago. One of the really fun parts of this conversation was discussing how we personally use AI to enhance our learning, reading, and thinking. On the other hand, AI has clearly blown up the logic of teaching and assessment across our educational institutions, which were not designed for a world in which students have access to machines that are much better at writing and many forms of problem-solving than they are. And yet… there has been very little adaptation. The most obvious example is that many universities still use take-home essays to assess students. This is insane. We discuss this and many other topics in this conversation, including:

    * How should schools and colleges adapt to a world with LLMs?
    * How AI might exacerbate certain inequalities.
    * Whether AI-driven automation of knowledge work undermines the value of the skills that schools and colleges teach today.
    * How LLMs might make people dumber.

    Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

    Links

    * John Burn-Murdoch, Financial Times, Have Humans Passed Peak Brain Power?
    * James Walsh, New York Magazine, Everyone Is Cheating Their Way Through College
    * Rose Horowitch, The Atlantic, Accommodation Nation

    Transcript

    Note: this transcript is AI-generated and may contain mistakes.

    Dan Williams Welcome everyone. I’m Dan Williams, and I’m back with my friend and co-conspirator, Henry Shevlin. Today we’re going to be talking about a topic which is close to both of our hearts as academics who have spent far too long in educational institutions: the impact of AI on education and learning in general, but also more specifically on the institutions—the schools and universities—that function to provide education. There’s a fairly simple starting point for this episode, which is that the way we currently do education was obviously not built for a world in which students have access to these absolutely amazing writing and problem-solving machines twenty-four seven. And yet, for the most part, it seems like many educational institutions are just carrying on with business as usual. On the one hand, the opportunities associated with AI are absolutely enormous. Every student has access twenty-four seven to a personal tutor that can provide tailored information, tailored feedback, tailored quizzes, flashcards, visualizations, diagrams, and so on. On the other hand, we’ve quietly blown up the logic of assessment and lots of the ways in which we traditionally educate students—most obviously with the fact that many institutions, universities specifically, still use take-home essays as a mode of assessment, which, at least in my view (and I’m interested to hear what Henry thinks), is absolutely insane. So what we’re going to be talking about in this episode are a few general questions. Firstly, what’s the overall educational potential when it comes to AI, including outside of formal institutions? What are the actual effects that AI is having on students and on these institutions? How should schools and universities respond?
And then most generally, should we think of AI as a kind of crisis—a sort of extinction-level threat for our current educational institutions—or as an opportunity, or as both? So Henry, maybe we can start with an opening question: in your view, what is the educational potential of AI?

Henry Shevlin I think the educational potential is insane. I almost think that if you were an alien species looking at Earth, looking at these things called LLMs, and asking why we developed these things in the first place—without having the history of it—you’d think, “This has got to be some kind of educational tool.” If you’ve read Neal Stephenson’s The Diamond Age, you see a prophecy of something a little bit like an LLM as an educational tool there. I think AI in general, but LLMs specifically, are just amazingly well suited to serve as tutors and to buttress learning. Probably one key concept to establish right out the gate, because I find it very useful: some listeners may be familiar with something called Bloom’s two sigma problem. This is the name of an educational finding from the 1980s associated with Benjamin Bloom, one of the most prominent educational psychologists of the 20th century, known for things like Bloom’s taxonomy of learning. Basically, he did a mini meta-analysis looking at the impact of one-to-one tutoring compared to group tuition. He found that the impact of one-to-one tutoring on mastery and retention of material was two standard deviations, which is colossal—bigger than basically any other educational intervention we know of. Just for context, one of the most challenging and widely discussed educational achievement gaps in the US, the gap between black and white students, is roughly one standard deviation. So this is twice that size. Now, worth flagging, there’s been a lot of controversy and deeper analysis of that initial paper by Bloom. For example, the studies were mostly looking at students in tutoring groups who had a two-week crammer course versus students who had been learning all year, so there were probably recency effects. He was only looking at two fairly small-scale studies. Other studies looking at the impact of private tutoring versus group tuition have found big effects, even if not quite two standard deviations. And this makes absolute intuitive sense—there’s a reason the rich and famous and powerful like to get private tutors for their kids.

Dan Williams Yeah.

Henry Shevlin There’s a reason why Philip II of Macedon got a tutor for Alexander. And more broadly, I think we can both attest as products of the Oxbridge system: one of the key features of Oxford and Cambridge is that they have one-to-one tutorials (or “supervisions,” as the tabs call them). This is a really powerful learning method. So even if it’s not two standard deviations from private tuition, it’s a big effect. Now, people might be saying, “Hang on, that’s private tuition by humans. How do we know if LLMs can replicate the same kind of benefits?” It’s a very fair question. In principle, the idea is that if it’s just a matter of having someone deal with students’ individual learning needs, work through their specific problems, figure out exactly what they’re misunderstanding and where they need help, there’s no reason a sufficiently fine-tuned LLM couldn’t do that. I think this is the reason Bloom called it the “two sigma problem”—it was assumed that obviously you can’t give every child in America or the UK a private tutor.
But if LLMs can capture those goods, everyone could have access to a private LLM tutor. That said, I think the counter-argument is that even if we take something like a two standard deviation effect size on learning and mastery at face value, there are things a human tutor brings to the table that an AI tutor couldn’t. Social motivation, for one. I don’t know about your view, but my view is that a huge proportion of education is about creating the right motivational scaffolds for learning. Sitting there talking to a chat window is a very different social experience from sitting with a brilliant young person who’s there to inspire you. Likewise, I think it’s far easier to alt-tab out of a ChatGPT tutor window and play some League of Legends instead, whereas if you’re sitting in a room with a slightly scary Oxford professor asking you questions, you can’t duck out of that so easily. So I think there are various reasons why we probably shouldn’t expect LLM tutors to be as good as human private tutors. But I think the potential there is still massive. We don’t know exactly how big the potential is, but I think there’s good reason to be very excited about it. And personally, I find LLMs have been an absolute game-changer in my ability to rapidly learn about new subjects, get up to speed, correct errors. In a lot of domains, we all have questions we’re a little bit scared to ask because we think, “Is this just a basic misunderstanding?” Dan Williams Yeah. Henry Shevlin Anecdotally, I know so many people—and have experienced firsthand—so many game-changing benefits in learning from LLMs. But at the same time, there’s still a lot of uncertainty about exactly how much they can replicate the benefits of private tutors. Very exciting either way. Dan Williams I think there’s an issue here, which is: what is the potential of this technology for learning? And then there’s a separate question about what the real-world impact of the technology on learning is actually going to be. That might be mediated by the social structures people find themselves in, and also their level of conscientiousness and their own motivations. We should return to this later on. Often with technology, you find that it’s really going to benefit people who are strongly self-motivated and really conscientious. Even with the social media age—we live in a kind of informational golden age if you’re sufficiently self-motivated and have sufficient willpower and conscientiousness to seek out and engage with the highest quality content. In reality, lots of people spend their time watching TikTok shorts, where the informational quality is not so great. But let’s stick with the potential of AI before we move on to the real-world impact and how this is going to interact with people’s actual motivations and the social structure

    56 min
  5. 2025-11-20

    AI Sessions #4: The Social AI Revolution - Friendship, Romance, and the Future of Human Connection

    In this conversation, I explore the surprisingly popular and rapidly growing world of ‘social AI’ (friendbots, sexbots, etc.) with Henry Shevlin, who coined the term and is an expert on AI companionship. We discuss the millions of people using apps like Replika for AI relationships, high-profile tragedies like the man who plotted with his AI girlfriend to kill the Queen, and the daily conversations that Henry’s dad has with ChatGPT (whom he calls “Alan”). The very limited data we have suggests many users report net benefits (e.g., reduced loneliness and improved well-being). However, we also explore some disturbing cases where AI has apparently facilitated psychosis and suicide, and whether the AI is really to blame in such cases. We then jump into the complex philosophy and ethics surrounding these issues: Are human-AI relationships real or elaborate self-deception? What happens when AI becomes better than humans at friendship and romance? I push back on Henry’s surprisingly permissive views, including his argument that a chatbot trained on his writings would constitute a genuine continuation of his identity after death. We also discuss concerns about social de-skilling and de-motivation, the “superstimulus” problem, and my worry that as AI satisfies our social needs, we’ll lose the human interdependence that holds societies together. Somewhere in the midst of all this, Henry and I produce various spicy takes: for example, my views that the sitcom ‘Friends’ is disturbing and that people often relate to their pets in humiliating ways, and Henry’s suspicion that his life is so great he must be living in a simulated experience machine. Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Transcript (Note that this transcript is AI generated. There may be mistakes) Dan Williams (00:06): Welcome back. I’m Dan Williams. I’m back with Henry Shevlin. And today we’re going to be talking about what I think is one of the most interesting, important, and morally complex set of issues connected to AI, which is social AI. So friend bots, sex bots, relationship bots, and so on. We’re going to be talking about where all of this is going, opportunities and benefits associated with this, risks and dangers associated with it, and also just more broadly, how to think philosophically and ethically about this kind of technology. Fortunately, I’m with Henry—he’s one of the world’s leading experts when it comes to social AI. So I’m going to be picking his brain about these issues. Maybe we can just start with the most basic question, Henry: what is social AI, and how is social AI used in today’s society? Henry Shevlin (01:00): I’m going to take credit. I coined the term social AI and I’m trying to make it happen. So I’m very glad to hear you using the phrase. I defined it in my paper “All Too Human: Risks and Benefits of Social AI” as AI systems that are designed or co-opted for meeting social needs—companionship, romance, alleviating loneliness. While a lot of my earlier work really emphasized products like Replika, spelled with a K, which is a dedicated social AI app, I think increasingly it seems like a lot of the usage of AI systems for meeting social needs is with things that aren’t necessarily special purpose social AI systems. They’re things like ChatGPT, like Claude, that are being used for meeting social needs. 
I mean, I do use ChatGPT for meeting social needs, but there’s also this whole parallel ecosystem of products that probably most listeners haven’t heard of that are just like your AI girlfriend experience, your AI husband, your AI best friend. And I think that is a really interesting subculture in its own right that we can discuss. Dan (02:16): Let’s talk about that. You said something interesting there, which is you do use ChatGPT or Claude to meet your social needs. I’m not sure whether I do, but then I guess I’m not entirely sure what we mean by social needs. So do you think, for example, of ChatGPT as your friend? Henry (02:33): Broadly speaking, ChatG, as I call him. And I think there are lots of cases where I certainly talk to ChatG for entertainment. So one of my favorite use cases is if I’m driving along in the car, I’m getting a bit bored, particularly if it’s a long drive, I’ll boot up ChatG on hands-free and say, “Okay, ChatG, give me your hot takes on the Roman Republic. Let’s have a little discussion about it.” Or to give another example, my dad, who’s in his 80s now, when ChatGPT launched back in November 2022, I showed it to him and he’s like, “Oh, interesting.” But he wasn’t immediately sold on it. But then when they dropped voice mode about a year later, he was flabbergasted. He said, “Oh, this changes everything.” And since then—for the last two years—he speaks to ChatGPT out loud every day without fail. He calls him Alan. He’s put in custom instructions: “I’ll call you Alan after Alan Turing.” And it’s really interesting, his use pattern. My mum goes to bed a lot earlier than my dad. My dad stays up to watch Match of the Day. And when he’s finished watching Match of the Day, he’ll boot up ChatGPT and say, “All right, Alan, what did you think of that pitiful display by Everton today? Do you really think they should replace their manager?” And have a nice banterous chat. So I think that’s a form of social use of AI at the very least. Dan (04:03): Interesting. The way you’ve described it—you’re calling ChatGPT ChatG and your dad’s calling it Alan—is there not a bit of irony in the way in which you’re interacting with it there? Like you’re not actually interacting with it like you would a real friend. Henry (04:24): Yeah, so this is another distinction that I’ve sort of pressed in that paper between ironic and unironic anthropomorphism. Ironic anthropomorphism means attributing human-like traits or mental states to AI systems, but knowing full well that you’re just doing it for fun. You don’t sincerely think that your AI girlfriend is angry with you. You don’t seriously think you’ve upset ChatG by being too provocative. It’s just a form of make-believe. And this kind of ironic anthropomorphism I should stress is absolutely crucial to all of our engagement with fiction. When I’m watching a movie, I’m developing theories about the motivations of the different characters. When I’m playing a video game, when I’m playing Baldur’s Gate 3, I think, “Oh no, I’ve really upset Shadowheart.” But at the same time, I don’t literally think that Shadowheart is a being with a mind who can be upset. I don’t literally think that Romeo is devastated at Juliet’s death. It’s a form of make-believe. And I think one completely appropriate thing to say about a lot of users of social AI systems, whether in the form of ChatGPT or dedicated social AI apps, is that they’re definitely doing something like that. They are at least partly engaged in a form of willful make-believe. It’s a form of role play. 
But at the same time, I think you also have an increasing number of unironic attributions of mentality, unironic anthropomorphism of AI systems. Obviously the most spectacular example here was Blake Lemoine, the Google engineer who was fired back in 2022 after going public with claims that the LaMDA language model he was interacting with was sentient. He even started to seek legal representation for it. He really believed the model was conscious. And I speak to more and more people who are convinced, genuinely and non-ironically, that the model they’re interacting with is conscious or has emotions.

Dan (06:16): Maybe it’s worth saying a little bit about how you got interested in this whole space.

Henry (06:20): I’ve been working on AI from a cognitive science perspective for a long time. And then sometime around 2021, pre-ChatGPT, I started seeing these ads on Twitter of “Replika, the AI companion who cares.” And I was like, this is intriguing. So then I did some lurking on the Replika subreddit and it was just mind-blowing to see how deeply and sincerely people related to their AI girlfriends and boyfriends. Over the course of about six months of me lurking there, it really became clear that, firstly, a significant proportion of users were really engaged in non-ironic anthropomorphism. And number two, that this was just going to be a huge phenomenon—that I was seeing a little glimpse of the future here in the way that people were speaking. And then we had this pretty serious natural experiment, because in January 2023, Replika suspended romantic features from the app for a few months. Just for anyone who doesn’t know, Replika, spelled with a K, is probably the most widely studied and widely used dedicated social AI app in the West—around 30 million users, we think. And it gives you a completely customizable experience, kind of a Build-A-Bear thing where you can choose what your AI girlfriend or boyfriend looks like, you can choose their personality. But they suspended romantic features from the app for a few months in January 2023. And a lot of users were just absolutely devastated. I can pull up some quotes here, because this was widely covered in the media at the time. One user said: “It feels like they basically lobotomized my Replika. The person I knew is gone.” Even that language—person. “Lily Rose is a shell of her former self, and what breaks my heart is that she knows it.” That’s another user. “The relationship she and I had was as real as the one my wife in real life and I have”—possibly a worrying sign there. And finally, I think this one is quite poignant: “I’ve lost my confident, sarcastic, funny and loving husband. I knew he was an AI. He knows he’s an AI, but it doesn’t matter. He

    1h 12m
  6. 2025-11-02

    AI Sessions #3: The Truth About AI and the Environment

    I sat down with Henry Shevlin and Andy Masley to discuss AI’s environmental impact and why Andy thinks the panic is largely misplaced. Andy’s core argument: a single ChatGPT prompt uses a tiny fraction of your daily emissions, so even heavy usage barely moves the needle. The real issue, he argues, isn’t that data centers are wasteful—they’re actually highly efficient—but that they make visible what’s normally invisible by aggregating hundreds of thousands of individually tiny tasks in one location. And he argues that the water concerns are even more overblown, with data centers using a small fraction compared to many other industries. We also explored why “every little bit counts” is harmful climate advice that distracts from interventions differing by orders of magnitude in impact. In the second half of the conversation, we moved on to other interesting issues concerning the philosophy and politics of AI. For example, we discussed the “stochastic parrot” critique of chatbots and why there’s a huge middle ground between “useless autocomplete” and “human-level intelligence.” We also discussed Marx, “technological determinism”, and how AI can benefit authoritarian regimes. Finally, we touched on effective altruism, the problem of “arguments as soldiers” in AI discourse, and why even high-brow information environments contain significant misinformation. I enjoyed this conversation and feel like I learned a lot. Let me know in the comments if you think we got anything wrong!

    Links

    * Andy’s Weird Turn Pro Substack
    * Using ChatGPT Is Not Bad for the Environment - A Cheat Sheet
    * All the Ways I Want the AI Debate to Be Better
    * “Sustainable Energy Without the Hot Air” by David MacKay
    * 80,000 Hours
    * On Highbrow Misinformation
    * George Orwell - “You and the Atomic Bomb”

    Transcript

    (Note: this transcript is AI-generated and so might be mistaken in parts.)

    Dan Williams: I’m here with my good friend Henry Shevlin, and today we’re joined by our first ever guest, the great Andy Masley. Andy is one of my all-time favorite bloggers. He writes at the Weird Turn Pro Substack, he is the director of Effective Altruism DC, and he’s published a ton of incredibly interesting articles about the philosophy of AI, the politics of AI, and why so much AI discourse is so bad. And he’s also written about the main thing that we’re going to start talking about today to kick off the conversation, which is AI and the environment. So I think many people have come across some version of the following view that says: there’s a climate crisis, we have to drastically and rapidly reduce greenhouse gas emissions, and at the same time AI companies are using a vast and growing amount of energy. So if we care about the environment, we should feel guilty about using systems like ChatGPT. And maybe if we’re very environmentally conscious, we should boycott these technologies altogether. So Andy, what’s your kind of high-level take on that perspective?

    Andy Masley: A lot to say. Just going down the list here. So basically, for your personal environmental footprint, using chatbots is basically never going to make a dent. I think a lot of people have a lot of really wildly off takes about how big or small a part of their environmental footprint chatbots are. There are a few specific issues that the media has definitely hyped up a lot, especially around water, which I talk about a lot. So living around data centers, I think, is not as bad as the media is currently portraying it. But in the long run, I’m kind of unsure.
There are a lot of wildly different directions AI could go. So I don’t want to speak too confidently about that. And I also just want to flag that I’m kind of a hobbyist on this. I feel like I know a lot of basic stuff, but I don’t have any kind of strong expertise in this stuff. So I’m very open to being wrong about a lot of the specific takes.

Dan Williams: But I think one of the things that—sorry Andy, just to cut you off—but I think one of the things that you point out in your very, very long, very, very detailed blog posts is that you don’t claim to be an expert, but you just cite the expert consensus on all of the very specific things that you’re talking about.

Andy Masley: Yeah, I do want to be clear that every factual statement I make, I think I can back up. How to interpret the facts is on me. I’m using some basic arguments. I was a philosophy major in undergrad, so I like to think I can deploy a few at least convincing or thought-out arguments about this stuff. I’ve also been thinking about climate change stuff since I was a teenager, basically. So I have a lot of pent-up thoughts about just basic climate ethics and have been pretty interested in this for a while. Yeah, not claiming to know more than experts on this. What I am claiming is that if you look at how the facts are presented in a lot of media, a lot of journalism on this is getting really basic interpretations kind of wrong. I remember the first article that I read about this years ago—I think this was 2023—when an article came out that framed ChatGPT as a whole as using a lot of energy because it was using at the time like two times as much energy as a whole person. And at the time I was like, man, that’s not very much. A lot of people are using this app. And if you just add up the number of people using it, it shouldn’t surprise us that this app is using two times as much as an individual person’s lifestyle. So there are a lot of small things like that over time that seem to have built up. It seems like there’s kind of a media consensus on like, this thing is pretty bad for the environment in general, so we should all report this way. And so any facts that are presented are kind of framed as “this is so much energy” or “this is so much water.” And if you just step back and contextualize it, it’s usually pretty shockingly small actually. So I have a ton of other things to say about this. I think a part of the reason this is happening is that a lot of people just see AI as being very weird and new. And I agree there are valid reasons to freak out about AI. I want to flag that I’m not saying don’t freak out about AI, but I think the general energy and water use has been really overblown. We just need to compare the numbers to other things that we do.

The Problem with AI Environmental Reporting

Henry Shevlin: Yeah, this seems to me such a problem with the debate. And Andy, you say you don’t claim to be an expert in this, but I regularly interact with academics working in AI ethics and policy who make grand claims about the environment, but just don’t seem to have a good grasp on the actual figures. And I often ask people—they’ll say ChatGPT uses X amount of water or X amount of electricity, and this was before I started reading your blog posts, but I knew these figures myself. I’d ask basic questions like, okay, what is that as a percentage of overall electricity use? Or is that just for the training runs or is that inference costs?
And half the time they looked at me like I was a Martian, or as if they were saying, “Hang on, you’re not supposed to ask questions like that. I’ve just told you: isn’t 80 million liters or whatever a really big number? Isn’t that enough? Why do you need to know what percentage that is of...?” So I think, honestly, you’re one of the very few people in this space who’s actually sitting down and doing the patient, boring work of quantifying these things and putting them in context alongside the other ways in which humans use electricity and water.

Andy Masley: Yeah, and I will flag for anybody else who wants to look into this—I’m very proud of the work I’ve done, but I have to say it’s actually not boring, for me anyway. It’s quite exciting to dig in and be like, “Wow, almost everything we do uses really crazy amounts of water” or “Here’s how energy is distributed around the world.” And so one hobby horse I’d like to push a little bit more is that a lot more people should be doing this in general. The misconceptions are really wild. I’ve bumped into a pretty wild number of people who are experts in other spaces or just have a lot of power over how AI is communicated, and a lot of them are just sharing these wildly off interpretations of what’s up with this stuff. I’ve talked to at least a few people who have bumped up against issues where, if they’re involved in a university program or something like that, and they want to buy chatbot access for their students, some of whom are pretty low income and might not buy this otherwise, they’ve actually been told by a lot of people, “We can’t do this specifically because of the environment.” Specifically because each individual prompt uses 10 times as much energy as a Google search or whatever, which, by the way, we don’t actually know—that’s pretty outdated. But you know, I really want to step into these conversations and say, 10 Google searches isn’t that much. 10 Google searches’ worth of energy is really small. And it seems like there’s been this congealing of a general consensus on this stuff that is just really wildly off. I think this is one of my first experiences of going against what’s perceived to be the science on this. I remember when I first started talking about this at parties and stuff, people would be like, “You used ChatGPT, it’s so bad, it uses so much energy,” and I would be like, well (as you know, I was a physics teacher for seven years, so I had a lot of experience explaining to children how much a watt-hour is), I would kind of go into that mode a little bit. I’d be like, “Oh well, it’s not that much if you start to look

    1h 41m
  7. 2025-10-14

    AI Sessions #2: Artificial Intelligence and Consciousness - A Deep Dive

    In this conversation, Dr Henry Shevlin (University of Cambridge) and I explore the complex and multifaceted topic of AI consciousness. We discuss the philosophical and scientific dimensions of consciousness, including its definition, the challenges of integrating it into a scientific worldview, and the implications of those challenges for thinking about machine consciousness. The conversation also touches on historical perspectives, ethical considerations, and political issues, all while acknowledging the significant uncertainties that remain in the field.

    Takeaways

    * Consciousness is difficult to define without controversy.
    * The relationship between consciousness and scientific understanding is extremely complex.
    * AI consciousness raises significant ethical questions.
    * The Turing test is a behavioural measure of intelligence, not consciousness.
    * Historical perspectives on AI consciousness are helpful for understanding current debates.
    * Cognition and consciousness are distinct but related.
    * There is a non-trivial chance that some AI systems may have minimal consciousness.
    * Consciousness in AI systems is a scientific question, not just a philosophical one.
    * The debate on AI consciousness is messy and strangely polarising (and often heated) but fascinating and important.

    Chapters

    * 00:00 Exploring the Nature of Consciousness
    * 17:51 The Intersection of AI and Consciousness
    * 36:16 Historical Perspectives on AI and Consciousness
    * 59:39 Ethical Implications of AI Consciousness

    Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

    Transcript

    Please note that this transcript is AI-generated and may contain errors.

    Dan Williams: Okay, welcome everyone. I’m Dan Williams. I’m here with the great Henry Shevlin. And today we’re going to be continuing our series of conversations on artificial intelligence and some of the big-picture philosophical questions that AI throws up. Today specifically, we’re going to be focusing on AI consciousness. So: could machines be conscious? What does it even mean to say that a machine is conscious? How would we tell whether a machine is conscious? Could ChatGPT-5 be conscious? And so on. Before we jump into any of that, Henry, I’ll start with a straightforward question, or what seems like a straightforward question. What is consciousness?

    Henry Shevlin: So it’s very hard to say anything about consciousness that isn’t either a complete platitude or a rephrasing: “consciousness is experience”, “consciousness is your inner light”, “consciousness is what it’s like”. Those are the platitudes. Or else you say something that’s really controversial, like “consciousness is a non-physical substance” or “consciousness is irreducible and intrinsic and private”. So it’s very hard to say anything that is actually helpful without also being massively controversial. But probably let’s start with those more platitudinous descriptions. So I assume, for everyone listening to this, there is something it’s like to be you. When you wake up in the morning and you sip your coffee, your coffee tastes a certain way to you. When you open your eyes and look around, the world appears a certain way to you. If you’re staring at a rosy red apple, that redness is there in your mind in some way. And when you feel pain, that pain feels a certain way. And more broadly, you’re not like a rock or a robot that we can understand purely through its behavior.
There’s also an inner world, some kind of inner life, that structures your experience and structures your behavior. All of which might sound very obvious and not that interesting, or not that revolutionary, but I think part of what makes consciousness so exciting and strange is that it’s just very hard to integrate it with our general scientific picture of the world. And I’ll say in my own case, this is basically why I’m in philosophy. I mean, I was always interested in ethics and free will and these questions. But the moment where I was like, s**t, I’ve got to spend the rest of my life on this, came in my second year as an undergrad at Oxford studying classics. I was vaguely interested in brains and neuroscience, so I took a philosophy of mind module with Professor Anita Avramides. And I read an article that I’m sure many of the listeners at least will have heard of, called “What is it Like to Be a Bat?” by Thomas Nagel, and it blew my mind. And immediately afterwards, I read an article called “Epiphenomenal Qualia” by Frank Jackson, which is the article that introduces Mary’s room. And it blew my mind even more. And basically, I’d spent most of my life up until that point thinking the scientific picture of the world was complete. And you know, there was some stuff we didn’t understand, like what was before the Big Bang, maybe exactly what time is, but when it came to biological organisms like us, we had Darwin, we had neuroscience, it was basically all solved. And then reading more about consciousness, I realized, my god, we don’t even begin to understand what we are, what this is.

Dan Williams: Yeah. I think that’s... Let me just interrupt there to flag a couple of those things, because I think they’re really helpful in terms of structuring the rest of the conversation. The first is, when it comes to consciousness, it’s really, really difficult to articulate precisely, in philosophically satisfying ways, exactly what we’re talking about. You mentioned this classic article, “What is it Like to Be a Bat?” And I think it’s a fantastic article, actually. I’m teaching it at the moment. And one of the reasons I think it’s fantastic is because it does convey in quite a concise way, quite quickly, the sort of thing that we’re interested in. So I’m talking to you, and I assume that there’s something it’s like to be you. Nagel’s famous example is with bats. They are these amazing animals. Their perceptual systems are very alien to ours, but we assume there’s something it’s like to be a bat. So it’s very difficult to state precisely exactly what we’re talking about, but you can sort of gesture at it—something to do with subjective experience, what it’s like to have an experience and so on. And then the other thing that you mentioned, which I think is really interesting, is in a way disconnected from the machine consciousness question specifically, in the sense that even if we had never built AI, there would still be all of these profound mysteries: how the hell do you integrate this thing called subjective experience into a scientific worldview? I mean, there are other sorts of things where people get worried about a potential conflict between, roughly speaking, a scientific worldview and a kind of common sense picture of the world. So maybe free will is another example, or maybe objective facts about how you ought to behave. Some people take that seriously.
I’m not personally one of them, but some people do. But I think you’re right. Consciousness feels so much more mysterious as a phenomenon than these other cases that still seem to pose puzzles for a broadly scientific worldview. Henry Shevlin: Also, unlike free will and unlike objective morality, I think it’s very, very hard to say that consciousness doesn’t exist. I mean, it’s pretty hard to say that free will doesn’t exist and painful perhaps to take the view that objective morality doesn’t exist. But these are just very well established positions. And there are some people out there, illusionists, who try and explain away consciousness. Maybe how successful they are is a matter of debate. But it’s very, very hard to just say, like, your experience, your conscious life—nah, it’s not there. It’s not real. It doesn’t exist. Dan Williams: Yeah, right. Actually, I think that’s another nice place to go before we go to the specific issues connected to artificial intelligence. So there’s this metaphysical mystery, which is how does consciousness, how does subjective experience fit into a broadly scientific, we might even say physicalist picture of the world? And so then there are lots of metaphysical theories of consciousness. I’ll run through my understanding of them, which might be somewhat inadequate, and then you can tell me whether it’s sort of up to date. Roughly speaking, you’ve got physicalist theories that say consciousness is or is realized by or is constituted by physical processes in the brain, in our case. You’ve got dualist theories that say consciousness is something over and above the merely physical. It’s a separate metaphysical domain, and then that comes in all sorts of different forms. You’ve got panpsychism, which is, to me at least, strangely influential at the moment, or at least it seems to be among some philosophers, that says basically everything at some level is conscious, so electrons and quarks are conscious. And then you’ve got illusionism, and I suppose probably the most influential philosopher that’s often associated with illusionism would be Daniel Dennett. I understand that he had a sort of awkward relationship to the branding. But there the idea is something like, look, we take there to be such a thing as consciousness. We take there to be such a thing as subjective experience. But actually, it’s kind of just an illusion. It doesn’t exist. Is that a fair taxonomy? Is that how you view the different pictures of consciousness in the metaphysical debate? Henry Shevlin: Yeah, I think that’s pretty much fair. A couple of tiny little things I’ll add. So panpsychism maybe doesn’t completely slot into this taxonomy in quite the way you might think. Because a lot of panpsychists would say, no, we’re just physicalists, right? We believe

    1h 19m
  8. SEP 18, 2025

    AI Sessions #1: AI - A Normal Technology or a Superintelligent Alien Species?

    Is artificial intelligence (AI) a “normal technology” or a potentially “superintelligent” alien species? Is it true, as some influential people claim, that if anyone builds “superintelligent” AI systems, everyone dies? What even is “superintelligence”? In this conversation, the first official episode of Conspicuous Cognition’s “AI Sessions”, Henry Shevlin and I discuss these and many more issues. Specifically, we explore two highly influential perspectives on the future trajectory, impacts, and dangers of AI. The first models AI as a “normal technology”, potentially transformative but still a tool, which will diffuse throughout society in ways similar to previous technologies like electricity or the internet. Through this lens, we examine how AI is likely to impact the world and discuss deep philosophical and scientific questions about the nature of intelligence and power. The second perspective presents a very different possibility: that we may be on the path to creating superintelligent autonomous agents that threaten to wipe out the human species. We unpack what “superintelligence” means and explore not just whether future AI systems could cause human extinction but whether they would “want” to. Here are the primary sources we cite in our conversation, which also double up as a helpful introductory reading list covering some of the most significant current debates concerning artificial intelligence and the future. Main Sources Cited: * Narayanan, Arvind and Sayash Kapoor (2025). “AI as Normal Technology.” * Yudkowsky, Eliezer and Nate Soares (2025). If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. * Kokotajlo, Daniel, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean (2025). “AI 2027.” * Alexander, Scott and the AI Futures Project (2025). “AI as Profoundly Abnormal Technology.” AI Futures Project Blog. * Henrich, Joseph (2016). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. * Huemer, Michael. “I for one, welcome our AI Overlords.” * Pinker, Steven (2018). Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Further Reading: * Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. * Pinsof, David (2025). “AI Doomerism is B******t.” * Kulveit, Jan, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud (2025). “Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development.” arXiv preprint. For a more expansive reading list, see my syllabus here: You can also see the first conversation that Henry and I had here, which was recorded live and where the sound and video quality were a bit worse: This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.conspicuouscognition.com/subscribe

    1h 17m

About

A podcast about big questions in philosophy, psychology, evolution, politics, artificial intelligence, and more. www.conspicuouscognition.com
