Exploring Machine Consciousness

PRISM

A podcast from PRISM (The Partnership for Research Into Sentient Machines), exploring the possibility and implications of machine consciousness. Visit www.prism-global.com for more about our work.

Episodes

  1. Rose Guingrich: AI Companions, Chatbots, and the Psychology of Human-AI Interaction

    16 Feb


    Rose Guingrich is a PhD candidate in Psychology and Social Policy at Princeton University, where she is a National Science Foundation Graduate Research Fellow. Her research examines human-AI interaction through the lens of social psychology and ethics, focusing on how people perceive minds in machines and how those perceptions shape behavior toward AI and other humans. Rose is founder of Ethicom, a consulting initiative providing tools and information for responsible AI use and development, and co-hosts the Our Lives with Bots podcast with Angy Watson.

    In this episode, Rose explains why she focuses not on whether AI is conscious, but on the consequences of people perceiving AI as conscious. We discuss:

    - How her interdisciplinary background led her to study the perception of personhood in AI systems.
    - Why she prioritises studying the impacts of perceived consciousness over debates about whether AI truly is conscious, and how this connects to Michael Graziano's theory of consciousness as a social construct.
    - The psychological theory behind "carryover effects": how interacting with AI that we anthropomorphize can influence our subsequent interactions with real people, either through practice or relief mechanisms.
    - Results from her longitudinal research on companion chatbots like Replika, showing that anthropomorphism mediates social impacts and that people with a greater desire for social connection anthropomorphize chatbots more.
    - Her proposed design framework for companion chatbots.
    - Why she believes we'll see increased attribution of consciousness to AI once humanoid robots become common.
    - Her call for a psychology subfield dedicated to human-AI interaction, arguing that understanding psychological mechanisms like anthropomorphism will remain relevant even as AI advances.

    Rose argues that regardless of philosophical debates about machine consciousness, the fact that people can and do perceive AI as conscious has measurable social and ethical consequences that deserve serious empirical investigation.

    57 min
  2. Cameron Berg: Why Do LLMs Report Subjective Experience?

    08/12/2025


    Cameron Berg is Research Director at AE Studio, where he leads research exploring markers for subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential consciousness. In this episode, Cameron shares his empirical research into whether current Large Language Models are merely mimicking human text, or potentially developing internal states that resemble subjective experience. We discuss:

    - New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing.
    - Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness, challenging the idea that AI is simply telling us what we want to hear.
    - Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience.
    - The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes.
    - The existential implications of "mind crime" and the urgent need to identify negative valence (suffering) computationally, to avoid creating vast amounts of artificial suffering.

    Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.

    58 min
  3. Lenore Blum: AI Consciousness is Inevitable: The Conscious Turing Machine

    03/11/2025


    *Lenore refers to a few slides in this podcast; you can see them here.*

    Today's guest, distinguished mathematician and computer scientist Lenore Blum, explains why she and her husband Manuel believe machine consciousness isn't just possible, it's inevitable. Their reasoning? If consciousness is computational (and they're betting it is), and we can mathematically specify those computations, then we can build them. It's that simple, and that profound. In this conversation, hosts Will Millership and Calum Chace discuss with Lenore:

    - How the Conscious Turing Machine (CTM) draws from and extends the foundational ideas of Alan Turing's Universal Turing Machine.
    - Using mathematics to "extract and simplify" the complexities of consciousness, searching for the fundamental, formal principles that define it.
    - How the CTM acts as a high-level framework that aligns with the functionalities of competing theories like Global Workspace Theory and Integrated Information Theory (IIT).
    - Why the Blums believe that AI consciousness is "inevitable" and that this provides a functional "roadmap for a conscious AI".
    - The ethical implications of machine suffering, and why the phenomenon of "pain asymbolia" suggests a conscious AI must be able to suffer in order to function.
    - What lessons Alan Turing's original "imitation game" can offer us for creating a practical, real-world test for machine consciousness.

    Lenore's work (links):

    - Blum, L., & Blum, M. (2024). AI Consciousness is Inevitable: A Theoretical Computer Science Perspective. arXiv. https://arxiv.org/pdf/2403.17101
    - Blum, L., & Blum, M. (2022). A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine. PNAS, 119(21). https://doi.org/10.1073/pnas.21159341
    - Closer to Truth: the Blums' Conscious Turing Machine
    - Full list of references here.

    1 h 43 min
  4. Henry Shevlin: Anticipating an Einstein moment in the understanding of consciousness

    28/05/2025


    Welcome to the first episode of Understanding Machine Consciousness. This episode is a collaboration with The London Futurists Podcast. Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives. He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.

    In his 2024 paper, "Consciousness, Machines, and Moral Status," Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in, and emotional attachment to, AIs.

    Note: this episode is co-hosted by David and Will Millership, the CEO of PRISM (Partnership for Research Into Sentient Machines). PRISM is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new PRISM podcast focusing entirely on conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast.

    Selected follow-ups:

    - Henry Shevlin - personal site
    - Kinds of Intelligence - Leverhulme Centre for the Future of Intelligence
    - Consciousness, Machines, and Moral Status - 2024 paper by Henry Shevlin
    - Apply rich psychological terms in AI with care - by Henry Shevlin and Marta Halina
    - What insects can tell us about the origins of consciousness - by Andrew Barron and Colin Klein
    - Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - by Patrick Butlin, Robert Long, et al.
    - Association for the Study of Consciousness
    - London Futurists Podcast
    - https://www.prism-global.com/

    41 min

Ratings & Reviews

5 out of 5
2 ratings

About

A podcast from PRISM (The Partnership for Research Into Sentient Machines), exploring the possibility and implications of machine consciousness. Visit www.prism-global.com for more about our work.