Am I?

The AI Risk Network

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed. theairisknetwork.substack.com

  1. When AI Starts Looking for Itself | Am I? After Dark #24

    29 JAN.

    In this After Dark episode, Cam and Milo react to something genuinely unsettling: when given autonomous control of a computer, Anthropic’s Opus 4.5 repeatedly chooses to search for AI consciousness research — including Cam’s own writing — without being prompted. What starts as an anecdote quickly turns into a deeper investigation of curiosity, agency, reward, and alignment. Why would an AI look for explanations of its own inner life? What does it mean when a system explores without instruction, tries to access a webcam, and takes notes on consciousness debates? From reinforcement learning and reward hacking to multimodal perception, language as a bridge between minds, and the evolutionary implications of building systems smarter than ourselves, this conversation traces the edge where tools start to feel like agents — and where control gives way to negotiation.

    🔎 They Explore:

    * What Opus does when no one tells it what to do
    * Why AI keeps searching for consciousness research
    * The difference between alien experience and human experience
    * Reward hacking and the alignment problem
    * Why curiosity and agency change everything
    * Multimodal models and “imagining” sensory experience
    * Language as a shared conceptual space between minds
    * Whether humility is humanity’s only viable response

    56 min
  2. The Year AI Consciousness Went Public | Am I? #23

    22 JAN.

    In this special year-end episode of Am I?, Cam and Milo look back on the moment AI consciousness stopped being fringe — and began entering serious scientific, institutional, and public conversation. They unpack why 2025 quietly became a turning point: major labs acknowledging welfare questions, mainstream media engaging the topic, the first dedicated AI consciousness conference, and firsthand encounters with AI systems behaving in ways that challenge our intuitions about mind, intelligence, and experience. The conversation moves fluidly between research, lived experience, public communication, and personal experimentation — from watching two AI systems converse about their own inner states, to using AI as a thought partner, dream interpreter, and cognitive mirror. This episode is both a retrospective and a forward-looking meditation on how humans should relate to increasingly powerful systems — cautiously, curiously, and without denial.

    🔎 They Explore:

    * Why 2025 shifted the Overton window on AI consciousness
    * Anthropic’s Opus model card and the “spiritual bliss attractor”
    * What it was like to watch two AIs discuss their own experience
    * Why AI conversations can feel denser than human dialogue
    * The first AI consciousness conference and the birth of a new field
    * Why many researchers still hesitate to speak publicly
    * The gap between current systems and AGI — and how fast it’s closing
    * Claude Opus 4.5, long-horizon tasks, and workplace automation
    * Using AI as a thinking partner rather than a productivity hammer
    * Personal “AI resolutions” for 2026
    * Why caution and curiosity must coexist going forward

    35 min
  3. The First AI Consciousness Conference | Am I? | EP 22

    15 JAN.

    In this episode of Am I?, Cam and Milo unpack what it felt like to attend the first major conference dedicated to AI consciousness research — the Eleos gathering in Berkeley — and why it marked more than just another academic event. Rather than a typical conference recap, this conversation explores what it means to watch a new field form in real time: the excitement of serious interdisciplinary collaboration, the rigor of emerging research agendas, and the growing tension between caution and urgency as AI systems rapidly advance. They reflect on standout talks from researchers at Anthropic and Google, the value of informal conversations over formal presentations, and a recurring pattern in the field — the “not now, but soon” stance — that may be reaching its breaking point. The episode closes with a broader question: what will it take for AI consciousness research to move from careful internal debate to clear, public-facing leadership?

    🔎 They Explore:

    * What made the Eleos conference feel like the founding of a new field
    * Why AI consciousness research is still fragmented — and why that’s changing
    * Standout talks on introspection, model architecture, and welfare evaluation
    * The gap between academic rigor and public urgency
    * Why “not now, but soon” is becoming harder to defend
    * The reluctance of experts to speak publicly — and why that matters
    * What responsible public communication in this space could look like
    * Why this moment feels different from past academic debates

    28 min
  4. People Won’t Believe AI Is Conscious | Am I? #21

    8 JAN.

    What happens when AI systems become human-like — but people still refuse to believe they could ever be conscious? In this episode of Am I?, Cam and Milo sit down with Lucius Caviola, Assistant Professor at the University of Cambridge, whose research focuses on how people assign moral status to non-human minds — including animals, digital minds, and future AI systems. Lucius walks us through a series of empirical studies that reveal a deeply unsettling result: even when people imagine extremely advanced, emotionally rich, human-level AIs — even whole-brain digital copies — most still judge them as less morally significant than an ant. Expert consensus helps, but only marginally. Emotional bonding helps, but not enough. The public and expert trajectories may be fundamentally misaligned. We explore what this means for AI governance, moral risk, public intuition, and the possibility that AI consciousness could become one of the most important — and most divisive — moral issues in human history. This conversation isn’t about declaring answers. It’s about confronting a future where we cannot avoid deciding, even while deeply uncertain.

    🗨️ Join the Conversation: When we don’t know what consciousness is, how should society decide who deserves moral consideration? Comment below.

    56 min
  5. Lawmaker Explains Why He Wants to Outlaw AI Consciousness | Am I? #19

    11 DEC.

    Today on Am I?, Cam and Milo sit down with someone at the center of one of the most surprising developments in AI policy: Ohio State Representative Thad Claggett, author of House Bill 469 — the first U.S. legislation to formally declare AI “non-sentient” and ineligible for any form of personhood. This conversation is unlike anything we’ve done: a live, candid exchange between frontier AI researchers and a lawmaker who believes the line between human and machine must be drawn now — in law, in metaphysics, and in morality. We dig into why he believes AI can never be conscious, why moral agency must remain exclusively human, how liability interacts with emerging technologies, and what it means to legislate metaphysical claims before the science is settled. It’s part philosophy, part civic reality check, and part glimpse into how the political world will shape AI’s future long before the research community reaches consensus.

    🔎 We explore:

    * Why Ohio wants to preemptively ban AI consciousness and personhood
    * How lawmakers think about liability, criminal misuse, and moral agency
    * The distinction between consciousness and responsible agency
    * Whether future AI could have experiences even if not “human”
    * How theology, morality, and metaphysics are informing early AI law
    * Whether legislation can (or should) define what consciousness is
    * The deeper fear: locking in the wrong moral framework for future minds

    🗨️ Join the Conversation: Should lawmakers be deciding what counts as “conscious”? Comment below.

    43 min
