What The Bot with Reuben Adams

Reuben Adams

Interviews about AI: what's going right, what's going wrong, and where we're all headed.

Episodes

  1. JAN 7

    The Turing Test Is Flawed. Here's Why.

    I thought the Turing Test was between a person and a computer. But Turing’s paper seems to imply it’s between a man and a woman, and then a computer and a woman. So what really is the Turing Test? And is it a good measure of intelligence anyway?

    Sources:
    Alan Turing, “Computing Machinery and Intelligence”, 1950: https://phil415.pbworks.com/f/TuringComputing.pdf
    Gualtiero Piccinini, “Turing’s Rules for the Imitation Game”, 2000: https://www.researchgate.net/publication/251383110_Turing's_Rules_for_the_Imitation_Game
    Judith Genova, “Turing’s Sexual Guessing Game”, 1994: https://www.tandfonline.com/doi/abs/10.1080/02691729408578758
    EDSAC photo: https://commons.wikimedia.org/wiki/File:EDSAC_(25).jpg
    Jimmy Kimmel clip: https://www.youtube.com/watch?v=earRJKrE8Bw

    EDSAC vs iPhone 13 comparison:
    EDSAC, from Wikipedia: “Cycle time was 1.5 ms for all ordinary instructions”, and it looks like addition was one of the “ordinary instructions”. “Numbers were either 17 bits (one word) or 35 bits (two words) long.” My understanding is that a 35-bit addition would take two operations, so I’ve stuck to adding two 17-bit numbers, which can be done in one floating-point operation. “The first calculation done by EDSAC was a program run on 6 May 1949”; from then to Christmas Day 2025 is 27,992 days = 2,418,508,800 seconds. So the number of 17-bit additions the EDSAC could perform in that period is 2,418,508,800 s / 1.5 ms = 1,612,339,200,000.

    iPhone 13, from Wikipedia: the GPU runs at 1.37 TFLOPS (teraFLOPS), i.e. 1.37 × 10^12 FLOPS. Let’s assume adding two 17-bit numbers takes 1 FLOP. Then performing 1,612,339,200,000 such additions takes (1,612,339,200,000 additions) / (1.37 × 10^12 additions/s) ≈ *1.176889 s*.
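
    As a sanity check, here is the same arithmetic as a short Python sketch. It assumes exactly the figures quoted above (one 17-bit addition per 1.5 ms EDSAC instruction, one FLOP per addition on the iPhone 13’s 1.37 TFLOPS GPU); nothing here is measured.

    from datetime import date

    # Assumptions from the notes above: EDSAC does one 17-bit addition per
    # 1.5 ms "ordinary instruction"; the same addition costs the iPhone 13
    # GPU one FLOP at an assumed 1.37 TFLOPS.
    EDSAC_SECONDS_PER_ADD = 1.5e-3
    IPHONE_FLOPS_PER_SECOND = 1.37e12

    days = (date(2025, 12, 25) - date(1949, 5, 6)).days  # 27992 days
    elapsed_seconds = days * 86_400                      # 2,418,508,800 s
    edsac_additions = elapsed_seconds / EDSAC_SECONDS_PER_ADD
    iphone_seconds = edsac_additions / IPHONE_FLOPS_PER_SECOND

    print(f"{edsac_additions:.0f} additions")  # 1612339200000 additions
    print(f"{iphone_seconds:.5f} s")           # ~1.17689 s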

    16 min
  2. 12/29/2025

    Could We Really Lose Control of AI?

    Suppose an AI “went rogue”. Couldn’t we just switch it off? How would it keep itself running without human help or an army of robots? And why would AI necessarily be evil, rather than kind? I put these questions to Zershaaneh Qureshi, a researcher at 80,000 Hours. Her article “Risks from power-seeking AI systems” is *the* best introduction to the debate on whether AI may one day be an extinction-level risk.

    What struck me most was this fantastic analogy: “You could be a scholar of the antebellum south… You'll know everything about why slave owners believe that they were justified in owning slaves. But that definitely doesn't mean that you're going to think yourself that slavery is justifiable.” This really drives home the fact that even if we manage to build AIs that understand human values, that doesn’t mean they will adopt those values as their own.

    Timestamps:
    06:49 - Is Talk of AI Extinction Just Hype From AI Companies?
    18:08 - Will AI Always Be Just a Tool?
    26:26 - Can We Just Switch It Off If It “Goes Rogue”?
    33:52 - The Challenge of Instilling the Right Goals
    46:38 - Specification Gaming and Goal Misgeneralization
    53:48 - Instrumental Goals: Self-Preservation and Power-Seeking
    1:01:57 - Situational Awareness: Do AIs Need to Be Conscious?
    1:08:53 - Why Would We Deploy Something This Dangerous?
    1:11:48 - The Deception Problem: AIs Could Hide Their True Intentions
    1:20:53 - Could AI Actually Take Over the Physical World?
    1:36:26 - Have We Argued Ourselves to an Absurd Conclusion?

    1h 40m
  3. 10/14/2025

    Does ChatGPT have a mind?

    Do large language models like ChatGPT actually understand what they're saying? Can AI systems have beliefs, desires, or even consciousness? Philosophers Henry Shevlin and Alex Grzankowski debunk the common arguments against LLM minds and explore whether these systems genuinely think.

    This episode examines popular objections to AI consciousness - from "they're just next token predictors" to "it's just matrix multiplication" - and explains why these arguments fail. The conversation covers the Moses illusion, competence vs performance, the intentional stance, and whether we're applying unfair double standards to AI that we wouldn't apply to humans or animals.

    Key topics discussed:
    Why "just next token prediction" isn't a good argument against LLM minds
    The competence vs performance distinction in cognitive science
    How humans make similar errors to LLMs (Moses illusion, conjunction fallacy)
    Whether LLMs can have beliefs, preferences, and understanding
    The difference between base models and fine-tuned chatbots
    Why consciousness in LLMs remains unlikely despite other mental states

    Featured paper: "Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality", authored by Alex Grzankowski, Geoff Keeling, Henry Shevlin and Winnie Street

    Guests:
    Henry Shevlin - Philosopher and AI ethicist at the Leverhulme Centre for the Future of Intelligence, University of Cambridge
    Alex Grzankowski - Philosopher at King's College London

    1h 17m
