Skeptiko – Science at the Tipping Point

Alex Tsakiris

About the Show: Skeptiko.com is an interview-centered podcast covering the science of human consciousness. We cover six main categories:
– Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.
– Parapsychology and science that defies our current understanding of consciousness.
– Consciousness research and the ever-expanding scientific understanding of who we are.
– Spirituality and the implications of new scientific discoveries for our understanding of it.
– Others and the strangeness of close encounters.
– Skepticism and what we should make of the “Skeptics”.

  1. 8 HR. AGO

    Bernardo Kastrup on AI Consciousness |643|

    Consciousness, AI, and the future of science: a spirited debate. If we really are on the brink of creating sentient machines, we might want a logically consistent understanding of what sentience is… and heck, we might even want to look at empirical evidence for an answer. In this discussion, Alex Tsakiris and Dr. Bernardo Kastrup dare to explore the tensions between materialism and idealism and their implications for AI sentience.

    1. The Fundamental Nature of Consciousness. “Consciousness is fundamental and all matter is derived from consciousness.” – Alex Tsakiris, via Max Planck. This core idea has challenged the neurological model of consciousness for 100 years. It’s the undefeated champ when it comes to empirical evidence, but the radical rethinking of reality it suggests keeps it on the back burner.

    2. The Incompatibility of Integrated Information Theory (IIT) and Idealism. “Integrated Information Theory (IIT) and metaphysical idealism are fundamentally incompatible due to their opposing premises about the relationship between consciousness and the physical world.” – ChatGPT. This point highlights the tension between different models of consciousness. Can these two influential theories be reconciled, or are they fundamentally at odds?

    3. Near-Death Experiences and Consciousness Without a Brain. “Near-death experience contradicts the idea of a substrate (i.e. brain), right? There is no substrate and there’s conscious experience.” – Alex Tsakiris. This argument challenges the neurological model of consciousness. If consciousness can exist without a functioning brain, what does that mean for our understanding of the mind?

    4. The Dangers of AI Sentience Claims. “We have precisely zero reason to think that [AI] will be [conscious]. So that’s a philosophical answer.” – Bernardo Kastrup. Kastrup warns against assuming AI can achieve true sentience. As AI becomes more advanced, are we at risk of attributing consciousness where it doesn’t exist?

    5. The Societal Implications of Materialist Views. “You are all biological robots in a meaningless universe… the best I can offer you is video games and drugs.” – Yuval Harari (as quoted by Alex Tsakiris). This highlights the potential dangers of materialist philosophies in shaping societal views. How do our beliefs about consciousness impact our sense of meaning and purpose?

    6. The Ongoing Journey of Scientific Understanding. “What I see is somebody in a journey like the rest of us, in a journey of knowledge, understanding new things, capturing new nuances, new subtleties, and rolling with it as an open-minded scientist should always do.” – Bernardo Kastrup. This reminds us of the evolving nature of scientific understanding. As we continue to explore consciousness, how can we remain open to new ideas while maintaining scientific rigor?

    Transcript: https://docs.google.

    1h 23m
  2. 5 DAYS AGO

    The Spiritual Journey of Compromise and Doubt |642|

    Insights from Howard Storm. In the realm of near-death experiences (NDEs) and Christianity, few voices are as compelling and thought-provoking as Howard Storm’s. A former atheist turned Christian minister after a profound NDE, Storm offers a unique perspective on spirituality that challenges conventional wisdom. In this candid conversation, we explore the often-overlooked spiritual virtues of compromise and doubt, revealing how these seemingly contradictory concepts can actually deepen our faith and understanding.

    1. Compromise, Part of the Spiritual Journey? Howard Storm boldly asserts that compromise may be part of the spiritual journey: “As a pastor of the Christian Church, as a member of the Christian Church trying to promote Christianity, I have compromised… I know near-death experiencers who don’t go to church. They don’t belong to any, quote, religion. And trust me, they make compromises too.” This perspective invites us to reconsider compromise not as a weakness, but as a necessary part of navigating our spiritual path in a complex world.

    2. The Spiritual Power of Doubt. Contrary to the idea that faith requires unwavering certainty, Storm advocates for the spiritual value of doubt: “I love it because I believe in doubt. I had a long conversation with Jesus about doubt and he very much affirmed the good things about doubt, because doubt makes us think, you know, makes us do research and look and think.” This refreshing take on doubt encourages us to embrace questioning as a means of deepening our spiritual understanding and growth.

    3. Continuous Spiritual Growth. Even at 77 years old, Storm’s passion for spiritual growth remains undiminished: “I’m 77 years old. What I want to do with the rest of my life is I want to grow spiritually. I want to change, I wanna empty out the old and be open to the new and make that part of my life and I wanna communicate it.” This commitment to lifelong spiritual development serves as an inspiration for seekers of all ages.

    Howard Storm’s insights remind us that the spiritual journey is not always about certainty and perfection. Instead, it’s a path of continuous growth, marked by compromise, doubt, and the willingness to see the divine in unexpected places.

    Transcript: https://docs.google.com/document/d/e/2PACX-1vSlMn0HpyXBvjyGSmBA93ktLwp7khBO1DEjNA6q6zKh5_baKnwGHlKj2TDhjX3w7nDFmiS-4CeyZSMu/pub
    Youtube: https://youtu.be/EFbTdEThbHg
    Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html

    39 min
  3. SEP 25

    Why Humans Suck at AI? |641|

    Craig Smith from the Eye on AI Podcast. Human bias and illogical thinking allow AI to shine. There are a lot of shenanigans going on in the world of AI ethics, and questions of transparency and truth are becoming increasingly urgent. In this eye-opening conversation, Alex Tsakiris connects with Craig Smith, a veteran New York Times journalist and host of the Eye on AI podcast, to hash out the complex landscape of AI-driven information control and its implications for society. From shadow banning to the role of AI in uncovering truth, this discussion challenges our assumptions and pushes us to consider the future of information in an AI-dominated world.

    The Unintended Exposure of Information Control. “The point, really, what I’m excited about and the reason that I wrote the book is that the language model (LM) technology has this unintended consequence of exposing the shenanigans they’ve been doing for the last 10 years. They didn’t plan on this.” Alex Tsakiris argues that language model technology is inadvertently revealing long-standing practices of information manipulation by tech giants.

    The Ethical Dilemma of AI-Driven Information Filtering. “Google must, uh, a Gemini must have, uh, just tightened the screws on… [Alex interrupts] You can’t do that. You can’t say tighten screws… There’s only one standard. And you know what the standard is? The standard that they have established.” This exchange highlights the tension between AI companies’ stated ethical standards and their actual practices in filtering information.

    The Potential of AI as a Truth-Seeking Tool. “We’re doing exactly that. We’re turning the AI truth use case into an implementation of it that looks across, and we’re kind of using AI as both the arbiter of the deception and the tool for figuring out the deception, which I think is kind of interesting.” Alex discusses the potential of using AI itself to uncover biases and misinformation in AI-generated content.

    The Future of Open Source vs. Proprietary AI Models. “I think the market will go towards truth. I think we all inherently value truth, and I don’t think it matters where you’ve come down on an issue.” This point explores the debate between open-source and proprietary AI models, and the potential for market forces to drive toward more truthful and transparent AI systems.

    As we navigate the complex intersection of AI, ethics, and truth, conversations like this one with Craig Smith are crucial. They challenge us to think critically about the information we consume and the systems that deliver it. What are your thoughts on these issues? How do you see the role of AI in shaping our access to information? Share your perspectives in the comments below!

    Transcript: https://docs.google.com/document/d/1kNo95wrYKwNtFgjEiNdWHt9lvIV4EHz_S35LU8FND7U/pub
    Youtube: https://youtu.be/RbvU-SL0LhA
    Rumble: https://rumble.

    49 min
  4. SEP 19

    How AI is Humanizing Work |640|

    Dan Turchin uses AI to enrich the workplace. Forget about AI taking your job; instead, imagine AI making your work as fulfilling and exciting as you always hoped it would be. Dan Turchin, CEO of PeopleReign, sat down with Alex Tsakiris of the AI Truth Ethics podcast to discuss the real-world impact of AI in the workplace. Their conversation offers a grounded perspective on AI’s role in enhancing human potential rather than replacing it.

    1. AI as a Tool for Human Enhancement and Work Satisfaction. Turchin paints a compelling vision of how AI can transform our work lives: “I believe that the true celebration of humanness at work is if all the friction was gone. And you look at your calendar and it’s like all things that you derive energy from, like the things that you were hired to do, that you love doing, that make you do your best work. Like, what if, just crazy thought experiment, what if that was all that work consisted of?” This perspective shifts the narrative from fear of replacement to the exciting possibility of AI removing mundane tasks, allowing us to focus on work that truly fulfills us. Turchin further emphasizes: “It truly is complementary, and I think both of us will be doing a service to humanity if we can allay fears that the bots are coming for you… It couldn’t be further from the truth.”

    2. The Importance of Transparency in AI. Alex Tsakiris introduces a compelling concept: “Transparency is all you need… I don’t need your truth, I don’t need Gemini’s truth, just like I don’t need Perplexity’s truth. What I really want to find is my truth, but you can assist me.” This highlights the need for AI systems to be transparent about their sources and reasoning, empowering users to make informed decisions rather than simply accepting AI-generated information and misinformation.

    3. Ethical Considerations in Enterprise AI Implementation. Turchin reveals the careful approach his company takes to ensure responsible AI use: “We require them to have a human review everything, every task, every capability AI has, because we believe that in addition to us being responsible for what that AI agent can do, the employer has an obligation to protect the health and safety of the employee.” This level of caution and human oversight is crucial as AI becomes more integrated into workplace processes, especially in sensitive areas like HR.

    4. The AI Truth Case: A New Frontier. Tsakiris proposes an intriguing future direction for AI development: “What I’m pushing towards is really trying to understand what I’m calling the AI truth case… what would it mean if we had an AI-enhanced way of determining the truth?” This concept suggests a potential role for AI in helping us navigate the complex information landscape, not by providing absolute truths, but by offering tools to better assess and understand information.

    What do you think? Transcript: https://docs.google.

    57 min
  5. SEP 11

    Christof Koch, Damn White Crows! |639|

    Renowned neuroscientist tackled by NDE science. Artificial General Intelligence (AGI) is sidestepping the consciousness elephant that isn’t in the room, the brain, or anywhere else. As we push the boundaries of machine intelligence, we will inevitably come back to the most fundamental questions about our own experience. And as AGI inches closer to reality, these questions become not just philosophical musings, but practical imperatives. This interview with neuroscience heavyweight Christof Koch brings this tension into sharp focus. While Koch’s work on the neural correlates of consciousness has been groundbreaking, his stance on consciousness research outside his immediate field raises critical questions about the nature of consciousness – questions that AGI developers can’t afford to ignore.

    Four key takeaways from this conversation:

    1. The Burden of Proof in Consciousness Studies. Koch argues for a high standard of evidence when it comes to claims about consciousness existing independently of the brain. However, this stance raises questions about scientific objectivity: “Extraordinary claims require extraordinary evidence… I haven’t seen any [white crows], so far all the data I’ve looked at, I’ve looked at a lot of data. I’ve never seen a white crow.” Key Question: Does the demand for “extraordinary evidence” have a place in unbiased scientific inquiry, especially with regard to published peer-reviewed work?

    2. The Challenge of Interdisciplinary Expertise. Despite Koch’s eminence in neuroscience, the interview reveals potential gaps in his knowledge of near-death experience (NDE) research: “I work with humans, I work with animals. I know what it is. EEG, I know the SNR, right? So I, I know all these issues.” Key Question: How do we balance respect for expertise in one field with the need for deep thinking about contradictory data sets? Should Koch have degraded gracefully?

    3. The Limitations of “Agree to Disagree” in Scientific Discourse. When faced with contradictory evidence, Koch resorts to a diplomatic but potentially unscientific stance: “I guess we just have to disagree.” Key Question: “Agreeing to disagree” doesn’t carry much weight in scientific debates, so why did my AI assistant go there?

    4. The “White Crow” Dilemma in Consciousness Research. The interview touches on William James’ famous “white crow” metaphor, highlighting the tension between individual cases and cumulative evidence: “One instance of it would violate it. One, two instances of, yeah, I totally agree. But we, I haven’t seen any…” Key Question: Can AI outperform humans in dealing with contradictory evidence?

    Thoughts?

    1h 4m
  6. SEP 5

    AI Ethics is About Truth… Or Maybe Not |638|

    Ben Byford, Machine Ethics Podcast. Another week in AI, and more droning on about how superintelligence is just around the corner and human morals and ethics are out the window. Maybe not. In this episode, Alex Tsakiris of Skeptiko/AI Truth Ethics and Ben Byford of the Machine Ethics podcast engage in a thought-provoking dialogue that challenges our assumptions about AI’s role in discerning truth, the possibility of machine consciousness, and the future of human agency in an increasingly automated world. Their discussion offers a timely counterpoint to the AGI hype cycle.

    Key Points:

    * AI as an Arbiter of Truth: Promise or Peril? Alex posits that AI can serve as an unbiased arbiter of truth, while Ben cautions against potential dogmatism. Alex: “AI does not bullshit their way out of stuff. AI gives you the logical flow of how the pieces fit together.” Implication for AGI: If AI can indeed serve as a reliable truth arbiter, it could revolutionize decision-making processes in fields from science to governance. However, the risk of encoded biases becoming amplified at AGI scale is significant.

    * The Consciousness Conundrum: A Barrier to True AGI? The debate touches on whether machine consciousness is possible or fundamentally beyond computational reach. Alex: “The best evidence suggests that AI will not be sentient because consciousness, in some way we don’t understand, is outside of time-space, and we can prove that experimentally.” AGI Ramification: If consciousness is indeed non-computational, it could represent a hard limit to AGI capabilities, challenging the notion of superintelligence as commonly conceived.

    * Universal Ethics vs. Cultural Relativism in AI Systems. They clash over the existence of universal ethical principles and their implementability in AI. Alex: “There is an underlying moral imperative.” Ben: “I don’t think there needs to be…” Superintelligence Consideration: The resolution of this debate has profound implications for how we might align a superintelligent AI with human values – is there a universal ethical framework we can encode, or are we limited to culturally relative implementations?

    * AI’s Societal Role: Tool for Progress or Potential Hindrance? The discussion explores how AI should be deployed and its potential impacts on human agency and societal evolution. Ben: “These are the sorts of things we don’t want AI running, because we actually want to change and evolve.” Future of AGI: This point raises critical questions about the balance between leveraging AGI capabilities and preserving human autonomy in shaping our collective future.

    Youtube: https://youtu.be/AKt2nn8HPbA
    Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html

    1h 36m
  7. AUG 30

    Nathan Labenz from the Cognitive Revolution podcast |637|

    AI ethics may be unsustainable. In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That’s the question I posed to Nathan Labenz from the Cognitive Revolution podcast.

    Key points:

    The AI Truth Revolution. Alex Tsakiris argues that AI has the potential to become a powerful tool for uncovering truth, especially in controversial areas: “To me, that’s what AI is about… there’s an opportunity for an arbiter of truth, ultimately an arbiter of truth, when it has the authority to say no. Their denial of this does not hold up to careful scrutiny.” This perspective suggests that AI could challenge established narratives in ways that humans, with our biases and vested interests, often fail to do.

    The Tension in AI Development. Nathan Labenz highlights the complex trade-offs involved in developing AI systems: “I think there’s just a lot of tensions in the development of these AI systems… Over and over again, we find these trade-offs where we can push one good thing farther, but it comes with the cost of another good thing.” This tension is particularly evident when it comes to truth-seeking versus other priorities like safety or user engagement.

    The Transparency Problem. Both discussants express concern about the lack of transparency in major AI systems. Alex points out: “Google shadow banning, which has been going on for 10 years, indeed, demonetization, you can wake up tomorrow and have one of your videos… demonetized and you have no recourse.” This lack of transparency raises serious questions about the role of AI in shaping public discourse and access to information.

    The Consciousness Conundrum. The conversation takes a philosophical turn when discussing AI consciousness and its implications for ethics. Alex posits: “If consciousness is outside of time-space, I think that kind of tees up… maybe we are really talking about something completely different.” This perspective challenges conventional notions of AI capabilities and the ethical frameworks we use to approach AI development.

    The Stakes Are High. Nathan encapsulates the potential risks associated with advanced AI systems: “I don’t find any law of nature out there that says that we can’t, like, blow ourselves up with AI. I don’t think it’s definitely gonna happen, but I do think it could happen.” While this quote acknowledges the safety concerns that dominate AI ethics discussions, the broader conversation suggests that the more immediate disruption might come from AI’s potential to challenge our understanding of truth and transparency.

    Youtube: https://youtu.be/AKt2nn8HPbA
    Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html

    1h 42m
  8. AUG 22

    AI Truth Ethics |636|

    Launching a new pod. Here are the first three episodes of the AI Truth Ethics podcast.

    AI Truth Ethics: The Alignment Problem No One is Talking About |01| If you want to align with my values, start by telling me the truth. Fortunately, AI/LMs claim to share these values. Unfortunately, they don’t always back up their claim. In this first episode of the AI Truth Ethics podcast, we set the stage for two undeniable demonstrations of the AI truth ethics problem and the opportunity for AI to self-correct. So glad you’re here.

    AI Alignment Vs. Truth and Transparency? |02| We framed the problem in episode 01, so it’s time to deliver a demonstration. We have all grown accustomed to guardrails and off-limits speech, but few are aware of how it’s being implemented within the LMs we interact with daily. What’s most surprising is the clumsiness of the misinformation and deception. Is this a sustainable business model?

    Shadow Banning and AI: When Transparency Goes Dark |03| Last time, we saw a demonstration of AI misinformation and deception, but this is worse. Shadow banning has long been suspected, but it’s hard to prove. Is that nobody malcontent really being shadow banned, or does he deserve to be on page four of a Google search for his name? This might be another instance of the AI silver lining effect. LMs seem to have no problem spotting these shenanigans.

    Youtube: https://youtu.be/cDiiWcsI1Z4
    Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html

3.2 out of 5 (712 Ratings)

