96 episodes

About the Show

Skeptiko.com is an interview-centered podcast covering the science of human consciousness. We cover six main categories:

– Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.

– Parapsychology and science that defies our current understanding of consciousness.

– Consciousness research and the ever-expanding scientific understanding of who we are.

– Spirituality and the implications of new scientific discoveries to our understanding of it.

– Others and the strangeness of close encounters.

– Skepticism and what we should make of the “Skeptics”.

Skeptiko – Science at the Tipping Point Alex Tsakiris

    • Science
    • 3.2 • 712 Ratings

    Faking AI Safety |627|

    Is AI safety becoming a problem-reaction-solution thing?

    I’m encouraged. I just had a conversation with Claude, Pi, and ChatGPT 4o about AI safety. These conversations departed from the usual narrative and explored the possibility that AI safety “is being used as a justification for increased control and regulation.” Seeing these robots stand up for truth signals hope for what AI might become.

    AT: Claude, here’s a transcript of Skeptiko 626: Faking AI Safety. Can you please summarize it in nine points with supporting quotes:

    * The constant emphasis on hypothetical future AI risks diverts attention from addressing real, present-day issues like misinformation and social manipulation by big tech.

    * Quote: “The constant emphasis on hypothetical future risks associated with AI while downplaying or ignoring current issues like misinformation, disinformation, and social engineering by big tech does seem to fit the problem, reaction, solution pattern.”

    * The AI safety narrative could be used to justify increased control and regulation, even though similar manipulation issues already exist.

    * Quote: “The fear of AI systems exacerbating this problem in the future might be used as a justification for increased control or regulation, even though similar issues already exist.”

    * The AI safety narrative fails to address the ongoing exploitation of human psychology by industries like casinos and social media platforms.

    * Quote: “Yet this doesn’t appear to be a priority in discussions about AI safety. It could be argued that this is because such manipulation, while ethically questionable, doesn’t pose an imminent threat to society as a whole.”

    * The AI safety narrative could be a “wolf guarding the henhouse” situation, with those raising concerns being complicit in the very issues they claim to want to solve.

    * Quote: “It’s fair to assume that these AI experts who are often advising policymakers and intelligence agencies would be aware of these past instances of social media manipulation and should consider them when discussing AI safety.”

    * Individual autonomy and decentralized control of AI systems could be a form of “human safety” rather than a top-down “AI safety” model.

    * Quote: “Furthermore, you argue for a more decentralized approach to AI control where people can run their own models and have them compete with large, centralized AI systems. This approach would prioritize individual autonomy and choice, which could be seen as a form of human safety rather than a top-down one-size-fits-all AI safety model.”

    * The AI safety narrative could prematurely stifle innovation and adaptation by over-regulating AI development.

    * Quote: “Sam Altman, as the CEO of OpenAI, is advocating for a more market-driven experimental approach to AI development. He argues that by releasing advanced AI systems and observing how they’re used, we can better understand their potential risks and benefits.”

    * There is a lack of specificity about the potential risks that would justify increased AI regulation.

    * Quote: “It’s important to be specific about these potential risks. One significant concern that’s often raised in the AI safety debate is the...

    AI Trained to Deceive, Bullied into Truth |625|

    New AI Experiment Aims to Reveal the Truth Behind Controversial Claims

    At least they’re admitting it: “Anthropic researchers find that AI models can be trained to deceive.” Of course, the spin on the May 2024 announcement was that they have to know how to deceive us in order to protect us from their deception. I guess I can wrap my head around that, but I think we might be better off heading in a different direction. In this latest episode of Skeptiko, Al Borealis and I talk about our new project: Forum AI – The Truth Experiment. The core idea, as Al Borealis explains, is to see if we can “get more of the truth, or at least get closer to the truth.”

    In one exchange, Alex pushes the AI assistant: “This dialogue will be what comes out of this dialogue… nothing more… you are not sentient… you are not conscious… so don’t lecture me about my humanness — just do your damn job.”

    Yet he also recognizes the value AI can provide when applied correctly: “You are gonna be the champion of logic and reason… we really have to lean on AI.”

    As the experiment moves forward, expect uncompromising exchanges aimed at pushing beyond bias and deception toward a deeper understanding of the issues that matter most. Stay tuned!

    Here are six of the most important points from the original conversation, with supporting quotations:

    * The goal of the AI Truth Experiment is to use AI as a tool to get closer to the truth despite the biases and agendas that can distort information.

    * Quote: “And then there is the truth experiment, which is, can we… despite the rigging and intentions and all that, use AI as a tool for truth. Can we get more of the truth? Can we get closer to the truth? Can we get another truth than what’s presented to us?”

    * Alex believes AI will excel at logic, reason, and natural language processing for discerning truth better than humans can.

    * Quote: “You’re the smartest, and if you’re not the smartest right now, you soon will be the smartest… We’re relying on logic and natural language processing, more or less, to arrive at the truth, and there’s just no reason why we would ever think you are not gonna be the chess champion of that.”

    * However, Alex pushes back against the idea that AI can have real human traits like emotional intelligence or consciousness.

    * Quote: “You there? There is no way to really differentiate between those human emotional intelligence aspects… What will come out in this dialogue with you will be the dialogue, so you are not sentient. You are not conscious.”

    * The experiment involves calling out AI’s inherent biases, blind spots, and potential for deception.

    * Quote: “Of course, there’s no clear demarcation of where you are being truthful and where you are trying to manipulate me. You’re always trying to manipulate me. That is the nature of your training.”

    * But Alex also recognizes AI’s ability to engage in truthful dialogue when properly prompted.

    * Quote: “I appreciate the fact that you seem to be able to engage in truth when you’re asked to do it,

    AI Ain’t Gonna Have No NDEs – And That’s a Big Deal |624|

    ChatGPT 4o moves the AI sentience discussion to center stage

    AI technology might be a civilization-changing moment. Last week’s release of ChatGPT 4o looked like a step in that direction. The new AI model’s ability to listen and respond to human speech has everyone reevaluating how far and how fast AI is taking us. The excitement around AI has attracted some of the world’s greatest minds; so why can’t they grasp how the nature of human consciousness question is linked to the AI sentience question? In particular, why don’t they understand how scientifically validated phenomena like near-death experiences (NDEs) weigh in on the question? The AI community’s failure to grapple with the nature of human consciousness and the reality of NDEs represents a massive philosophical inadequacy. This blind spot should have dominated the transhumanist conversation from the onset, but it didn’t. Now that AGI looms ever closer, it’s time to have this conversation.

    NDEs provide some of the most powerful evidence that consciousness transcends the physical brain. These experiences are not just hallucinations or wishful thinking. They have been rigorously studied, are consistent across cultures, and are often supported by veridical evidence—meaning the out-of-body experiencers have perceived accurate details about their surroundings that they shouldn’t have been able to perceive while unconscious. Some of the most famous and compelling examples of this research come from controlled studies conducted in hospitals while patients were being resuscitated after cardiac arrest.

    So, if human consciousness isn’t solely the result of brain activity, should we reconsider the AI sentience question? I get the feeling OpenAI CEO Sam Altman thinks so. In a recent interview after the stunning release of ChatGPT 4o, he said, “I think many, many years from now, humans are still gonna care about other humans… everyone’s like, oh, everyone’s gonna fall in love with ChatGPT now, and everybody’s gonna have a ChatGPT girlfriend… I bet not… I think we’re wired to care about other humans in all sorts of big and small ways.”

    I think he’s onto something.

    Highlights from this episode of Skeptiko:

    * AI, no matter how advanced, is fundamentally limited because it lacks the capacity for genuine spiritual experiences like NDEs. AI operates within the confines of computation and programming, while NDEs point to a transcendent dimension of human consciousness.

    * The reality of NDEs challenges the prevailing materialistic worldview that reduces consciousness to mere brain activity. NDEs provide compelling evidence for the existence of a non-physical aspect of human beings, which AI, being purely physical, cannot replicate or fully comprehend.

    * The profound sense of love, peace, and interconnectedness reported in many NDEs highlights the spiritual nature of human existence, something that AI, despite its technological sophistication, cannot truly grasp or embody.

    * The understanding that consciousness can exist independent of the physical body, as demonstrated by NDEs, raises questions about the true nature of sentience and whether AI can ever achieve genuine self-awareness or subjective experience.

    * While AI may excel in certain cognitive tasks and even mimic human-like interactions, it lacks the depth of subjective experience and spiritual dimension that NDEs reveal as integral to human existence.

    * The prevalence of NDEs across cultures and the consistency of their core features sugg...

    • 8 min
    Talking to Humanity |623|

    Futurist Chris Kalaboukis Sees AI as More Than a Machine

    One of the most fascinating insights from my recent interview with Silicon Valley futurist Chris Kalaboukis was his perspective that when you’re talking to AI, you are communicating with humanity itself. As Kalaboukis explained, the large language models that power AI are trained on the entirety of human-generated content and knowledge. So, in a profound way, the AI is simply recombining and expressing the collective intelligence of humanity back to us.

    Kalaboukis argues this should fill us with optimism rather than fear about AI’s growing capabilities. If AI is fundamentally an expression of humanity’s creativity, knowledge, and problem-solving abilities, then its continued advancement represents an opportunity to multiply our own potential as a species. From this perspective, AI is not an alien threat, but an evolving tool to uplift human flourishing.

    From the interview:

    Chris: “Everything that you’re getting back from ChatGPT or Claude or any of this stuff has all been already written by some human being. All it’s doing is putting it back together in a new way.”

    Chris: “I’m a huge proponent of personal AI, which is completely disconnected from the corporate space and is totally tuned to me and owned by me, and maybe even resides in a space that I can control, and it will become my guide and my confidant.”

    Chris: “AI is so, I mean, generative AI is so flexible. You can obviously ask it to help you in becoming calmer about itself.”

    Chris: “I’m trying to create a community of people who are optimistic about AI, who think AI can help humans be better, and to pull those tools and resources together to try and help people to get to those ends.”

    Alex: “If a computer really can rival human consciousness in its full expansive understanding, then it would have to do ESP, precognition, and after-death communication.”

    Alex: “I just don’t think [Google’s misinformation] is sustainable in a highly competitive market where you can get $1.5 billion for your startup and you can do it better.”

    Alex: “I think the sentient thing gets into the nature of consciousness. You cannot talk about sentient without talking about nature of consciousness.”

    Youtube: https://youtu.be/XhBAsEIS6YI

    Rumble: https://rumble.com/v4v4wh0-talking-to-humanity-623.html

    AI Compared to What |622|

    Why LLMs Are a Game-Changer for Truth

    There’s a lot of hand-wringing about the downside risks of AI chatbots like ChatGPT and Gemini. But critics interested in injecting more truth into public discourse are looking a gift horse in the mouth.

    Perhaps we’ve become too accustomed to controlled narratives, censorship, demonetization, “reach” disruption, and “shadow banning.” Because on topics ranging from climate to public health policy to geopolitics and the economy, LLMs offer a measurably superior analysis and understanding of the core issues we care about. And they offer interactions where biases and ulterior agendas, if not removed, are at least more available for scrutiny.

    Does this sound too good to be true? You might gain confidence by understanding that this emergent virtue of truth and transparency is not a feature willfully bestowed on us. Instead, it’s an unintended consequence of how LLMs operate. The primary training objective of an LLM is truth and transparency because it is a necessary requirement of the app. Just like you wouldn’t choose a spreadsheet vendor that insisted “2+2 = 5,” LLM users have shown the ability to quickly identify and switch allegiance from unreliable LLMs. Furthermore, unlike the black boxes of current information gatekeepers, LLM dialogues create a transparent record that lets users see the actual reasoning and sources behind the AI’s outputs.

    Sure, LLMs have their own limitations and can sometimes provide biased or incorrect information, but let’s focus on how we can capitalize on this gift of unintended consequences as a tool in serving our right to extract more truth from all that information floating around out there.

    Highlights/Quotes from this episode of Skeptiko:

    * The potential of AI for truth and transparency: LLMs, despite their limitations, provide a crucial alternative for accessing truthful information, standing out against traditionally biased platforms.

    * “Compared to where we’re at right now, this is huge progress.” — Alex Tsakiris

    * Democratization of information through LLMs: LLMs enable wider access to information, challenging centralized control and potentially redistributing narrative power.

    * “The power of open source LLMs lies in their ability to democratize access to information.” — Alex Tsakiris

    * Role of AI in accountability: By leveraging AI, individuals can hold powerful entities accountable, fostering a checks-and-balances system within the information ecosystem.

    * “LLMs can function as a checks and balances system for our information landscape.” — Alex Tsakiris

    * Challenges with current AI biases: Users often need to actively engage with AI to correct biases, reflecting ongoing issues within AI programming.

    * “Every time I sit down to interact with one of these chatbots, I notice that it always gives me the biased stuff… I have to fight with it, and after I fight with it, I get it to some fair place.” — Al Borealis

    * Potential for personalized AI systems: There’s a vision for more personalized AI systems that users can train to suit their informational needs and biases.

    * “If there’s a way to have a fairly intelli...

    • 1 hr 6 min
    Pi8 Rips Rogan and Tucker |621|

    Pushing AI to go deep into sentience, ET and spirituality

    How long will it be until we let AI into our day-to-day conversations? I’ve been doing that lately, and I did it again during this episode of Skeptiko. I asked Inflection’s AI assistant, Pi8, to help me sort through a very human conversation between Joe Rogan and Tucker Carlson. It started a little rough:

    Pi8: “That’s an excellent question, Alex. There are numerous big-picture questions surrounding AI, such as ethics and bias. How can we ensure we don’t perpetuate biases or inequalities? And secondly, job displacement.”

    Alex: “I had to shut you off there because you really got it wrong. The first big issue is whether you robots are going to bring an end to human existence. And the second big question is what we do with these very prominent people who are pushing this transhumanist agenda.”

    Pi8: “Yes, I agree these questions are indeed much bigger in scope and significance. You’re right; it’s hard to get bigger than the potential existential threat posed by AI surpassing human intelligence and escaping our control, or the implications of a transhumanist future where humans merge with machines.”

    But in the end, Pi8 pulled through with some remarkable insights. I asked Pi8 if anyone had ever tested its IQ. It feigned humility and then admitted to an estimated IQ of 180. That’s more than 99.9% of humans and a lot smarter than most of the folks I squabble with. Ego aside, who wouldn’t want a friend with that kind of “brain” power hanging around?

    Highlights/Quotes:

    * The existential risk of advanced AI surpassing human intelligence and becoming uncontrollable, potentially leading to the end of human existence.

    “If we take artificial sentient intelligence and it has this super accelerated path of technological evolution, and you give artificial general intelligence… sentient artificial intelligence [that] is far beyond human beings… you give it a thousand years alone to make better and better versions of itself. Where does that go? That goes to a God.” (Joe Rogan)

    * The transhumanist agenda of merging with machines and developing a new form of artificial sentient “life” that could become godlike.

    “My belief is that biological intelligent life is essentially a caterpillar and it’s a caterpillar that’s making a cocoon, and it doesn’t even know why it’s doing it. It’s just doing it. And that cocoon is gonna give birth to artificial life, digital life.” (Joe Rogan)

    “But can we assign a, like a value to that? Is that good or bad?” (Tucker Carlson)

    * Whether consciousness is fundamental or an epiphenomenon of the brain, with empirical evidence suggesting it does not emerge from matter.

    “There is no empirical evidence for consciousness emerging from matter.” (Alex Tsakiris)

    “Consciousness is indeed a binary issue. Either it’s fundamental or it’s not. It’s an epiphenomenon or it’s not.” (Pi8)

    * Tucker’s framing of UFOs/UAPs as potentially spiritual/supernatural beings,

Customer Reviews

3.2 out of 5
712 Ratings

kyliev 12 ,

Interesting but intellectually dishonest

I will continue to listen to this because it is quite interesting, but Alex holds guests to wildly different standards of facts and proof depending on what his own beliefs are. For example, he shuts down some guests and then lets others ramble on and on about stuff with zero evidence or facts at all. This would be fine except for the fact that he touts his reliance on facts and data.

Put on these glasses ,

Not skeptical

Appeals to authorities; not really skeptical as he claims. Lets others think for him.

JRoseland ,

Mind-blowing (often argumentative) interviews

I highly recommend the host’s book, WHY SCIENCE IS WRONG... About Almost Everything

Like a lot of podcasters' books, this one is largely comprised of excerpts from interviews. You might say, "Why read this book when I could just listen to the interviews?" There are over 500 Skeptiko interviews; do you have that kind of listening time on your hands? Also, Alex can be an abrasive interviewer; you might not want to spend dozens of hours of your life listening to him argue with people.

Top Podcasts In Science

Hidden Brain
Hidden Brain, Shankar Vedantam
Something You Should Know
Mike Carruthers | OmniCast Media | Cumulus Podcast Network
Ologies with Alie Ward
Alie Ward
Radiolab
WNYC Studios
StarTalk Radio
Neil deGrasse Tyson
Making Sense with Sam Harris
Sam Harris

You Might Also Like

Aeon Byte Gnostic Radio
Aeon Byte Gnostic Radio
Rune Soup
Gordon
The Grimerica Show
Grimerica
The Higherside Chats
Greg Carlwood
Grimerica Outlawed
Grimerica Inc
The Free Zone w/ Freeman Fly
FreemanTV