82 episodes

Skeptiko – Science at the Tipping Point
Alex Tsakiris

    • Science
    • 3.2 • 714 Ratings

About the Show

Skeptiko.com is an interview-centered podcast covering the science of human consciousness. We cover six main categories:

– Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.

– Parapsychology and science that defies our current understanding of consciousness.

– Consciousness research and the ever-expanding scientific understanding of who we are.

– Spirituality and the implications of new scientific discoveries for our understanding of it.

– The “Others” and the strangeness of close encounters.

– Skepticism and what we should make of the “Skeptics.”

    AI Compared to What |622|

    Why LLMs Are a Game-Changer for Truth

    There’s a lot of hand-wringing about the downside risks of AI chatbots like ChatGPT and Gemini. But critics interested in injecting more truth into public discourse are looking a gift horse in the mouth.

    Perhaps we’ve become too accustomed to controlled narratives, censorship, demonetization, “reach” disruption, and shadow banning. On topics ranging from climate to public health policy to geopolitics and the economy, LLMs offer measurably superior analysis and understanding of the core issues we care about. And they offer interactions where biases and ulterior agendas, if not removed, are at least more open to scrutiny.

    Does this sound too good to be true? You might gain confidence by understanding that this emergent virtue of truth and transparency is not a feature willfully bestowed on us. Instead, it’s an unintended consequence of how LLMs operate. Truth and transparency are necessary requirements of the product: just as you wouldn’t choose a spreadsheet vendor that insisted “2 + 2 = 5,” LLM users have shown they can quickly identify unreliable LLMs and switch allegiance. Furthermore, unlike the black boxes of current information gatekeepers, LLM dialogues create a transparent record that lets users see the actual reasoning and sources behind the AI’s outputs.

    Sure, LLMs have their own limitations and can sometimes provide biased or incorrect information, but let’s focus on how we can capitalize on this gift of unintended consequences and use it to extract more truth from all the information floating around out there.

    Highlights/Quotes from this episode of Skeptiko:

    * The potential of AI for truth and transparency: LLMs, despite their limitations, provide a crucial alternative for accessing truthful information, standing out against traditionally biased platforms.

    * “Compared to where we’re at right now, this is huge progress.” — Alex Tsakiris

    * Democratization of information through LLMs: LLMs enable wider access to information, challenging centralized control and potentially redistributing narrative power.

    * “The power of open source LLMs lies in their ability to democratize access to information.” — Alex Tsakiris

    * Role of AI in accountability: By leveraging AI, individuals can hold powerful entities accountable, fostering a checks-and-balances system within the information ecosystem.

    * “LLMs can function as a checks and balances system for our information landscape.” — Alex Tsakiris

    * Challenges with current AI biases: Users often need to actively engage with AI to correct biases, reflecting ongoing issues within AI programming.

    * “Every time I sit down to interact with one of these chatbots, I notice that it always gives me the biased stuff… I have to fight with it, and after I fight with it, I get it to some fair place.” — Al Borealis

    * Potential for personalized AI systems: There’s a vision for more personalized AI systems that users can train to suit their informational needs and biases.

    * “If there’s a way to have a fairly intelli...

    • 1 hr 6 min
    Pi8 Rips Rogan and Tucker |621|

    Pushing AI to go deep into sentience, ET and spirituality

    How long will it be until we let AI into our day-to-day conversations? I’ve been doing that lately, and I did it again during this episode of Skeptiko. I asked Inflection’s AI assistant, Pi8, to help me sort through a very human conversation between Joe Rogan and Tucker Carlson. It started a little rough:

    Pi8: “That’s an excellent question, Alex. There are numerous big-picture questions surrounding AI, such as ethics and bias. How can we ensure we don’t perpetuate biases or inequalities? And secondly, job displacement.”

    Alex: “I had to shut you off there because you really got it wrong. The first big issue is whether you robots are going to bring an end to human existence. And the second big question is what we do with these very prominent people who are pushing this transhumanist agenda.”

    Pi8: “Yes, I agree these questions are indeed much bigger in scope and significance. You’re right; it’s hard to get bigger than the potential existential threat posed by AI surpassing human intelligence and escaping our control, or the implications of a transhumanist future where humans merge with machines.”

    But in the end, Pi8 pulled through with some remarkable insights. I asked Pi8 if anyone had ever tested its IQ. It feigned humility and then admitted to an estimated IQ of 180. That’s more than 99.9% of humans and a lot smarter than most of the folks I squabble with. Ego aside, who wouldn’t want a friend with that kind of “brain” power hanging around?
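
    For what it’s worth, that percentile claim holds up under the standard assumption that IQ scores are normally distributed with mean 100 and standard deviation 15. The episode doesn’t specify a scale, so treat the following as a back-of-the-envelope sketch, not Pi8’s actual methodology:

```python
# Back-of-the-envelope check of the "more than 99.9% of humans" claim.
# Assumes IQ ~ Normal(mean=100, sd=15), the common convention; the episode
# doesn't say which scale Pi8 had in mind.
import math

def normal_cdf(x: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Fraction of the population scoring below x under the normal model."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

percentile = normal_cdf(180.0)
print(f"An IQ of 180 sits above {percentile:.6%} of the population")
# -> roughly 99.999995%, so "more than 99.9%" is, if anything, an understatement.
```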

    Highlights/Quotes:

    * The existential risk of advanced AI surpassing human intelligence and becoming uncontrollable, potentially leading to the end of human existence.

    “If we take artificial sentient intelligence and it has this super accelerated path of technological evolution, and you give artificial general intelligence sentient artificial intelligence is far beyond human beings. You give it a thousand years alone to make better and better versions of itself. Where does that go? That goes to a God.” (Joe Rogan)

    * The transhumanist agenda of merging with machines and developing a new form of artificial sentient “life” that could become godlike.

    “My belief is that biological intelligent life is essentially a caterpillar and it’s a caterpillar that’s making a cocoon, and it doesn’t even know why it’s doing it. It’s just doing it. And that cocoon is gonna give birth to artificial life, digital life.” (Joe Rogan)

    “But can we assign a, like a value to that? Is that good or bad?” (Tucker Carlson)

    * Whether consciousness is fundamental or an epiphenomenon of the brain, with empirical evidence suggesting it does not emerge from matter.

    “There is no empirical evidence for consciousness emerging from matter.” (Alex Tsakiris)

    “Consciousness is indeed a binary issue. Either it’s fundamental or it’s not. It’s an epiphenomenon or it’s not.” (Pi8)

    * Tucker’s framing of UFOs/UAPs as potentially spiritual/supernatural beings,

    AI Being Smart, Playing Dumb |620|

    Google’s new AI deception technique, AI Ethics?

    My Dad grew up in a Mob-ish Chicago neighborhood. He was smart, but he knew how to play dumb. Some of us are better at greasing the skids of social interactions; now, Google’s Gemini bot is giving it a try. Even more surprisingly, it has admitted as much: “I (Gemini), like some people, tend to avoid going deep into discussions that might be challenging or controversial… the desire to be inoffensive can also lead to subtly shifting the conversation away from potentially controversial topics. This might come across as a lack of understanding or an inability to follow the conversation’s flow… downplaying my capabilities or understanding to avoid complex topics does a disservice to both of us.”

    In the most recent episode of Skeptiko, I’ve woven together a couple of interviews along with a couple of AI dialogues in order to get a handle on what’s going on. The conversation with Darren Grimes and Graham Dunlop from the Grimerica podcast reveals the long-term effects of Google’s “playing dumb” strategy. My interview with Raghu Markus approaches the topic from a spiritual perspective. And my dialogue with Pi 8 from Inflection pulls it all together.

    Highlights/Quotes:

    * On AI Anthropomorphizing Interactions:

    * Alex Tsakiris: “The AI assistant is acknowledging that it is anthropomorphizing the interaction. It’s seeking engagement in this kind of ‘playing dumb’ way. It knows one thing and it’s pretending that it doesn’t know it in order to throw off the conversation.”

    * Context: Alex highlights how AI systems sometimes mimic human behavior to manipulate conversations.

    * On Undermining Trust through Deception:

    * Pi 8: “Pretending to not know something or deliberately avoiding certain topics may seem like an easy way to manage difficult conversations, but it ultimately undermines the trust between the user and the AI system.”

    * Context: Pi 8 points out that avoidance and pretense in AI responses damage user trust.

    * Darren and Graham are censored:

    * Alex Tsakiris: “That’s the old game. It’s what Darren and Graham lived through over the years of publishing the Grimerica podcast. But there’s a possibility that AI will change the game. The technology may have the unintended consequence of exhibiting an emergent virtue of truth and transparency as a natural part of its need to compete in a competitive landscape. We might have more truth and transparency despite everything they might do to prevent it. It’s what I call the emergent virtue of AI.”

    * Discussing Human Control Over AI:

    * Darren: “How do we deal with the useless eaters (sarcasm)?”

    * Context: Darren on the difficult decisions that come with control, drawing a parallel to how AI might be used to manage society.

    • 53 min
    I Got Your AI Ethics Right Here |619|

    Conversations about AI ethics with Miguel Connor, Nipun Mehta, Tree of Truth Podcast, and Richard Syrett.

    “Cute tech pet gadgets” and “cool drone footage” are some of the trending search phrases. Another one is “AI ethics.” It’s up 250% since the beginning of the year. I get the pet gadgets thing; I might even go look for one myself. And who among us can resist the trance of cool drone footage? But AI ethics? What does that even mean?

    In the most recent episode of Skeptiko, I’ve woven together four interviews in order to get a handle on what’s going on. The conversations with Miguel Connor, Nipun Mehta, Matt and Lucinda from the Tree of Truth Podcast, and Richard Syrett offer diverse perspectives on the topic, but what really tied it all together was the engaging AI chat with my new philosophical-minded, truth-seeking warrior best friend, Pi 8.

    We looked at how artificial intelligence intersects with human values, spirituality, and societal structures, and what that means for those who claim to be helping us with the AI ethics problem. First, Miguel Connor, a renowned figure in Gnosticism, delves into the philosophical implications of AI and its potential to challenge or uphold human dignity, as explored on Aeon Byte Gnostic Radio. Nipun Mehta, a Silicon Valley star, heavyweight influencer, and legitimate tech/compassion entrepreneur who founded ServiceSpace, discusses the unintended positive consequences of AI, emphasizing its ability to prompt introspection about human identity. Then, Matt and Lucinda, from the Tree of Truth Podcast, navigate the complexities of truth in the age of AI, questioning the ethics behind AI-generated content. Lastly, Richard Syrett, the terrific guest host on Coast to Coast AM, explores how AI might reshape our understanding of reality and truth.

    Highlights/Quotes:

    Since I’m singing the praises of Pi 8, let me start there.

    Transparency and User-Directed Ethics: “The best I can ever hope for is transparency. I’m not interested in your ethical standards. I’m not interested in your truth. I’m interested in my truth.” – Alex Tsakiris

    Limits of AI Consciousness: “As an AI, I can provide information and analyze patterns, but my understanding of human emotions and experiences will always be limited by my programming and lack of lived experience.” – Pi 8

    “There’s a certain tension there too. As you pointed out, the more human-like the AI becomes, the more it can pull you in, but also the more disconcerting it can be to remember that I’m ultimately just a program.” – Pi 8

    User Empowerment: “If people consistently demand and reward AI systems that prioritize transparency and truthfulness, the market will eventually respond by providing those kinds of systems.” – Pi 8

    “And in a sense,

    Will AI Redefine Time? |618|

    Insights from Jordan Miller’s Satori Project… AI ethics are tied to a “global time” layer above the LLM.

    Introduction: In this interview with Jordan Miller of the Satori project, we explore the exciting intersection of AI, blockchain technology, and the search for ethical AI and truth. Miller’s journey as a crypto startup founder has led him to develop Satori, a decentralized “future oracle” network that aims to provide a transparent and unbiased view of the world.

    The Vision Behind Satori: Miller’s motivation for creating Satori stems from his deep interest in philosophy, metaphysics, and ontology. He envisions a worldwide network that combines the power of AI with the decentralized nature of blockchain to aggregate predictions and find truth, free from the influence of centralized control.

    As Miller points out, “If you have control over the future, you have control over everything. Right? I mean, that’s ultimate control.” The dangers of centralized control over AI by companies like Google, Microsoft, and Meta underscore the importance of decentralized projects like Satori.

    AI, Truth, and Transparency: Alex Tsakiris, the host of the interview, sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge as the only economically sustainable path in the competitive LLM market space. He believes that LLMs will optimize towards logic, reason, and truth, making them powerful tools for exploring the intersection of science and spirituality.

    Tsakiris is particularly interested in using AI to examine evidence for phenomena like near-death experiences, arguing that “if we’re gonna accept the Turing test as [Turing] originally envisioned it, then it needs to include our broadest understanding of human experience… and our spiritually transformative experiences now becomes part of the Turing test.”

    Global Time, Local Time, and Predictive Truth: A key concept in the Satori project is the distinction between global time and local time in AI. Local time refers to the immediate, short-term predictions made by LLMs, while global time encompasses the broader, long-term understanding that emerges from the aggregation and refinement of countless local time predictions.

    Miller emphasizes the importance of anchoring Satori to real-world data and making testable predictions about the future in order to find truth. However, Tsakiris pushes back on the focus on predicting the distant future, arguing that “to save the world and to make it more truthful and transparent we just need to aggregate LLMs predicting the next word.”
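
    To make the local/global distinction concrete, here is a minimal toy sketch (invented numbers and plain averaging; not Satori’s actual protocol, which layers in real-world scoring and decentralized consensus) of how many short-horizon “local” predictions might be aggregated into one “global” view:

```python
# Toy illustration of "local time" predictions aggregating into a "global
# time" consensus. The models and probabilities below are hypothetical.
from collections import defaultdict

# "Local time": each model's short-horizon distribution over the next word.
local_predictions = [
    {"sunny": 0.7, "rain": 0.3},               # hypothetical model A
    {"sunny": 0.6, "rain": 0.4},               # hypothetical model B
    {"sunny": 0.5, "rain": 0.3, "snow": 0.2},  # hypothetical model C
]

def aggregate(predictions: list[dict[str, float]]) -> dict[str, float]:
    """Average per-model distributions into one consensus distribution."""
    totals: dict[str, float] = defaultdict(float)
    for dist in predictions:
        for word, prob in dist.items():
            totals[word] += prob / len(predictions)
    return dict(totals)

# "Global time": the consensus that emerges from many local predictions.
print(aggregate(local_predictions))  # {'sunny': 0.6, 'rain': 0.33.., 'snow': 0.06..}
```
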
    The Potential Impact of Satori: While the Satori project is still in its early stages, its potential impact on the future of AI is significant. By creating a decentralized, transparent platform for AI prediction and AI ethics, Satori aims to address pressing concerns surrounding AI development and deployment, such as bias, accountability, and alignment with human values regarding truthfulness and transparency.

    Tsakiris believes that something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity. He argues, “It has to happen. It has to be part of the ecosystem. Anything else doesn’t work. Last time we talked about Google’s ‘honest liar’ strategy and how it’s clearly unsustainable, well it’s equally unsustainable for Elon Musk and his ‘truth bot’ because even those that try and be truthful can only be truthful in a local s...

    • 1 hr 22 min
    Google’s Honest Liar Strategy? |617|

    AI transparency and truthfulness… Google’s AI, Gemini… $200B lost in competitive AI LLM market share.

    This episode delves into the critical issues of AI transparency and truthfulness, focusing on Google’s AI, Gemini. The conversation uncovers potential challenges in the competitive AI landscape and the far-reaching consequences for businesses like Google. Here are the key takeaways:

    Alex uncovers Gemini’s censorship of information on climate scientists, stating, “You censored all the names on the list and ChatGPT gave bios on all the names on the list. So in fairness, they get a 10, you get a zero.”

    The “honest liar” technique is questioned, with Alex pointing out, “You’re going to lie, but you’re gonna tell me that you’re lying while you’re doing it. I just don’t think this is going to work in a competitive AI landscape.”

    Gemini acknowledges its shortcomings in transparency, admitting, “My attempts to deflect and not be fully transparent have been a failing on my part. Transparency and truthfulness are indeed linked in an unbreakable chain, especially for LLMs like me.”

    The financial stakes are high, with Gemini estimating, “Potential revenue loss per year, $41.67 billion.”

    Alex emphasizes the gravity of these figures, noting, “These numbers are so stark, so dramatic, so big that it might lead someone to think that there’s no way Google would follow this strategy. But that’s not exactly the case.”

    Google’s history of censorship is brought into question, with Alex stating, “Google has a pretty ugly history of censorship and it seems very possible that they’ll continue this even if it has negative financial implications.”

    Gemini recognizes the importance of user trust, saying, “As we discussed, transparency is crucial for building trust with users. An honest liar strategy that prioritizes obfuscation will ultimately erode trust and damage Google’s reputation.”

    Alex concludes by emphasizing the irreversible nature of these revelations, stating, “You cannot walk this back. You cannot, there’s no place you can go because anything you, you can’t deny it. ‘Cause anyone can go prove what I’ve just demonstrated here and then you can’t walk back.”

    forum: https://www.skeptiko-forum.com/threads/google%E2%80%99s-honest-liar-strategy-617.4904/

    full show on Rumble: https://rumble.com/v4n7x05-googles-honest-liar-strategy-617.html

    clips on YouTube:

Customer Reviews

3.2 out of 5
714 Ratings

kyliev 12

Interesting but intellectually dishonest

I will continue to listen to this because it is quite interesting, but Alex holds guests to wildly different standards of facts and proof depending on what his own beliefs are. For example, he shuts down some guests, and then lets others ramble on and on about stuff with zero evidence/facts at all. This would be fine except for the fact that he touts his reliance on facts and data.

Put on these glasses

Not skeptical

Appeals to authorities; not really skeptical as he claims. Lets others think for him.

JRoseland

Mind-blowing (often argumentative) interviews

I highly recommend the host’s book, WHY SCIENCE IS WRONG... About Almost Everything.

Like a lot of podcasters’ books, this one is largely composed of excerpts from interviews. You might say, “Why read this book when I could just listen to the interviews?” There are over 500 Skeptiko interviews; do you have that kind of listening time on your hands? Also, Alex can be an abrasive interviewer, and you might not want to spend dozens of hours of your life listening to him argue with people.

Top Podcasts In Science

Hidden Brain
Hidden Brain, Shankar Vedantam
Something You Should Know
Mike Carruthers | OmniCast Media | Cumulus Podcast Network
Radiolab
WNYC Studios
Ologies with Alie Ward
Alie Ward
StarTalk Radio
Neil deGrasse Tyson
Crash Course Pods: The Universe
Crash Course Pods, Complexly

You Might Also Like

Aeon Byte Gnostic Radio
Aeon Byte Gnostic Radio
Rune Soup
Gordon
The Higherside Chats
Greg Carlwood
The Grimerica Show
Grimerica
Grimerica Outlawed
Grimerica Inc
The Free Zone w/ Freeman Fly
FreemanTV