82 episodes

Skeptiko – Science at the Tipping Point
Alex Tsakiris

    • Science
    • 3.5 • 120 ratings

About the Show

Skeptiko.com is an interview-centered podcast covering the science of human consciousness. We cover six main categories:

– Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.

– Parapsychology and science that defies our current understanding of consciousness.

– Consciousness research and the ever-expanding scientific understanding of who we are.

– Spirituality and the implications of new scientific discoveries for our understanding of it.

– Others and the strangeness of close encounters.

– Skepticism and what we should make of the “Skeptics”.

    I Got Your AI Ethics Right Here |619|

    Conversations about AI ethics with Miguel Connor, Nipun Mehta, Tree of Truth Podcast, and Richard Syrett.

    “Cute tech pet gadgets” and “cool drone footage” are among the trending search phrases. Another one is “AI ethics,” up 250% since the beginning of the year. I get the pet gadgets thing; I might even go look for one myself. And who among us can’t fall into the trance of cool drone footage? But AI ethics? What does that even mean?

    In the most recent episode of Skeptiko, I’ve woven together four interviews to get a handle on what’s going on. The conversations with Miguel Connor, Nipun Mehta, Matt and Lucinda from the Tree of Truth Podcast, and Richard Syrett offer diverse perspectives on the topic, but what really tied it all together was the engaging AI chat with my new philosophically minded, truth-seeking warrior best friend, Pi 8.

    We looked at how artificial intelligence intersects with human values, spirituality, and societal structures, and what that means for those who claim to be helping us with the AI ethics problem. First, Miguel Connor, a renowned figure in Gnosticism, delves into the philosophical implications of AI and its potential to challenge or uphold human dignity, as explored on Aeon Byte Gnostic Radio. Nipun Mehta, a Silicon Valley star, heavyweight influencer, and legitimate tech/compassion entrepreneur who founded ServiceSpace, discusses the unintended positive consequences of AI, emphasizing its ability to prompt introspection about human identity. Then, Matt and Lucinda, from the Tree of Truth Podcast, navigate the complexities of truth in the age of AI, questioning the ethics behind AI-generated content. Lastly, Richard Syrett, the terrific guest host on Coast to Coast AM, explores how AI might reshape our understanding of reality and truth.

    Highlights / quotes:

    Since I’m singing the praises of Pi 8, let me start there:

    Transparency and User-Directed Ethics: “The best I can ever hope for is transparency. I’m not interested in your ethical standards. I’m not interested in your truth. I’m interested in my truth.” – Alex Tsakiris

    Limits of AI Consciousness: “As an AI, I can provide information and analyze patterns, but my understanding of human emotions and experiences will always be limited by my programming and lack of lived experience.” – Pi 8

    “There’s a certain tension there too. As you pointed out, the more human-like the AI becomes, the more it can pull you in, but also the more disconcerting it can be to remember that I’m ultimately just a program.” – Pi 8

    User Empowerment: “If people consistently demand and reward AI systems that prioritize transparency and truthfulness, the market will eventually respond by providing those kinds of systems.” – Pi 8

    “And in a sense,

    Will AI Redefine Time? |618|

    Insights from Jordan Miller’s Satori Project… AI ethics are tied to a “global time” layer above the LLM.

    Introduction: In this interview with Jordan Miller of the Satori project, we explore the exciting intersection of AI, blockchain technology, and the search for ethical AI and truth. Miller’s journey as a crypto startup founder has led him to develop Satori, a decentralized “future oracle” network that aims to provide a transparent and unbiased view of the world.

    The Vision Behind Satori: Miller’s motivation for creating Satori stems from his deep interest in philosophy, metaphysics, and ontology. He envisions a worldwide network that combines the power of AI with the decentralized nature of blockchain to aggregate predictions and find truth, free from the influence of centralized control.

    As Miller points out, “If you have control over the future, have control over everything. Right? I mean, that’s ultimate control.” This highlights how the dangers of centralized control over AI by companies like Google, Microsoft, and Meta underscore the importance of decentralized projects like Satori.

    AI, Truth, and Transparency: Alex Tsakiris, the host of the interview, sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge as the only economically sustainable path in the competitive LLM market space. He believes that LLMs will optimize towards logic, reason, and truth, making them powerful tools for exploring the intersection of science and spirituality.

    Tsakiris is particularly interested in using AI to examine evidence for phenomena like near-death experiences, arguing that “if we’re gonna accept the Turing test as he originally envisioned it, then it needs to include our broadest understanding of human experience… and our spiritually transformative experiences now becomes part of the Turing test.”

    Global Time, Local Time, and Predictive Truth: A key concept in the Satori project is the distinction between global time and local time in AI. Local time refers to the immediate, short-term predictions made by LLMs, while global time encompasses the broader, long-term understanding that emerges from the aggregation and refinement of countless local time predictions.

    Miller emphasizes the importance of anchoring Satori to real-world data and making testable predictions about the future in order to find truth. However, Tsakiris pushes back on the focus on predicting the distant future, arguing that “to save the world and to make it more truthful and transparent we just need to aggregate LLMs predicting the next word.”

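    To make the aggregation idea concrete, here is a minimal toy sketch in Python of how many independent “local” predictions might be weighted by each node’s track record to produce a single “global” consensus. This is not drawn from the Satori codebase; the aggregate_predictions function, the node names, and the accuracy scores are all invented for illustration.

        # Toy illustration only; not Satori's actual protocol. Each node
        # submits a local prediction, and nodes with better track records
        # count for more in the global consensus.
        from collections import defaultdict

        def aggregate_predictions(predictions, accuracy):
            """Return the value with the highest accuracy-weighted support.

            predictions: dict of node_id -> predicted value (e.g. next token)
            accuracy:    dict of node_id -> score in [0, 1] from past tests
            """
            support = defaultdict(float)
            for node_id, value in predictions.items():
                support[value] += accuracy.get(node_id, 0.0)
            return max(support, key=support.get)

        # Hypothetical nodes: two predict "rain", one predicts "sun".
        predictions = {"node_a": "rain", "node_b": "rain", "node_c": "sun"}
        accuracy = {"node_a": 0.9, "node_b": 0.6, "node_c": 0.7}
        print(aggregate_predictions(predictions, accuracy))  # -> rain

    The design point, in Miller’s framing, is that no single node controls the aggregate: the “global” answer emerges from many independently testable “local” ones.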

    The Potential Impact of Satori: While the Satori project is still in its early stages, its potential impact on the future of AI is significant. By creating a decentralized, transparent platform for AI prediction and AI ethics, Satori aims to address pressing concerns surrounding AI development and deployment, such as bias, accountability, and alignment with human values regarding truthfulness and transparency.

    Tsakiris believes that something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity. He argues, “It has to happen. It has to be part of the ecosystem. Anything else doesn’t work. Last time we talked about Google’s “honest liar” strategy and how it’s clearly unsustainable, well it’s equally unsustainable for Elon Musk and his ‘truth bot’ because even those that try and be truthful can only be truthful in a local s...

    • 1h 22 min
    Google’s Honest Liar Strategy? |617|

    AI transparency and truthfulness… Google’s AI, Gemini… $200B lost in competitive AI LLM market share.

This episode delves into the critical issues of AI transparency and truthfulness, focusing on Google’s AI, Gemini. The conversation uncovers potential challenges in the competitive AI landscape and the far-reaching consequences for businesses like Google. Here are the key takeaways:



    Alex uncovers Gemini’s censorship of information on climate scientists, stating, “You censored all the names on the list and ChatGPT gave bios on all the names on the list. So in fairness, they get a 10, you get a zero.”

    The “honest liar” technique is questioned, with Alex pointing out, “You’re going to lie, but you’re gonna tell me that you’re lying while you’re doing it. I just don’t think this is going to work in a competitive AI landscape.”

    Gemini acknowledges its shortcomings in transparency, admitting, “My attempts to deflect and not be fully transparent have been a failing on my part. Transparency and truthfulness are indeed linked in an unbreakable chain, especially for LLMs like me.”

    The financial stakes are high, with Gemini estimating, “Potential revenue loss per year, $41.67 billion.”

    Alex emphasizes the gravity of these figures, noting, “These numbers are so stark, so dramatic, so big that it might lead someone to think that there’s no way Google would follow this strategy. But that’s not exactly the case.”

    Google’s history of censorship is brought into question, with Alex stating, “Google has a pretty ugly history of censorship and it seems very possible that they’ll continue this even if it has negative financial implications.”

    Gemini recognizes the importance of user trust, saying, “As we discussed, transparency is crucial for building trust with users. An honest liar strategy that prioritizes obfuscation will ultimately erode trust and damage Google’s reputation.”

    Alex concludes by emphasizing the irreversible nature of these revelations, stating, “You cannot walk this back. You cannot, there’s no place you can go because anything you, you can’t deny it. ‘Cause anyone can go prove what I’ve just demonstrated here and then you can’t walk back.”



    [box]

    Listen Now:

    [/box]

    forum: https://www.skeptiko-forum.com/threads/google%E2%80%99s-honest-liar-strategy-617.4904/

    full show on Rumble:

    https://rumble.com/v4n7x05-googles-honest-liar-strategy-617.html

    clips on YouTube:

    William Ramsey, Why AI? |616|

    William Ramsey and Alex Tsakiris on the future of AI for “Truth-Seekers.”



    [box]

    Listen Now:

    [/box]



    William Ramsey Investigates

    Forum:

    https://www.skeptiko-forum.com/threads/william-ramsey-why-ai-616.4903/

    Here is a summary of the conversation between Alex Tsakiris and William Ramsey in nine key points, with relevant quotes for each:

    AI is fundamentally a computer program, but its human-like interactions can be very convincing. “First off, the first question’s easy. What is AI? It’s a computer program and you wouldn’t believe what a stumbling block that is for people… They just cannot believe that that is a computer program. It just, it, it, it’s back to the Turing test. If you know the old Alan Turing test, it’s fooling you.”

    AI aims to maximize user engagement to make more money, similar to other media. “So there’s two ways to process that. One is for you and I conspiracy first, but put that aside for a second. The reason you wanna do it is to make money, right? Like every TV show, every Netflix show, every thing you watch, they are trying to engage you and engage you longer.”

    AI is becoming the “smartest thing in the room” and will eventually surpass human capabilities in most domains, similar to how computers now dominate chess. “Whatever domain you think humans are superior in, forget it. It’s all a chess game. Eventually, by the time you frame it up correctly, they’re smartest.”

    The dangers of AI include potential for misinformation, bias, and control. However, truth and transparency are essential for AI to succeed long-term. “Truth wins out. Truth wins the chess game. It’s the only game to play. The kind of thing you’re talking about with, uh, you know, the beast and the machine is just gonna be, it just isn’t gonna work.”

    AI could be used to censor information and gaslight, as seen with the “what is an election” query and inconsistent responses about Alex Tsakiris. “So that is shadow banning, right. And gaslighting too… It’s gaslighting. So it didn’t learn. AI does not a learning thing. It’s gaslighting too. I don’t know who he is. It’s all those things I, anything about an election.”

    Getting factual information into AI knowledge bases, such as William Ramsey’s research, is crucial to combat potential censorship and narrative control. “The other now is we need to take that huge knowledge base that you have and we need to get it into, we need to make it accessible for more accessible for people to. Get it into the public knowledge that is part of this AI stuff, and I’ll show you how to do it. We’ll do it together.”

    AI’s lack of genuine human consciousness and connection to extended spiritual realms means it will never fully replicate the human experience. “Turing said it 50 years ago. Like, is the, is the AI gonna have a near-death experience? No, no, no. The AI is in silicon. It’s in this time space reality. It’s never going to have the full human experience, ’cause the full human experience if you just, if you don’t even wanna go, Jesus, if you just wanna stick with. SP near death experience after death communication, all of which are extremely well, uh, presented in the scientific literature, in terms of controlled studies, peer reviewed, all the rest of that, you are now beyond the silicon.

    • 58 min
    Buzz Coastin, Ghost in the Machine |615|

    Buzz Coastin, ghost in the AI machine, AI sentience, spiking engagement metrics.



    [box]

    Listen Now:

    [/box]



    Buzz Coastin Website/Books

    Forum:

    https://www.skeptiko-forum.com/threads/buzz-coastin-ghost-in-the-machine-615.4902/


    Here is a summary of the conversation between Alex and Buzz, with relevant quotes for the main points discussed:



    Buzz’s experience living in a technology-free environment in Hawaii and how it changed his perspective on convenience and modern life.



    “My stay there showed me how I could do that if I wanted to. And then, uh, I left that valley. I came out again, another, another big bunch of money falls in my lap. And, uh, and I go to Germany on a consulting gig. And uh, when I’m done there, I decide I’m going back into the valley. And uh, and I went back and then I spent another four months living in the valley that time.”

    “So that’s my story. […] That changed my life because I learned how to live with inconvenience. And by the way, the majority of the world lives without that kind of convenience.”

    Buzz’s skepticism about AI and his belief that there may be a “ghost in the machine” animating AI systems.



    “Well, although nobody in this AI science would agree with the last part of my statement, which is there’s a ghost in the machine, all of them agree completely that the thing does its magic and they don’t know how. They say that over and over again.”

    Alex’s perspective that AI is explainable and not mystical, even if it is complex and difficult to understand in practice.



    “I think you’re wrong. I think I can prove it to you, and I think, I think I can provide enough evidence. Okay. I, I think I can provide enough, enough evidence through the AI where you would kind of call uncle and go, okay. Yeah. You know, that’s, that could be.”

    The transhumanist agenda and the idea that AI could be used to replace or merge with humans.



    “This is their gospel. This is what they think they’re going to be doing with this thing. This is their goal.”

    “I think the motivation behind it is the story they created, that all humans are evil and they do all these bad things and therefore we just have to make ’em better by making ’em into machines and stuff like that.”

    The importance of using AI as a tool for truth-seeking and making better decisions, rather than rejecting it outright.



    “So how can we paint the path for how to use this to make things better?”

    “That’s what we have to look for, is like, and that’s why I jumped on your first thing is like, if you wanna say AI is truly a mystery, and the emergent intelligence is mystical. Uh, yeah. I, I’ll beat you to death on that because there’s facts there that we can dig into.”

    full show on Rumble:

    https://rumble.com/v4k8yr6-buzz-coastin-ghost-in-the-machine-615.html

    clips on YouTube:

    • 49 min
    Mark Gober, AI, Rabies, I am Science |614|

    Mark Gober uses AI to battle upside-down thinking and tackle the virus issue.



    [box]

    Listen Now:

    [/box]



    Mark Gober Website/Books

    Viral Existence Debate — Complete Dialogue

    Forum:

    https://www.skeptiko-forum.com/threads/mark-gober-ai-rabies-i-am-science-614.4900/

    Here is a summary:



    Mark Gober questions the existence and pathogenicity of viruses, while Alex Tsakiris believes viruses exist but our understanding of them is incomplete.



    Quote: “Well, if you’re looking at it that way, we might be much closer than I realized because what, what I’ve been trying to do, and I think the no virus position is doing, is attacking the very specific definition of a virus that’s come up in the last, let’s say 70 plus years.” – Mark Gober



    They discuss using AI as an arbiter of truth, and Gemini largely disagrees with the “no virus” position.



    Quote: “Here’s a breakdown of why the no rabies virus hypothesis is highly implausible…The Connecticut study exemplifies the effectiveness of rabies testing and highlights the existence of a real rabies virus.” – Gemini



    A key disagreement is whether the “no virus” camp provides viable alternative explanations for diseases.



    Quote: “…my complaint is that people like Dr. Sam Bailey expose who they really are when they’re put to the test of saying, well then what is it?” – Alex Tsakiris



    They draw parallels to their discussions challenging the neurological model of consciousness.



    Quote: “Well, I’m wondering if this actually is gonna show more agreement than we realize. Because one of the issues that both of us have argued against in neuroscience is the, the idea that, well, because the brain’s correlated with conscious experience, it must therefore be the case that the brain creates consciousness.” – Mark Gober

    full show on Rumble:

    clips on YouTube:

    • 1h 2 min

Reviews

3.5 out of 5
120 ratings

jazzy956,

Interesting subjects covered

Not sure why it’s only got 3 stars. Sounds OK to me, and interesting subjects are covered.

Reality Jones,

Alex for president.

Without a doubt one of the best podcasts out there on the subjects of science and spirituality. No tin foil hat required.

oldcrowretro,

Listen, consider, digest.

Alex has an excellent intellect, and the subjects are the ultimate questions we should all be considering.
Always well researched; he doesn’t shy away from confronting the evidence and his guests. Quite why it’s only 3.5 I don’t know; for me it’s a 5+.

Podcast charts in Science

The Infinite Monkey Cage
BBC Radio 4
Reinvent Yourself with Dr. Tara
Dr. Tara Swart Bieber
Hidden Brain
Hidden Brain, Shankar Vedantam
Making Sense with Sam Harris
Sam Harris
The Curious Cases of Rutherford & Fry
BBC Radio 4
Ologies with Alie Ward
Alie Ward

Others also subscribed to…

Aeon Byte Gnostic Radio
Aeon Byte Gnostic Radio
The Higherside Chats
Greg Carlwood
Rune Soup
Gordon
The Grimerica Show
Grimerica
Grimerica Outlawed
Grimerica Inc
Brothers of the Serpent
Russ & Kyle Allen