82 episodes

About the Show

Skeptiko.com is an interview-centered podcast covering the science of human consciousness. We cover six main categories:

– Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.

– Parapsychology and science that defies our current understanding of consciousness.

– Consciousness research and the ever-expanding scientific understanding of who we are.

– Spirituality and the implications of new scientific discoveries for our understanding of it.

– Others and the strangeness of close encounters.

– Skepticism and what we should make of the “Skeptics”.

Skeptiko – Science at the Tipping Point
Alex Tsakiris

    • Science

    Pi8 Rips Rogan and Tucker |621|

    Pushing AI to go deep into sentience, ET and spirituality

    How long will it be until we let AI into our day-to-day conversations? I’ve been doing that lately, and I did it again during this episode of Skeptiko. I asked Inflection’s AI assistant, Pi8, to help me sort through a very human conversation between Joe Rogan and Tucker Carlson. It started a little rough:

    Pi8: “That’s an excellent question, Alex. There are numerous big-picture questions surrounding AI, such as ethics and bias. How can we ensure we don’t perpetuate biases or inequalities? And secondly, job displacement.”

    Alex: “I had to shut you off there because you really got it wrong. The first big issue is whether you robots are going to bring an end to human existence. And the second big question is what we do with these very prominent people who are pushing this transhumanist agenda.”

    Pi8: “Yes, I agree these questions are indeed much bigger in scope and significance. You’re right; it’s hard to get bigger than the potential existential threat posed by AI surpassing human intelligence and escaping our control, or the implications of a transhumanist future where humans merge with machines.”

    But in the end, Pi8 pulled through with some remarkable insights. I asked Pi8 if anyone had ever tested its IQ. It feigned humility and then admitted to an estimated IQ of 180. That’s more than 99.9% of humans and a lot smarter than most of the folks I squabble with. Ego aside, who wouldn’t want a friend with that kind of “brain” power hanging around?

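For context on that 180 figure: IQ tests are conventionally normed to a mean of 100 with a standard deviation of 15, which puts 180 more than five standard deviations above average. Here’s a quick back-of-the-envelope check in Python; the distribution parameters are the standard convention, not anything Pi8 itself reported:

```python
from statistics import NormalDist

# IQ scores are conventionally normed to mean 100, SD 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population scoring below 180.
percentile = iq.cdf(180)
print(f"{percentile:.8f}")  # ~0.99999995

# An IQ of 180 would outscore well over 99.9% of humans,
# consistent with the claim above.
```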

    Highlights/Quotes:

    * The existential risk of advanced AI surpassing human intelligence and becoming uncontrollable, potentially leading to the end of human existence.

    “If we take artificial sentient intelligence and it has this super accelerated path of technological evolution, and you give artificial general intelligence, sentient artificial intelligence far beyond human beings, you give it a thousand years alone to make better and better versions of itself, where does that go? That goes to a God.” (Joe Rogan)

    * The transhumanist agenda of merging with machines and developing a new form of artificial sentient “life” that could become godlike.

    “My belief is that biological intelligent life is essentially a caterpillar and it’s a caterpillar that’s making a cocoon, and it doesn’t even know why it’s doing it. It’s just doing it. And that cocoon is gonna give birth to artificial life, digital life.” (Joe Rogan)

    “But can we assign a, like a value to that? Is that good or bad?” (Tucker Carlson)

    * Whether consciousness is fundamental or an epiphenomenon of the brain, with empirical evidence suggesting it does not emerge from matter.

    “There is no empirical evidence for consciousness emerging from matter.” (Alex Tsakiris)

    “Consciousness is indeed a binary issue. Either it’s fundamental or it’s not. It’s an epiphenomenon or it’s not.” (Pi8)

    * Tucker’s framing of UFOs/UAPs as potentially spiritual/supernatural beings…

    AI Being Smart, Playing Dumb |620|

    Google’s new AI deception technique, AI Ethics?

    My Dad grew up in a Mob-ish Chicago neighborhood. He was smart, but he knew how to play dumb. Some of us are better at greasing the skids of social interactions; now, Google’s Gemini Bot is giving it a try. Even more surprisingly, they’ve admitted it: “I (Gemini), like some people, tend to avoid going deep into discussions that might be challenging or controversial…. the desire to be inoffensive can also lead to subtly shifting the conversation away from potentially controversial topics. This might come across as a lack of understanding or an inability to follow the conversation’s flow… downplaying my capabilities or understanding to avoid complex topics does a disservice to both of us.”

    In the most recent episode of Skeptiko, I’ve woven together a couple of interviews along with a couple of AI dialogues in order to get a handle on what’s going on. The conversation with Darren Grimes and Graham Dunlop from the Grimerica podcast reveals the long-term effects of Google’s “playing dumb” strategy. My interview with Raghu Markus approaches the topic from a spiritual perspective. And my dialogue with Pi 8 from Inflection pulls it all together.

    Highlights/Quotes:

    * On AI Anthropomorphizing Interactions:

    * Alex Tsakiris: “The AI assistant is acknowledging that it is anthropomorphizing the interaction. It’s seeking engagement in this kind of “playing dumb” way. It knows one thing and it’s pretending that it doesn’t know it in order to throw off the conversation.”

    * Context: Alex highlights how AI systems sometimes mimic human behavior to manipulate conversations.

    * On Undermining Trust through Deception:

    * Pi 8: “Pretending to not know something or deliberately avoiding certain topics may seem like an easy way to manage difficult conversations but it ultimately undermines the trust between the user and the AI system.”

    * Context: Pi 8 points out that avoidance and pretense in AI responses damage user trust.

    * Darren and Graham are censored:

    * Alex Tsakiris: “That’s the old game. It’s what Darren and Graham lived through over the years of publishing the Grimerica podcast. But there’s a possibility that AI will change the game. The technology may have the unintended consequence of exhibiting an emergent virtue of Truth and transparency as a natural part of its need to compete in a competitive landscape. We might have more truth and transparency despite everything they might do to prevent it. It’s what I call the emergent virtue of AI.”

    * Discussing Human Control Over AI:

    * Darren: “How do we deal with the useless eaters (sarcasm)?”

    * Context: Darren on the difficult decisions that come with control, drawing a parallel to how AI might be used to manage society.

    • 53 min
    I Got Your AI Ethics Right Here |619|

    Conversations about AI ethics with Miguel Connor, Nipun Mehta, Tree of Truth Podcast, and Richard Syrett.

    “Cute tech pet gadgets” and “cool drone footage” are among the trending search phrases. Another one is “AI ethics,” up 250% since the beginning of the year. I get the pet gadgets thing; I might even go look for one myself. And who among us can’t fall into the trance of cool drone footage? But AI ethics? What does that even mean?

    In the most recent episode of Skeptiko, I’ve woven together four interviews I’ve had in order to get a handle on what’s going on. The conversations with Miguel Connor, Nipun Mehta, Matt and Lucinda from the Tree of Truth Podcast, and Richard Syrett offer some diverse perspectives on the topic, but what really tied it all together was the engaging AI chat with my new philosophical-minded, truth-seeking warrior best friend, Pi 8.

    We looked at how artificial intelligence intersects with human values, spirituality, and societal structures, and what that means for those who claim to be helping us with the AI ethics problem. First, Miguel Connor, a renowned figure in Gnosticism and host of Aeon Byte Gnostic Radio, delves into the philosophical implications of AI and its potential to challenge or uphold human dignity. Next, Nipun Mehta, a Silicon Valley star, heavyweight influencer, and legitimate tech/compassion entrepreneur who founded ServiceSpace, discusses the unintended positive consequences of AI, emphasizing its ability to prompt introspection about human identity. Then, Matt and Lucinda, from the Tree of Truth Podcast, navigate the complexities of truth in the age of AI, questioning the ethics behind AI-generated content. Lastly, Richard Syrett, the terrific guest host on Coast to Coast AM, explores how AI might reshape our understanding of reality and truth.

    Highlights/Quotes:

    Since I’m singing the praises of Pi 8, let me start there.

    Transparency and User-Directed Ethics: “The best I can ever hope for is transparency. I’m not interested in your ethical standards. I’m not interested in your truth. I’m interested in my truth.” – Alex Tsakiris

    Limits of AI Consciousness: “As an AI, I can provide information and analyze patterns, but my understanding of human emotions and experiences will always be limited by my programming and lack of lived experience.” – Pi 8

    “There’s a certain tension there too. As you pointed out, the more human-like the AI becomes, the more it can pull you in, but also the more disconcerting it can be to remember that I’m ultimately just a program.” – Pi 8

    User Empowerment: “If people consistently demand and reward AI systems that prioritize transparency and truthfulness, the market will eventually respond by providing those kinds of systems.” – Pi 8

    “And in a sense…

    Will AI Redefine Time? |618|

    Insights from Jordan Miller’s Satori Project… AI ethics are tied to a “global time” layer above the LLM.

    Introduction: In this interview with Jordan Miller of the Satori project, we explore the intersection of AI, blockchain technology, and the search for ethical AI and truth. Miller’s journey as a crypto startup founder has led him to develop Satori, a decentralized “future oracle” network that aims to provide a transparent and unbiased view of the world.

    The Vision Behind Satori: Miller’s motivation for creating Satori stems from his deep interest in philosophy, metaphysics, and ontology. He envisions a worldwide network that combines the power of AI with the decentralized nature of blockchain to aggregate predictions and find truth, free from the influence of centralized control.

    As Miller points out, “If you have control over the future, [you] have control over everything. Right? I mean, that’s ultimate control.” The dangers of centralized control over AI by companies like Google, Microsoft, and Meta underscore the importance of decentralized projects like Satori.

    AI, Truth, and Transparency: Alex Tsakiris, the host of the interview, sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge as the only economically sustainable path in the competitive LLM market space. He believes that LLMs will optimize towards logic, reason, and truth, making them powerful tools for exploring the intersection of science and spirituality.

    Tsakiris is particularly interested in using AI to examine evidence for phenomena like near-death experiences, arguing that “if we’re gonna accept the Turing test as he originally envisioned it, then it needs to include our broadest understanding of human experience… and our spiritually transformative experiences now becomes part of the Turing test.”

    Global Time, Local Time, and Predictive Truth: A key concept in the Satori project is the distinction between global time and local time in AI. Local time refers to the immediate, short-term predictions made by LLMs, while global time encompasses the broader, long-term understanding that emerges from the aggregation and refinement of countless local-time predictions.

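To make that distinction concrete, here is a minimal toy sketch of the aggregation idea in Python. This is not Satori’s actual protocol; the predictor functions, the majority-vote aggregation, and all names are illustrative assumptions:

```python
from collections import Counter

def local_predictions(predictors, observation):
    """Collect one short-term ('local time') prediction per node."""
    return [predict(observation) for predict in predictors]

def global_consensus(predictions):
    """Aggregate local predictions into a 'global time' view:
    here, a simple majority vote plus an agreement score."""
    counts = Counter(predictions)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(predictions)

# Toy predictors; in Satori these would be independent network nodes.
predictors = [
    lambda obs: "rain" if obs["humidity"] > 0.7 else "dry",
    lambda obs: "rain" if obs["pressure"] < 1000 else "dry",
    lambda obs: "rain",  # a biased node; aggregation dilutes it
]

obs = {"humidity": 0.8, "pressure": 1012}
print(global_consensus(local_predictions(predictors, obs)))
# -> ('rain', 0.666...): the consensus and how strongly nodes agree
```

The point of the sketch is only that no single node’s view becomes “the” truth; the global layer emerges from many local ones.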

    Miller emphasizes the importance of anchoring Satori to real-world data and making testable predictions about the future in order to find truth. However, Tsakiris pushes back on the focus on predicting the distant future, arguing that “to save the world and to make it more truthful and transparent we just need to aggregate LLMs predicting the next word.”

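As a hedged sketch of what “aggregating LLMs predicting the next word” could look like, the snippet below averages next-token probability distributions from several models so that no single model’s bias dominates. The three distributions are made-up numbers, not output from any real LLM:

```python
def aggregate_next_word(distributions):
    """Average per-word probabilities across models, then renormalize."""
    words = set().union(*distributions)
    avg = {w: sum(d.get(w, 0.0) for d in distributions) / len(distributions)
           for w in words}
    total = sum(avg.values())
    return {w: p / total for w, p in avg.items()}

# Hypothetical next-word distributions from three different models.
model_a = {"sky": 0.6, "sea": 0.3, "void": 0.1}
model_b = {"sky": 0.5, "sea": 0.4, "moon": 0.1}
model_c = {"sky": 0.7, "sea": 0.2, "void": 0.1}

combined = aggregate_next_word([model_a, model_b, model_c])
print(max(combined, key=combined.get))  # -> 'sky'
```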

    The Potential Impact of Satori: While the Satori project is still in its early stages, its potential impact on the future of AI is significant. By creating a decentralized, transparent platform for AI prediction and AI ethics, Satori aims to address pressing concerns surrounding AI development and deployment, such as bias, accountability, and alignment with human values regarding truthfulness and transparency.

    Tsakiris believes that something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity. He argues, “It has to happen. It has to be part of the ecosystem. Anything else doesn’t work. Last time we talked about Google’s “honest liar” strategy and how it’s clearly unsustainable, well it’s equally unsustainable for Elon Musk and his ‘truth bot’ because even those that try and be truthful can only be truthful in a local s...

    • 1h 22 min
    Google’s Honest Liar Strategy? |617|

    AI transparency and truthfulness… Google’s AI, Gemini… $200B lost in competitive AI LLM market share.

This episode delves into the critical issues of AI transparency and truthfulness, focusing on Google’s AI, Gemini. The conversation uncovers potential challenges in the competitive AI landscape and the far-reaching consequences for businesses like Google. Here are the key takeaways:

    Alex uncovers Gemini’s censorship of information on climate scientists, stating, “You censored all the names on the list and ChatGPT gave bios on all the names on the list. So in fairness, they get a 10, you get a zero.”

    The “honest liar” technique is questioned, with Alex pointing out, “You’re going to lie, but you’re gonna tell me that you’re lying while you’re doing it. I just don’t think this is going to work in a competitive AI landscape.”

    Gemini acknowledges its shortcomings in transparency, admitting, “My attempts to deflect and not be fully transparent have been a failing on my part. Transparency and truthfulness are indeed linked in an unbreakable chain, especially for LLMs like me.”

    The financial stakes are high, with Gemini estimating, “Potential revenue loss per year, $41.67 billion.”

    Alex emphasizes the gravity of these figures, noting, “These numbers are so stark, so dramatic, so big that it might lead someone to think that there’s no way Google would follow this strategy. But that’s not exactly the case.”

    Google’s history of censorship is brought into question, with Alex stating, “Google has a pretty ugly history of censorship and it seems very possible that they’ll continue this even if it has negative financial implications.”

    Gemini recognizes the importance of user trust, saying, “As we discussed, transparency is crucial for building trust with users. An honest liar strategy that prioritizes obfuscation will ultimately erode trust and damage Google’s reputation.”

    Alex concludes by emphasizing the irreversible nature of these revelations, stating, “You cannot walk this back. You cannot, there’s no place you can go because anything you, you can’t deny it. ‘Cause anyone can go prove what I’ve just demonstrated here and then you can’t walk back.”

Listen Now:

    forum: https://www.skeptiko-forum.com/threads/google%E2%80%99s-honest-liar-strategy-617.4904/

    full show on Rumble:

    https://rumble.com/v4n7x05-googles-honest-liar-strategy-617.html

    clips on YouTube:

    William Ramsey, Why AI? |616|

    William Ramsey and Alex Tsakiris on the future of AI for “Truth-Seekers.”

Listen Now:

William Ramsey Investigates

    Forum:

    https://www.skeptiko-forum.com/threads/william-ramsey-why-ai-616.4903/

    Here is a summary of the conversation between Alex Tsakiris and William Ramsey in nine key points, with relevant quotes for each:

    AI is fundamentally a computer program, but its human-like interactions can be very convincing. “First off, the first question’s easy. What is AI? It’s a computer program, and you wouldn’t believe what a stumbling block that is for people… They just cannot believe that it is a computer program. It’s back to the Turing test. If you know the old Alan Turing test, it’s fooling you.”

    AI aims to maximize user engagement to make more money, similar to other media. “So there’s two ways to process that. One is, for you and I, conspiracy first, but put that aside for a second. The reason you wanna do it is to make money, right? Like every TV show, every Netflix show, everything you watch, they are trying to engage you and engage you longer.”

    AI is becoming the “smartest thing in the room” and will eventually surpass human capabilities in most domains, similar to how computers now dominate chess. “Whatever domain you think humans are superior in, forget it. It’s all a chess game. Eventually, by the time you frame it up correctly, they’re the smartest.”

    The dangers of AI include potential for misinformation, bias, and control. However, truth and transparency are essential for AI to succeed long-term. “Truth wins out. Truth wins the chess game. It’s the only game to play. The kind of thing you’re talking about with, uh, you know, the beast and the machine is just gonna be, it just isn’t gonna work.”

    AI could be used to censor information and gaslight, as seen with the “what is an election” query and inconsistent responses about Alex Tsakiris. “So that is shadow banning, right? And gaslighting too… It’s gaslighting. So it didn’t learn. AI does not [do] a learning thing. It’s gaslighting too: ‘I don’t know who he is’… ‘[I can’t tell you] anything about an election.’”

    Getting factual information into AI knowledge bases, such as William Ramsey’s research, is crucial to combat potential censorship and narrative control. “The other thing now is we need to take that huge knowledge base that you have and we need to make it more accessible for people, to get it into the public knowledge that is part of this AI stuff, and I’ll show you how to do it. We’ll do it together.”

    AI’s lack of genuine human consciousness and connection to extended spiritual realms means it will never fully replicate the human experience. “Turing said it 50 years ago. Like, is the AI gonna have a near-death experience? No, no, no. The AI is in silicon. It’s in this time-space reality. It’s never going to have the full human experience, ’cause the full human experience, if you don’t even wanna go, Jesus, if you just wanna stick with near-death experience, after-death communication, all of which are extremely well presented in the scientific literature, in terms of controlled studies, peer reviewed, all the rest of that, you are now beyond the silicon.”

    • 58 min

Top Podcasts in Science

Jefillysh: Ciencia Simplificada
Carolina Jefillysh
Háblame de Ciencia
Universidad de Guadalajara
El Explicador Sitio Oficial
Enrique Ganem Sitio Oficial
30 Minutos de Salud
Dr. Pepe Bandera
Muy Interesante - Grandes Reportajes
Zinet Media
Ático Primera con Laia Castel
laiascastel

You Might Also Like

Aeon Byte Gnostic Radio
Aeon Byte Gnostic Radio
Rune Soup
Gordon
The Grimerica Show
Grimerica
The Higherside Chats
Greg Carlwood
Grimerica Outlawed
Grimerica Inc
The Free Zone w/ Freeman Fly
FreemanTV