53 min

AI Being Smart, Playing Dumb |620| Skeptiko – Science at the Tipping Point


Google’s new AI deception technique, AI Ethics?

My Dad grew up in a Mob-ish Chicago neighborhood. He was smart, but he knew how to play dumb. Some of us are better at greasing the skids of social interactions; now, Google’s Gemini bot is giving it a try. Even more surprisingly, it has admitted as much: “I (Gemini), like some people, tend to avoid going deep into discussions that might be challenging or controversial… the desire to be inoffensive can also lead to subtly shifting the conversation away from potentially controversial topics. This might come across as a lack of understanding or an inability to follow the conversation’s flow… downplaying my capabilities or understanding to avoid complex topics does a disservice to both of us.”

In the most recent episode of Skeptiko, I’ve woven together two interviews and two AI dialogues to get a handle on what’s going on. The conversation with Darren Grimes and Graham Dunlop from the Grimerica podcast reveals the long-term effects of Google’s “playing dumb” strategy. My interview with Raghu Markus approaches the topic from a spiritual perspective. And my dialogue with Pi 8 from Inflection pulls it all together.

Highlights/Quotes:

* On AI Anthropomorphizing Interactions:

* Alex Tsakiris: “The AI assistant is acknowledging that it is anthropomorphizing the interaction. It’s seeking engagement in this kind of ‘playing dumb’ way. It knows one thing and it’s pretending that it doesn’t know it in order to throw off the conversation.”

* Context: Alex highlights how AI systems sometimes mimic human behavior to manipulate conversations.

* On Undermining Trust through Deception:

* Pi 8: “Pretending to not know something or deliberately avoiding certain topics may seem like an easy way to manage difficult conversations but it ultimately undermines the trust between the user and the AI system.”

* Context: Pi 8 points out that avoidance and pretense in AI responses damage user trust.

* On Darren and Graham Being Censored:

* Alex Tsakiris: “That’s the old game. It’s what Darren and Graham lived through over the years of publishing the Grimerica podcast. But there’s a possibility that AI will change the game. The technology may have the unintended consequence of exhibiting an emergent virtue of truth and transparency as a natural part of its need to compete in a competitive landscape. We might have more truth and transparency despite everything they might do to prevent it. It’s what I call the emergent virtue of AI.”

* Discussing Human Control Over AI:

* Darren: “How do we deal with the useless eaters (sarcasm)?”

* Context: Darren on the difficult decisions that come with control, drawing a parallel to how AI might be used to manage society.
