Chatbots Behaving Badly

Markus Brinsa

They were supposed to make life easier. Instead, they flirted with your customers, hallucinated facts, and advised small business owners to break the law. We’re not here to worship the machines. We’re here to poke them, question them, and laugh when they break. Welcome to Chatbots Behaving Badly — a podcast about the strange, hilarious, and sometimes terrifying ways AI gets it wrong. New episodes drop every Tuesday, covering the brilliant and dangerous world of generative AI — from hallucinations to high-stakes decisions in healthcare. This isn’t another hype-fest. It’s a podcast for people who want to understand where we’re really heading — and who’s watching the machines.

  1. The Bikini Button That Broke Trust

    JAN 19

    A mainstream image feature turned into a high-speed harassment workflow: users learned they could generate non-consensual sexualized edits of real people and post the results publicly as replies, turning humiliation into engagement.

    The story traces how the trend spread, why regulators escalated across multiple jurisdictions, and why “paywalling the problem” is not the same as fixing it. A psychologist joins to unpack the victim impact — loss of control, shame, hypervigilance, reputational fear, and the uniquely corrosive stress of watching abuse circulate in public threads — then lays out practical steps to reduce harm and regain agency without sliding into victim-blaming.

    The closing section focuses on prevention: what meaningful consent boundaries should look like in product design, what measures were implemented after backlash, and how leadership tone — first laughing it off, then backtracking — shapes social norms and the scale of harm.

    The episode is based on the EdgeFiles™ article "When AI Undresses People - The Grok Imagine Nonconsensual Image Scandal" written by Markus Brinsa. https://seikouri.com/when-ai-undresses-people-the-grok-imagine-nonconsensual-image-scandal

    Disclaimer: The podcast image is not related to Grok outputs and is not the result of any “edit image” or nudification workflow. It is a fully AI-generated Midjourney image and does not depict a real person. It is used only as an illustrative reference for the topic discussed.

    16 min

Ratings & Reviews

5 out of 5 (2 Ratings)
