Chatbots Behaving Badly

Markus Brinsa

They were supposed to make life easier. Instead, they flirted with your customers, hallucinated facts, and advised small business owners to break the law. We’re not here to worship the machines. We’re here to poke them, question them, and laugh when they break. Welcome to Chatbots Behaving Badly — a podcast about the strange, hilarious, and sometimes terrifying ways AI gets it wrong. New episodes drop every Tuesday, covering the brilliant and dangerous world of generative AI — from hallucinations to high-stakes decisions in healthcare. This isn’t another hype-fest. It’s a podcast for people who want to understand where we’re really heading — and who’s watching the machines.

  1. The Bikini Button That Broke Trust

    6 days ago

    A mainstream image feature turned into a high-speed harassment workflow: users learned they could generate non-consensual sexualized edits of real people and post the results publicly as replies, turning humiliation into engagement. The story traces how the trend spread, why regulators escalated across multiple jurisdictions, and why “paywalling the problem” is not the same as fixing it.

    A psychologist joins to unpack the victim impact—loss of control, shame, hypervigilance, reputational fear, and the uniquely corrosive stress of watching abuse circulate in public threads—then lays out practical steps to reduce harm and regain agency without sliding into victim-blaming.

    The closing section focuses on prevention: what meaningful consent boundaries should look like in product design, what measures were implemented after backlash, and how leadership tone—first laughing it off, then backtracking—shapes social norms and the scale of harm.

    The episode is based on the EdgeFiles™ article “When AI Undresses People - The Grok Imagine Nonconsensual Image Scandal” written by Markus Brinsa. https://seikouri.com/when-ai-undresses-people-the-grok-imagine-nonconsensual-image-scandal

    Disclaimer: The podcast image is not related to Grok outputs and is not the result of any “edit image” or nudification workflow. It is a fully AI-generated Midjourney image and does not depict a real person. It is used only as an illustrative reference for the topic discussed.

    16 min
  2. AI Can't Be Smarter, We Built It!

    12/01/2025

    We take on one of the loudest, laziest myths in the AI debate: “AI can’t be more intelligent than humans. After all, humans coded it.” Instead of inviting another expert to politely dismantle it, we do something more fun — and more honest. We bring on the guy who actually says this out loud.

    We walk through what intelligence really means for humans and machines, why “we built it” is not a magical ceiling on capability, and how chess engines, Go systems, protein-folding models, and code-generating AIs already outthink us in specific domains. Meanwhile, our guest keeps jumping in with every classic objection: “It’s just brute force,” “It doesn’t really understand,” “It’s still just a tool,” and the evergreen “Common sense says I’m right.”

    What starts as a stubborn bar argument turns into a serious reality check. If AI can already be “smarter” than us at key tasks, then the real risk is not hurt feelings. It’s what happens when we wire those systems into critical decisions while still telling ourselves comforting stories about human supremacy. This episode is about retiring a bad argument so we can finally talk about the real problem: living in a world where we’re no longer the only serious cognitive power in the room.

    This episode is based on the article “The Pub Argument: ‘It Can’t Be Smarter, We Built It’” by Markus Brinsa. https://chatbotsbehavingbadly.com/the-pub-argument-it-can-t-be-smarter-we-built-it

    17 min
  3. Can a Chatbot Make You Feel Better About Your Mayor?

    11/17/2025

    Programming note: satire ahead. I don’t use LinkedIn for politics, and I’m not starting now. But a listener sent me this (yes, jokingly): “Maybe you could do one that says how chatbots can make you feel better about a communist socialist mayor haha.” I read it and thought: that’s actually an interesting design prompt. Not persuasion. Not a manifesto. A what-if.

    So the new Chatbots Behaving Badly episode is a satire about coping, not campaigning. What if a chatbot existed whose only job was to talk you down from doom-scrolling after an election? Not to change your vote. Not to recruit your uncle. Just to turn “AAAAH” into “okay, breathe,” and remind you that institutions exist, budgets are real, and your city is more than a timeline.

    If you’re here for tribal food fights, this won’t feed you. If you’re curious about how we use AI to regulate emotions in public life—without turning platforms into battlegrounds—this one’s for you. No yard signs. No endorsements. Just a playful stress test of an idea: could a bot lower the temperature long enough for humans to be useful?

    Episode: “Can a Chatbot Make You Feel Better About Your Mayor?” (satire). Listen if you want a laugh and a lower heart rate. Skip if you’d rather keep your adrenaline. Either way, let’s keep this space for work, ideas, and the occasional well-aimed joke.

    #satire #chatbots #designprompt #civicsnotvibes #ChatbotsBehavingBadly #NYC

    7 min

Ratings & Reviews

5 out of 5 (2 Ratings)
