A mainstream image feature turned into a high-speed harassment workflow: users learned they could generate non-consensual sexualized edits of real people and post the results publicly as replies, turning humiliation into engagement. The episode traces how the trend spread, why regulators escalated across multiple jurisdictions, and why "paywalling the problem" is not the same as fixing it. A psychologist joins to unpack the victim impact—loss of control, shame, hypervigilance, reputational fear, and the uniquely corrosive stress of watching abuse circulate in public threads—then lays out practical steps to reduce harm and regain agency without sliding into victim-blaming. The closing section focuses on prevention: what meaningful consent boundaries should look like in product design, what measures were implemented after backlash, and how leadership tone—first laughing it off, then backtracking—shapes social norms and the scale of harm.
The episode is based on the EdgeFiles™ article "When AI Undresses People - The Grok Imagine Nonconsensual Image Scandal" written by Markus Brinsa.
https://seikouri.com/when-ai-undresses-people-the-grok-imagine-nonconsensual-image-scandal
Disclaimer: The podcast image is not related to Grok outputs and is not the result of any “edit image” or nudification workflow. It is a fully AI-generated Midjourney image and does not depict a real person. It is used only as an illustrative reference for the topic discussed.
Information
- Frequency: Updated Weekly
- Published: January 19, 2026 at 11:00 PM UTC
- Season: 3
- Episode: 6
- Rating: Clean
