Chatbots Behaving Badly

Therapy Without a Pulse

“Therapy Without a Pulse” examines the gap between friendly AI and real care. We trace how therapy-branded chatbots reinforce stigma and mishandle gray-area risk, why sycophancy rewards agreeable nonsense over clinical judgment, and how new rules (like Illinois’ prohibition on AI therapy) are redrawing the map. Then we pivot to a constructive blueprint: LLMs as training simulators and workflow helpers, not autonomous therapists; explicit abstention and fast human handoffs; journaling and psychoeducation that move people toward licensed care, never replace it. The bottom line: keep the humanity in the loop—because tone can be automated, responsibility can’t.

Based on the article “Therapy Without a Pulse” by Markus Brinsa. https://chatbotsbehavingbadly.com/therapy-without-a-pulse

Stanford Report: New study warns of risks in AI mental health tools (June 11, 2025). https://news.stanford.edu/stories/2025/06/ai-mental-health-care-tools-dangers-risks