
Moral Machine - AI Data Privacy, Data Security, Rule 1.6 Compliance & Shadow AI
What really happens to client data when you use tools like ChatGPT, especially if you click “delete” or disable training? In this episode, I’m joined by Cathy Miron, CEO of eSilo and a nationally recognized expert in data protection and cybersecurity, to unpack the privacy, security, and governance realities behind modern LLMs. We discuss how litigation and vendor policies can complicate “private” chats, why backups, logs, and engineering choices often outpace contract language, and practical ways lawyers and regulated organizations can use AI without compromising confidentiality or privilege. We cover de-identification workflows, BAAs and vetted tools (think FedRAMP/CMMC contexts), API nuances around ZDR, and the human firewall: governance covering acceptable-use policies, training, and curbing “shadow AI” with sanctioned enterprise subscriptions. It’s a candid, pragmatic guide to balancing innovation with risk.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit moralmachine.substack.com
Information
- Show: Moral Machine
- Published: 5 September 2025 at 01:28 UTC
- Length: 12 min
- Rating: Clean