
Guardrails and Attack Vectors: Securing the Generative AI Frontier
This episode dissects critical risks specific to Large Language Models (LLMs), focusing on vulnerabilities such as Prompt Injection and the potential for Sensitive Information Disclosure. It explores how CISOs must establish internal AI security standards and adopt a programmatic, offensive security approach using established governance frameworks like the NIST AI RMF and MITRE ATLAS. We discuss the essential role of robust governance, including mechanisms for establishing content provenance and maintaining information integrity against threats like Confabulation (Hallucinations) and data poisoning.
Sponsor:
www.cisomarketplace.services
信息
- 节目
- 频率一日一更
- 发布时间2025年11月4日 UTC 11:47
- 长度16 分钟
- 单集316
- 分级儿童适宜