
Guardrails and Attack Vectors: Securing the Generative AI Frontier
This episode dissects critical risks specific to Large Language Models (LLMs), focusing on vulnerabilities such as Prompt Injection and the potential for Sensitive Information Disclosure. It explores how CISOs must establish internal AI security standards and adopt a programmatic, offensive security approach using established governance frameworks like the NIST AI RMF and MITRE ATLAS. We discuss the essential role of robust governance, including mechanisms for establishing content provenance and maintaining information integrity against threats like Confabulation (Hallucinations) and data poisoning.
Sponsor:
www.cisomarketplace.services
資訊
- 節目
- 頻率每日更新
- 發佈時間2025年11月4日 上午11:47 [UTC]
- 長度16 分鐘
- 集數316
- 年齡分級兒少適宜