
Guardrails and Attack Vectors: Securing the Generative AI Frontier
This episode dissects critical risks specific to Large Language Models (LLMs), focusing on vulnerabilities such as Prompt Injection and the potential for Sensitive Information Disclosure. It explores how CISOs must establish internal AI security standards and adopt a programmatic, offensive security approach using established governance frameworks like the NIST AI RMF and MITRE ATLAS. We discuss the essential role of robust governance, including mechanisms for establishing content provenance and maintaining information integrity against threats like Confabulation (Hallucinations) and data poisoning.
Sponsor:
www.cisomarketplace.services
정보
- 프로그램
- 주기매일 업데이트
- 발행일2025년 11월 4일 오전 11:47 UTC
- 길이16분
- 에피소드316
- 등급전체 연령 사용가