Join Steve Wilson and Ben Lorica for a discussion of AI security. We all know that AI brings new vulnerabilities into the software landscape. Steve and Ben talk about what makes AI different, what the big risks are, and how you can use AI safely. Find out how agents introduce their own vulnerabilities, and learn about resources such as OWASP that can help you understand them. Is there a light at the end of the tunnel? Can AI help us build secure systems even as it introduces its own vulnerabilities? Listen to find out.
Points of Interest
- 0:49: Now that AI tools are more accessible, what makes LLM and agentic AI security fundamentally different from traditional software security?
- 1:20: There’s two parts. When you start to build software using AI technologies, there is a new set of things to worry about. When your software is getting near to human-level smartness, the software is subject to the same issues as humans: It can be tricked and deceived. The other part is what the bad guys are doing when they have access to frontier-class AIs.
- 2:16: In your work at OWASP, you listed the top 10 vulnerabilities for LLMs. What are the top one or two risks that are causing the most serious problems?
- 2:42: I’ll give you the top three. The first one is prompt injection. By feeding data to the LLM, you can trick the LLM into doing something the developers didn’t intend.
- 3:03: Next is the AI supply chain. The AI supply chain is much more complicated than the traditional supply chain. It’s not just open source libraries from GitHub. You’re also dealing with gigabytes of model weights and terabytes of training data, and you don’t know where they’re coming from. And sites like Hugging Face have malicious models uploaded to them.
- 3:49: The last one is sensitive information disclosure. Bots are not good at knowing what they should not talk about. When you put them into production and give them access to important information, you run the risk that they will disclose information to the wrong people.
- 4:25: For supply chain security, when you install something in Python, you’re also installing a lot of dependencies. And everything is democratized, so people can do a little on their own. What can people do about supply chain security?
- 5:18: There are two flavors: I’m building software that includes the use of a large language model. If I want to get Llama from Meta as a component, that includes gigabytes of floating point numbers. You need to put some skepticism around what you’re getting.
- 6:01: Another hot topic is vibe coding. People who have never programmed or haven’t programmed in 20 years are coming back. There are problems like hallucinations. With generated code, they will make up the existence of a software package. They’ll write code that imports that. And attackers will create malicious versions of those packages and put them on GitHub so that people will install them.
- 7:28: Our ability to generate code has gone up 10x to 100x. But our ability to security check and quality check hasn’t. For people starting, get some basic awareness of the concepts around application security and what it means to manage the supply chain.
- 7:57: We need a different generation of software composition environment tools that are designed to work with vibe coding and integrate into environments like Cursor.
- 8:44: We have good basic guidelines for users: Does a library have a lot of users? A lot of downloads? A lot of stars on GitHub? There are basic indications. But professional developers augment that with tooling. We need to bring those tools into vibe coding.
- 9:20: What’s your sense of the maturity of guardrails?
- 9:50: The good news is that the ecosystem around guardrails started really soon after ChatGPT came out. Things at the top of the OWASP Top 10, prompt injection and information disclosure, indicated that you needed to police the trust boundaries around your LLM.
정보
- 프로그램
- 주기격주 업데이트
- 발행일2025년 9월 10일 오후 2:00 UTC
- 길이43분
- 시즌1
- 에피소드21
- 등급전체 연령 사용가