Exploring Information Security

Timothy De Block

The Exploring Information Security podcast interviews a different professional each week exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.

  1. 17 hours ago

    How AI Will Transform Society and Affect the Cybersecurity Field

    Summary: Timothy De Block sits down with Ed Gaudet, CEO of Censinet and a fellow podcaster, for a wide-ranging conversation on the rapid, transformative impact of Artificial Intelligence (AI). Ed Gaudet characterizes AI as a fast-moving "hammer" that will drastically increase productivity and reshape the job market, potentially eliminating junior software development roles. The discussion also covers the societal risks of AI, the dangerous draw of "digital cocaine" (social media), and Censinet's essential role in managing complex cyber and supply chain risks for healthcare organizations.

    Key Takeaways

    AI's Transformative & Disruptive Force
    - A Rapid Wave: Ed Gaudet describes the adoption of AI, particularly chat functionality, as a rapid, transformative wave that outpaces internet and cloud adoption because of its instant accessibility.
    - Productivity Gains: AI promises immense productivity, with the potential for tasks that once required 100 people and a year to be completed by three people in a month.
    - The Job Market Shift: AI is expected to eliminate junior software development roles by abstracting complexity. This raises concerns about a future developer shortage as senior architects retire without an adequate pipeline of talent.
    - Adaptation, Not Doom: While acknowledging significant risks, Ed Gaudet maintains that humanity will adapt to AI as a tool (a "hammer") that will enhance cognitive capacity and productivity rather than make people "dumber."
    - The Double-Edged Sword: Concerns remain over nefarious uses of AI, such as deepfakes used in fraudulent job applications, underscoring the ongoing struggle between good and evil in technology.

    Cyber Risk in Healthcare and Patient Safety
    - Cyber Safety Is Patient Safety: Because technology is deeply integrated into healthcare processes, cyber safety is now directly linked to patient safety.
    - Real-World Consequences: Cyber attacks have resulted in canceled procedures and diverted ambulances, illustrating the tangible threat to human life.
    - Censinet's Role: Censinet helps healthcare systems manage third-party, enterprise cyber, and supply chain risks at scale, focusing on proactively addressing future threats rather than past ones.
    - Patient Advocacy: AI concierge services have the potential to boost patient engagement, enabling individuals to become stronger advocates for their own health through accessible second opinions.

    Technology's Impact on Mental Health & Life
    - "Digital Cocaine": Ed Gaudet likened excessive phone and social media use, particularly among younger generations, to "digital cocaine": short-term highs with no nutritional value that promote technological dependence.
    - Life-Changing Tools: Ed Gaudet shared a powerful personal story of overcoming alcoholism with the help of the Reframe app, emphasizing that the right technology, used responsibly, can have a profound, life-changing impact on mental health.

    Resources & Links Mentioned
    - Censinet: Ed Gaudet's company, specializing in third-party and enterprise risk management for healthcare.
    - Reframe App: The application Ed Gaudet used in his recovery from alcoholism, highlighting the power of technology for mental health.

    48 min
  2. October 14

    Exploring AI, APIs, and the Social Engineering of LLMs

    Summary: Timothy De Block is joined by Keith Hoodlet, Engineering Director at Trail of Bits, for a fascinating, in-depth look at AI red teaming and the security challenges posed by Large Language Models (LLMs). They discuss how prompt injection is effectively a new form of social engineering against machines, exploiting the training data's inherent human biases and logical flaws. Keith breaks down the mechanics of LLM inference, the rise of middleware for AI security, and cutting-edge attacks using everything from emojis and bad grammar to weaponized image scaling. The episode stresses that the fundamental solutions (logging, monitoring, and robust security design) are timeless principles applied to a terrifyingly fast-moving frontier.

    Key Takeaways

    The Prompt Injection Threat
    - Social Engineering the AI: Prompt injection exploits the LLM's vast training data, which includes much of human history in digital form, movies and fiction included. Attackers use techniques that mirror social engineering to trick the model into doing something it's not supposed to, such as a customer service chatbot issuing an unauthorized refund.
    - Business Logic Flaws: Successful prompt injections are often tied to business logic flaws or a lack of proper checks and guardrails, similar to vulnerabilities seen in traditional applications and APIs.
    - Novel Attack Vectors: Attackers are finding creative ways to bypass guardrails:
      - Image Scaling: Trail of Bits discovered how to weaponize image scaling to hide prompt injections inside images that appear benign to the user but pop out as visible text to the model when downscaled for inference.
      - Invisible Text: Attacks can use white text, zero-width characters (which don't show up when displayed or highlighted), or Unicode character smuggling in emails or prompts to covertly inject instructions.
      - Syntax & Emojis: Research has shown that bad grammar, run-on sentences, or even a simple sequence of emojis can trigger prompt injections or jailbreaks.

    Defense and Design
    - LLM Security Is API Security: Because LLMs rely on APIs for their "tool access" and to perform actions (like sending an email or issuing a refund), security comes down to the same principles used for APIs: proper authorization, access control, and eliminating misconfiguration.
    - The Middleware Layer: Some companies place middleware between their application and the frontier LLMs (like GPT or Claude) to handle system prompting, guardrailing, and prompt filtering, effectively acting as a Web Application Firewall (WAF) for LLM API calls. A minimal sketch of this idea appears after the resource list below.
    - Security Design Patterns: Design patterns are key to defending against prompt injection:
      - Action-Selector Pattern: Instead of a free-text field, users click pre-defined buttons that limit the model to a specific set of safe actions.
      - Code-Then-Execute Pattern (CaMeL): A first LLM writes code (e.g., Pythonic code) from the natural language prompt, and a second, quarantined LLM executes that safer code.
      - Map-Reduce Pattern: The prompt is broken into smaller chunks, processed, and then passed to another model, making it harder for a prompt injection to persist across the process.
    - Timeless Hygiene: The most critical defenses are logging, monitoring, and alerting. Log prompts and outputs, and monitor for abnormal behavior, such as a user suddenly querying a database thousands of times a minute or asking a chatbot to write Python code.

    Resources & Links Mentioned
    - Trail of Bits Research: Blog: blog.trailofbits.com; Company Site: trailofbits.com
    - Weaponizing image scaling against production AI systems
    - Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
    - Securing LLM Agents Paper: Design Patterns for Securing LLM Agents against Prompt Injections
    - CaMeL Prompt Injection
    - Defending LLM applications against Unicode character smuggling
    - Logit-Gap Steering: Efficient Short-Suffix Jailbreaks for Aligned Large Language Models
    - LLM Explanation: 3Blue1Brown has a great short video explaining how Large Language Models work.
    - Lakera Gandalf: A game for learning how to use prompt injection against AI.
    - Keith Hoodlet's Personal Sites: securing.dev and thought.dev
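    The sketch below is not code from the episode or from Trail of Bits; it is a minimal illustration, under assumed names and thresholds, of the "middleware in front of the LLM" and "timeless hygiene" ideas discussed above: strip invisible characters that can smuggle instructions, rate-limit per user, and log every prompt and response before the call ever reaches a model.

```python
# Minimal, illustrative LLM middleware sketch (hypothetical names: guarded_call,
# sanitize, allowed). It wraps whatever model client you supply as model_fn.
import logging
import re
import time
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-middleware")

# Zero-width and direction-control characters often abused for invisible prompt injection.
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff\u202a-\u202e]")

RATE_LIMIT = 30                      # max requests per user per minute (illustrative)
_requests = defaultdict(deque)       # user_id -> timestamps of recent requests


def sanitize(prompt: str) -> str:
    """Remove zero-width/control characters that can hide instructions."""
    return ZERO_WIDTH.sub("", prompt)


def allowed(user_id: str) -> bool:
    """Simple sliding-window rate limit per user."""
    now = time.time()
    window = _requests[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    window.append(now)
    return len(window) <= RATE_LIMIT


def guarded_call(user_id: str, prompt: str, model_fn) -> str:
    """Sanitize, rate-limit, call the model, and log both sides of the exchange."""
    if not allowed(user_id):
        log.warning("rate limit exceeded for user=%s", user_id)
        return "Request rejected: too many requests."
    clean = sanitize(prompt)
    if clean != prompt:
        log.warning("invisible characters stripped from prompt for user=%s", user_id)
    log.info("prompt user=%s: %r", user_id, clean)
    response = model_fn(clean)       # model_fn is any LLM client call you supply
    log.info("response user=%s: %r", user_id, response)
    return response


if __name__ == "__main__":
    # Stand-in for a real LLM client; echoes prompt length back.
    echo_model = lambda p: f"(model saw {len(p)} characters)"
    print(guarded_call("alice", "Refund order 42\u200b please", echo_model))
```

    In practice the same wrapper is where guardrail checks and alerting thresholds would live, so abnormal behavior (thousands of queries a minute, code-writing requests to a support bot) surfaces in the logs rather than going unnoticed.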

    52 min
  3. September 23

    Exploring the Rogue AI Agent Threat

    Summary: In a unique live recording, Timothy De Block is joined by Sam Chehab from Postman to tackle the intersection of AI and API security. The conversation goes beyond the hype of AI-created malware to focus on a more subtle yet pervasive threat: "rogue AI agents." The speakers define these as sanctioned AI tools that, when misconfigured or given improper permissions, can cause significant havoc by misbehaving and exposing sensitive data. The episode emphasizes that this risk is not new, but an exacerbation of classic hygiene problems.

    Key Takeaways
    - Defining "Rogue AI Agents": Sam Chehab defines a "rogue AI agent" as a sanctioned AI tool that misbehaves because of misconfiguration, often exposing data it shouldn't have access to. He likens it to an enterprise search tool in the early 2000s that crawled an intranet and surfaced things it wasn't supposed to.
    - The AI-API Connection: An AI agent is composed of six components, and the "tool" component is where it interacts with APIs. The speakers note that an AI's APIs are its "arms and legs" and are often where it gets into trouble.
    - The Importance of Security Hygiene: The core of the solution is to "go back to basics" with good hygiene: building APIs with an OpenAPI spec, enforcing schemas, and using single-purpose logins for integrations to improve traceability (see the sketch after this episode's notes).
    - The Rise of the "Citizen Developer": The conversation highlights a new security vector: non-developers, or "citizen developers," in departments like HR and finance building their own agents with enterprise tools. These individuals often lack security fundamentals, and their workflows are a "ripe area for risk."
    - AI's Role in Development: Sam and Timothy discuss how AI can augment a developer's capabilities, but a human is still needed in the process. The Veracode report notes that AI-generated code is secure only about 45% of the time, roughly on par with human-written code. The best approach is to use AI to fix specific lines of code in pre-commit rather than having it write entire applications.

    Resources & Links Mentioned
    - Postman State of the API Report: This report, which discusses API trends and security, will be released on October 8th. The speakers tease a follow-up episode to dive into its findings.
    - Veracode: The 2025 GenAI Code Security Report was mentioned in the discussion of AI-generated code.
    - GitGuardian: The State of Secrets Sprawl report was referenced as a key resource.
    - Cloudflare: Mentioned as a service for API Shield and monitoring API traffic.
    - News Sites: Sam Chehab recommends Security Affairs, The Hacker News, Cybernews, and Information Security Magazine for staying up to date.
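    The sketch below is not Postman's tooling or anything shown in the episode; it is a minimal illustration, with hypothetical field names and limits, of the "enforce schemas on agent tool calls" hygiene point: validate an agent-issued API call against a declared schema and tie it to a single-purpose integration identity before anything executes.

```python
# Illustrative schema check for a hypothetical "issue_refund" tool call made by an AI agent.
from dataclasses import dataclass

# Schema for the hypothetical tool: field name -> (expected type, required)
REFUND_SCHEMA = {
    "order_id": (str, True),
    "amount":   (float, True),
    "reason":   (str, False),
}

MAX_REFUND = 100.0  # business-logic guardrail, illustrative value


@dataclass
class ToolCall:
    integration_id: str   # single-purpose identity, so the call is traceable
    tool: str
    args: dict


def validate(call: ToolCall) -> list[str]:
    """Return a list of problems; an empty list means the call may proceed."""
    problems = []
    for field, (ftype, required) in REFUND_SCHEMA.items():
        if field not in call.args:
            if required:
                problems.append(f"missing required field: {field}")
        elif not isinstance(call.args[field], ftype):
            problems.append(f"{field} must be {ftype.__name__}")
    for field in call.args:
        if field not in REFUND_SCHEMA:
            problems.append(f"unexpected field: {field}")
    amount = call.args.get("amount")
    if isinstance(amount, float) and amount > MAX_REFUND:
        problems.append("amount exceeds refund guardrail")
    return problems


if __name__ == "__main__":
    call = ToolCall("support-bot-refunds", "issue_refund",
                    {"order_id": "A-42", "amount": 250.0, "notes": "agent-added"})
    for problem in validate(call):
        print(f"[{call.integration_id}] rejected: {problem}")
```

    The same idea scales to any agent tool: publish the schema (for example, from an OpenAPI spec), reject anything that does not match it, and log rejections against the integration identity so misbehaving agents are easy to trace.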

    39 min
  4. September 16

    A conversation with Kyle Andrus on Info Stealers and Supply Chain Attacks

    Summary: In this episode, Timothy De Block sits down with guest Kyle Andrus to dissect the ever-evolving landscape of cyber threats, with a specific focus on info stealers. The conversation covers everything from personal work-life balance and career burnout to the increasing role of AI in security. They explore how info stealers operate as a "commodity" in the cybercriminal world, the continuous "cat and mouse game" with attackers, and the challenges businesses face in implementing effective cybersecurity measures.

    Key Takeaways
    - The AI Revolution in Security: The guests discuss how AI is improving job efficiency and security, particularly in data analytics, behavioral tracking, and automating low-level tasks like SOC operations and penetration testing. This automation allows security professionals to focus on more complex work. They also highlight the potential for AI misuse, such as in insider threat detection, and the "surveillance state" implications of tracking employee behavior.
    - The Info Stealer Threat: Info stealers are a prevalent threat, often arriving through "ClickFix" or fake-update campaigns that trick users into granting initial access or providing credentials. The data they collect, including credentials and session tokens, is sold on the dark web for as little as two to ten dollars, fueling further attacks by cybercriminals who buy access rather than perform initial reconnaissance themselves.
    - The Human and Business Challenge: As security controls improve, attackers increasingly rely on human interaction to compromise systems. The speakers emphasize that cybercriminals, "like water, follow the path of least resistance." The episode also highlights the significant challenge small to medium-sized businesses face in balancing risk mitigation with operational costs.
    - Software Supply Chain Attacks: The discussion touches on supply chain attacks, like the npm package breach and the Salesforce Drift breach, which targeted third parties and smaller companies with less mature security controls. They note the challenges of using Software Bills of Materials (SBOMs) to assess the trustworthiness of open-source components.
    - Practical Cybersecurity Advice: The hosts discuss the need to rethink cybersecurity advice for non-tech-savvy individuals, as much of the current guidance is impractical and burdensome. While Timothy De Block sees the benefit of browser-based password managers when MFA is enabled, Kyle Andrus generally advises against storing passwords in browsers and recommends dedicated, more secure password managers.

    41 min
4.7 (out of 5)
43 Ratings

