Resilient Cyber

Chris Hughes

Resilient Cyber brings listeners discussions with a variety of Cybersecurity and Information Technology (IT) Subject Matter Experts (SMEs) across the public and private domains and a wide range of industries. As we watch the increasing digitalization of our society, striving for a secure and resilient ecosystem is paramount.

  1. 3 days ago

    Securing the Future with Autonomous Defense

    Summary: In this conversation, Chris Hughes and Stanislav Fort discuss the transformative role of AI in cybersecurity, particularly in vulnerability management. Stanislav shares insights on how AI can discover zero-day vulnerabilities in widely used codebases, the challenges of balancing AI-driven discoveries with quality assurance, and the importance of proactive security measures. They also explore the economic sustainability of AI in cybersecurity, the burden on maintainers, and the ongoing arms race between defenders and attackers. The discussion emphasizes the potential for AI to significantly enhance software security and the aspiration towards achieving zero vulnerabilities in critical infrastructure.

    Takeaways:
    - AI is revolutionizing vulnerability management in cybersecurity.
    - The ability to find long-hidden vulnerabilities is unprecedented.
    - AI can enhance both offensive and defensive security measures.
    - Proactive security integration into development pipelines is essential.
    - The quality of vulnerability reports is declining due to AI-generated noise.
    - Maintainers face increasing burdens from rapid AI-driven discoveries.
    - AI can help secure open source projects effectively.
    - Sustainability in AI cybersecurity requires financial backing.
    - The arms race between attackers and defenders is intensifying with AI.
    - Achieving zero vulnerabilities is an aspirational yet achievable goal.
    Chapters:
    00:00 Introduction to AI in Cybersecurity
    02:52 The Evolution of AI and Vulnerability Discovery
    05:45 AI's Impact on Software Development
    08:59 Discovering Zero-Day Vulnerabilities
    11:48 The Great Bifurcation in Security Research
    14:52 Balancing AI-Driven Discoveries and Quality
    17:59 Proactive Security Measures in Software Development
    20:53 The Role of AI in Securing Open Source Projects
    23:54 Sustainability of AI in Cybersecurity
    27:07 Addressing the Burden on Maintainers
    30:09 The Tension Between Autonomy and Security
    33:03 The Arms Race Between Defenders and Attackers
    36:12 Aiming for Zero Vulnerabilities
    38:58 Conclusion and Future Outlook

    41 min
  2. February 17

    Exploiting AI IDEs

    In this episode of Resilient Cyber, I sit down with Ari Marzuk, the researcher who published "IDEsaster," a novel vulnerability class in AI IDEs. We discuss the rise of AI-driven development and modern AI coding assistants, tools, and agents; how Ari discovered 30+ vulnerabilities impacting some of the most widely used AI coding tools; and the broader risks around AI coding.

    - Ari's background in offensive security: Ari has spent the past decade in offensive security, including time with Israeli military intelligence, NSO Group, Salesforce, and currently Microsoft, with a focus on AI security for the last two to three years.
    - IDEsaster, a new vulnerability class: Ari's research uncovered 30+ vulnerabilities and 24 CVEs across AI-powered IDEs, revealing not just individual bugs but an entirely new vulnerability class rooted in the shared base IDE layer that tools like Cursor, Copilot, and others are built on.
    - "Secure for AI" as a design principle: Ari argues that legacy IDEs were never built with autonomous AI agents in mind, and that the same gap likely exists across CI/CD pipelines, cloud environments, and collaboration tools as organizations race to bolt on AI capabilities.
    - Low barrier to exploitation: The vulnerabilities Ari found don't require nation-state sophistication to exploit; techniques like remote JSON schema exfiltration can be carried out with relatively simple prompt engineering and publicly known attack vectors.
    - Human-in-the-loop is losing its effectiveness: Even with diff preview and approval controls enabled, exfiltration attacks still triggered in Ari's testing, and approval fatigue from hundreds of agent-generated actions is pushing developers toward YOLO mode.
    - Least privilege and the capability vs. security trade-off: The same unrestricted access that makes AI coding agents so productive is what makes them vulnerable, and history suggests organizations will continue to optimize for utility over security without strong guardrails.
    - Top defensive recommendations: Ari emphasized isolation (containers, VMs) as the single most important control, followed by enforcing secure defaults that can't be easily overridden, and applying enterprise-level monitoring and governance to AI agent usage.
    - What's next: Ari is turning his attention to newer AI tools and attack surfaces but isn't naming targets yet. You can follow his work on LinkedIn, X, and his blog at makarita.com.
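The "secure defaults" and least-privilege ideas above can be sketched in a few lines. This is a hypothetical illustration, not code from any real AI IDE: `WorkspaceGuard` and `agent_write` are invented names, and real isolation would additionally rely on containers or VMs, as Ari recommends.

```python
import os


class WorkspaceGuard:
    """Confine an agent's file writes to a single workspace directory."""

    def __init__(self, workspace: str):
        # Resolve symlinks up front so later path comparisons are meaningful.
        self.root = os.path.realpath(workspace)

    def is_allowed(self, path: str) -> bool:
        # realpath() defeats '../' traversal and symlink escapes; an
        # absolute path in `path` simply replaces the join result, so it
        # is caught by the prefix check as well.
        target = os.path.realpath(os.path.join(self.root, path))
        return target == self.root or target.startswith(self.root + os.sep)

    def agent_write(self, path: str, data: str) -> None:
        # Secure default: refuse, rather than warn, on an escape attempt.
        if not self.is_allowed(path):
            raise PermissionError(f"write outside workspace blocked: {path}")
        with open(os.path.join(self.root, path), "w") as f:
            f.write(data)
```

The key design point is that the check is enforced in the write path itself rather than in an approval dialog, so approval fatigue cannot bypass it.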

    25 min
  3. February 10

    AI is Ready for Production - Security, Risk and Compliance Isn't

    In this episode of Resilient Cyber, I sit down with James Rice, VP of Product Marketing and Strategy at Protegrity. We discuss how traditional approaches to security aren't solving the AI security challenge, the importance of data-centric approaches for secure AI implementation, and addressing issues such as AI data leakage. James and I dove into a lot of great topics, including:

    - Why traditional perimeter-based and infrastructure-centric security models are failing in the era of AI, and why organizations need to fundamentally rethink their approach to securing AI workloads.
    - The concept of data-centric security (protecting the data itself rather than the systems surrounding it) and why this shift is critical as data flows across cloud platforms, AI models, and agentic workflows.
    - The growing risk of AI data leakage and how sensitive information (PII, PHI, PCI, intellectual property) can inadvertently be exposed through AI training data, model outputs, prompt injection, and RAG pipelines.
    - Why many organizations find themselves stuck in an "AI circularity": wanting to leverage AI but unable to do so because of the complexity of securing critical business data throughout the AI lifecycle.
    - The importance of embedding security controls inline within the AI pipeline, from data ingestion and model training to orchestration and output, rather than bolting security on after the fact.
    - How data protection techniques such as tokenization, anonymization, dynamic masking, and format-preserving encryption can enable organizations to use realistic, context-rich data for AI while maintaining compliance and reducing risk.
    - The challenge of securing agentic AI workflows, where autonomous agents continuously interact with enterprise data, making traditional access control models insufficient.
    - How organizations can balance the need for AI innovation and data utility with regulatory compliance requirements across frameworks like GDPR, HIPAA, PCI DSS, and emerging AI-specific regulations.
    - James's perspective on how security, risk, and compliance functions need to evolve to keep pace with the rapid productionization of AI across the enterprise.
    - The role of semantic guardrails in governing AI inputs and outputs, ensuring that protection is applied contextually based on how data is being used, not just where it resides.

    About the Guest

    James Rice is VP of Product Marketing and Strategy at Protegrity, a global leader in data-centric security. He brings over 20 years of experience in security, risk, and compliance, having provided solution engineering, value engineering, and implementation services to Fortune 1000 organizations across industries. Prior to Protegrity, James held leadership roles at Pathlock (formerly Greenlight Technologies), Accenture, and PricewaterhouseCoopers.

    About Protegrity

    Protegrity is a data-centric security platform that protects sensitive data across hybrid, multi-cloud, and AI environments. Their approach embeds security directly into the data itself, enabling enterprises to unlock insights, accelerate innovation, and meet global compliance with confidence. Protegrity's solutions include data discovery and classification, tokenization, anonymization, dynamic masking, and semantic guardrails for AI and analytics workflows. Learn more at protegrity.com
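Two of the data protection techniques mentioned in the episode, tokenization and dynamic masking, can be sketched in a few lines of Python. This is a minimal illustration of the concepts, not Protegrity's implementation; the key, the `tok_` prefix, and the function names are all invented for the example, and a production system would manage keys in a vault and use vetted format-preserving encryption.

```python
import hmac
import hashlib

SECRET = b"demo-key"  # hypothetical key; in practice this lives in a vault


def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token.

    Keyed hashing (HMAC-SHA256) yields the same token for the same input,
    so joins and analytics still work without exposing the raw value.
    """
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:12]


def mask(value: str, keep: int = 4) -> str:
    """Dynamic masking: reveal only the last `keep` characters."""
    return "*" * max(len(value) - keep, 0) + value[-keep:]


# A record leaving the trust boundary keeps its shape but not its secrets.
record = {"name": "Jane Doe", "card": "4111111111111111"}
safe = {"name": tokenize(record["name"]), "card": mask(record["card"])}
```

Because the token is deterministic, downstream AI pipelines can still group and join on it, which is the "realistic, context-rich data" property the episode describes.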

    26 min
4.8 out of 5 (17 ratings)

