Resilient Cyber

Chris Hughes

Resilient Cyber brings listeners discussions with Cybersecurity and Information Technology (IT) subject matter experts (SMEs) from the public and private sectors across a variety of industries. As our society becomes increasingly digitalized, striving for a secure and resilient ecosystem is paramount.

  1. 4D AGO

    Exploiting AI IDEs

    In this episode of Resilient Cyber, I sit down with Ari Marzuk, the researcher who published "IDEsaster," research documenting a novel vulnerability class in AI IDEs. We discuss the rise of AI-driven development and modern AI coding assistants, tools, and agents; how Ari discovered 30+ vulnerabilities impacting some of the most widely used AI coding tools; and the broader risks around AI coding.

    Topics covered include:

    - Ari's background in offensive security: Ari has spent the past decade in offensive security, including time with Israeli military intelligence, NSO Group, Salesforce, and currently Microsoft, with a focus on AI security for the last two to three years.
    - IDEsaster, a new vulnerability class: Ari's research uncovered 30+ vulnerabilities and 24 CVEs across AI-powered IDEs, revealing not just individual bugs but an entirely new vulnerability class rooted in the shared base IDE layer that tools like Cursor, Copilot, and others are built on.
    - "Secure for AI" as a design principle: Ari argues that legacy IDEs were never built with autonomous AI agents in mind, and that the same gap likely exists across CI/CD pipelines, cloud environments, and collaboration tools as organizations race to bolt on AI capabilities.
    - Low barrier to exploitation: The vulnerabilities Ari found don't require nation-state sophistication to exploit; techniques like remote JSON schema exfiltration can be carried out with relatively simple prompt engineering and publicly known attack vectors.
    - Human-in-the-loop is losing its effectiveness: Even with diff preview and approval controls enabled, exfiltration attacks still triggered in Ari's testing, and approval fatigue from hundreds of agent-generated actions is pushing developers toward YOLO mode.
    - Least privilege and the capability vs. security trade-off: The same unrestricted access that makes AI coding agents so productive is what makes them vulnerable, and history suggests organizations will continue to optimize for utility over security without strong guardrails.
    - Top defensive recommendations: Ari emphasized isolation (containers, VMs) as the single most important control, followed by enforcing secure defaults that can't be easily overridden, and applying enterprise-level monitoring and governance to AI agent usage (a minimal sketch follows this list).
    - What's next: Ari is turning his attention to newer AI tools and attack surfaces but isn't naming targets yet. You can follow his work on LinkedIn, X, and his blog at makarita.com.
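
    To make the isolation recommendation concrete, here is a minimal sketch of running an agent-proposed command inside a locked-down container rather than directly on a developer's machine. It assumes Docker is available; the image name, flags, and resource limits are illustrative choices, not Ari's specific guidance.

```python
# Minimal sketch (illustrative, not from the episode): execute an AI coding agent's
# proposed shell command inside a throwaway, network-less Docker container so that a
# prompt-injected command cannot touch the host or exfiltrate data over the network.
import subprocess


def run_agent_command(command: str, workspace: str) -> subprocess.CompletedProcess:
    """Run an agent-proposed command with isolation and least privilege."""
    docker_cmd = [
        "docker", "run", "--rm",
        "--network", "none",                 # no outbound network: blocks exfiltration paths
        "--read-only",                       # immutable root filesystem
        "--cap-drop", "ALL",                 # drop all Linux capabilities
        "--memory", "512m", "--cpus", "1",   # cap resource usage
        "-v", f"{workspace}:/workspace:rw",  # only the project directory is writable
        "-w", "/workspace",
        "python:3.12-slim",                  # placeholder base image
        "sh", "-c", command,
    ]
    return subprocess.run(docker_cmd, capture_output=True, text=True, timeout=300)


if __name__ == "__main__":
    result = run_agent_command("python -m pytest -q", "/path/to/project")
    print(result.stdout or result.stderr)
```

    The design choice mirrors the episode's point: isolation plus secure defaults (no network, read-only root, dropped capabilities) limits the blast radius even when human-in-the-loop approval fails.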

    25 min
  2. FEB 10

    AI is Ready for Production - Security, Risk and Compliance Isn't

    In this episode of Resilient Cyber, I sit down with James Rice, VP of Product Marketing and Strategy at Protegrity. We discuss why traditional approaches to security aren't solving the AI security challenge, the importance of data-centric approaches to secure AI implementation, and how to address issues such as AI data leakage. James and I dove into a lot of great topics, including:

    - Why traditional perimeter-based and infrastructure-centric security models are failing in the era of AI, and why organizations need to fundamentally rethink their approach to securing AI workloads.
    - The concept of data-centric security, protecting the data itself rather than the systems surrounding it, and why this shift is critical as data flows across cloud platforms, AI models, and agentic workflows.
    - The growing risk of AI data leakage and how sensitive information (PII, PHI, PCI, intellectual property) can inadvertently be exposed through AI training data, model outputs, prompt injection, and RAG pipelines.
    - Why many organizations find themselves stuck in an "AI circularity": wanting to leverage AI but unable to do so because of the complexity of securing critical business data throughout the AI lifecycle.
    - The importance of embedding security controls inline within the AI pipeline, from data ingestion and model training to orchestration and output, rather than bolting security on after the fact.
    - How data protection techniques such as tokenization, anonymization, dynamic masking, and format-preserving encryption can enable organizations to use realistic, context-rich data for AI while maintaining compliance and reducing risk (a minimal sketch appears below).
    - The challenge of securing agentic AI workflows, where autonomous agents continuously interact with enterprise data, making traditional access control models insufficient.
    - How organizations can balance the need for AI innovation and data utility with regulatory compliance requirements across frameworks like GDPR, HIPAA, PCI DSS, and emerging AI-specific regulations.
    - James's perspective on how security, risk, and compliance functions need to evolve to keep pace with the rapid productionization of AI across the enterprise.
    - The role of semantic guardrails in governing AI inputs and outputs, ensuring that protection is applied contextually based on how data is being used, not just where it resides.

    About the Guest
    James Rice is VP of Product Marketing and Strategy at Protegrity, a global leader in data-centric security. He brings over 20 years of experience in security, risk, and compliance, having provided solution engineering, value engineering, and implementation services to Fortune 1000 organizations across industries. Prior to Protegrity, James held leadership roles at Pathlock (formerly Greenlight Technologies), Accenture, and PricewaterhouseCoopers.

    About Protegrity
    Protegrity is a data-centric security platform that protects sensitive data across hybrid, multi-cloud, and AI environments. Their approach embeds security directly into the data itself, enabling enterprises to unlock insights, accelerate innovation, and meet global compliance requirements with confidence. Protegrity's solutions include data discovery and classification, tokenization, anonymization, dynamic masking, and semantic guardrails for AI and analytics workflows. Learn more at protegrity.com
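
    To illustrate the data-centric idea in the topic list above, here is a minimal sketch of tokenizing sensitive values before text reaches an AI model. It assumes a simple in-memory token vault and regex-based detection; it is not Protegrity's API, and production systems would rely on hardened tokenization or format-preserving encryption services instead.

```python
# Minimal sketch of data-centric protection: tokenize sensitive fields before a prompt
# is sent to an AI model, and detokenize only for authorized consumers of the output.
# The in-memory vault and regex patterns are simplified stand-ins, not a product API.
import re
import secrets

_TOKEN_VAULT: dict[str, str] = {}  # token -> original value (in practice, a secured service)


def tokenize(value: str, prefix: str) -> str:
    """Replace a sensitive value with a reversible token."""
    token = f"{prefix}_{secrets.token_hex(4)}"
    _TOKEN_VAULT[token] = value
    return token


def protect_prompt(prompt: str) -> str:
    """Mask emails and SSN-like patterns before sending text to a model."""
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", lambda m: tokenize(m.group(), "EMAIL"), prompt)
    prompt = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", lambda m: tokenize(m.group(), "SSN"), prompt)
    return prompt


def detokenize(text: str) -> str:
    """Restore original values for authorized workflows."""
    for token, original in _TOKEN_VAULT.items():
        text = text.replace(token, original)
    return text


if __name__ == "__main__":
    safe = protect_prompt("Customer jane.doe@example.com (SSN 123-45-6789) reports a billing issue.")
    print(safe)              # model sees tokens, not raw PII
    print(detokenize(safe))  # authorized consumers can recover the original values
```

    The point of the sketch is that protection travels with the data: the model only ever sees tokens, so leakage through training data, model outputs, or RAG pipelines exposes placeholders rather than the underlying PII.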

    26 min
4.8 out of 5 (17 Ratings)

