AI Security Ops

Black Hills Information Security

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

  1. AI News Stories | Episode 36

    22 JAN

    AI News Stories | Episode 36

    This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools. Key stories discussed 1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.htmlThe hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer. ai-news-stories-episode-36Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system. ai-news-stories-episode-362) “Zombie agent” prompt injection via ChatGPT Memory https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injectionThe team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem. ai-news-stories-episode-36User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved. ai-news-stories-episode-363) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection) https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection—a pattern they expect to keep seeing as assistants gain more connectivity. ai-news-stories-episode-36Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history. ai-news-stories-episode-364) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users) https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.htmlTwo Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe. ai-news-stories-episode-36Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism. ai-news-stories-episode-365) APT28 credential phishing updated with AI-written lures https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.htmlThe closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is AI-generated, making it more consistent/convincing (and harder for users to spot via grammar/tone). ai-news-stories-episode-36The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide). ai-news-stories-episode-36Chapter Timestamps (00:00) - Intro & Sponsors (01:16) - 1) n8n zero-day → unauthenticated RCE (09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory (19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection) (23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users) (29:59) - 5) APT28 phishing refreshed with AI-written lures (34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com

    35 min
  2. Community Q&A on AI Security | Episode 34

    18/12/2025

    Community Q&A on AI Security | Episode 34

    Community Q&A on AI Security | Episode 34 In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity. We break down: Why LLMs sometimes “make stuff up” and how to reduce hallucinationsThe role of prompts, temperature, and RAG databases in accuracyPrompting best practices and reasoning modes for better resultsLegal liability: Can you sue ChatGPT for bad advice?Memory features, data retention, and privacy trade-offsSecurity paranoia: AI apps, trust, and enterprise vs free accountsPractical examples like customizing AI for writing styleHow to explain AI to your mom (or any non-technical audience)Why AI isn’t magic—just math and advanced auto-completeWhether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly. Chapters (00:00) - Welcome & Sponsor Shoutouts (00:50) - Episode Overview: Community Q&A (01:19) - Q1: Will ChatGPT Make Stuff Up? (07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases? (11:15) - Q3: How Can AI Improve Without Ingesting Everything? (22:04) - Q4: How Do You Explain AI to Non-Technical People? (28:00) - Closing Remarks & Training Plug Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/ Active Countermeasureshttps://www.activecountermeasures.com Wild West Hackin Festhttps://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.com ----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    28 min
  3. AI News Stories | Episode 33

    11/12/2025

    AI News Stories | Episode 33

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –  https://poweredbybhis.com AI News | Episode 33In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface. We break down: AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.Amazon’s private AI bug bounty: Nova models under the microscope.Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead. ⏱️ Chapters (00:00) - Intro & Sponsor Shoutouts (01:27) - AI-Orchestrated Cyber Espionage (Anthropic) (08:10) - ShadowMQ: Critical RCE in AI Inference Engines (09:54) - KawaiiGPT: Free Black-Hat LLM (22:45) - Amazon Nova: Private AI Bug Bounty (26:38) - Google Antigravity IDE Hacked in 24 Hours (31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism 🔗 LinksAI-Orchestrated Cyber Espionage (Anthropic)ShadowMQ: Critical RCE in AI Inference EnginesKawaiiGPT: Free Black-Hat LLMAmazon Nova: Private AI Bug BountyGoogle Antigravity IDE Hacked in 24 HoursPROMPTFLUX: Malware Using Gemini for Polymorphism#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malwareBrought to you by Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ ---------------------------------------------------------------------------------------------- Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/ Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/ Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/ Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/ Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    37 min
  4. Model Evasion Attacks | Episode 32

    04/12/2025

    Model Evasion Attacks | Episode 32

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –  https://poweredbybhis.com Model Evasion Attacks | Episode 32In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity. We break down:- What model evasion attacks are and how they differ from data poisoning- How attackers tweak features to bypass classifiers (images, phishing, malware)- Real-world tactics like model extraction and trial-and-error evasion- Why non-determinism in AI models makes evasion harder to predict- Advanced threats: model theft, ablation, and adversarial AI- Defensive strategies: adversarial training, API throttling, and realistic expectations- Future outlook: regulatory trends, transparency, and the ongoing arms race Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it. #AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #aithreats Brought to you by Black Hills Information Security  https://www.blackhillsinfosec.com ---------------------------------------------------------------------------------------------- Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/ Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/ Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/ Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/ Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ (00:00) - Intro & Sponsor Shoutouts (01:19) - What Are Model Evasion Attacks? (03:58) - Image Classifiers & Pixel Tweaks (07:01) - Malware Classification & Decision Boundaries (10:02) - Model Theft & Extraction Attacks (13:16) - Non-Determinism & Myth Busting (16:07) - AI in Offensive Capabilities (17:36) - Defensive Strategies & Adversarial Training (20:54) - Vendor Questions & Transparency (23:22) - Future Outlook & Regulatory Trends (25:54) - Panel Takeaways & Closing Thoughts

    29 min
  5. Data Poisoning | Episode 31

    27/11/2025

    Data Poisoning | Episode 31

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –  https://poweredbybhis.com Data Poisoning Attacks | Episode 31In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems. We break down: What data poisoning is and why it mattersHow attackers inject malicious samples or flip labels in training setsThe role of open-source repositories like Hugging Face in supply chain riskNew twists for LLMs: poisoning via reinforcement feedback and RAGReal-world concerns like bias in ChatGPT and malicious model uploadsDefensive strategies: governance, provenance, versioning, and security assessmentsWhether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything. #aisecurity  #DataPoisoning #Cybersecurity #BHIS #llmsecurity  #aithreats Brought to you by Black Hills Information Security  https://www.blackhillsinfosec.com ---------------------------------------------------------------------------------------------- Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/ Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/ Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/ Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/ Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ (00:00) - Intro & Sponsor Shoutouts (01:19) - What Is Data Poisoning? (03:58) - Poisoning Classifier Models (08:10) - Risks in Open-Source Data Sets (12:30) - LLM-Specific Poisoning Vectors (17:04) - RAG and Context Injection (21:25) - Realistic Threats & Examples (25:48) - Defensive Strategies & Governance (28:27) - Panel Takeaways & Closing Thoughts

    31 min

About

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

You Might Also Like