AI Security Ops

Black Hills Information Security

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

  1. AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    -1 ДН.

    AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40 In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations. From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today. We break down: - How AI helps reduce alert fatigue and improve triage- Practical automation inside a real-world SOC- The difference between traditional ML approaches and LLM-powered workflows- Foundational techniques like K-means, anomaly detection, and behavioral baselining- Using LLMs for enrichment, investigation, and report drafting- Where AI struggles: hallucinations, inconsistency, and edge cases- Risks around over-trusting AI in security operations- How to responsibly integrate AI into analyst workflows This episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security. (00:00) - Intro & Guest Introductions (04:44) - Alert Triage & SOC Pain Points (06:04) - Automation Inside the SOC (09:59) - “Boring AI”: Clustering, Baselining & Statistics (17:06) - AI-Assisted Reporting & Client Communication (18:34) - Limitations, Edge Cases & Model Risk (22:56) - Hallucinations & Inconsistent Outputs (25:04) - AI Demos vs. Real-World Security Work (28:35) - Final Thoughts & Closing Click here to watch this episode on YouTube. Creators & Guests Hayden Covington - Guest Ethan Robish - Guest Bronwen Aker - Host Derek Banks - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com

    29 мин.
  2. AI News | Episode 39

    12 ФЕВР.

    AI News | Episode 39

    AI News | Episode 39 In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure. We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now. Chapters:00:00 – Introduction and SponsorsBlack Hills Information Security - https://www.blackhillsinfosec.com/Antisyphon Training - https://www.antisyphontraining.com/ 01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)Discussion begins as the hosts introduce the first story.How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.👉 https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/ 08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)Conversation shifts to agent privilege management and governance failures.👉 https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html 10:07 – NIST Focus on Securing AI Agents in Critical InfrastructureDiscussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.👉 https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c 13:44 – Tenable One AI ExposureBreaking down Tenable’s push into enterprise AI usage visibility and exposure management.👉 https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhis Chapters (00:00) - Introduction and Sponsors (01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto) (10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure (13:44) - Tenable One AI Exposure Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Click here to watch this episode on YouTube. ----------------------------------------------------------------------------------------------About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com  Click here to view the episode transcript.

    18 мин.
  3. AI News Stories | Episode 36

    22 ЯНВ.

    AI News Stories | Episode 36

    This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools. Key stories discussed 1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.htmlThe hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer. ai-news-stories-episode-36Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system. ai-news-stories-episode-362) “Zombie agent” prompt injection via ChatGPT Memory https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injectionThe team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem. ai-news-stories-episode-36User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved. ai-news-stories-episode-363) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection) https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection—a pattern they expect to keep seeing as assistants gain more connectivity. ai-news-stories-episode-36Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history. ai-news-stories-episode-364) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users) https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.htmlTwo Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe. ai-news-stories-episode-36Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism. ai-news-stories-episode-365) APT28 credential phishing updated with AI-written lures https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.htmlThe closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is AI-generated, making it more consistent/convincing (and harder for users to spot via grammar/tone). ai-news-stories-episode-36The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide). ai-news-stories-episode-36Chapter Timestamps (00:00) - Intro & Sponsors (01:16) - 1) n8n zero-day → unauthenticated RCE (09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory (19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection) (23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users) (29:59) - 5) APT28 phishing refreshed with AI-written lures (34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders Click here to watch a video of this episode. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com

    35 мин.
  4. Community Q&A on AI Security | Episode 34

    18.12.2025

    Community Q&A on AI Security | Episode 34

    Community Q&A on AI Security | Episode 34 In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity. We break down: Why LLMs sometimes “make stuff up” and how to reduce hallucinationsThe role of prompts, temperature, and RAG databases in accuracyPrompting best practices and reasoning modes for better resultsLegal liability: Can you sue ChatGPT for bad advice?Memory features, data retention, and privacy trade-offsSecurity paranoia: AI apps, trust, and enterprise vs free accountsPractical examples like customizing AI for writing styleHow to explain AI to your mom (or any non-technical audience)Why AI isn’t magic—just math and advanced auto-completeWhether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly. Chapters (00:00) - Welcome & Sponsor Shoutouts (00:50) - Episode Overview: Community Q&A (01:19) - Q1: Will ChatGPT Make Stuff Up? (07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases? (11:15) - Q3: How Can AI Improve Without Ingesting Everything? (22:04) - Q4: How Do You Explain AI to Non-Technical People? (28:00) - Closing Remarks & Training Plug Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/ Active Countermeasureshttps://www.activecountermeasures.com Wild West Hackin Festhttps://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.com ----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    28 мин.

Об этом подкасте

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

Вам может также понравиться