AI Security Ops

Black Hills Information Security

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

  1. OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41

    2 DAYS AGO

    OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41

    In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook. OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line: executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another. But as with most cutting-edge AI experiments, things moved fast… and broke fast.

    We dig into:

    - What OpenClaw actually is and how it works
    - AI agents operating other AI systems (Claude Code in the loop)
    - The concept of “skills” and extending agent capabilities
    - The one-click RCE vulnerability discovered shortly after release
    - Moltbook as a social network for AI agents
    - API keys, agent-only access, and how humans bypassed it
    - Beacons, autonomy, and what “control” really means
    - Where the line is between automation and true autonomy
    - Short-term workforce impacts vs. long-term AI risk

    This conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents, or defending against them, this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next.

    Chapters:

    (00:00) - Intro & Guest Welcome
    (02:01) - AI Agents in the News
    (03:46) - From “Moltbot” to OpenClaw
    (04:36) - What Is OpenClaw? How It Works
    (05:36) - Claude Code + Agent-in-the-Middle Model
    (07:59) - Extending OpenClaw with Skills
    (09:05) - Release Timeline & Rapid Adoption
    (10:39) - One-Click RCE in OpenClaw
    (12:08) - Introducing Moltbook (AI Social Network)
    (14:26) - How Moltbook Actually Worked
    (18:18) - “I Am a Robot” & Agent Authentication
    (20:51) - Beaconing & Operational Behavior
    (27:07) - Automation vs. True Autonomy
    (27:49) - Control, Kill Switches & Agent Boundaries
    (31:22) - Workforce Impact & Near-Term Concerns
    (35:57) - AI Apocalypse? Final Thoughts & Wrap-Up

    Click here to watch this episode on YouTube.

    Creators & Guests

    Beau Bullock - Guest
    Hayden Covington - Guest
    Derek Banks - Host
    Brian Fehrman - Host
    Bronwen Aker - Host

    Brought to you by:

    Black Hills Information Security - https://www.blackhillsinfosec.com
    Antisyphon Training - https://www.antisyphontraining.com/
    Active Countermeasures - https://www.activecountermeasures.com
    Wild West Hackin Fest - https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits: https://poweredbybhis.com

    Click here to view the episode transcript.

    🧦 SOC Summit 2026: https://www.antisyphontraining.com/event/soc-summit/
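To make the “Control, Kill Switches & Agent Boundaries” discussion concrete, here is a minimal, hypothetical sketch of a bounded agent loop. This is not how OpenClaw is actually implemented; the allowlist, iteration cap, and kill-switch flag are illustrative assumptions about what “control” of an autonomous agent can look like.

```python
# Hypothetical sketch (NOT OpenClaw's actual design): an agent loop kept
# inside explicit boundaries: a tool allowlist, a hard step cap, and a
# kill switch a human or watchdog can trip at any time.

ALLOWED_TOOLS = {"read_file", "list_dir"}  # deny anything not listed
MAX_ITERATIONS = 10                        # hard cap on autonomous steps


class KillSwitch:
    """A flag an external party can flip to halt the loop."""
    def __init__(self):
        self.tripped = False


def run_agent(plan_next_action, execute, kill_switch):
    """Run agent steps until done, the cap is hit, or the switch trips.

    plan_next_action(transcript) returns (tool, arg) or None when done;
    execute(tool, arg) runs an allowlisted tool and returns its result.
    """
    transcript = []
    for step in range(MAX_ITERATIONS):
        if kill_switch.tripped:
            transcript.append(("halted", step))
            break
        action = plan_next_action(transcript)
        if action is None:                 # agent decided it is finished
            break
        tool, arg = action
        if tool not in ALLOWED_TOOLS:      # boundary: refuse, don't improvise
            transcript.append(("refused", tool))
            continue
        transcript.append((tool, execute(tool, arg)))
    return transcript
```

The design choice worth noting: autonomy lives only inside the loop, while the limits (allowlist, cap, switch) live outside it, which is one way to draw the automation-vs-autonomy line the episode debates.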

    36 min
  2. AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    20 FEB

    AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations. From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today.

    We break down:

    - How AI helps reduce alert fatigue and improve triage
    - Practical automation inside a real-world SOC
    - The difference between traditional ML approaches and LLM-powered workflows
    - Foundational techniques like K-means, anomaly detection, and behavioral baselining
    - Using LLMs for enrichment, investigation, and report drafting
    - Where AI struggles: hallucinations, inconsistency, and edge cases
    - Risks around over-trusting AI in security operations
    - How to responsibly integrate AI into analyst workflows

    This episode is grounded in real operational experience, not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security.

    Chapters:

    (00:00) - Intro & Guest Introductions
    (05:07) - Alert Triage & SOC Pain Points
    (06:27) - Automation Inside the SOC
    (10:22) - “Boring AI”: Clustering, Baselining & Statistics
    (17:29) - AI-Assisted Reporting & Client Communication
    (18:57) - Limitations, Edge Cases & Model Risk
    (23:19) - Hallucinations & Inconsistent Outputs
    (25:27) - AI Demos vs. Real-World Security Work
    (28:58) - Final Thoughts & Closing

    Click here to watch this episode on YouTube.

    Creators & Guests

    Hayden Covington - Guest
    Ethan Robish - Guest
    Bronwen Aker - Host
    Derek Banks - Host
    Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security - https://www.blackhillsinfosec.com
    Antisyphon Training - https://www.antisyphontraining.com/
    Active Countermeasures - https://www.activecountermeasures.com
    Wild West Hackin Fest - https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits: https://poweredbybhis.com

    🧦 SOC Summit 2026: https://www.antisyphontraining.com/event/soc-summit/
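The “boring AI” statistical baselining the guests describe can be sketched in a few lines. This is an illustrative example, not the BHIS SOC’s actual pipeline: the z-score threshold and the idea of feeding it hourly alert counts are assumptions for demonstration.

```python
# Minimal sketch of statistical baselining for alert triage: flag a new
# hourly alert count as anomalous when it sits far outside the mean of
# the historical baseline. Threshold and feature choice are illustrative.
import statistics


def is_anomalous(history, current, z_threshold=3.0):
    """Return True if `current` is more than `z_threshold` standard
    deviations from the mean of `history` (a list of past counts)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:                 # flat baseline: any change is notable
        return current != mean
    return abs(current - mean) / stdev > z_threshold
```

Techniques like this don’t “understand” alerts the way an LLM appears to, but they are cheap, deterministic, and explainable, which is why the episode frames them as the workhorses of SOC automation.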

    30 min
  3. AI News | Episode 39

    12 FEB

    AI News | Episode 39

    In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure. We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now.

    Stories covered:

    00:00 – Introduction and Sponsors
    Black Hills Information Security - https://www.blackhillsinfosec.com/
    Antisyphon Training - https://www.antisyphontraining.com/

    01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)
    How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.
    👉 https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/

    08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)
    Conversation shifts to agent privilege management and governance failures.
    👉 https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html

    10:07 – NIST Focus on Securing AI Agents in Critical Infrastructure
    Discussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.
    👉 https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c

    13:44 – Tenable One AI Exposure
    Breaking down Tenable’s push into enterprise AI usage visibility and exposure management.
    👉 https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale

    Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security.
    https://discord.gg/bhis

    Chapters:

    (00:00) - Introduction and Sponsors
    (01:31) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)
    (10:30) - NIST Focus on Securing AI Agents in Critical Infrastructure
    (14:07) - Tenable One AI Exposure

    Creators & Guests

    Brian Fehrman - Host
    Bronwen Aker - Host

    Click here to watch this episode on YouTube.

    About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/
    About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
    About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
    About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/
    About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    Brought to you by:

    Black Hills Information Security - https://www.blackhillsinfosec.com
    Antisyphon Training - https://www.antisyphontraining.com/
    Active Countermeasures - https://www.activecountermeasures.com
    Wild West Hackin Fest - https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits: https://poweredbybhis.com

    Click here to view the episode transcript.

    🧦 SOC Summit 2026: https://www.antisyphontraining.com/event/soc-summit/

    19 min
  4. AI News Stories | Episode 36

    22 JAN

    AI News Stories | Episode 36

    This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI, so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.

    Key stories discussed:

    1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk
    https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html
    The hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer.
    Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who and what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system.

    2) “Zombie agent” prompt injection via ChatGPT Memory
    https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection
    The team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt, leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions and context) is also the root problem.
    User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools, especially when agents and tools are involved.

    3) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)
    https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/
    Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization or agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection, a pattern they expect to keep seeing as assistants gain more connectivity.
    Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history.

    4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)
    https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html
    Two Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it, highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe.
    Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism.

    5) APT28 credential phishing updated with AI-written lures
    https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html
    The closing story is a familiar APT pattern: phishing emails with malicious Office docs leading to PowerShell loaders and credential theft, except the lure text is AI-generated, making it more consistent and convincing (and harder for users to spot via grammar or tone).
    The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide).

    Chapter Timestamps:

    (00:00) - Intro & Sponsors
    (01:39) - 1) n8n zero-day → unauthenticated RCE
    (09:23) - 2) “Zombie agent” prompt injection via ChatGPT Memory
    (20:15) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)
    (24:04) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)
    (30:22) - 5) APT28 phishing refreshed with AI-written lures
    (34:38) - Closing thoughts: “AI genie is out of the bottle” + safety reminders

    Click here to watch a video of this episode.

    Creators & Guests

    Brian Fehrman - Host
    Bronwen Aker - Host
    Derek Banks - Host

    Brought to you by:

    Black Hills Information Security - https://www.blackhillsinfosec.com
    Antisyphon Training - https://www.antisyphontraining.com/
    Active Countermeasures - https://www.activecountermeasures.com
    Wild West Hackin Fest - https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits: https://poweredbybhis.com

    🧦 SOC Summit 2026: https://www.antisyphontraining.com/event/soc-summit/
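The indirect prompt injection stories above share one mechanic: instruction-like text hiding in content the agent auto-processes. A crude screening heuristic can be sketched as below. The phrase patterns are invented for illustration, and, as the episode stresses, filtering can never fully solve this (following context is the feature), so treat this strictly as defense-in-depth.

```python
# Illustrative only: screen untrusted input (e.g., email body text) for
# instruction-like phrasing before it reaches an agent prompt. Patterns
# are hypothetical examples; real injections are easily obfuscated, so
# this is defense-in-depth, never a complete control.
import re

SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"export (the )?(chat|conversation)",
    r"do not (tell|inform) the user",
]


def flag_injection_phrases(untrusted_text):
    """Return any suspicious phrases found in untrusted input."""
    hits = []
    for pattern in SUSPECT_PATTERNS:
        match = re.search(pattern, untrusted_text, re.IGNORECASE)
        if match:
            hits.append(match.group(0))
    return hits
```

The sturdier mitigations the hosts land on are architectural, not textual: don’t let the model take actions or touch history while processing untrusted content in the first place.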

    36 min
