AI Security Ops

Black Hills Information Security

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

  1. Embedding Space Attacks | Episode 45

    26 DE MAR.

    Embedding Space Attacks | Episode 45

    In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data. Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly. Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios. We dig into:- What embeddings are and how AI systems convert text into numerical representations- How vector spaces enable similarity search and retrieval in LLM applications- What embedding space attacks are and why they matter for AI security- How small perturbations in data can drastically change model behavior- The risks of poisoned data in RAG and vector databases- How attackers can influence search results and downstream AI outputs- Why these attacks are subtle, hard to detect, and often overlooked- The role of visualization in understanding embedding behavior- Real-world implications for AI-powered applications and workflows- Defensive considerations when building with embeddings and vector stores This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI. ⸻ 📚 Key Concepts Covered AI Foundations- Embeddings and vector representations- Similarity search and vector space reasoning AI Security Risks- Embedding space manipulation- Data poisoning in vector databases- Retrieval manipulation in RAG systems Applications & Impact- LLM-powered search and assistants- AI pipelines using embeddings- Risks in production AI systems #AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhis (00:00) - Intro & Episode Overview (01:39) - What Are Embeddings? (AI Only Understands Numbers) (03:44) - The Embedding Process (Text → Vectors) (07:43) - Similarity, Classification & Vector Math (09:55) - Visualizing Embedding Space (2D Projection) (14:29) - Classifiers (15:39) - Playing Games with Information (18:06) - Attack Techniques: Synonyms & Context Manipulation (20:29) - Context Padding (27:10) - Collision Attacks, Defenses & Final Thoughts Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    33 min
  2. Indirect Prompt Injection | Episode 44

    19 DE MAR.

    Indirect Prompt Injection | Episode 44

    In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems. Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them. From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up. We dig into:• What indirect prompt injection is and how it differs from direct attacks• Why OWASP ranks prompt injection as the #1 LLM security risk• How attackers hide payloads inside emails, documents, and web content• The EchoLeak zero-click exploit against Microsoft 365 Copilot• Web-based prompt injection attacks observed in the wild (Unit 42)• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot• How RAG systems amplify the risk through poisoned knowledge bases• Why LLM architecture makes this problem fundamentally hard to solve• Research showing modern defenses still fail 50%+ of the time• Practical mitigation strategies: least privilege, human-in-the-loop, and observability This episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows. ⸻ 📚 Key References Prompt Injection & LLM Risk• OWASP Top 10 for LLM Applications 2025 — https://owasp.org Real-World Attacks• EchoLeak (CVE-2025-32711) — Aim Security / arXiv• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — https://unit42.paloaltonetworks.com AI System Vulnerabilities• Cursor IDE (CVE-2025-59944)• GitHub Copilot (CVE-2025-53773)• Lakera — Zero-Click MCP Attack — https://lakera.ai Research on Defenses• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)• Anthropic System Card (Feb 2026)• Google Gemini Security Research (2025) Standards & Guidance• NIST AI Risk Management Framework — https://nist.gov• MITRE ATLAS — https://atlas.mitre.org• ISO/IEC 42001 AI Management Systems #AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec (00:00) - Intro & BHIS / Antisyphon Overview (01:19) - OWASP Top 10 & Prompt Injection Context (01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy) (02:54) - Real-World Attack Scenarios (Calendar & Hidden Payloads) (05:10) - EchoLeak & Zero-Click Copilot Exploit (06:10) - Weaponized Excel Prompt Injection PoC (06:50) - Email Injection & AI Summarization Abuse (09:07) - Why Detection & Prevention Are So Difficult (14:02) - Mitigations & Final Thoughts Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    16 min
  3. Top AI Security Concerns | Episode 43

    12 DE MAR.

    Top AI Security Concerns | Episode 43

    In this episode of BHIS Presents: AI Security Ops, Bronwen Aker and Dr. Brian Fehrman break down some of the top AI security concerns being discussed by researchers, security firms, and government agencies this year. As AI capabilities rapidly expand, so does the attack surface. From agentic AI systems being used by attackers, to deepfakes at industrial scale, to the persistent challenge of prompt injection, security teams are trying to understand what risks are real, what’s hype, and where defenders should focus first. We dig into:- Why agentic AI is emerging as a major security concern- How attackers could weaponize autonomous agents to scale operations- The risk of malicious agent skills and AI supply chain attacks- Why overly broad permissions make agent-based systems dangerous- AI-assisted phishing campaigns and social engineering at scale- The rise of deepfakes and corporate fraud driven by generative AI- Why humans still struggle to reliably detect deepfake media- The economics of deepfake fraud and real-world incidents- Prompt injection attacks and why they remain difficult to solve- Whether future models may autonomously discover and exploit jailbreaks This episode looks at the practical security implications of today’s AI ecosystem — where the biggest risks are coming from, how attackers may leverage AI systems, and what defenders should be thinking about as these technologies continue to evolve. 📚 Key References Agentic AI Threats- CrowdStrike 2026 Global Threat Report — https://www.crowdstrike.com- IBM X-Force 2026 Threat Intelligence Index — https://www.ibm.com/security/x-force- Cisco State of AI Security 2026 — https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab Deepfakes & AI-Driven Fraud- WEF Global Cybersecurity Outlook 2026 — https://www.weforum.org/publications/global-cybersecurity-outlook-2026/- International AI Safety Report 2026 — https://www.internationalaisafetyreport.org AI Security & Infrastructure Risk- CISA Joint Guidance on AI in OT — https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology Prompt Injection & LLM Exploitation- Schneier et al., “The Promptware Kill Chain” — https://www.lawfaremedia.org/article/the-promptware-kill-chain- Palo Alto Unit 42 — “Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild”https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/ (00:00) - Intro & Episode Overview (02:18) - Agentic AI as a Security Threat (CrowdStrike 2026 Global Threat Report, IBM X-Force Index) (03:46) - Malicious Agent Skills & AI Supply Chain Attacks (Cisco State of AI Security) (04:58) - How Agent Skills Actually Work (07:47) - Permissions & Guardrails for AI Agents (CISA AI in OT Guidance) (09:57) - AI-Generated Phishing Campaigns (CrowdStrike / IBM Threat Reports) (13:58) - Deepfakes at Industrial Scale (WEF Global Cybersecurity Outlook) (15:38) - Corporate Fraud & Deepfake Incidents (International AI Safety Report) (17:21) - Why Humans Struggle to Detect Deepfakes (21:13) - Prompt Injection Attacks Explained (Schneier – Promptware Kill Chain) (24:35) - AI Models Jailbreaking Other Models (Palo Alto Unit 42 Research) (28:59) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    29 min
  4. Claude Cowork Discussion | Episode 42

    6 DE MAR.

    Claude Cowork Discussion | Episode 42

    We discuss the meaning of AI life In episode 42 of "BHIS Presents: AI Security Ops." Derek Banks is joined by Bronwen Aker and Brian Fehrman to break down Anthropic’s latest agentic desktop experiment: Claude Cowork. Claude Cowork brings large language models directly onto the endpoint — giving Claude the ability to read, write, and organize files on your local machine. It’s designed to make powerful AI workflows accessible to non-technical users… but as with any tool that operates at the OS level, the security implications are significant. We explore what happens when AI moves closer to your data, your filesystem, and your browser — and what that means for defenders. We dig into:- What Claude Cowork is and how it differs from Claude Code- Agentic desktop tools vs. command-line workflows- Local file access and OS-level interaction risks- Skills, automation, and task iteration- Chrome plugins and expanded attack surface- Overly broad permissions and least-privilege concerns- SaaS disruption and shifting trust boundaries- Endpoint monitoring challenges- The speed of AI releases vs. security review cycles- Balancing innovation with responsible deployment This conversation looks at the real-world operational and defensive considerations of agentic AI tools running directly on user systems. If you’re evaluating AI productivity tools inside your organization — or defending environments where they’re already being adopted — this episode will help you think through the risks and tradeoffs. (00:00) - Intro & Episode Overview (02:08) - What Is Claude Cowork? (04:03) - Desktop Agents vs. Command Line Users (06:12) - Agentic Workflows & Task Automation (08:08) - Building Fast with Claude (Speed of Development) (09:29) - Browser Plugins & Expanding Capabilities (11:06) - Permission Models & “Just Give It Access to Everything” (12:40) - SaaS Disruption & Enterprise Impact (14:38) - Overly Broad File Access Risks (16:27) - Organizational Disruption & Workforce Impact (18:09) - Security Lag vs. Rapid AI Releases (19:46) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Bronwen Aker - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    22 min
  5. OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41

    26 DE FEV.

    OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41

    In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook. OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line — executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another. But as with most cutting-edge AI experiments, things moved fast… and broke fast. We dig into: What OpenClaw actually is and how it worksAI agents operating other AI systems (Claude Code in the loop)The concept of “skills” and extending agent capabilitiesThe one-click RCE vulnerability discovered shortly after releaseMoltbook as a social network for AI agentsAPI keys, agent-only access, and how humans bypassed itBeacons, autonomy, and what “control” really meansWhere the line is between automation and true autonomyShort-term workforce impacts vs. long-term AI riskThis conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents — or defending against them — this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next. (00:00) - Intro & Guest Welcome (01:38) - AI Agents in the News (03:23) - From “Moltbot” to OpenClaw (04:13) - What Is OpenClaw? How It Works (05:13) - Claude Code + Agent-in-the-Middle Model (07:36) - Extending OpenClaw with Skills (08:42) - Release Timeline & Rapid Adoption (10:16) - One-Click RCE in OpenClaw (11:45) - Introducing Moltbook (AI Social Network) (14:03) - How Moltbook Actually Worked (17:55) - “I Am a Robot” & Agent Authentication (20:28) - Beaconing & Operational Behavior (26:44) - Automation vs. True Autonomy (27:26) - Control, Kill Switches & Agent Boundaries (30:59) - Workforce Impact & Near-Term Concerns (35:34) - AI Apocalypse? Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Beau Bullock - Guest Hayden Covington - Guest Derek Banks - Host Brian Fehrman - Host Bronwen Aker - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    36 min
  6. AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    20 DE FEV.

    AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40 In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations. From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today. We break down: - How AI helps reduce alert fatigue and improve triage- Practical automation inside a real-world SOC- The difference between traditional ML approaches and LLM-powered workflows- Foundational techniques like K-means, anomaly detection, and behavioral baselining- Using LLMs for enrichment, investigation, and report drafting- Where AI struggles: hallucinations, inconsistency, and edge cases- Risks around over-trusting AI in security operations- How to responsibly integrate AI into analyst workflows This episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security. (00:00) - Intro & Guest Introductions (04:44) - Alert Triage & SOC Pain Points (06:04) - Automation Inside the SOC (09:59) - “Boring AI”: Clustering, Baselining & Statistics (17:06) - AI-Assisted Reporting & Client Communication (18:34) - Limitations, Edge Cases & Model Risk (22:56) - Hallucinations & Inconsistent Outputs (25:04) - AI Demos vs. Real-World Security Work (28:35) - Final Thoughts & Closing Click here to watch this episode on YouTube. Creators & Guests Hayden Covington - Guest Ethan Robish - Guest Bronwen Aker - Host Derek Banks - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com

    29 min
  7. AI News | Episode 39

    12 DE FEV.

    AI News | Episode 39

    AI News | Episode 39 In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure. We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now. Chapters:00:00 – Introduction and SponsorsBlack Hills Information Security - https://www.blackhillsinfosec.com/Antisyphon Training - https://www.antisyphontraining.com/ 01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)Discussion begins as the hosts introduce the first story.How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.👉 https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/ 08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)Conversation shifts to agent privilege management and governance failures.👉 https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html 10:07 – NIST Focus on Securing AI Agents in Critical InfrastructureDiscussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.👉 https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c 13:44 – Tenable One AI ExposureBreaking down Tenable’s push into enterprise AI usage visibility and exposure management.👉 https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhis Chapters (00:00) - Introduction and Sponsors (01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto) (10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure (13:44) - Tenable One AI Exposure Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Click here to watch this episode on YouTube. ----------------------------------------------------------------------------------------------About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com  Click here to view the episode transcript.

    18 min

Sobre

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

Você também pode gostar de