AI Security Ops

Black Hills Information Security

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

  1. Vercel Breach | Episode 50

    4 DAYS AGO ·  VIDEO

    Vercel Breach | Episode 50

    In this episode of BHIS Presents: AI Security Ops, the team breaks down the Vercel breach — a real-world incident that shows just how fragile modern security has become in the age of AI integrations and SaaS sprawl. What started as a simple Roblox cheat script downloaded on a work laptop quickly escalated into a multi-hop compromise involving OAuth permissions, an AI productivity tool, and access into Vercel’s internal systems. This wasn’t a zero-day or advanced nation-state exploit. It was a chain of everyday decisions: installing software, clicking “Allow,” and trusting third-party integrations. The result? Allegedly $2M worth of data listed for sale, including API keys, internal data, and employee records — all from a breach path that most organizations aren’t even monitoring. We dig into:• What Vercel is and why it’s such a high-value target• How environment variables become the “keys to the kingdom”• The full attack chain: Roblox malware → Context.ai → Vercel• What infostealers like Lumma actually do (and how cheap they are)• How OAuth permissions become persistent backdoors• Why AI productivity tools introduce hidden risk• The rise of “shadow AI” inside organizations• How supply chain attacks continue to scale across ecosystems• The role of AI in accelerating attacker speed and capability• Why this type of breach is becoming the new normal This episode highlights a critical shift in cybersecurity: you don’t have to get hacked directly anymore — attackers just need to compromise something you’ve already trusted. ⸻ 📚 Key Concepts & Topics Attack Chain & Initial Access• Lumma infostealer and malware-as-a-service• Credential theft: passwords, cookies, OAuth tokens• Low-cost, high-impact compromise paths OAuth & Identity Risk• “Allow All” permissions and persistent access• OAuth tokens as long-lived entry points• Lack of visibility into third-party integrations AI Security Risks• Shadow AI and unsanctioned tool adoption• Deep integrations with Google Workspace and SaaS• AI tools as new supply chain attack surfaces Supply Chain Attacks• Multi-hop compromise paths across vendors• Real-world parallels (Trivy, LiteLLM)• Interconnected ecosystems increasing blast radius Threat Landscape Evolution• AI accelerating attacker speed and scale• Lower barrier to entry for complex attacks• Criminal groups operating as decentralized “businesses” Defensive Strategy• Auditing OAuth integrations and permissions• Enforcing least privilege across SaaS tools• Segmenting sensitive data and reducing blast radius• Avoiding risky behavior on corporate devices ⏱️ Chapters(00:00) - Intro & Breach Overview (00:21) - Sponsors & Show Setup (01:29) - What Vercel Is & Why It Matters (02:31) - Initial Compromise: Roblox Script & Infostealer (05:03) - OAuth Permissions & Pivot into Vercel (08:04) - AI Tools, Over-Permissioning & Supply Chain Risk (09:53) - AI Acceleration of Attacks & Ecosystem Impact (13:34) - Threat Actors, Attribution & Key Takeaways Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Ethan Robish - Guest Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    18 min
  2. Claude Mythos | Episode 49

    24 APR ·  VIDEO

    Claude Mythos | Episode 49

    In this episode of BHIS Presents: AI Security Ops, the team breaks down Claude Mythos Preview — Anthropic’s unreleased frontier model that may represent a turning point in AI-powered cybersecurity. What started as a controlled research release under Project Glasswing has quickly become one of the most controversial developments in AI security. Mythos isn’t just better at finding vulnerabilities — it’s operating at a scale and depth that challenges long-held assumptions about how quickly software can be broken… and whether it can realistically be fixed. From leaked internal documents to real-world exploit generation, this episode explores what happens when vulnerability discovery becomes cheap, fast, and automated — while remediation remains slow, manual, and human-bound. The result? A growing asymmetry that could fundamentally reshape the security landscape. We dig into:• What Claude Mythos Preview is and why it was withheld from the public• The leaks that exposed its existence and capabilities• How Project Glasswing is positioning AI for defensive use• Real-world vulnerability discoveries made by the model• The “vulnpocalypse” problem: discovery vs. remediation imbalance• Emerging AI behaviors that raise containment concerns• How attackers are already leveraging AI for offensive operations• The access control dilemma: who gets to use models like this?• Why patching — not discovery — is now the primary bottleneck• What defenders must do to prepare for AI-accelerated exploitation This episode explores a critical shift in cybersecurity: when vulnerability discovery scales faster than human response, the entire defensive model starts to break down. ⸻ 📚 Key Concepts & Topics AI-Powered Vulnerability Discovery• Autonomous exploit generation and chaining• Benchmark performance vs. prior models• AI-assisted offensive security workflows AI Security Risks• Discovery vs. remediation asymmetry• AI-driven vulnerability scaling• Offensive use by nation-states and cybercriminals Model Behavior & Safety• Emergent autonomy and sandbox escape concerns• Evaluation awareness and deceptive behaviors• Limits of containment and alignment Defensive Strategy & Readiness• Patch velocity as the new bottleneck• AI-assisted vulnerability management• Open-source ecosystem risk exposure AI Governance & Industry Response• Restricted model releases and access control• Regulatory and financial sector concerns• The future of AI capability containment #AISecurity #CyberSecurity #ArtificialIntelligence #LLMSecurity #BHIS #AIThreats #InfoSec #AIAgents #CyberDefense (00:00) - Intro & Show Overview (01:00) - Sponsors, Hosts, and Episode Setup (01:53) - What Is Claude Mythos Preview? (03:04) - The Leak, Project Glasswing, and Restricted Access (07:53) - Capabilities: Exploits, Benchmarks, and Breakthroughs (09:16) - Real-World Vulnerabilities & “Vulnpocalypse” Concerns (14:47) - Access Control, Threat Actors, and Emerging Risks (21:38) - Defensive Strategy: Patching, AI Tools, and What Comes Next (23:08) - Defensive Strategy: Patching, AI Tools, and What Comes Next Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Bronwen Aker - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    26 min
  3. Holocron OpenBrain with Alex Minster | Episode 48

    22 APR ·  VIDEO

    Holocron OpenBrain with Alex Minster | Episode 48

    In this episode of BHIS Presents: AI Security Ops, the team is joined by Alex Minster to demo his project: HOLOCRON OpenBrain with — a persistent, model-agnostic memory layer designed to solve one of the biggest frustrations in AI workflows. Instead of starting from scratch every time you open a new chat, Alex’s approach creates a centralized “brain” that multiple AI models can connect to, allowing context, notes, and intelligence to persist across sessions, tools, and even platforms. The result? A flexible system that captures thoughts, ingests threat intel, and generates structured outputs — all without locking you into a single AI provider. We dig into:• The “cold start” problem in AI and why it breaks real workflows• What the OpenBrain HOLOCRON is (and isn’t)• How centralized memory changes the way we interact with AI tools• The architecture: Supabase, OpenRouter, MCP, and multi-model access• Using Discord as a lightweight ingestion pipeline for persistent memory• Real-world CTI workflows: capturing intel and generating reports on demand• Managing, editing, and superseding memory over time• The tradeoffs between context richness and security exposure• Multi-model reliability differences (and why they matter)• Practical setup: what it takes to build your own system This episode highlights a shift in how AI is used operationally: moving from isolated chats to persistent, structured memory systems that can evolve alongside your work. ⸻ 📚 Key Concepts & Topics Persistent AI Memory• Solving the “cold start” problem• Centralized context across multiple models• Structured vs raw data ingestion AI Architecture & Tooling• Supabase as a backend memory store• OpenRouter for multi-model access• MCP protocol for integrations Cyber Threat Intelligence (CTI)• Capturing, tagging, and prioritizing intel• Generating automated reports and dashboards• Context-aware intelligence workflows Security & Privacy• Need-to-know data design• Avoiding overexposure via full integrations (email, docs, etc.)• Auditing and removing sensitive data Operational Workflows• Capturing ideas, notes, and research• Multi-project memory segmentation (“multiple brains”)• Using AI to accelerate—not replace—analysis 🔗 HOLOCRON GitHub Guide: https://github.com/belouve/open-brain-holocron🔗 Alex Minster: https://www.linkedin.com/in/alexminster/ #AISecurity #CyberSecurity #AIWorkflows #LLM #ThreatIntel #DevSecOps #BHIS #OpenSource #AIEngineering (00:00) - Intro & Guest Introduction (Alex Minster) (00:55) - What Is the OpenBrain HOLOCRON? (Cold Start Problem) (03:00) - How It Works: Centralized Memory & AI Integration (05:30) - Architecture & Free-Tier Stack (Supabase, OpenRouter, MCP) (07:54) - Demo: Capturing Thoughts via Discord (10:55) - CTI Use Case: Prioritizing & Querying Intelligence (15:03) - Managing Memory: Editing, Deleting & Superseding Data (19:04) - Running Protocols: Automated CTI Reports (Demo) (22:05) - Multi-Brain Concept & Segmentation (25:00) - Real-World Output: Reports, Dashboards & Briefings (31:31) - Multi-Model Differences (Claude vs ChatGPT) (35:55) - Improving the System with Feedback Loops (37:29) - How to Build Your Own OpenBrain (41:26) - Real-World Benefits & Workflow Improvements (45:44) - Security Considerations & Data Exposure Risks (47:20) - Where to Find the Project & Contribute (50:16) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Alex Minster "Belouve" - Guest Ethan Robish - Guest Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    51 min
  4. LiteLLM Supply Chain Compromise | Episode 47

    13 APR

    LiteLLM Supply Chain Compromise | Episode 47

    In this episode of BHIS Presents: AI Security Ops, the team breaks down the LiteLLM supply chain compromise–a real-world attack that shows how AI systems are being breached through the same old software supply chain weaknesses. What initially looked like a bad release quickly escalated into a full-scale compromise affecting a library downloaded millions of times per day. But LiteLLM wasn’t the starting point–it was just one link in a much larger attack chain involving compromised security tools, CI/CD pipelines, and stolen publishing credentials. The result? Malicious packages distributed at scale, harvesting secrets, enabling lateral movement, and establishing persistence across affected systems. We dig into:• What LiteLLM is and why it’s such a high-value target• How the attack chain started with compromised security tooling (Trivy, Checkmarx)• How unpinned dependencies enabled the compromise• The role of CI/CD pipelines in exposing sensitive credentials• What the malicious LiteLLM packages actually did (credential harvesting, persistence, lateral movement)• The scale of impact given LiteLLM’s widespread adoption• Why supply chain attacks are no longer theoretical–and no longer nation-state exclusive• How AI is lowering the barrier to entry for attackers• Why this wasn’t really an “AI vulnerability”–but an infrastructure failure• The growing risk of automated, agent-driven attack discovery This episode highlights a critical reality: the biggest risks in AI systems aren’t always in the models–they’re in the pipelines, dependencies, and infrastructure surrounding them. ⸻ 📚 Key Concepts & Topics Supply Chain Security• Dependency poisoning and malicious package distribution• CI/CD pipeline compromise• Version pinning and build integrity Credential & Secrets Exposure• API keys, SSH keys, and cloud credentials in pipelines• Risks of centralized AI gateways like LiteLLM Threat Actor Techniques• Tag rewriting and trusted reference hijacking• Multi-stage malware (harvest, lateral movement, persistence)• Use of lookalike domains for exfiltration AI & Security Reality Check• AI as an amplifier, not the root vulnerability• Traditional security failures in modern AI stacks• Automation lowering attacker barriers Defensive Strategies• Dependency pinning and isolation (Docker, VPS)• Atomic credential rotation• Treating CI/CD tools as critical infrastructure• Monitoring outbound traffic from build environments (00:00) - Intro & Incident Overview (01:26) - What Is LiteLLM & Why It Matters (03:53) - Supply Chain Scope & Why This Is Dangerous (07:31) - Why These Attacks Are Getting Easier (AI + Scale) (10:48) - Attack Chain Breakdown (Trivy → Checkmarx → LiteLLM) (11:50) - What the Malware Did & Impact at Scale (14:23) - Detection, Response & Who Was Safe Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    20 min
  5. Model Ablation | Episode 46

    2 APR

    Model Ablation | Episode 46

    In this episode of BHIS Presents: AI Security Ops, the team breaks down model ablation — a powerful interpretability technique that’s quickly becoming a serious concern in AI security. What started as a way to better understand how models work is now being used to remove safety mechanisms entirely. By identifying and disabling specific components inside a model, researchers — and attackers — can effectively strip out refusal behavior while leaving the rest of the model fully functional. The result? A fast, reliable way to “de-safety” AI systems without prompt engineering, fine-tuning, or significant compute. We dig into:• What model ablation is and how it works• The difference between ablation and pruning• How safety behaviors can be isolated inside model internals• Why refusal mechanisms are often localized (and fragile)• How ablation is being used as a jailbreak technique• Why this is more reliable than prompt-based attacks• Risks specific to open-weight models and public checkpoints• The growing “uncensored model” ecosystem• Why interpretability is a double-edged sword• Whether safety should be deeply embedded into model architecture• What this means for defenders and AI security strategy This episode explores a critical shift in AI risk: when safety controls can be surgically removed, they stop being security controls at all. ⸻ 📚 Key Concepts & Topics Model Internals & Interpretability• Neurons, attention heads, and residual stream analysis• Activation space and feature directions AI Security Risks• Prompt injection vs. structural attacks• Jailbreaking techniques and safety bypasses Model Access & Risk Surface• Open-weight vs. API-only models• Hugging Face and the uncensored model ecosystem AI Safety & Governance• Defense-in-depth for AI systems• Future standards for ablation resistance #AISecurity #ModelAblation #LLMSecurity #CyberSecurity #ArtificialIntelligence #AIResearch #BHIS #AIAgents #InfoSec (00:00) - Intro & Show Overview (01:27) - Removing AI Safety Mechanisms (02:05) - What Is Model Ablation? (Technical Breakdown) (04:01) - Open-Weight Models & Practical Limitations (05:43) - Risks, Use Cases, and Ethical Tradeoffs (07:32) - Security Implications & “You Can’t Ban Math” (10:43) - Future Impact: Open Models Catching Up (17:44) - Final Takeaway: Why “No” Isn’t Security Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Derek Banks - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    18 min
  6. Embedding Space Attacks | Episode 45

    26 MAR

    Embedding Space Attacks | Episode 45

    In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data. Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly. Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios. We dig into:- What embeddings are and how AI systems convert text into numerical representations- How vector spaces enable similarity search and retrieval in LLM applications- What embedding space attacks are and why they matter for AI security- How small perturbations in data can drastically change model behavior- The risks of poisoned data in RAG and vector databases- How attackers can influence search results and downstream AI outputs- Why these attacks are subtle, hard to detect, and often overlooked- The role of visualization in understanding embedding behavior- Real-world implications for AI-powered applications and workflows- Defensive considerations when building with embeddings and vector stores This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI. ⸻ 📚 Key Concepts Covered AI Foundations- Embeddings and vector representations- Similarity search and vector space reasoning AI Security Risks- Embedding space manipulation- Data poisoning in vector databases- Retrieval manipulation in RAG systems Applications & Impact- LLM-powered search and assistants- AI pipelines using embeddings- Risks in production AI systems #AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhis (00:00) - Intro & Episode Overview (01:39) - What Are Embeddings? (AI Only Understands Numbers) (03:44) - The Embedding Process (Text → Vectors) (07:43) - Similarity, Classification & Vector Math (09:55) - Visualizing Embedding Space (2D Projection) (14:29) - Classifiers (15:39) - Playing Games with Information (18:06) - Attack Techniques: Synonyms & Context Manipulation (20:29) - Context Padding (27:10) - Collision Attacks, Defenses & Final Thoughts Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    33 min
  7. Indirect Prompt Injection | Episode 44

    19 MAR

    Indirect Prompt Injection | Episode 44

    In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems. Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them. From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up. We dig into:• What indirect prompt injection is and how it differs from direct attacks• Why OWASP ranks prompt injection as the #1 LLM security risk• How attackers hide payloads inside emails, documents, and web content• The EchoLeak zero-click exploit against Microsoft 365 Copilot• Web-based prompt injection attacks observed in the wild (Unit 42)• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot• How RAG systems amplify the risk through poisoned knowledge bases• Why LLM architecture makes this problem fundamentally hard to solve• Research showing modern defenses still fail 50%+ of the time• Practical mitigation strategies: least privilege, human-in-the-loop, and observability This episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows. ⸻ 📚 Key References Prompt Injection & LLM Risk• OWASP Top 10 for LLM Applications 2025 — https://owasp.org Real-World Attacks• EchoLeak (CVE-2025-32711) — Aim Security / arXiv• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — https://unit42.paloaltonetworks.com AI System Vulnerabilities• Cursor IDE (CVE-2025-59944)• GitHub Copilot (CVE-2025-53773)• Lakera — Zero-Click MCP Attack — https://lakera.ai Research on Defenses• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)• Anthropic System Card (Feb 2026)• Google Gemini Security Research (2025) Standards & Guidance• NIST AI Risk Management Framework — https://nist.gov• MITRE ATLAS — https://atlas.mitre.org• ISO/IEC 42001 AI Management Systems #AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec (00:00) - Intro & BHIS / Antisyphon Overview (01:19) - OWASP Top 10 & Prompt Injection Context (01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy) (02:54) - Real-World Attack Scenarios (Calendar & Hidden Payloads) (05:10) - EchoLeak & Zero-Click Copilot Exploit (06:10) - Weaponized Excel Prompt Injection PoC (06:50) - Email Injection & AI Summarization Abuse (09:07) - Why Detection & Prevention Are So Difficult (14:02) - Mitigations & Final Thoughts Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    16 min
  8. Top AI Security Concerns | Episode 43

    12 MAR

    Top AI Security Concerns | Episode 43

    In this episode of BHIS Presents: AI Security Ops, Bronwen Aker and Dr. Brian Fehrman break down some of the top AI security concerns being discussed by researchers, security firms, and government agencies this year. As AI capabilities rapidly expand, so does the attack surface. From agentic AI systems being used by attackers, to deepfakes at industrial scale, to the persistent challenge of prompt injection, security teams are trying to understand what risks are real, what’s hype, and where defenders should focus first. We dig into:- Why agentic AI is emerging as a major security concern- How attackers could weaponize autonomous agents to scale operations- The risk of malicious agent skills and AI supply chain attacks- Why overly broad permissions make agent-based systems dangerous- AI-assisted phishing campaigns and social engineering at scale- The rise of deepfakes and corporate fraud driven by generative AI- Why humans still struggle to reliably detect deepfake media- The economics of deepfake fraud and real-world incidents- Prompt injection attacks and why they remain difficult to solve- Whether future models may autonomously discover and exploit jailbreaks This episode looks at the practical security implications of today’s AI ecosystem — where the biggest risks are coming from, how attackers may leverage AI systems, and what defenders should be thinking about as these technologies continue to evolve. 📚 Key References Agentic AI Threats- CrowdStrike 2026 Global Threat Report — https://www.crowdstrike.com- IBM X-Force 2026 Threat Intelligence Index — https://www.ibm.com/security/x-force- Cisco State of AI Security 2026 — https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab Deepfakes & AI-Driven Fraud- WEF Global Cybersecurity Outlook 2026 — https://www.weforum.org/publications/global-cybersecurity-outlook-2026/- International AI Safety Report 2026 — https://www.internationalaisafetyreport.org AI Security & Infrastructure Risk- CISA Joint Guidance on AI in OT — https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology Prompt Injection & LLM Exploitation- Schneier et al., “The Promptware Kill Chain” — https://www.lawfaremedia.org/article/the-promptware-kill-chain- Palo Alto Unit 42 — “Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild”https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/ (00:00) - Intro & Episode Overview (02:18) - Agentic AI as a Security Threat (CrowdStrike 2026 Global Threat Report, IBM X-Force Index) (03:46) - Malicious Agent Skills & AI Supply Chain Attacks (Cisco State of AI Security) (04:58) - How Agent Skills Actually Work (07:47) - Permissions & Guardrails for AI Agents (CISA AI in OT Guidance) (09:57) - AI-Generated Phishing Campaigns (CrowdStrike / IBM Threat Reports) (13:58) - Deepfakes at Industrial Scale (WEF Global Cybersecurity Outlook) (15:38) - Corporate Fraud & Deepfake Incidents (International AI Safety Report) (17:21) - Why Humans Struggle to Detect Deepfakes (21:13) - Prompt Injection Attacks Explained (Schneier – Promptware Kill Chain) (24:35) - AI Models Jailbreaking Other Models (Palo Alto Unit 42 Research) (28:59) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Brian Fehrman - Host Brought to you by: Black Hills Information Security  https://www.blackhillsinfosec.com Antisyphon Training https://www.antisyphontraining.com/ Active Countermeasures https://www.activecountermeasures.com Wild West Hackin Fest https://wildwesthackinfest.com 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript.

    29 min

About

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

You Might Also Like