Certified - AI Security Audio Course

Jason Edwards

The AI Security & Threats Audio Course is a comprehensive, audio-first learning series focused on the risks, defenses, and governance models that define secure artificial intelligence operations today. Designed for cybersecurity professionals, AI practitioners, and certification candidates, this course translates complex technical and policy concepts into clear, practical lessons. Each episode explores a critical aspect of AI security—from prompt injection and model theft to data poisoning, adversarial attacks, and secure machine learning operations (MLOps). You’ll gain a structured understanding of how vulnerabilities emerge, how threat actors exploit them, and how robust controls can mitigate these evolving risks. The course also covers the frameworks and best practices shaping AI governance, assurance, and resilience. Learners will explore global standards and regulatory guidance, including NIST AI Risk Management Framework, ISO/IEC 23894, and emerging organizational policies around transparency, accountability, and continuous monitoring. Through practical examples and scenario-driven insights, you’ll learn how to assess model risk, integrate secure development pipelines, and implement monitoring strategies that ensure trust and compliance across the AI lifecycle. Developed by BareMetalCyber.com, the AI Security & Threats Audio Course blends foundational security knowledge with real-world application, helping you prepare for advanced certifications and leadership in the growing field of AI assurance. Explore more audio courses, textbooks, and cybersecurity resources at BareMetalCyber.com—your trusted source for structured, expert-driven learning.

  1. EPISODE 1

    Episode 1 — Course Overview & How to Use This Prepcast

    This opening episode provides a structured orientation to the AI Security and Threats Audio course series, helping listeners understand what the program covers and how to best engage with the material. The overview defines the scope of AI security by placing it within the broader context of cybersecurity and risk management, while clarifying the distinctive elements that make AI-specific security necessary. It explains how the episodes are organized, moving from foundational principles through attack surfaces, defenses, governance frameworks, and advanced considerations. The episode also outlines the intended audience, which includes exam candidates, practitioners, and professionals from related disciplines, while emphasizing accessibility for beginners. By framing AI security as both a technical and organizational discipline, the episode positions the Audio course as a comprehensive study and reference tool for learners at all levels. The description also introduces the concept of using checklists, transcripts, and structured resources to reinforce retention of exam-relevant material. It explains that each episode is designed to be self-contained, yet forms part of a coherent series that builds on prior topics for cumulative understanding. Scenarios are introduced as a way to contextualize threats and defenses, ensuring that learners connect theory with practice. Troubleshooting considerations, such as how to recognize gaps in current understanding or apply lessons across domains, are emphasized to prepare learners for certification exams. The episode closes with guidance on how to approach the course—either linearly or by focusing on specific areas most relevant to the listener’s role or goals—so that every learner can extract maximum value from the structured format. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    22 min
  2. EPISODE 2

    Episode 2 — The AI Security Landscape

    This episode defines the AI security landscape by mapping the assets, attack surfaces, and emerging threats that distinguish AI from classical application security. It introduces critical components such as training data, model weights, prompts, and external tools, explaining why each must be protected as an asset. The relevance for certification exams lies in understanding how these components shift trust boundaries and create new risks compared to traditional software systems. The episode emphasizes that adversaries target AI differently, often exploiting natural language, data poisoning, or model extraction techniques. By describing the breadth of risks, the episode establishes the foundation for examining each in detail throughout the Audio course. In its applied perspective, the episode explores how organizations must expand security programs to account for AI-specific challenges. Examples include leakage of personal information through outputs, manipulation of retrieval-augmented generation pipelines, and exploitation of agents connected to external systems. It discusses how exam candidates should recognize parallels and differences between AI security and established AppSec practices, noting where controls such as authentication, logging, and encryption remain essential but insufficient. Scenarios highlight how adversary motivations—ranging from fraud to disinformation—shape the threat landscape. The description underscores the importance of holistic defenses, aligning technical, organizational, and compliance strategies to manage this new class of risks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    23 min
  3. EPISODE 3

    Episode 3 — System Architecture & Trust Boundaries

    This episode explains the architecture of AI systems, breaking down their stages and components to show how trust boundaries shift across the lifecycle. Training, inference, retrieval-augmented generation (RAG), and agent frameworks are introduced as discrete but interconnected environments, each with distinct risks. For exam relevance, learners are expected to identify these architectural elements, describe where threats occur, and understand how adversaries exploit them. The discussion highlights how traditional security boundaries—such as network segmentation or user authentication—must be re-evaluated when applied to AI. Understanding these system dynamics is crucial for answering exam questions and for analyzing risks in real deployments. The applied discussion explores how architecture decisions affect overall system resilience. Examples include how training pipelines depend on secure data provenance, how inference APIs expose models to prompt injection or extraction attacks, and how agents connected to tools introduce risks of privilege escalation. The episode emphasizes practical considerations such as monitoring trust boundaries, enforcing least privilege, and mapping dependencies across cloud and on-premises environments. Troubleshooting scenarios illustrate how gaps in architecture create opportunities for attackers, reinforcing why governance of system design is as important as technical controls. By mastering these architectural concepts, learners gain both exam readiness and practical insight into AI security. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    22 min
  4. EPISODE 4

    Episode 4 — Data Lifecycle Security

    This episode examines data lifecycle security, covering the journey of data from collection and labeling through storage, retention, deletion, and provenance management. It explains why data is the foundation of AI system reliability and how its misuse or compromise undermines security objectives. For certification preparation, learners are introduced to key definitions of provenance, integrity, and retention policies, while understanding how regulatory requirements drive data governance practices. The episode situates data lifecycle security as both a technical and compliance necessity, bridging privacy, accuracy, and accountability in AI environments. The applied discussion focuses on real-world considerations such as how unvetted datasets can introduce bias or poisoning, how insecure storage creates risks of leakage, and how failure to enforce deletion or retention policies leads to regulatory violations. Best practices include documenting data sources, applying encryption at rest and in transit, and ensuring role-based access controls for labeling and preprocessing steps. Troubleshooting scenarios emphasize what happens when provenance cannot be established or when training datasets contain sensitive information without consent. For exams and professional practice, this perspective reinforces why lifecycle controls must be embedded in organizational AI policies, not treated as optional afterthoughts. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    24 min
  5. EPISODE 5

    Episode 5 — Prompt Security I: Injection & Jailbreaks

    This episode introduces prompt injection and jailbreaks as fundamental AI-specific security risks. It defines prompt injection as malicious manipulation of model inputs to alter behavior and describes jailbreaks as methods for bypassing built-in safeguards. For certification purposes, learners must understand these concepts as new categories of vulnerabilities unique to AI, distinct from but conceptually parallel to classical injection attacks. The discussion highlights why prompt injection is considered one of the highest risks in generative AI systems, as it can expose sensitive data, trigger unintended actions, or produce unsafe outputs. The applied perspective explores common techniques used in injection and jailbreak attacks, including direct user prompts, obfuscated instructions, and role-playing contexts. It also explains consequences such as data leakage, reputational damage, or compromised tool integrations. Best practices are introduced, including guardrail filters, structured outputs, and monitoring of anomalies, while emphasizing that no single measure is sufficient. Troubleshooting scenarios include how systems fail when filters are static or when output handling is overlooked. The exam-relevant takeaway is that understanding these risks prepares candidates to describe, detect, and mitigate prompt injection attacks effectively in both testing and professional settings. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    22 min
  6. EPISODE 6

    Episode 6 — Prompt Security II: Indirect & Cross-Domain Injections

    This episode examines indirect and cross-domain prompt injections, which expand the attack surface by embedding malicious instructions in external sources such as documents, websites, or email content. Unlike direct injection, where the attacker provides inputs to the model directly, these threats exploit retrieval or integration features that feed information into the AI system automatically. Learners preparing for certification exams must understand the mechanics of these attacks, which occur when contextual data bypasses normal user input validation and reaches the model unchecked. The relevance lies in recognizing how indirect vectors can compromise confidentiality, integrity, and availability in AI environments, and why they present challenges that differ from classical injection risks. The applied discussion highlights scenarios such as a retrieval-augmented generation pipeline that fetches poisoned documents or a plugin that receives hidden instructions from a web source. Best practices include validating all retrieved data, implementing layered content filters, and designing workflows with isolation boundaries between model prompts and external data. Troubleshooting considerations emphasize how reliance on untrusted content sources creates cascading failures that are difficult to diagnose. For exam preparation, candidates must be able to articulate both the theoretical definitions and the operational defenses, making indirect prompt injection an essential area of study for AI security professionals. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    22 min
  7. EPISODE 7

    Episode 7 — Content Safety vs. Security

    This episode explains the distinction and overlap between content safety and security in AI systems, a concept often emphasized in both professional practice and certification exams. Content safety refers to filtering or moderating outputs to prevent harmful or offensive material, while security focuses on protecting systems and assets from adversarial manipulation or data loss. Although they are related, treating them as identical can cause organizations to miss critical risks. Learners must grasp why an AI model can pass content safety tests yet still be vulnerable to prompt injection, data poisoning, or privacy leakage, making a dual approach essential. Understanding this distinction helps candidates evaluate scenarios in which filtering alone is insufficient to meet security objectives. In application, this distinction is illustrated by comparing moderation filters designed to block offensive text with monitoring systems aimed at detecting adversarial prompts or anomalous usage. A secure AI program requires both: safety filters to manage user experience and security defenses to protect organizational assets. Best practices include aligning safety policies with ethical and regulatory requirements, while embedding security controls across the entire AI lifecycle. Troubleshooting scenarios highlight failures when organizations rely solely on moderation layers, leaving underlying vulnerabilities unaddressed. For exam preparation, learners should be ready to differentiate safety measures from adversarial security controls and describe how the two domains reinforce each other without overlap. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    21 min
  8. EPISODE 8

    Episode 8 — Data Poisoning Attacks

    This episode introduces data poisoning as a high-priority threat in AI security, where adversaries deliberately insert malicious samples into training or fine-tuning datasets. For exam readiness, learners must understand how poisoning undermines model accuracy, introduces backdoors, or biases outputs toward attacker goals. The relevance of poisoning lies in its persistence, as compromised models may behave unpredictably long after training is complete. Definitions such as targeted versus indiscriminate poisoning, as well as the concept of trigger-based backdoors, are emphasized to ensure candidates can recognize variations in exam scenarios and real-world incidents. Applied examples include adversaries corrupting crowdsourced labeling platforms, inserting poisoned records into scraped datasets, or leveraging open repositories to distribute compromised models. Defensive strategies such as dataset provenance tracking, anomaly detection in data, and robust training algorithms are explored as ways to mitigate risk. Troubleshooting considerations focus on the difficulty of identifying poisoned samples at scale and the potential economic impact of retraining models from scratch. By mastering the definitions, implications, and defenses of data poisoning, learners develop a critical skill set for both exam performance and operational AI security. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.

    24 min

About

The AI Security & Threats Audio Course is a comprehensive, audio-first learning series focused on the risks, defenses, and governance models that define secure artificial intelligence operations today. Designed for cybersecurity professionals, AI practitioners, and certification candidates, this course translates complex technical and policy concepts into clear, practical lessons. Each episode explores a critical aspect of AI security—from prompt injection and model theft to data poisoning, adversarial attacks, and secure machine learning operations (MLOps). You’ll gain a structured understanding of how vulnerabilities emerge, how threat actors exploit them, and how robust controls can mitigate these evolving risks. The course also covers the frameworks and best practices shaping AI governance, assurance, and resilience. Learners will explore global standards and regulatory guidance, including NIST AI Risk Management Framework, ISO/IEC 23894, and emerging organizational policies around transparency, accountability, and continuous monitoring. Through practical examples and scenario-driven insights, you’ll learn how to assess model risk, integrate secure development pipelines, and implement monitoring strategies that ensure trust and compliance across the AI lifecycle. Developed by BareMetalCyber.com, the AI Security & Threats Audio Course blends foundational security knowledge with real-world application, helping you prepare for advanced certifications and leadership in the growing field of AI assurance. Explore more audio courses, textbooks, and cybersecurity resources at BareMetalCyber.com—your trusted source for structured, expert-driven learning.