Practical DevSecOps

Varun Kumar

Practical DevSecOps (a Hysn Technologies Inc. company) offers vendor-neutral and hands-on DevSecOps and Product Security training and certification programs for IT Professionals. Our online training and certifications are focused on modern areas of information security, including DevOps Security, AI Security, Cloud-Native Security, API Security, Container Security, Threat Modeling, and more. 

Episodios

  1. How Security Consultant Can Transition to AI Security Engineer in 2025

    HACE 11 H

    How Security Consultant Can Transition to AI Security Engineer in 2025

    In this episode, we explore the rapid evolution of cybersecurity and the critical rise of a new specialisation: the AI Security Engineer. As artificial intelligence advances, it not only enhances our defensive capabilities but also introduces sophisticated new attack vectors that traditional security measures can't handle. AI Security Certification - Certified AI Security Professional (CAISP) course This has created a massive demand for professionals who can secure the AI systems themselves, with an estimated 4.8 million unfilled cybersecurity positions worldwide and a significant shortage of experts skilled in both AI and cybersecurity. We'll break down the key differences between a traditional Cybersecurity Analyst and an AI Security Engineer. While an analyst typically monitors and responds to threats in existing IT systems, an AI Security Engineer proactively works to secure machine learning models throughout their lifecycle, from development to deployment.  This involves a shift from passive monitoring to actively protecting AI systems from unique threats like adversarial attacks, data poisoning, model inversion, and inference attacks. Discover the skills you already possess as a cybersecurity analyst that are directly transferable to an AI security role. Core competencies like threat analysis, incident response, and risk management are essential foundations. We'll discuss how to build upon these by adding knowledge of AI/ML concepts, programming languages like Python, and frameworks such as TensorFlow and PyTorch. For those ready to make this pivotal career move, we lay out a practical roadmap for the transition, which can take as little as three to four months with focused effort. A key resource highlighted is the Certified AI Security Professional (CAISP) course, designed to equip security professionals with hands-on experience in AI threat modelling, supply chain security, and simulating real-world attacks. The course covers critical frameworks like MITRE ATLAS and the OWASP Top 10 for LLMs and provides practical experience with over 25 hands-on exercises. Finally, we look at the incredible career opportunities this transition unlocks. AI Security Engineers are in high demand across major industries like finance, technology, government, and healthcare.  This demand is reflected in significantly higher salaries, with AI Security Engineers in the US earning between $150,000 and $250,000+, often 20-40% more than their cybersecurity analyst counterparts. With the AI security market projected to grow exponentially by 2030, this specialisation represents one of the most promising and lucrative career paths in technology today.

    21 min
  2. AI Red Teaming Guide for Beginners in 2025

    8 SEP

    AI Red Teaming Guide for Beginners in 2025

    This episode delves into the critical field of AI Red Teaming, a structured, adversarial process designed to identify vulnerabilities and weaknesses in AI systems before malicious actors can exploit them. The Certified AI Security Professional (CAISP) course is specifically designed to advance careers in this field, offering practical skills in executing attacks using MITRE ATLAS and OWASP Top 10, implementing enterprise AI security, threat modelling with STRIDE, and protecting AI development pipelines. This certification is industry-recognized and boosts an AI security career, with roles like AI Security Consultant and Red Team Lead offering high salary potential. It's an essential step in building safe, reliable, and trustworthy AI systems, preventing issues like data leakage, unfair results, and system takeovers. AI Red Teaming involves human experts and automated tools to simulate attacks. Red teamers craft special inputs like prompt injections to bypass safety controls, generate adversarial examples to confuse AI, and analyse model behaviour for consistency and safety. Common attack vectors include jailbreaking to bypass ethical guardrails, data poisoning to introduce toxic data, and model inversion to learn training data, threatening privacy and confidentiality. The importance of AI Red Teaming is highlighted through real-world examples: discovering unfair hiring programs using zip codes, manipulating healthcare AI systems to report incorrect cancer tests, and tricking autonomous vehicles by subtly altering sensor readings. It also plays a vital role in securing financial fraud detection systems, content moderation, and voice assistants/LLMs. Organisations also use it for regulatory compliance testing, adhering to standards like GDPR and the EU AI Act. Several tools and frameworks support AI Red Teaming. Mindgard, Garak, HiddenLayer, PyRIT, and Microsoft Counterfit are prominent tools. Open-source libraries like Adversarial Robustness Toolbox (ART), CleverHans, and TextAttack are also crucial. Key frameworks include the MITRE ATLAS Framework for mapping adversarial tactics and the OWASP ML Security Top 10, which outlines critical AI vulnerabilities like prompt injection and model theft. Ethical considerations are paramount, emphasising responsible disclosure, legal compliance (e.g., GDPR), harm minimisation, and thorough documentation to ensure transparency and accountability. For professionals, upskilling in AI Red Teaming is crucial as AI expands attack surfaces that traditional penetration testing cannot address. Essential skills include Python programming, machine learning knowledge, threat modelling, and adversarial thinking.

    20 min
  3. From DevSecOps to AI Security: 6,429 Pros Trained. - Here’s the Data

    30 JUL

    From DevSecOps to AI Security: 6,429 Pros Trained. - Here’s the Data

    Security isn't keeping pace with the swift advancements in AI and the explosion of cloud-native adoption. Many teams find themselves trying to mend broken pipelines with outdated AppSec playbooks, leading to significant vulnerabilities. This episode dives deep into how to bridge this critical gap, equipping you with the skills to truly defend modern systems. Ready to build these skills and stay ahead of the curve? Enroll in the Certified DevSecOps Professional and Certified AI Security Professional (CDP + CAISP) bundle today and save!  Practical DevSecOps, the platform behind these certifications, focuses on realistic, browser-based labs and a vendor-neutral curriculum. Their certifications are not just paper credentials; they require 6–24 hour practical, hands-on exams in production-like lab environments, proving real skill.  This approach has made them a trusted platform, even listed on the NICCS (National Initiative for Cybersecurity Careers and Studies) platform by CISA, reflecting their rigour and government-trusted structure. Unlike traditional training, these certifications are lifetime with no forced renewals. By combining the Certified DevSecOps Professional (CDP) and the Certified AI Security Professional (CAISP), you gain a powerful, holistic skillset that prepares you to secure both the underlying infrastructure and the cutting-edge AI systems built upon it.  As one learner states about AI security, it's "highly relevant to the challenges security experts are facing today". This is how you build real, production-grade security skills and truly become a defender in today's complex threat landscape.

    12 min
  4. MITRE ATLAS Framework - Securing AI Systems

    10 JUL

    MITRE ATLAS Framework - Securing AI Systems

    Welcome to a crucial episode where we delve into the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework, an exhaustive knowledge base designed to secure our increasingly AI-dependent world.  As AI and machine learning become foundational across healthcare, finance, and cybersecurity, protecting these systems from unique threats is paramount. Unlike MITRE ATT&CK, which focuses on traditional IT systems, MITRE ATLAS is specifically tailored for AI-specific risks, such as adversarial inputs and model theft. It provides a vital resource for understanding and defending against the unique vulnerabilities of AI systems. In this episode, we'll break down the core components of MITRE ATLAS: Tactics: These are the high-level objectives of attackers – the "why" behind their actions.  MITRE ATLAS outlines 14 distinct tactics that attackers use to compromise AI systems, including Reconnaissance (gathering information on the AI system), Initial Access (gaining entry into the AI environment), ML Model Access (entering the AI environment), Persistence (establishing continuous access), Privilege Escalation (gaining more effective controls), and Defense Evasion (bypassing security).  Other tactics include Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, Impact, and ML Attack Staging. Techniques: These are the specific methods and actions adversaries use to carry out their tactics – the "how". We'll explore critical techniques like Data Poisoning, where malicious data is introduced into training sets to alter model behavior; Prompt Injection, manipulating language models to produce harmful outputs; and Model Inversion, which involves recovering target data from an AI model.  Other key techniques to watch out for include Model Extraction, reverse-engineering or stealing proprietary AI models, and Adversarial Examples, subtly altered inputs that trick AI models into making errors. We'll also examine real-world case studies, such as the Evasion of a Machine Learning Malware Scanner (Cylance Bypass), where attackers used reconnaissance and adversarial input crafting to bypass detection by studying public documentation and model APIs.  Another notable example is the OpenAI vs. DeepSeek Model Distillation Controversy, highlighting the risks of model extraction and intellectual property theft by extensively querying the target model. To safeguard AI systems, MITRE ATLAS emphasizes robust security controls and best practices. Key mitigation strategies include: Securing Training Pipelines to protect data integrity and restrict access to prevent poisoning or extraction attempts. Continuously Monitoring Model Outputs for anomalies indicating adversarial manipulation or extraction attempts. Validating Data Integrity through regular audits of datasets and model behaviour to detect unexpected changes or suspicious activity. Join us as we discuss how the MITRE ATLAS Framework transforms AI security, providing practical guidance to defend against the evolving threat landscape.  You'll learn why it's crucial for every organization to embrace this framework, contribute to threat intelligence, and engage with the wider AI security community to secure AI as a tool of innovation, not exploitation.  The Certified AI Security Professional Course comprehensively covers the MITRE ATLAS Framework, offering practical experience to implement these defences effectively.

    17 min

Acerca de

Practical DevSecOps (a Hysn Technologies Inc. company) offers vendor-neutral and hands-on DevSecOps and Product Security training and certification programs for IT Professionals. Our online training and certifications are focused on modern areas of information security, including DevOps Security, AI Security, Cloud-Native Security, API Security, Container Security, Threat Modeling, and more.