Practical DevSecOps

Varun Kumar

Practical DevSecOps (a Hysn Technologies Inc. company) offers vendor-neutral and hands-on DevSecOps and Product Security training and certification programs for IT Professionals. Our online training and certifications are focused on modern areas of information security, including DevOps Security, AI Security, Cloud-Native Security, API Security, Container Security, Threat Modeling, and more. 

Episodes

  1. From DevSecOps to AI Security: 6,429 Pros Trained. Here's the Data

    Jul 30

    From DevSecOps to AI Security: 6,429 Pros Trained. Here's the Data

    Security isn't keeping pace with the swift advancements in AI and the explosion of cloud-native adoption. Many teams find themselves trying to mend broken pipelines with outdated AppSec playbooks, leaving significant vulnerabilities. This episode dives deep into how to bridge this critical gap, equipping you with the skills to truly defend modern systems. Ready to build these skills and stay ahead of the curve? Enroll in the Certified DevSecOps Professional and Certified AI Security Professional (CDP + CAISP) bundle today and save!

    Practical DevSecOps, the platform behind these certifications, focuses on realistic, browser-based labs and a vendor-neutral curriculum. Its certifications are not just paper credentials; they require 6–24 hour practical, hands-on exams in production-like lab environments, proving real skill. This approach has made it a trusted platform, listed on the NICCS (National Initiative for Cybersecurity Careers and Studies) platform by CISA, reflecting its rigor and government-trusted structure. Unlike traditional training, these certifications are valid for life, with no forced renewals.

    By combining the Certified DevSecOps Professional (CDP) and the Certified AI Security Professional (CAISP), you gain a powerful, holistic skill set that prepares you to secure both the underlying infrastructure and the cutting-edge AI systems built on it. As one learner says of AI security, it is "highly relevant to the challenges security experts are facing today". This is how you build real, production-grade security skills and become a true defender in today's complex threat landscape.

    12 min
  2. MITRE ATLAS Framework - Securing AI Systems

    Jul 10

    MITRE ATLAS Framework - Securing AI Systems

    Welcome to a crucial episode where we delve into the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework, a comprehensive knowledge base designed to secure our increasingly AI-dependent world. As AI and machine learning become foundational across healthcare, finance, and cybersecurity, protecting these systems from unique threats is paramount. Unlike MITRE ATT&CK, which focuses on traditional IT systems, MITRE ATLAS is tailored to AI-specific risks such as adversarial inputs and model theft, making it a vital resource for understanding and defending against the unique vulnerabilities of AI systems.

    In this episode, we break down the core components of MITRE ATLAS:

    Tactics: the high-level objectives of attackers, the "why" behind their actions. MITRE ATLAS outlines 14 distinct tactics used to compromise AI systems, including Reconnaissance (gathering information on the AI system), Initial Access (gaining entry into the AI environment), ML Model Access (gaining access to the machine learning model itself), Persistence (establishing continuous access), Privilege Escalation (gaining higher-level controls), and Defense Evasion (bypassing security). The remaining tactics are Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, Impact, and ML Attack Staging.

    Techniques: the specific methods and actions adversaries use to carry out those tactics, the "how". We explore critical techniques such as Data Poisoning, introducing malicious data into training sets to alter model behavior; Prompt Injection, manipulating language models into producing harmful outputs; and Model Inversion, recovering the data a model was trained on. Other key techniques include Model Extraction, reverse-engineering or stealing proprietary AI models, and Adversarial Examples, subtly altered inputs that trick models into making errors.

    We also examine real-world case studies, such as the evasion of a machine learning malware scanner (the Cylance bypass), in which attackers used reconnaissance and adversarial input crafting, studying public documentation and model APIs, to bypass detection. Another notable example is the OpenAI vs. DeepSeek model distillation controversy, which highlights the risks of model extraction and intellectual property theft through extensive querying of a target model.

    To safeguard AI systems, MITRE ATLAS emphasizes robust security controls and best practices. Key mitigation strategies include securing training pipelines to protect data integrity and restrict access, continuously monitoring model outputs for anomalies that indicate adversarial manipulation or extraction attempts, and validating data integrity through regular audits of datasets and model behavior.

    Join us as we discuss how the MITRE ATLAS framework transforms AI security, providing practical guidance against the evolving threat landscape. You'll learn why every organization should embrace the framework, contribute threat intelligence, and engage with the wider AI security community so that AI remains a tool of innovation, not exploitation. The Certified AI Security Professional course covers the MITRE ATLAS framework comprehensively, offering practical experience implementing these defenses.
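To make the Data Poisoning technique described above concrete, here is a minimal, self-contained sketch (our illustration, not from the episode): a toy nearest-centroid classifier whose training set an attacker salts with mislabeled outlier points, dragging one class centroid out of position and collapsing accuracy. All data, labels, and numbers are invented for demonstration.

```python
import random

random.seed(7)

# Toy 1-D dataset: class 0 clusters near 0.0, class 1 clusters near 10.0.
train = [(random.gauss(0.0, 1.0), 0) for _ in range(50)] + \
        [(random.gauss(10.0, 1.0), 1) for _ in range(50)]
test = [(random.gauss(0.0, 1.0), 0) for _ in range(50)] + \
       [(random.gauss(10.0, 1.0), 1) for _ in range(50)]

def centroids(data):
    """Per-class mean of the training points (a trivial 'model')."""
    out = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        out[label] = sum(xs) / len(xs)
    return out

def accuracy(model, data):
    """Classify each point by its nearest class centroid."""
    hits = sum(1 for x, y in data
               if min(model, key=lambda c: abs(x - model[c])) == y)
    return hits / len(data)

# Poisoning: the attacker injects mislabeled outliers into the training
# set, dragging the class-0 centroid past the class-1 centroid.
poison = [(60.0, 0)] * 20
model_clean = centroids(train)
model_poisoned = centroids(train + poison)

print(f"clean accuracy:    {accuracy(model_clean, test):.2f}")
print(f"poisoned accuracy: {accuracy(model_poisoned, test):.2f}")
```

Real poisoning attacks are subtler, but the failure mode is the same: the ATLAS mitigations mentioned in this episode (securing training pipelines, auditing datasets) exist precisely to keep such points out of the training set.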

    17 min
  3. Best AI Security Books in 2025

    Jun 20

    Best AI Security Books in 2025

    Are you ready to face the escalating threat of AI attacks? AI systems are being attacked every single day, and hackers are using AI tools to break into major banks and steal millions. It's a critical time for anyone in tech or cybersecurity to understand how to fight back.

    In this episode, we delve into why AI security is more crucial than ever in 2025. We reveal that 74% of IT security professionals say AI-powered threats are seriously hurting their companies, and a staggering 93% of businesses expect to face AI attacks daily this year. These aren't minor incidents: last year, 73% of organizations were hit by AI-related security breaches, costing an average of $4.8 million each and taking an alarming 290 days to detect.

    The good news? Companies are desperately seeking people with AI security expertise, creating excellent opportunities for those who are prepared. We discuss how AI security books serve as your secret weapon, offering proven strategies from security experts who have battled real AI attacks. The top resources we touch on cover:

    - Understanding and protecting against Large Language Model (LLM) security threats
    - Practical applications of LLMs for building smart systems
    - Developing your own LLMs from scratch
    - Defending against sophisticated adversarial AI attacks, including prompt injection and model poisoning
    - Navigating AI data privacy, ethics, and regulatory compliance
    - Advanced techniques like AI red teaming to systematically assess and enhance security

    Whether you're a beginner learning the basics or an expert seeking cutting-edge strategies, finding the right learning path in AI cybersecurity is essential. Don't wait: AI threats are growing stronger every day. Tune in to discover how to upskill and become an AI security expert, building solid skills step by step for career success. Ready to go further?

    Our Certified AI Security Professional course offers an in-depth exploration of AI risks, combining the best book knowledge with hands-on practice so you can work on real attacks against AI systems and learn directly from industry experts. Enroll today and upskill with the Certified AI Security Professional certification. For a limited time, save 15% on the course; buy now and start whenever you're ready!

    13 min
  4. Threat Modeling for Medtech Industry

    Jun 18

    Threat Modeling for Medtech Industry

    Join us for an insightful episode as we delve into the critical realm of product security in the Medtech industry. The digital revolution is transforming patient care, but it also introduces significant security risks to medical devices. We explore a complex security environment in which devices like pacemakers and diagnostic systems are increasingly connected, making them targets for unauthorized access, data theft, and operational manipulation. Breaches can have dire consequences: endangering patient health, damaging manufacturers' reputations, incurring financial losses, and triggering stricter regulatory scrutiny.

    Learn which types of medical devices are most susceptible to cyber threats, including those with connectivity, remote-access features, legacy systems, sensitive data storage (PHI), and life-sustaining functions. Our focus then shifts to threat modeling, a crucial, proactive process for enhancing medical device security. We uncover its benefits, such as identifying and addressing risks early, boosting device resilience against cyberattacks, and ensuring regulatory adherence. We also touch on the FDA's recent policy update, transitioning from the Quality System Regulation (QSR) to the Quality Management System Regulation (QMSR), which incorporates ISO 13485:2016 and places greater emphasis on risk management throughout the device lifecycle.

    Dive deep into the threat modeling techniques that help manufacturers fortify their products:

    - Agile Threat Modeling: integrating security with rapid development cycles through continuous assessments aligned with ongoing development.
    - Goal-Centric Threat Modeling: prioritizing protection of critical assets and business objectives based on impact on functionality and compliance requirements.
    - Library-Centric Threat Modeling: using pre-compiled lists of known threats and vulnerabilities pertinent to medical devices for standardized, scalable, and efficient risk assessment.

    Finally, we discuss how specialized training, such as the Practical DevSecOps Certified Threat Modeling Professional (CTMP) course, equips Medtech manufacturers with the essential skills to proactively identify and address security vulnerabilities. The training focuses on real-world applications and scenarios, ensuring continuous security assessment and compliance with stringent regulatory standards from design to deployment. Tune in to understand why threat modeling is not just a best practice but an essential safeguard for patient well-being and integrity in the digital healthcare landscape.
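The library-centric approach described in this episode can be sketched in a few lines: match a device's profile against a pre-compiled library of known threats. This is a hedged illustration only; the library entries, field names, and device profile below are invented, not an official Medtech threat catalogue.

```python
# Minimal library-centric threat modeling sketch: each library entry
# applies when a device profile has the named characteristic set.
THREAT_LIBRARY = [
    {"requires": "connectivity", "threat": "Unauthorized remote access over the network interface"},
    {"requires": "connectivity", "threat": "Operational manipulation via spoofed commands"},
    {"requires": "stores_phi", "threat": "Theft of protected health information (PHI)"},
    {"requires": "legacy_os", "threat": "Exploitation of unpatched legacy components"},
    {"requires": "life_sustaining", "threat": "Denial of service endangering patient safety"},
]

def assess(device):
    """Return the library threats that apply to this device profile."""
    return [e["threat"] for e in THREAT_LIBRARY if device.get(e["requires"])]

# Hypothetical device profile for illustration.
infusion_pump = {
    "name": "Connected infusion pump (hypothetical)",
    "connectivity": True,
    "stores_phi": True,
    "legacy_os": False,
    "life_sustaining": True,
}

for threat in assess(infusion_pump):
    print("-", threat)
```

The payoff of the approach is exactly this reusability: the same library can be run against every device in a product line, which is why the episode highlights its scalability and efficiency.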

    5 min
  5. AI Security Frameworks for Enterprises

    Jun 12

    AI Security Frameworks for Enterprises

    Welcome to "Securing the Future," the podcast dedicated to navigating the complex world of AI security. In this episode, we unpack the vital role of AI security frameworks, which act as instruction manuals for safeguarding AI systems in multinational corporations. These frameworks provide uniform guidelines for implementing security measures across nations with varying legal requirements, from Asia-Pacific to Europe and North America. We explore how these blueprints help organizations find weak spots before bad actors do, establish consistent rules, meet laws and regulations, and ultimately build trust with AI users. Crucially, they enable compliance and reduce implementation costs through standardization.

    This episode delves into four leading frameworks:

    NIST AI Risk Management Framework (AI RMF): We break down its comprehensive, lifecycle-wide approach, structured around four core functions: Govern, Map, Measure, and Manage. This widely recognized framework is often recommended for beginners thanks to its clear steps and available resources. Its risk-based approach adapts to specific sectors such as healthcare and banking, forming the backbone of their tailored safety frameworks.

    Microsoft's AI Security Framework: This framework focuses on operationalizing AI security best practices across five main areas: Security, Privacy, Fairness, Transparency, and Accountability. While it integrates with Microsoft tools, its principles are broadly applicable for ensuring AI is used correctly and protected.

    MITRE ATLAS Framework: Discover this specialized framework, which catalogues real-world AI threats and attack techniques. We discuss attack types like data poisoning, evasion attacks, model stealing, and privacy attacks, which represent novel attacks on AI systems. ATLAS is invaluable for threat modeling and red teaming, providing insight into adversarial machine learning techniques.

    Databricks AI Security Framework (DASF) 2.0: Learn about this framework, which identifies 62 risks and 64 controls drawn from real use cases. Built on standards like NIST and MITRE, DASF is platform-agnostic, so its controls can be mapped across cloud and data-platform providers. It critically distinguishes traditional cybersecurity risks from novel AI-specific attacks like adversarial machine learning, and it bridges business, data, and security teams with practical tools.

    We discuss how organizations can combine parts of different frameworks to build comprehensive protection spanning strategic risks, governance, and technical controls. Case studies from healthcare and banking illustrate how these conceptual frameworks are tailored to meet strict government rules and sector-specific challenges, ensuring robust risk management and governance. Ultimately, AI security is an ongoing journey, not a one-off project; the key takeaway is to start small and build up your security over time. For more information, read our "Best AI Security Frameworks for Enterprises" blog.

    6 min
  6. Global Banks Slash Security Costs 5X with Threat Model Training

    Jun 2

    Global Banks Slash Security Costs 5X with Threat Model Training

    Discover how a global financial institution transformed its security posture and achieved massive cost savings through targeted threat modeling training. Facing inconsistent practices, difficulty scaling training across 50 countries, and rapidly evolving threats, this bank needed a new approach beyond infrequent, in-person workshops.

    Their solution? The Certified Threat Modeling Professional (CTMP) course from Practical DevSecOps. The program offered a practical learning approach with extensive hands-on labs simulating real banking scenarios and crucial 24/7 expert support via Mattermost. It covered key methodologies like STRIDE and PASTA and integrated threat modeling into their DevSecOps pipeline. Structured, role-specific training ensured everyone, from developers to core system engineers, received relevant education.

    The results were remarkable:

    - $0.5 million saved annually on training and logistics
    - An estimated $10 million reduction in potential breach costs
    - 40% less time spent on threat modeling sessions
    - 30% more potential threats mitigated in the design phase
    - 45% reduction in high-severity production vulnerabilities
    - 150% increase in systems undergoing threat modeling
    - 100% compliance with security assessment regulations

    This success story highlights the power of a scalable, practical, and continuously supported security education program like the CTMP course in driving a cultural shift, embedding threat modeling into a global bank's DNA, and truly embracing shift-left culture. Learn how practical training, hands-on experience, and expert guidance can deliver significant efficiency gains, cost reductions, and enhanced security in complex financial environments.

    12 min
