Professional Courses & Training

Veljko Massimo Plavsic

This is the right place if you are looking for professionally created podcast courses and training, with 100% success guaranteed

  1. ISO/PAS 8800 Lesson 10: Strategy for Standards Compliance

    APR 10

    ISO/PAS 8800 Lesson 10: Strategy for Standards Compliance

    This final lesson provides actionable insights for integrating ISO/PAS 8800 requirements into professional engineering practices.

    Strategy for Standards Compliance in ISO/PAS 8800

    Introduction
    Developing Artificial Intelligence (AI) for automotive applications requires a paradigm shift from traditional software development. ISO/PAS 8800 provides a dedicated framework to address the safety of road vehicles utilizing AI. This lesson focuses on the strategic approach organizations must take to ensure compliance while maintaining innovation and agility.

    1. Integration with Existing Standards
    A successful compliance strategy begins with understanding that ISO/PAS 8800 does not exist in a vacuum. It must be integrated with:
    - ISO 26262 (Functional Safety): addresses hardware and software malfunctions.
    - ISO 21448 (SOTIF): addresses safety of the intended functionality, particularly relevant for the probabilistic nature of AI.

    2. The Four Pillars of a Compliance Strategy
    A. Organizational Readiness: Compliance starts with corporate culture. Organizations must establish clear roles, such as the AI Safety Manager, and ensure that cross-functional teams (Data Science, Safety Engineering, and Systems Engineering) speak a common language.
    B. Data Governance and Lifecycle Management: Unlike traditional code, AI performance is dictated by data. A compliance strategy must include robust data lineage, tracking the provenance, cleaning, and labeling of training data to prevent bias and ensure representativeness.
    C. The Iterative Safety Case: Instead of a static safety manual, ISO/PAS 8800 compliance demands a dynamic safety case: a structured argument, supported by evidence, that the AI system is safe for its intended use. It should be updated at every stage of the Machine Learning (ML) lifecycle.
    D. Toolchain Qualification: The tools used to train and validate AI models (e.g., simulators, labeling tools) must be qualified. If the tool fails, can it introduce a safety risk? This question guides the level of rigor required for tool qualification.

    3. Gap Analysis and Roadmapping
    To achieve compliance, organizations should follow these steps:
    - Baseline Assessment: evaluate current AI development processes against ISO/PAS 8800 requirements.
    - Identify Gaps: pinpoint where documentation or verification methods fall short.
    - Prioritization: address high-risk areas first, such as model robustness and out-of-distribution detection.
    - Continuous Monitoring: implement post-deployment monitoring to ensure the AI stays within safe operational boundaries as it encounters real-world data.

    Conclusion
    Compliance with ISO/PAS 8800 is not a 'check-the-box' exercise but a continuous commitment to safety-by-design. By aligning AI development with established automotive safety principles, manufacturers can mitigate risks and build trust in autonomous technologies.
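    The continuous-monitoring step above can be sketched as a simple out-of-distribution check. This is a minimal, hypothetical illustration, not a method prescribed by the standard; the 3-sigma rule and the brightness values are assumptions for the example.

```python
from statistics import mean, stdev

def monitor_inputs(training_values, live_values, k=3.0):
    """Flag live readings that fall outside the range seen during training.

    A value more than k standard deviations from the training mean is treated
    as out-of-distribution and would trigger the compliance feedback loop.
    """
    mu = mean(training_values)
    sigma = stdev(training_values)
    return [v for v in live_values if abs(v - mu) > k * sigma]

# Hypothetical sensor brightness values (training campaign vs. on-road)
train = [50, 55, 60, 52, 58, 54, 57, 53]
live = [56, 59, 300, 51]          # 300 is far outside the training range
flagged = monitor_inputs(train, live)
```

    In a real deployment the statistics would be computed per feature over the whole training set, and flagged inputs would be logged as evidence for the safety case.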

    1 min
  2. ISO/PAS 8800 Lesson 9: Lifecycle Management

    APR 10

    ISO/PAS 8800 Lesson 9: Lifecycle Management

    Lifecycle Safety Management for AI in Road Vehicles

    Overview
    ISO/PAS 8800 provides a rigorous framework for managing the safety of Artificial Intelligence (AI) throughout its entire lifecycle. Traditional safety standards, such as ISO 26262 (Functional Safety), focus on preventing failures in electrical and electronic systems. ISO/PAS 8800 extends this by addressing the non-deterministic nature of AI and machine learning (ML), focusing on the Safety of the Intended Functionality (SOTIF) as outlined in ISO 21448.

    Integration with Existing Standards
    Lifecycle management under ISO/PAS 8800 is not a standalone process; it is integrated with:
    1. ISO 26262: to ensure that the hardware and software executing the AI models are functionally safe.
    2. ISO 21448 (SOTIF): to mitigate risks arising from performance limitations and unexpected environmental conditions.

    Key Phases of the AI Safety Lifecycle
    1. Concept Phase: The Operational Design Domain (ODD) is defined. Developers must specify the environment in which the AI is expected to operate safely (e.g., clear weather, specific speed limits). Safety goals are established based on the potential impact of AI-driven decisions.
    2. Development and Data Management: This is a unique addition to the automotive lifecycle. It involves:
    - Data Collection: ensuring the data is representative of the ODD.
    - Data Labeling: high-quality annotation to avoid training errors.
    - Model Training: implementing safeguards against overfitting and bias.
    3. Verification and Validation (V&V): Verification ensures the model meets technical specifications, while validation ensures it meets the safety goals within the ODD. This often involves massive-scale simulation and physical road testing.
    4. Operation and Post-Market Surveillance: AI systems can exhibit "performance drift" over time. ISO/PAS 8800 mandates continuous monitoring once the vehicle is on the road. If a safety-critical anomaly is detected, a feedback loop triggers a return to the development phase for retraining or model adjustment.

    Roles and Responsibilities
    Effective lifecycle management requires a cross-functional team, including Data Scientists, Safety Engineers, and Domain Experts, to ensure that safety requirements are maintained across all hand-overs.
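    The post-market surveillance feedback loop described above can be illustrated with a toy drift check. The baseline accuracy, tolerance, and monthly field figures below are hypothetical; a production monitor would use safety-relevant metrics agreed in the safety case, not plain accuracy.

```python
def detect_drift(baseline_accuracy, recent_accuracies, tolerance=0.05):
    """Return True when average field performance has drifted below the
    performance demonstrated at release by more than the allowed tolerance,
    signalling that the development-phase feedback loop should be triggered.
    """
    recent = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent) > tolerance

# Hypothetical monitoring data: accuracy at release vs. three monthly estimates
release_acc = 0.97
field = [0.95, 0.90, 0.85]        # gradual degradation in the field
needs_retraining = detect_drift(release_acc, field)
```

    When `needs_retraining` is true, the lifecycle returns to data collection and retraining, as the lesson describes.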

    1 min
  3. ISO/PAS 8800. Lesson 8: Model Verification and Validation in ISO/PAS 8800

    APR 10

    ISO/PAS 8800. Lesson 8: Model Verification and Validation in ISO/PAS 8800

    1. Introduction
    In the context of ISO/PAS 8800 (Road vehicles — Safety and artificial intelligence), Verification and Validation (V&V) are the cornerstones of ensuring that AI-based systems are safe for public roads. While traditional software follows deterministic paths, AI models are probabilistic and data-dependent, requiring a shift in how we confirm their correctness and safety.

    2. Defining Model Verification
    Verification asks: "Did we build the system right?" It involves checking the AI model against the technical requirements and design specifications defined in the early stages of development. Under ISO/PAS 8800, verification includes:
    - Formal Methods: using mathematical proofs to verify that certain safety properties are always maintained by the model.
    - Robustness Testing: measuring how the model handles small, intentional perturbations in input data, often referred to as adversarial robustness.
    - Unit and Integration Testing: testing individual components of the AI pipeline (e.g., pre-processing scripts or specific neural network layers) to ensure they function as intended.
    - Code and Model Audits: reviewing the architecture and hyperparameters to ensure they align with the safety goals.

    3. Defining Model Validation
    Validation asks: "Did we build the right system?" This process ensures the model meets the actual needs of the user and remains safe within its intended Operational Design Domain (ODD). Key aspects include:
    - ODD-Based Testing: validating performance across diverse conditions such as varying weather, lighting, and geographic locations.
    - Edge Case Analysis: identifying and testing "long-tail" scenarios that are rare but safety-critical.
    - Safety of the Intended Functionality (SOTIF): aligning with ISO 21448 to ensure that functional insufficiencies do not lead to unreasonable risk.
    - Performance Metrics: evaluating the model using safety-relevant KPIs such as false negative rates in pedestrian detection.

    4. The Integrated V-Model for AI
    ISO/PAS 8800 adapts the classic V-Model to account for the iterative nature of machine learning. This includes a feedback loop where validation failures in the field trigger a re-verification of the training data and model architecture. Verification ensures the model is statistically sound, while validation ensures that statistical soundness translates to real-world safety.
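    As a concrete instance of a safety-relevant KPI mentioned above, a false negative rate for a binary "pedestrian present" label could be computed as follows. The labels and the pass threshold are illustrative assumptions, not values taken from the standard.

```python
def false_negative_rate(ground_truth, predictions):
    """False negative rate: missed pedestrians divided by all frames that
    actually contain a pedestrian (1 = present, 0 = absent). A safety goal
    would set a maximum acceptable value for this metric.
    """
    positives = [(gt, p) for gt, p in zip(ground_truth, predictions) if gt == 1]
    misses = sum(1 for gt, p in positives if p == 0)
    return misses / len(positives)

# Hypothetical validation run over seven frames
gt   = [1, 1, 0, 1, 0, 1, 1]
pred = [1, 0, 0, 1, 0, 1, 1]      # one pedestrian missed
fnr = false_negative_rate(gt, pred)
```

    Unlike plain accuracy, this metric isolates the failure mode that matters most for the safety case: a pedestrian that is present but not detected.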

    1 min
  4. ISO/PAS 8800. Lesson 7: Data Integrity and Quality in ISO/PAS 8800

    APR 10

    ISO/PAS 8800. Lesson 7: Data Integrity and Quality in ISO/PAS 8800

    1. Introduction
    In the realm of road vehicles, the safety of AI-based systems is inextricably linked to the data used to develop them. ISO/PAS 8800 (Road Vehicles — Safety and Artificial Intelligence) provides a framework for ensuring that data integrity and quality are maintained throughout the AI lifecycle. Unlike traditional software, where logic is explicitly coded, AI systems 'learn' from data, making the quality of that data a primary safety concern.

    2. Key Definitions
    - Data Integrity: the assurance that data remains accurate, complete, and consistent throughout its entire lifecycle, from collection to decommissioning.
    - Data Quality: the fitness of data for its intended purpose in training, validating, and testing AI models within a specific Operational Design Domain (ODD).

    3. Data Quality Dimensions under ISO/PAS 8800
    To comply with safety standards, data must be evaluated against several dimensions:
    - Representativeness: the data must cover the full range of scenarios the vehicle will encounter in its ODD, including rare 'corner cases'.
    - Accuracy and Precision: the sensors used for collection must be calibrated, and the ground-truth labels must be verified for correctness.
    - Completeness: there should be no missing values or gaps in the datasets that could lead to biased or unpredictable AI behavior.
    - Timeliness: for dynamic environments, data must reflect current road conditions, signage standards, and traffic laws.

    4. The Data Lifecycle and Safety
    ISO/PAS 8800 emphasizes a rigorous data pipeline:
    - Acquisition: capturing raw data using high-fidelity sensors.
    - Preparation & Cleaning: removing noise and handling outliers without stripping away critical safety information.
    - Labeling/Annotation: ensuring that human or automated labelers follow strict guidelines to avoid 'label noise'.
    - Augmentation: using synthetic data to fill gaps in real-world data, provided the synthetic data is physically realistic.

    5. Mitigating Bias
    Bias in data is a significant safety risk. If a training set lacks diversity (e.g., only contains daytime driving), the AI may fail in low-light conditions. ISO/PAS 8800 requires documented processes to identify and mitigate technical and cognitive biases to ensure the Intended Functionality (SOTIF) is safe.

    6. Summary
    Data integrity is not just a technical requirement; it is a safety-critical pillar. High-quality data ensures that the resulting AI model is robust, reliable, and capable of operating safely in complex automotive environments.
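    The completeness and representativeness dimensions discussed above lend themselves to simple automated checks. This is a hypothetical sketch: the field names (`lighting`, `weather`) and the required ODD conditions are invented for illustration.

```python
def check_completeness(records, required_fields):
    """Return indices of records with missing (None) values in required fields."""
    return [i for i, r in enumerate(records)
            if any(r.get(f) is None for f in required_fields)]

def check_representativeness(records, field, required_values):
    """Return required ODD conditions that are entirely absent from the dataset."""
    present = {r[field] for r in records if r.get(field) is not None}
    return sorted(required_values - present)

# Hypothetical frame metadata from a data collection campaign
frames = [
    {"lighting": "day", "weather": "clear"},
    {"lighting": "day", "weather": "rain"},
    {"lighting": None,  "weather": "clear"},   # incomplete record
]
incomplete = check_completeness(frames, ["lighting", "weather"])
missing_conditions = check_representativeness(frames, "lighting", {"day", "night"})
```

    A gap such as a missing "night" condition is exactly the kind of bias risk the lesson warns about: an AI trained only on daytime data may fail in low-light conditions.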

    1 min
  5. ISO/PAS 8800 Lesson 6: Safety-Related AI Development under ISO/PAS 8800

    APR 10

    ISO/PAS 8800 Lesson 6: Safety-Related AI Development under ISO/PAS 8800

    1. Introduction
    As artificial intelligence (AI) and machine learning (ML) become integral to automotive systems—ranging from Advanced Driver Assistance Systems (ADAS) to Automated Driving Systems (ADS)—traditional safety standards like ISO 26262 and ISO 21448 (SOTIF) require specialized extensions. ISO/PAS 8800 provides the necessary framework to address the unique risks associated with AI, specifically focusing on the non-deterministic nature of machine learning and the safety implications of data-driven development.

    2. The AI Development Lifecycle for Safety
    Safety-related AI development shifts the focus from manual coding to data curation and model training. The lifecycle includes:
    - Requirements Definition: defining the Safety Goals and Functional Safety Requirements (FSRs) that the AI component must satisfy.
    - Data Collection and Management: ensuring that the data used for training, validation, and testing is representative, complete, and free from safety-critical biases.
    - Model Design and Training: selecting appropriate architectures (e.g., Convolutional Neural Networks) and training regimes that prioritize robustness over mere accuracy.
    - Verification and Validation (V&V): implementing rigorous testing protocols, including formal methods, simulation, and real-world testing, to ensure the model behaves predictably in its Operational Design Domain (ODD).

    3. Key Concepts in ISO/PAS 8800 Development
    3.1 Data Quality and Integrity
    In AI safety, data is equivalent to source code. ISO/PAS 8800 emphasizes:
    - Representativeness: does the data cover all edge cases in the ODD?
    - Labeling Accuracy: ensuring that the ground truth used for training is verified and error-free.
    - Independence: keeping training, validation, and test datasets strictly separated to prevent overfitting and biased results.
    3.2 Robustness and Resilience
    Safety-related AI must be robust against perturbations. This involves testing for:
    - Adversarial Robustness: protection against small, intentional changes to input that could cause a misclassification.
    - Distribution Shift: how the model handles data that differs slightly from its training set (e.g., different weather conditions or sensor noise).

    4. Integration with ISO 26262 and ISO 21448
    ISO/PAS 8800 acts as a bridge. It leverages the functional safety processes of ISO 26262 to manage hardware and system failures, while utilizing ISO 21448 (SOTIF) principles to address performance limitations and situational awareness errors inherent in AI systems.

    5. Conclusion
    Developing AI for safety-critical applications requires a fundamental shift in engineering mindset. By following the structured approach in ISO/PAS 8800, developers can build a compelling safety case that demonstrates the AI component is fit for use on public roads.
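    The dataset-independence requirement mentioned above can be enforced in several ways; one common technique (not prescribed by the standard) is a deterministic, hash-based split on a stable sample identifier, sketched here with hypothetical IDs and split percentages.

```python
import hashlib

def assign_split(sample_id, val_pct=10, test_pct=10):
    """Deterministically assign a sample to train/val/test by hashing its ID.

    Hashing a stable ID (rather than random shuffling) guarantees the same
    sample can never drift between sets across re-runs, which keeps the
    three datasets strictly separated over the whole project lifetime.
    """
    bucket = int(hashlib.sha256(sample_id.encode()).hexdigest(), 16) % 100
    if bucket < test_pct:
        return "test"
    if bucket < test_pct + val_pct:
        return "val"
    return "train"

# Hypothetical frame IDs; the assignment is stable across runs
splits = {assign_split(f"frame_{i:05d}") for i in range(1000)}
stable = assign_split("frame_00042") == assign_split("frame_00042")
```

    With random shuffling, a re-run after new data arrives can silently move training samples into the test set; the hash-based scheme avoids that leakage by construction.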

    1 min
  6. ISO/PAS 8800. Lesson 5: Hazard Analysis and Risk Assessment (HARA)

    APR 10

    ISO/PAS 8800. Lesson 5: Hazard Analysis and Risk Assessment (HARA)

    Lesson: Hazard Analysis and Risk Assessment (HARA) in ISO/PAS 8800

    1. Introduction to AI-Specific HARA
    Hazard Analysis and Risk Assessment (HARA) is a foundational safety activity in the automotive industry, traditionally governed by ISO 26262. However, ISO/PAS 8800 extends this process to address the unique characteristics of Artificial Intelligence (AI) and Machine Learning (ML). While traditional HARA focuses on malfunctioning behavior (e.g., a short circuit), AI HARA must also consider hazards arising from performance limitations and environmental triggers, aligning closely with ISO 21448 (SOTIF).

    2. The Item Definition and ODD Analysis
    Before hazards can be identified, the 'Item' must be clearly defined. In the context of AI, this includes the intended functionality and a rigorous definition of the Operational Design Domain (ODD). The ODD specifies the external conditions—such as road types, weather, and lighting—under which the AI system is designed to operate safely. Any operation outside these boundaries is considered a 'limit' that the system must handle safely.

    3. Hazard Identification
    Hazards in ISO/PAS 8800 are categorized into two primary types:
    1. Malfunctioning Behavior: failures caused by errors in the software or hardware execution.
    2. Performance Limitations: situations where the AI model performs as programmed but fails to meet safety needs (e.g., a perception system failing to detect a specific type of obstacle due to a lack of training data).
    The analysis explores how these behaviors lead to hazardous events in specific driving scenarios.

    4. Risk Estimation: S, E, and C
    Once hazards are identified, the risk is estimated using three parameters:
    - Severity (S): the intensity of potential harm to passengers or road users (S0 to S3).
    - Exposure (E): the probability of the vehicle being in a scenario where the hazard could occur (E0 to E4).
    - Controllability (C): the ability of the driver or the system to prevent harm once the hazard has occurred (C0 to C3).
    These parameters are used to determine the Automotive Safety Integrity Level (ASIL), ranging from QM (Quality Management) to ASIL D.

    5. Deriving Safety Goals
    The final output of the HARA is a set of Safety Goals. These are high-level safety requirements assigned to the system to mitigate the identified risks. For AI systems, safety goals often involve specific performance metrics, such as minimum detection probabilities or maximum latency requirements for safety-critical decisions.
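    The mapping from S, E, and C to an ASIL can be sketched with the widely cited shorthand for the ISO 26262 risk graph (Table 4 of ISO 26262-3): if any parameter is 0 the result is QM; otherwise a sum of 7, 8, 9, or 10 maps to ASIL A, B, C, or D. This is a mnemonic for the published table, not text from ISO/PAS 8800 itself.

```python
def determine_asil(S, E, C):
    """Map Severity (0-3), Exposure (0-4), Controllability (0-3) to an ASIL.

    Shorthand for the ISO 26262 risk graph: any zero parameter yields QM;
    otherwise S + E + C of 7, 8, 9, 10 maps to ASIL A, B, C, D, and any
    lower sum also yields QM.
    """
    if S == 0 or E == 0 or C == 0:
        return "QM"
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(S + E + C, "QM")

# Worst case: fatal severity, high exposure, uncontrollable
worst = determine_asil(3, 4, 3)
```

    For an AI perception hazard, the resulting ASIL then drives how stringent the derived safety goal (e.g., a minimum detection probability) must be.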

    1 min
  7. ISO/PAS 8800. Lesson 4: Key AI Safety Concepts

    APR 10

    ISO/PAS 8800. Lesson 4: Key AI Safety Concepts

    Key AI Safety Concepts in ISO/PAS 8800

    ISO/PAS 8800, titled 'Road vehicles — Safety and artificial intelligence', provides a dedicated framework for managing safety risks specifically introduced by Artificial Intelligence (AI) and Machine Learning (ML) in automotive applications. This lesson explores the foundational concepts that underpin the standard.

    1. AI Safety vs. Functional Safety
    Traditional functional safety (ISO 26262) focuses on hazards caused by malfunctioning electronic systems. In contrast, AI safety in ISO/PAS 8800 addresses risks stemming from the performance limitations of the AI itself, even when the hardware and software are functioning as designed. This aligns closely with SOTIF (Safety of the Intended Functionality) principles.

    2. Robustness and Reliability
    - Robustness: the ability of an AI system to maintain its performance level under varied and potentially adverse conditions (e.g., sensor noise, heavy rain, or unexpected road layouts).
    - Reliability: the consistency of the AI's performance over time under specified conditions.

    3. Explainability and Transparency
    One of the greatest challenges in automotive AI is the 'black box' nature of neural networks. ISO/PAS 8800 emphasizes:
    - Interpretability: the degree to which a human can understand the cause of a decision.
    - Explainability (XAI): techniques used to provide human-understandable evidence for why an AI model reached a specific output, which is critical for post-accident forensics and system validation.

    4. Training Data Quality
    AI safety is inextricably linked to the data used to train it. The standard highlights:
    - Completeness: ensuring the dataset covers all relevant driving scenarios.
    - Representativeness: ensuring the data accurately reflects the real-world environment where the vehicle will operate.
    - Bias Mitigation: identifying and reducing systematic errors that could lead to unsafe behaviors in specific demographics or environmental conditions.

    5. Safe-by-Design and V&V
    ISO/PAS 8800 advocates for a 'Safe-by-Design' approach, where safety constraints are integrated into the ML model architecture. Verification and Validation (V&V) must move beyond simple accuracy metrics to include safety-specific testing, such as edge-case analysis and adversarial testing.
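    The robustness concept above can be made measurable. This toy sketch (the threshold model, noise level, and trial count are all invented for illustration) counts how many inputs keep the same prediction under small random perturbations, a crude stand-in for adversarial robustness testing.

```python
import random

def threshold_classifier(x, threshold=0.5):
    """Toy stand-in for a perception model: 1 if the signal exceeds the threshold."""
    return 1 if x > threshold else 0

def robustness_rate(model, inputs, noise=0.01, trials=100, seed=0):
    """Fraction of inputs whose prediction never changes under small
    random perturbations of magnitude at most `noise`.
    """
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        if all(model(x + rng.uniform(-noise, noise)) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

# Inputs far from the decision boundary are robust; 0.505 sits right on it
rate = robustness_rate(threshold_classifier, [0.1, 0.9, 0.505])
```

    Real robustness testing searches for worst-case (adversarial) perturbations rather than sampling randomly, but the pass/fail logic per input is the same idea.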

    1 min
  8. ISO/PAS 8800. Lesson 3: Relation with ISO 26262 and SOTIF

    APR 10

    ISO/PAS 8800. Lesson 3: Relation with ISO 26262 and SOTIF

    1. Overview of the Safety Ecosystem
    ISO/PAS 8800, titled 'Road vehicles — Safety and artificial intelligence', was developed to address the unique safety challenges posed by Machine Learning (ML) and Artificial Intelligence (AI) in automotive applications. It does not replace the existing safety standards; rather, it acts as a specialized supplement. To understand its role, one must look at the two primary pillars of automotive safety: ISO 26262 (Functional Safety) and ISO 21448 (Safety of the Intended Functionality, or SOTIF).

    2. Interaction with ISO 26262 (Functional Safety)
    ISO 26262 focuses on hazards caused by malfunctions in electrical and electronic (E/E) systems. These are typically divided into systematic failures (e.g., software bugs) and random hardware failures.
    How ISO/PAS 8800 fits: While ISO 26262 provides the general framework for software development (Part 6), it was not originally designed for the non-deterministic nature of AI. ISO/PAS 8800 provides specific guidance for the 'AI element' within the ISO 26262 lifecycle. It helps define how to handle systematic failures in the AI training process, model selection, and deployment that could lead to functional safety violations.

    3. Interaction with ISO 21448 (SOTIF)
    SOTIF deals with hazards that occur without a system failure. Instead, these hazards arise from performance limitations or environmental triggers (e.g., a vision system failing to detect a pedestrian because of intense sun glare).
    How ISO/PAS 8800 fits: AI performance limitations are a core concern of SOTIF. ISO/PAS 8800 expands on the SOTIF concept by providing detailed methodologies for AI-specific issues like data bias, overfitting, and robustness against adversarial attacks. It provides the technical 'how-to' for achieving the safety goals defined by the SOTIF process when AI is the underlying technology.

    4. The Integrated Approach
    The relationship can be visualized as a Venn diagram where ISO/PAS 8800 sits at the intersection of AI development and automotive safety requirements.
    - ISO 26262: ensures the AI hardware and integration logic don't break.
    - ISO 21448 (SOTIF): ensures the AI's intended function is safe in complex environments.
    - ISO/PAS 8800: provides the specific AI/ML engineering practices to satisfy both of the above.

    5. Key Mapping Points
    - Data Quality: ISO/PAS 8800 provides requirements for dataset completeness and representativeness, which supports SOTIF's goal of reducing 'Unknown Unsafe' scenarios.
    - Validation & Verification: it introduces AI-specific V&V methods, such as metamorphic testing, which are required to supplement the traditional testing methods found in ISO 26262.
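    Metamorphic testing, mentioned in the mapping points above, checks that a transformation of the input which should not change the output in fact does not, so no labeled ground truth is needed. This is a hypothetical toy sketch: the contrast-based classifier and the brightness relation are invented for illustration.

```python
def classify(pixel_values):
    """Toy perception model: reports an 'object' when image contrast is high."""
    return "object" if max(pixel_values) - min(pixel_values) > 50 else "background"

def metamorphic_brightness_test(model, image, shift=20):
    """Metamorphic relation: uniformly brightening an image preserves contrast,
    so the classification must not change. A violation reveals a robustness
    defect without requiring any labeled ground truth.
    """
    brightened = [min(255, p + shift) for p in image]
    return model(image) == model(brightened)

# Hypothetical 1-D 'image' with a high-contrast edge
image = [10, 10, 200, 200, 10]
passed = metamorphic_brightness_test(classify, image)
```

    In practice the relations are richer (rotations, weather overlays, sensor noise), but each follows the same pattern: transform the input, assert the safety-relevant output is preserved.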

    1 min

About

This is the right place if you are looking for professionally created podcast courses and training, with 100% success guaranteed