
From High-Dimensional Spaces to Verifiable ODD Coverage for Safety-Critical AI-based Systems

Published 2 Apr 2026 in cs.AI and cs.LG | (2604.02198v1)

Abstract: While AI offers transformative potential for operational performance, its deployment in safety-critical domains such as aviation requires strict adherence to rigorous certification standards. Current EASA guidelines mandate demonstrating complete coverage of the AI/ML constituent's Operational Design Domain (ODD) -- a requirement that demands proof that no critical gaps exist within defined operational boundaries. However, as systems operate within high-dimensional parameter spaces, existing methods struggle to provide the scalability and formal grounding necessary to satisfy the completeness criterion. Currently, no standardized engineering method exists to bridge the gap between abstract ODD definitions and verifiable evidence. This paper addresses this void by proposing a method that integrates parameter discretization, constraint-based filtering, and criticality-based dimension reduction into a structured, multi-step ODD coverage verification process. Grounded in simulation data gathered from prior research on AI-based mid-air collision avoidance, this work demonstrates a systematic engineering approach to defining and achieving coverage metrics that satisfy EASA's demand for completeness. Ultimately, this method enables the validation of ODD coverage in higher dimensions, advancing a Safety-by-Design approach while complying with EASA's standards.

Summary

  • The paper introduces a multi-step framework that discretizes high-dimensional ODDs to produce formal coverage guarantees required for AI safety certification.
  • The approach leverages criticality-based dimension reduction and constraint-based filtering to reduce parameter space while retaining safety-critical variations.
  • The method demonstrates a 60% reduction in the relevant state space and enables iterative scenario generation to effectively address coverage gaps.

Verifiable ODD Coverage in Safety-Critical AI-based Systems

Motivation and Regulatory Context

The operational deployment of AI/ML in safety-critical domains, especially civil aviation, is subject to stringent certification processes. In particular, the focus on the Operational Design Domain (ODD) is central to regulatory trust frameworks led by EASA. The recent regulatory trajectory mandates demonstrable and complete ODD coverage for AI-based systems, as outlined in the EASA Concept Paper for Learning Assurance (Objectives DM-08, LM-16), which requires evidence that all operational scenarios—across potentially high-dimensional parameter spaces—are adequately covered by the verification and validation process.

However, the high-dimensional nature of ODDs presents a substantial challenge. The combinatorial explosion of operational parameter combinations, compounded by intricate dependencies between parameters, renders naive grid or brute-force sampling computationally infeasible. While methods such as k-means-based adaptive scenario selection [Weissensteiner et al., 2023], vine copula-based dependency modeling [Hoehndorf et al., 2024], and geometry-based approaches (convex hulls, KDEs) [Hirschle et al., 2024] provide partial solutions, none deliver the formal guarantees of ODD coverage completeness required at certification scale under regulatory scrutiny.

Proposed Methodology

The paper presents a systematic, multi-step framework that makes ODD coverage verification both formally grounded and computationally tractable in high-dimensional spaces:

  • Parameter Discretization: Each ODD parameter is discretized into bins. Bin sizes are not chosen uniformly; instead, they are informed by parameter criticality, balancing computational tractability with the resolution necessary to capture safety-relevant variations.
  • Criticality-based Dimension Reduction: A criticality measure enables the reduction of non-crucial parameter dimensions or the grouping of related parameters, compressing the ODD without loss of safety sensitivity.
  • Constraint-based Filtering: The feasible operational parameter space is further pruned by domain-specific constraints—eliminating physically implausible or operationally irrelevant combinations.
  • Explicit Dependency Modeling: Statistical and logical dependencies between parameters are encoded, which precludes the combinatorial consideration of unrealistic states, focusing the verification effort on operationally meaningful scenario bins.
  • Iterative Coverage Verification: The method iteratively analyzes the discretized parameter space, identifies uncovered bins, and triggers scenario generation targeted at coverage gaps until completeness is demonstrably achieved.
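The iterative loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the dict-based bin representation, and the callback interface are all assumptions made for clarity.

```python
from itertools import product

def coverage_loop(param_bins, is_feasible, covered, generate_scenario, max_iter=10):
    """Iteratively close coverage gaps over a discretized, constraint-filtered ODD.

    param_bins: dict mapping parameter name -> list of bin labels (from discretization)
    is_feasible: predicate pruning physically implausible bin combinations
    covered: set of bin tuples already exercised by existing scenarios
    generate_scenario: callback producing a targeted scenario for an uncovered bin
    Returns the final coverage ratio over the feasible (reduced) ODD.
    """
    # Enumerate only the feasible portion of the discretized ODD
    # (constraint-based filtering applied to the Cartesian product of bins).
    feasible = {combo for combo in product(*param_bins.values())
                if is_feasible(dict(zip(param_bins, combo)))}
    for _ in range(max_iter):
        gaps = feasible - covered
        if not gaps:
            break  # completeness demonstrated over the reduced ODD
        for combo in gaps:
            generate_scenario(dict(zip(param_bins, combo)))
            covered.add(combo)
    return len(feasible & covered) / len(feasible)
```

A run terminates either when every feasible bin is covered (regulatory completeness over the reduced ODD) or when the iteration budget is exhausted, in which case the remaining gaps identify where further scenario generation is needed.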

The procedure explicitly targets the formal coverage requirements of EASA’s compliance roadmap by allowing the definition of a finite, relevant scenario set whose exhaustion constitutes regulatory completeness for AI/ML system approval.

VerticalCAS Case Study

The method is instantiated for the VerticalCAS airborne collision avoidance system. The ODD in this context is defined by state variables including relative altitude, vertical rates of ownship/intruder, time to loss of horizontal separation, and advisory mode memory. The stepwise application is as follows:

  • Discretization partitions the state space into bins driven by operational criticality (e.g., bin size for relative altitude set to the order of an A320's height for practical maneuver discrimination).
  • Constraint-based Filtering is exemplified by two constraints: (1) range of relative altitudes shrinks with time-to-conflict via a logarithmic envelope, (2) exclusion of diverging flight states where vertical separation is naturally increasing.
  • Parameter Reduction: Prior advisories are subsumed into a single bin given their non-criticality for geometric coverage completeness, reducing parameter space from 195,200 to 78,688 combinations.
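The two constraint filters from the case study can be expressed as simple predicates over a bin's representative state. The sketch below is illustrative only: the envelope constants, units, and sign convention for relative altitude are assumptions, not the paper's actual values.

```python
import math

# Hypothetical constants for the logarithmic altitude envelope; the paper
# specifies the envelope's shape but these particular values are placeholders.
ENVELOPE_SCALE = 500.0  # ft, scales the envelope width
MIN_TAU = 1.0           # s, avoids log(1) degenerating at the point of conflict

def within_altitude_envelope(rel_alt_ft, tau_s):
    """Constraint 1: the plausible relative-altitude range shrinks as
    time to loss of horizontal separation decreases (logarithmic envelope)."""
    return abs(rel_alt_ft) <= ENVELOPE_SCALE * math.log(max(tau_s, MIN_TAU) + 1.0)

def is_converging(rel_alt_ft, own_rate_fpm, intr_rate_fpm):
    """Constraint 2: keep only states where vertical separation is not
    naturally increasing (diverging flight states are excluded).

    Convention assumed here: rel_alt = intruder altitude - ownship altitude,
    so separation grows exactly when rel_alt and its rate share a sign.
    """
    rel_rate = intr_rate_fpm - own_rate_fpm  # d/dt of relative altitude
    return rel_alt_ft * rel_rate <= 0
```

A bin survives filtering only if both predicates hold for its representative state; composing such predicates yields the `is_feasible` test used during coverage analysis.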

Numerical results demonstrate a 60% reduction in the relevant state space. Although the database contains 1.97 million simulated scenarios, raw coverage of the reduced ODD (after constraint filtering and parameter grouping) is only 2.6%, underscoring both the stringency of the metric and its value for identifying coverage gaps and justifying targeted scenario generation.
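The raw coverage figure follows from a simple bin-hit count: many of the 1.97 million scenarios fall into the same bins, so the fraction of distinct feasible bins exercised stays low. A minimal sketch, with `to_bin` as an assumed scenario-to-bin mapping:

```python
def raw_coverage(scenarios, to_bin, feasible_bins):
    """Fraction of feasible ODD bins hit by at least one scenario.

    scenarios: iterable of scenario parameter records (e.g. simulation logs)
    to_bin: maps a scenario to its discretized bin tuple (assumed given)
    feasible_bins: set of bin tuples surviving filtering and grouping
    """
    hit = {to_bin(s) for s in scenarios} & feasible_bins
    return len(hit) / len(feasible_bins)
```

Duplicated hits add nothing to the metric, which is precisely why a large scenario database can still leave most of the reduced ODD uncovered.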

Engineering and Certification Implications

The method concretely operationalizes the regulatory demand for complete ODD coverage by ensuring that all relevant and plausible operational bins are exercised. Unlike legacy parameter-wise or density-based approaches, this pipeline exposes latent coverage holes across joint parameter spaces, which is critical for certifying AI-based decision systems where failure may reside at nontrivial intersections of parameter boundaries. The iterative, scenario-completing process ensures that evidence for regulatory compliance is not probabilistic but exhaustive within the reduced, constraint-defined ODD.

Practically, such an approach enables AI engineering teams to develop traceably complete scenario sets for simulation, testing, and verification. Theoretically, it aligns with the Safety-by-Design paradigm, harmonizing model-based ODD definitions with empirical and formal safety arguments required for certification of neural policies and reinforcement-learning components in domains where the ODD itself is both dynamic and high-dimensional.

Future Directions

As ODDs become more complex (e.g., due to environmental, operational, or emergent human-AI interaction factors), advancing automated criticality estimation and data-driven or formal dependency extraction will become necessary to maintain scalability. Integration with combinatorial scenario generation and runtime monitoring frameworks (e.g., Scenic, ODD augmentation engines) will be a logical next step. Furthermore, as AI system architectures evolve toward greater autonomy, formal coverage analysis tools will need to couple with explainability and runtime assurance mechanisms.

Conclusion

The methodology outlined constitutes a rigorous, certification-oriented solution for providing verifiable ODD coverage in safety-critical AI-based systems operating in high-dimensional spaces (2604.02198). By merging criticality-driven reduction, binning, constraint pruning, and iterative coverage closure, the framework establishes a tractable pathway from abstract ODD definitions to certifiable evidence of completeness, directly addressing the demands of regulatory bodies and providing a foundation for future advances in certifiable, Safety-by-Design AI engineering.
