Absence of Unreasonable Risk (AUR)
- Absence of Unreasonable Risk (AUR) is a principle ensuring that residual system hazards remain below a societally and technically justified threshold.
- It employs quantitative risk metrics, layered safety analysis, and legal-ethical standards to balance mitigation costs with system utility.
- AUR guides high-risk systems like automated driving by integrating simulation, field testing, and continuous stakeholder reviews.
Absence of Unreasonable Risk (AUR) denotes the foundational safety requirement that no “unreasonable risk” remains after a system’s risk management measures are applied. Originating in functional safety standards (e.g., ISO 26262, ISO/IEC Guide 51), AUR has become the central regulatory and engineering threshold for the deployment of high-risk automated systems, notably Automated Driving Systems (ADS) and high-impact AI. AUR adjudicates acceptability not as simple minimization of all risk, but as a context- and value-dependent boundary established by societal, legal, and technical consensus. Formalization involves quantitative metrics, structured legal/ethical principles (notably “reasonableness”), and robust governance procedures for continual review and stakeholder involvement (Fraser et al., 2023, Favaro, 2021, Favaro et al., 2023, Favaro et al., 15 May 2025, Salem et al., 2023).
1. Foundational Concepts and Definitions
AUR encapsulates the condition where all residual risks after the application of safety controls are deemed “not unreasonable” under specified criteria. In ISO 26262:2018, unreasonable risk is defined as “risk judged to be unacceptable in a certain context according to valid societal moral concepts,” with safety characterized as the “absence of unreasonable risk” (Favaro, 2021). Quantitatively, the risk associated with any hazard is R = P × S, where P is the probability of harm and S its severity. For a system to satisfy AUR, it must hold that R ≤ R_acc, where R_acc is a contextually validated acceptability threshold (Favaro, 2021).
The European approach, as reflected in the evolving AI Act, distinguishes AUR from “as far as possible” (AFAP) requirements by replacing zero-tolerance residual risk minimization with a proportional, reasonable, and explicitly justified risk-acceptability framework (Fraser et al., 2023).
2. Legal and Regulatory Foundations of Reasonableness
The AUR threshold is grounded in reasonableness, a normative construct with roots in both negligence law (“reasonable person” or four-factor test) and sectoral regulation (e.g., medical device directives) (Fraser et al., 2023).
Negligence law operationalizes reasonableness through a balancing of:
- the likelihood of harm P,
- its potential severity S,
- the cost of mitigation C_m,
- the impact of mitigation on system utility ΔU.
The standard dictates that a precaution is required if C_m < P × S, and remaining risk is unreasonable when an available precaution satisfying C_m + ΔU < P × S has not been taken, i.e., the expected harm is adjusted for utility loss.
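The balancing test above can be sketched as a simple decision rule. This is an illustrative sketch only: the function name and the numeric values are invented, and the symbols follow the P, S, C_m, ΔU notation used in the text.

```python
def precaution_required(p_harm, severity, mitigation_cost, utility_loss=0.0):
    """Negligence-style balancing test: a precaution is required when its
    total burden (direct cost plus lost system utility) is less than the
    expected harm it would avert."""
    expected_harm = p_harm * severity          # P × S
    burden = mitigation_cost + utility_loss    # C_m + ΔU
    return burden < expected_harm

# A $10k mitigation against a 1% chance of $5M harm is required.
assert precaution_required(p_harm=0.01, severity=5_000_000, mitigation_cost=10_000)
# A $100k mitigation against a 0.1% chance of $50k harm is not.
assert not precaution_required(p_harm=0.001, severity=50_000, mitigation_cost=100_000)
```

Note how the utility term shifts the threshold: the same mitigation can become unreasonable if it materially degrades the system's benefit.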
Medical device regulation (EU MDR Annex I) further refines AFAP by stipulating risk “as far as possible without adversely affecting the benefit–risk ratio,” thereby permitting legitimate economic and therapeutic trade-offs.
The upshot is a regulatory default toward balancing societal benefits, utility, cost-effectiveness, and technical feasibility, rejecting both unduly burdensome mitigation and risk minimization that disables system function (Fraser et al., 2023).
3. Formalization and Operationalization of AUR Thresholds
AUR is mathematically instantiated through several decision rules and quantitative formulas:
- Per-risk scenario: For each scenario i with P_i (probability), S_i (severity), and C_i (mitigation cost), require that residual risk be reduced until further mitigation costs more than the risk it removes, i.e., until C_i > P_i × S_i.
- System-level acceptability: Let R = Σ_i P_i × S_i be the aggregate residual risk, C the total compliance cost, and B the system net benefit. The system is acceptable if R + C < B.
- Materiality threshold: Risks below a de minimis threshold ε (i.e., P_i × S_i < ε) need not be reduced further (Fraser et al., 2023).
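The three decision rules above can be combined into a minimal sketch. The function names, the de minimis default, and the aggregation of residual risk by summation are illustrative assumptions, not a prescribed implementation.

```python
def scenario_acceptable(p, s, c, de_minimis=1e-6):
    """Per-risk rule: the residual risk p*s is acceptable if it falls below
    the de minimis threshold, or if further mitigation would cost more than
    the risk it removes (c > p*s)."""
    risk = p * s
    return risk < de_minimis or c > risk

def system_acceptable(residual_risks, compliance_cost, net_benefit):
    """System-level rule: aggregate residual risk plus total compliance cost
    must not exceed the system's net benefit (R + C < B)."""
    return sum(residual_risks) + compliance_cost < net_benefit

# Risk of 0.1 with mitigation costing 1 per unit of risk removed: acceptable.
assert scenario_acceptable(p=0.001, s=100, c=1)
# Risk of 100 with cheap (cost-10) mitigation still available: not acceptable.
assert not scenario_acceptable(p=0.1, s=1000, c=10)
# Aggregate residual risk 0.7 plus compliance cost 10 against net benefit 100.
assert system_acceptable([0.5, 0.2], compliance_cost=10, net_benefit=100)
```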
Contemporary engineering frameworks, such as Waymo’s ADS process, instantiate AUR via twelve architecture- and behavior-specific acceptance criteria, each quantitatively anchored but aggregated through a “signal-based” (multi-stream, non-monolithic) evaluation (Favaro et al., 15 May 2025). The Risk Management Core further operationalizes this via an iterative loop of Risk Analysis → Risk Evaluation → Risk Treatment, with explicit comparison of residual risk against the acceptance threshold R_acc (Salem et al., 2023).
4. Methodologies for Demonstrating and Governing AUR
AUR determination integrates layered architectural analysis, scenario-based and simulation-based quantitative evaluation, and continuous in-service oversight:
- Layered safety (vertical decomposition): Hazards decomposed into architectural (hardware/software), behavioral (driving policy), and operational (environment/maintenance) (Favaro et al., 2023).
- Readiness criteria: Comprehensive test and verification regime composed of collision avoidance, predicted collision risk (rate and severity), adherence to traffic rules, VRU event analysis, and in-service field performance. Each is governed by measurable metrics (e.g., collision rate per severity class, compliance rates) and compared to acceptance thresholds (Favaro et al., 15 May 2025).
- Scenario-based simulation: Explicit risk quantification via probabilistic models, such as chance-constrained POMDPs for AVs, bounding scenario collision probability below a user-specified tolerance (Khonji et al., 2019).
- Risk management process: Explicit representation of each hazard’s probability, severity, and mitigation status, with rigorous hazard log maintenance and gap/mitigation analysis (Salem et al., 2023).
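The chance-constraint idea behind the POMDP formulation can be illustrated with a Monte Carlo check: sample rollouts under the modeled uncertainty and compare the estimated collision probability to the tolerance ε. The simulator interface, tolerance, and sample count here are hypothetical stand-ins, not the cited method's actual machinery.

```python
import random

def violates_chance_constraint(simulate_rollout, epsilon=0.01,
                               n_samples=10_000, seed=0):
    """Estimate P(collision) by sampling rollouts under modeled uncertainty
    (perception, intention, dynamics) and flag a violation if the empirical
    collision frequency exceeds the tolerance epsilon."""
    rng = random.Random(seed)  # seeded for reproducible estimates
    collisions = sum(simulate_rollout(rng) for _ in range(n_samples))
    return collisions / n_samples > epsilon

# Toy rollout: collision occurs with ~0.5% probability, below the 1% bound.
toy_rollout = lambda rng: rng.random() < 0.005
assert not violates_chance_constraint(toy_rollout, epsilon=0.01)
```

In a real planner the constraint is enforced during policy optimization rather than checked after the fact, but the acceptance test is the same comparison.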
Multi-tiered governance is integral: methodology owners analyze, simulate, and validate; steering committees holistically review; safety boards approve or require remediation. Independent safety-case construction and in-field monitoring iterate to guarantee ongoing conformity to AUR (Favaro et al., 15 May 2025).
5. Relationship to Positive Risk Balance (PRB) and Industry Benchmarks
AUR and PRB are distinct yet potentially complementary. PRB can serve as:
- a system-level performance goal (frequency of ADS harms below a benchmark, typically human driving harm rates),
- a development risk-acceptance criterion (a design-stage risk budget allocated across hazards, i.e., a hazard-level threshold for each residual risk).
AUR is satisfied when PRB’s benchmark is demonstrably “reasonable,” i.e., when the human-driving reference rate itself lies within the accepted threshold R_acc. Otherwise, explicit hazard budgets must be used to underpin AUR (as with PRB₂ in industry debate) (Favaro, 2021).
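The two readings of PRB can be contrasted in code. This is a hypothetical sketch: the function names, the per-hazard budget representation, and the rates are invented for illustration.

```python
def satisfies_prb(ads_harm_rate, human_harm_rate):
    """PRB as a system-level performance goal: the ADS harm frequency must
    fall below the human-driving benchmark."""
    return ads_harm_rate < human_harm_rate

def within_hazard_budget(hazard_risks, budgets):
    """PRB as a development risk-acceptance criterion: every hazard-level
    residual risk must stay within its allocated share of the design-stage
    risk budget."""
    return all(r <= b for r, b in zip(hazard_risks, budgets))

# An ADS harming at 1e-7 per mile beats a 1e-6 human benchmark...
assert satisfies_prb(ads_harm_rate=1e-7, human_harm_rate=1e-6)
# ...but AUR additionally asks whether that benchmark is itself reasonable,
# or whether explicit per-hazard budgets must be met instead.
assert within_hazard_budget([0.1, 0.2], budgets=[0.2, 0.2])
```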
Discussion remains ongoing over whether outperforming the human driver baseline suffices for AUR or is merely necessary but not sufficient—societal acceptability plays a decisive role.
6. Civic Legitimacy, Stakeholder Involvement, and Adaptivity
AUR is not determined unilaterally by technical actors. Regulatory guidance (e.g., via the EU AI Act’s AI Office), common specifications, and mandatory inclusion of civil society, SME, and user groups in standards-setting instantiate “civic legitimacy” (Fraser et al., 2023). Regulatory bodies must both issue detailed AUR guidance (e.g., on estimating , , and for AI-specific hazards) and monitor standardization for alignment with proportionality and precaution requirements. This ensures pluralistic, democratically accountable boundary-setting between acceptable and unreasonable risk.
A living AUR process requires maturity in safety culture, transparent organizational processes, and formal review policies for updates as technology, operational domain, and societal values evolve (Favaro et al., 15 May 2025).
7. Practical Applications and Illustrative Architectures
In advanced ADS deployments, AUR is realized through architectures that support probabilistic uncertainty propagation and enforce chance constraints at the decision-theoretic level. For example, risk-aware planners for AVs ensure every trajectory satisfies P(collision) ≤ ε for a user-specified tolerance ε, with all uncertainty (perception, intention, dynamics) explicitly modeled (Khonji et al., 2019). ADS development frameworks (e.g., Waymo’s layered/dynamic/credible case approach) combine multi-scenario simulation, test-track evaluation, on-road trials, and in-service monitoring, with each performance metric checked against governance-reviewed acceptance criteria (Favaro et al., 2023, Favaro et al., 15 May 2025).
Risk Management Core implementations explicitly log, quantify, and manage risk at each step, avoiding the legacy opacity of purely category-based functional safety and ensuring both traceability and responsiveness (Salem et al., 2023).
The Absence of Unreasonable Risk principle thus synthesizes legal, ethical, technical, and organizational doctrines into a pragmatic, adaptable threshold that guides both regulation and engineering practice for the deployment of high-risk automated systems. Its rigorous formalism, process requirements, and insistence on civic accountability distinguish it sharply from both naive risk-minimization and simplistic benchmark-based assurances.