
Absence of Unreasonable Risk (AUR)

Updated 30 December 2025
  • Absence of Unreasonable Risk (AUR) is a principle ensuring that residual system hazards remain below a societally and technically justified threshold.
  • It employs quantitative risk metrics, layered safety analysis, and legal-ethical standards to balance mitigation costs with system utility.
  • AUR guides high-risk systems like automated driving by integrating simulation, field testing, and continuous stakeholder reviews.

Absence of Unreasonable Risk (AUR) denotes the foundational safety requirement that no “unreasonable risk” remains after a system’s risk management measures are applied. Originating in functional safety standards (e.g., ISO 26262, ISO/IEC Guide 51), AUR has become the central regulatory and engineering threshold for the deployment of high-risk automated systems, notably Automated Driving Systems (ADS) and high-impact AI. AUR adjudicates acceptability not as simple minimization of all risk, but as a context- and value-dependent boundary established by societal, legal, and technical consensus. Formalization involves quantitative metrics, structured legal/ethical principles (notably “reasonableness”), and robust governance procedures for continual review and stakeholder involvement (Fraser et al., 2023, Favaro, 2021, Favaro et al., 2023, Favaro et al., 15 May 2025, Salem et al., 2023).

1. Foundational Concepts and Definitions

AUR encapsulates the condition where all residual risks after the application of safety controls are deemed “not unreasonable” under specified criteria. In ISO 26262:2018, unreasonable risk is defined as “risk judged to be unacceptable in a certain context according to valid societal moral concepts,” with safety characterized as the “absence of unreasonable risk” (Favaro, 2021). Quantitatively, the risk associated with any hazard h is R(h) = P(h) × S(h), where P(h) is the probability of harm and S(h) its severity. For a system to satisfy AUR, it must hold that R(h) ≤ Θ(h) for all h, where Θ(h) is a contextually validated threshold (Favaro, 2021).
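
The per-hazard condition above can be sketched directly in code. This is an illustrative sketch, not from the cited standards: hazard names, probability, severity, and threshold values are all hypothetical.

```python
# Illustrative sketch of the AUR condition R(h) = P(h) * S(h) <= Theta(h)
# for every hazard h; all numbers below are invented for the example.

def residual_risk(p_harm: float, severity: float) -> float:
    """R(h) = P(h) * S(h)."""
    return p_harm * severity

def satisfies_aur(risks: dict[str, float], thresholds: dict[str, float]) -> bool:
    """AUR holds only if R(h) <= Theta(h) for every hazard h."""
    return all(risks[h] <= thresholds[h] for h in risks)

# Hypothetical hazards: (probability of harm, severity).
hazards = {"lane_departure": (1e-6, 0.8), "sensor_fault": (5e-7, 0.5)}
risks = {h: residual_risk(p, s) for h, (p, s) in hazards.items()}
thresholds = {"lane_departure": 1e-6, "sensor_fault": 1e-6}
print(satisfies_aur(risks, thresholds))  # True: both residual risks fall below Theta
```

Note that AUR is a universally quantified condition: a single hazard exceeding its threshold fails the whole system, regardless of how low the other residual risks are.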

The European approach, as reflected in the evolving AI Act, distinguishes AUR from “as far as possible” (AFAP) requirements by replacing zero-tolerance residual risk minimization with a proportional, reasonable, and explicitly justified risk-acceptability framework (Fraser et al., 2023).

2. Legal Foundations of Reasonableness

The AUR threshold is grounded in reasonableness, a normative construct with roots in both negligence law (the “reasonable person” standard and four-factor balancing test) and sectoral regulation (e.g., medical device directives) (Fraser et al., 2023).

Negligence law operationalizes reasonableness through a balancing of:

  • the likelihood of harm p,
  • its potential severity L,
  • the cost of mitigation C,
  • the impact of mitigation on system utility U.

The standard dictates that a precaution is required if C ≤ p · L, and remaining risk is unreasonable when p · L ≫ C, adjusted for utility loss.
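
The balancing test above can be sketched as two predicates. This is a hedged sketch: the factor of 10 standing in for the informal “≫” and all example values are assumptions of this illustration, not part of the legal standard.

```python
# Sketch of the negligence-law balancing test described above: a precaution
# is required when its cost C does not exceed the expected harm p * L.

def precaution_required(p: float, L: float, C: float) -> bool:
    """Precaution required if C <= p * L (cost does not exceed expected harm)."""
    return C <= p * L

def risk_unreasonable(p: float, L: float, C: float, utility_loss: float = 0.0) -> bool:
    """Residual risk is unreasonable when expected harm far outweighs the
    mitigation cost adjusted for lost utility. The factor 10 standing in
    for ">>" is an assumption of this sketch, not a legal constant."""
    return p * L > 10 * (C + utility_loss)

# Hypothetical case: 1% chance of a loss of 1000 units; mitigation costs 5.
print(precaution_required(0.01, 1000, 5))  # True: 5 <= 10, precaution is owed
```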

Medical device regulation (EU MDR Annex I) further refines AFAP by stipulating risk “as far as possible without adversely affecting the benefit–risk ratio,” thereby permitting legitimate economic and therapeutic trade-offs.

The upshot is a regulatory default toward balancing societal benefits, utility, cost-effectiveness, and technical feasibility, rejecting both unduly burdensome mitigation and risk minimization that disables system function (Fraser et al., 2023).

3. Formalization and Operationalization of AUR Thresholds

AUR is mathematically instantiated through several decision rules and quantitative formulas:

  • Per-risk scenario: For each scenario i with probability p_i, severity L_i, and mitigation cost C_i, require C_i ≤ p_i · L_i.
  • System-level acceptability: Let R_res be the aggregate residual risk, C_tot the total compliance cost, and B the system’s net benefit. The system is acceptable if R_res + C_tot ≤ B.
  • Materiality threshold: Risks below a de minimis threshold δ (i.e., p_i · L_i < δ) need not be reduced further (Fraser et al., 2023).
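
The three decision rules above can be combined in a short sketch. The `Scenario` container, the hazard values, and the way the de minimis rule is folded into the per-scenario check are assumptions of this illustration.

```python
# Sketch combining the per-scenario rule (C_i <= p_i * L_i), the de minimis
# materiality threshold delta, and the system-level rule R_res + C_tot <= B.
from dataclasses import dataclass

@dataclass
class Scenario:
    p: float  # probability of harm
    L: float  # severity (loss)
    C: float  # mitigation cost

def must_mitigate(s: Scenario, de_minimis: float) -> bool:
    """Mitigation is required only when the expected harm p_i * L_i exceeds
    the de minimis threshold AND the mitigation cost does not exceed it."""
    expected_harm = s.p * s.L
    return expected_harm >= de_minimis and s.C <= expected_harm

def system_acceptable(residual_risk: float, compliance_cost: float,
                      benefit: float) -> bool:
    """System-level acceptability: R_res + C_tot <= B."""
    return residual_risk + compliance_cost <= benefit

# Hypothetical scenario: 1% chance of loss 100; mitigation costs 0.5.
print(must_mitigate(Scenario(p=0.01, L=100, C=0.5), de_minimis=0.001))  # True
```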

Contemporary engineering frameworks, such as Waymo’s ADS process, instantiate AUR via twelve architecture- and behavior-specific acceptance criteria, each quantitatively anchored but aggregated through a “signal-based” (multi-stream, non-monolithic) evaluation (Favaro et al., 15 May 2025). The Risk Management Core further operationalizes this via an iterative loop: Risk Analysis → Evaluation → Treatment, with explicit comparison R_actual < R_accepted (Salem et al., 2023).
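
The Analysis → Evaluation → Treatment loop can be sketched as follows. This is a minimal illustration only: the treatment step that halves residual risk per iteration, and the iteration cap, are invented for the example and stand in for whatever mitigation measures a real process applies.

```python
# Minimal sketch of the iterative risk-management loop described above:
# evaluate R_actual against R_accepted, treat if necessary, repeat.

def risk_management_loop(r_actual: float, r_accepted: float,
                         treat=lambda r: r / 2, max_iter: int = 20) -> float:
    """Iterate treatment until R_actual < R_accepted (or iterations run out).
    The default treat() halving residual risk is purely illustrative."""
    for _ in range(max_iter):
        if r_actual < r_accepted:   # Evaluation step
            break
        r_actual = treat(r_actual)  # Treatment step
    return r_actual

print(risk_management_loop(1.0, 0.1))  # 0.0625 after four halvings
```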

4. Methodologies for Demonstrating and Governing AUR

AUR determination integrates layered architectural analysis, scenario-based and simulation-based quantitative evaluation, and continuous in-service oversight:

  • Layered safety (vertical decomposition): Hazards are decomposed into architectural (hardware/software), behavioral (driving policy), and operational (environment/maintenance) layers (Favaro et al., 2023).
  • Readiness criteria: A comprehensive test and verification regime composed of collision avoidance, predicted collision risk (rate and severity), adherence to traffic rules, VRU event analysis, and in-service field performance. Each is governed by measurable metrics (e.g., the estimated collision rate λ̂_s at severity s, compliance rates) and compared to acceptance thresholds (Favaro et al., 15 May 2025).
  • Scenario-based simulation: Explicit risk quantification via probabilistic models, such as chance-constrained POMDPs for AVs, bounding the scenario collision probability R(τ_e) below a user-specified tolerance ρ_max (Khonji et al., 2019).
  • Risk management process: Explicit representation of R = S × P, with rigorous hazard-log maintenance and gap/mitigation analysis (Salem et al., 2023).
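
A readiness criterion of the collision-rate kind can be sketched as a point estimate compared to a threshold. This is a hedged illustration: the event counts, exposure, and acceptance threshold below are invented, and a real evaluation would add confidence bounds rather than rely on a point estimate alone.

```python
# Illustrative sketch: estimate a severity-stratified collision rate
# lambda_hat_s from exposure data and compare it to an acceptance threshold,
# in the spirit of the readiness criteria above. Numbers are invented.

def collision_rate(events: int, miles: float) -> float:
    """Point estimate lambda_hat_s = observed events per mile at severity s."""
    return events / miles

def meets_criterion(events: int, miles: float, threshold: float) -> bool:
    """Acceptance check: lambda_hat_s <= threshold."""
    return collision_rate(events, miles) <= threshold

print(meets_criterion(events=2, miles=1_000_000, threshold=1e-5))  # True: 2e-6 <= 1e-5
```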

Multi-tiered governance is integral: methodology owners analyze, simulate, and validate; steering committees holistically review; safety boards approve or require remediation. Independent safety-case construction and in-field monitoring iterate to guarantee ongoing conformity to AUR (Favaro et al., 15 May 2025).

5. Relationship to Positive Risk Balance (PRB) and Industry Benchmarks

AUR and PRB are distinct yet potentially complementary. PRB can serve as:

  • a system-level performance goal (frequency of ADS harms ≤ benchmark, typically human driving rates),
  • a development risk-acceptance criterion (a design-stage risk budget B(h), i.e., a hazard-level threshold for R(h)).

AUR is satisfied when PRB’s benchmark is demonstrably “reasonable,” i.e., F_benchmark(h) ≤ Θ(h). Otherwise, explicit hazard budgets must be used to underpin AUR (as with PRB₂ in industry debate) (Favaro, 2021).
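
The condition relating the PRB benchmark to the AUR threshold can be sketched as a per-hazard check. Hazard names and frequencies below are hypothetical illustrations, not published figures.

```python
# Sketch of the check described above: the benchmark frequency
# F_benchmark(h) must itself fall below the AUR threshold Theta(h) for
# every hazard h before PRB can underpin AUR. Values are invented.

def prb_supports_aur(f_benchmark: dict[str, float],
                     theta: dict[str, float]) -> bool:
    """True only if F_benchmark(h) <= Theta(h) for all hazards h."""
    return all(f_benchmark[h] <= theta[h] for h in f_benchmark)

# Hypothetical human-driving benchmark frequencies vs. AUR thresholds.
benchmark = {"rear_end": 2e-6, "pedestrian": 4e-8}
theta = {"rear_end": 5e-6, "pedestrian": 1e-7}
print(prb_supports_aur(benchmark, theta))  # True: the benchmark is "reasonable"
```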

Discussion remains ongoing over whether outperforming the human driver baseline suffices for AUR or is merely necessary but not sufficient—societal acceptability plays a decisive role.

6. Civic Legitimacy, Stakeholder Involvement, and Adaptivity

AUR is not determined unilaterally by technical actors. Regulatory guidance (e.g., via the EU AI Act’s AI Office), common specifications, and mandatory inclusion of civil society, SME, and user groups in standards-setting instantiate “civic legitimacy” (Fraser et al., 2023). Regulatory bodies must both issue detailed AUR guidance (e.g., on estimating p, L, and δ for AI-specific hazards) and monitor standardization for alignment with proportionality and precaution requirements. This ensures pluralistic, democratically accountable boundary-setting between acceptable and unreasonable risk.

A living AUR process requires maturity in safety culture, transparent organizational processes, and formal review policies for updates as technology, operational domain, and societal values evolve (Favaro et al., 15 May 2025).

7. Practical Applications and Illustrative Architectures

In advanced ADS deployments, AUR is realized through architectures that support probabilistic uncertainty propagation and enforce chance constraints at the decision-theoretic level. For example, risk-aware planners for AVs ensure every trajectory satisfies R(τ_e) ≤ ρ_max, with all uncertainty (perception, intention, dynamics) explicitly modeled (Khonji et al., 2019). ADS development frameworks (e.g., Waymo’s layered/dynamic/credible safety-case approach) employ multi-scenario simulation, test-track and on-road trials, and in-service monitoring, with each performance metric checked against governance-reviewed acceptance criteria (Favaro et al., 2023, Favaro et al., 15 May 2025).
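
The chance-constraint enforcement described above can be sketched as a filter over candidate trajectories. The trajectory names and collision probabilities are invented; a real planner would estimate these probabilities from perception, intention, and dynamics models rather than take them as given.

```python
# Illustrative chance-constraint filter: retain only candidate trajectories
# whose estimated collision probability R(tau) stays at or below rho_max,
# in the spirit of the risk-bounded planning described above.

def admissible(trajectories: dict[str, float], rho_max: float) -> list[str]:
    """Return trajectory ids satisfying R(tau) <= rho_max."""
    return [tid for tid, risk in trajectories.items() if risk <= rho_max]

# Hypothetical candidates with assumed collision probabilities.
candidates = {"keep_lane": 1e-7, "overtake": 3e-4, "hard_brake": 5e-6}
print(admissible(candidates, rho_max=1e-5))  # ['keep_lane', 'hard_brake']
```

A planner would then select the best-utility trajectory from the admissible set, so the chance constraint bounds risk without dictating the choice among acceptable options.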

Risk Management Core implementations explicitly log, quantify, and manage risk at each step, avoiding the legacy opacity of purely category-based functional safety and ensuring both traceability and responsiveness (Salem et al., 2023).


The Absence of Unreasonable Risk principle thus synthesizes legal, ethical, technical, and organizational doctrines into a pragmatic, adaptable threshold that guides both regulation and engineering practice for the deployment of high-risk automated systems. Its rigorous formalism, process requirements, and insistence on civic accountability distinguish it sharply from both naive risk-minimization and simplistic benchmark-based assurances.
