SAFER: Safety Requirements Analysis
- The paper introduces SAFER, a model-driven methodology that integrates formal methods and AI to analyze, formalize, and verify safety requirements in early design phases.
- It employs systematic mapping and detection techniques to identify coverage gaps, duplicated requirements, and contradictions, ensuring traceability and auditability in safety-critical systems.
- The approach has demonstrated improved consistency and efficiency in diverse domains, including automotive, industrial instrumentation, and machine learning systems.
Foundational Analysis of Safety Engineering Requirements (SAFER) is a model-driven engineering methodology for analyzing, formalizing, and verifying safety requirements in complex safety-critical systems. The SAFER approach synthesizes control-theoretic hazard analysis, formal modeling, generative AI–assisted requirements analysis, and verification through model checking and testing. Its objectives are early detection of requirement coverage gaps, reduction of rework caused by duplicated or contradictory requirements, and traceable, auditable allocation of safety requirements to system functions and architectures. SAFER is applied across domains ranging from automotive controllers and industrial instrumentation to machine learning systems, providing structured support for requirements engineering, automated validation, and certified safety argumentation (Chemo et al., 9 Jan 2026, Abdulkhaleq et al., 2016).
1. Conceptual Foundations and Objectives
SAFER (Foundational Analysis of Safety Engineering Requirements) is defined as a generative-AI-augmented, model-based methodology for early-phase analysis and validation of system safety requirements. Its foundations lie in the recognition that raw stakeholder requirements are typically uncoordinated, leading to gaps, duplications, and contradictions that threaten safety and compliance in regulated industrial contexts.
The central objectives are:
- Mapping safety requirements to system functions for explicit allocation and traceability;
- Detecting insufficiently specified functions (coverage gaps), duplicate requirements, and contradictions within the requirement set;
- Supporting structured, repeatable reporting and decision support for safety architects and engineers.
SAFER integrates these steps into a unified workflow built on MBSE (Model-Based Systems Engineering) and formal methods, providing a basis for lifecycle safety assurance (Chemo et al., 9 Jan 2026). Its scope extends to software-intensive systems, cyber-physical platforms, and sociotechnical settings, where emergent hazards demand system-level analysis and formalization (Abdulkhaleq et al., 2016, Cimatti et al., 2010).
2. Formal Models and Notational Structure
SAFER is constructed atop a formal ontology of system architecture and requirements, typically captured in OPM/OPL or SysML-BDD. Let $R$ be the set of stakeholder-provided requirements, $F$ the set of system functions, and $T = \{\text{FUNC}, \text{PROB}\}$ the set of requirement types (functional, probabilistic).
Key formal elements include:
- A (partial) mapping $\mu : R \to F$ (or, more precisely, a relation $\mu \subseteq R \times F$), assigning each requirement to a function.
- Type assignment $\tau : R \to T$, distinguishing functional vs. probabilistic requirements.
- Gap detection predicates: a function $f \in F$ is sufficiently covered iff $\exists r \in \mu^{-1}(f).\ \tau(r) = \text{FUNC}$ and $\exists r \in \mu^{-1}(f).\ \tau(r) = \text{PROB}$. Otherwise, a coverage gap is flagged.
- Duplication detection: for each $f \in F$, the set of duplicate pairs $D_f = \{(r_i, r_j) \mid r_i, r_j \in \mu^{-1}(f),\ i < j,\ \mathrm{sim}(r_i, r_j)\}$, where $\mathrm{sim}$ is a semantic similarity predicate.
- Contradiction detection: for each $f \in F$, the set $C_f = \{(r_i, r_j) \mid r_i, r_j \in \mu^{-1}(f),\ i < j,\ r_i \perp r_j\}$, with $\perp$ denoting logical conflict.
These model-theoretic foundations allow deterministic, auditable analysis when coupled with zero-temperature LLM pipelines, producing reproducible mappings and flags (Chemo et al., 9 Jan 2026).
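The three detection predicates admit a direct deterministic implementation once the mapping and the pairwise judgments are fixed. The sketch below, in Python, treats the similarity and conflict predicates as stand-in callables (in SAFER these judgments come from a zero-temperature LLM pipeline); all names here are illustrative, not from the paper:

```python
from itertools import combinations

def coverage_gaps(functions, mapping, req_type):
    """Flag functions lacking either a FUNC or a PROB requirement."""
    gaps = []
    for f in functions:
        reqs = [r for r, fn in mapping.items() if fn == f]
        types = {req_type[r] for r in reqs}
        if not {"FUNC", "PROB"} <= types:   # insufficient coverage
            gaps.append(f)
    return gaps

def duplicate_pairs(reqs, similar):
    """All unordered requirement pairs judged semantically equivalent."""
    return [(a, b) for a, b in combinations(reqs, 2) if similar(a, b)]

def contradiction_pairs(reqs, conflicts):
    """All unordered requirement pairs judged logically conflicting."""
    return [(a, b) for a, b in combinations(reqs, 2) if conflicts(a, b)]
```

Because `combinations` enumerates each unordered pair exactly once, the output sets correspond to the $i < j$ convention in the definitions above, and re-running the analysis on the same inputs yields identical flags.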
3. Methodological Workflow
The SAFER process is structured in sequential phases, each with formal inputs, outputs, and validation steps:
- Requirements-to-Function Allocation: The architecture model is extracted; each raw requirement is classified and mapped to a function using LLMs with controlled prompts and JSON-output constraints.
- Coverage Analysis: For each function, the sufficiency of both functional and probabilistic requirements is computed algorithmically. Functions not meeting the coverage threshold are flagged.
- Duplicate and Contradiction Detection: Within each function's allocated requirements, LLMs analyze for semantic duplication and logical contradiction. Prompt engineering and iterative refinement improve the recall and precision of these judgments.
- Audit and Reporting: All mappings and detected issues are presented for human-in-the-loop validation before acceptance. Full traceability is maintained via structured data and logs.
- Case Study—Autonomous Drone System: Demonstrated impact includes increased classification consistency (from ~71% to ~83%) and high recall in duplicate and contradiction detection after prompt tuning. Manual review time was sharply reduced, highlighting the operational efficiency of the approach (Chemo et al., 9 Jan 2026).
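The allocation phase relies on JSON-output constraints so that LLM responses can be checked mechanically before entering the traceability log. A minimal sketch of such a guard, assuming a hypothetical record schema (the field names and function identifiers here are illustrative, drawn from the coverage matrix below rather than from a published schema):

```python
import json

KNOWN_FUNCTIONS = {"DM", "EN", "NAV", "PEA"}   # from the architecture model
KNOWN_TYPES = {"FUNC", "PROB"}

def validate_allocation(raw: str) -> dict:
    """Parse one LLM allocation record and reject any deviation, so that
    malformed output never silently enters the audit trail."""
    record = json.loads(raw)
    missing = {"requirement_id", "function", "type"} - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if record["function"] not in KNOWN_FUNCTIONS:
        raise ValueError(f"unknown function: {record['function']}")
    if record["type"] not in KNOWN_TYPES:
        raise ValueError(f"unknown type: {record['type']}")
    return record
```

Rejected records are routed back for human-in-the-loop review rather than patched automatically, preserving the auditability goal.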
Table 1 – Example Coverage Matrix (from case study)
| Function | #Reqs | FUNC | PROB | Coverage |
|---|---|---|---|---|
| DM | 18 | ✔ | ✔ | Complete |
| EN | 12 | ✔ | ✔ | Complete |
| NAV | 10 | ✔ | ✗ | Missing |
| PEA | 8 | ✔ | ✗ | Missing |
| ... | ... | ... | ... | ... |
4. Integration with System-Theoretic and Formal Verification Approaches
SAFER is designed to interoperate with control-theoretic safety analysis, notably STPA (System-Theoretic Process Analysis), which is embedded into the requirements derivation step. Key elements include:
- Control structure diagrams encompassing software controllers, actuators, sensors, human and environmental interactions.
- Identification of Unsafe Control Actions (UCAs) and their translation to software safety requirements (SSRs).
- Construction of context tables distinguishing variable valuations triggering hazards.
- Expression of SSRs as temporal logic (e.g., LTL) conditions and integration as guards in statechart-based behavior models (Abdulkhaleq et al., 2016).
This approach supports both formal verification, using model checkers such as NuSMV/SPIN, and automated generation of safety-based test suites (ModelJUnit), with traceability linking each SSR to covered test cases and verification results. Such dual-track validation ensures both exhaustiveness (via model checking) and practical confidence (via test coverage) (Abdulkhaleq et al., 2016).
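As an illustration of the UCA-to-SSR translation, a hypothetical unsafe control action for the drone case ("motor-stop command issued while airborne") yields an LTL safety requirement of the usual globally-quantified implication form; the predicates below are illustrative, not taken from the cited case study:

```latex
% SSR (illustrative): the motor-stop command must never be issued while airborne
\mathbf{G}\,\bigl(\mathit{airborne} \rightarrow \neg\,\mathit{cmd\_motor\_stop}\bigr)
% Context-table refinement: stop is permitted only once landed and stationary
\mathbf{G}\,\bigl(\mathit{cmd\_motor\_stop} \rightarrow (\mathit{landed} \wedge \mathit{speed} = 0)\bigr)
```

Formulas of this shape can be checked directly by NuSMV or SPIN and simultaneously serve as transition guards in the statechart behavior model, giving a single artifact for both verification tracks.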
5. Formalization, Validation, and Traceability for Safety-Critical Domains
For safety-critical systems—e.g., railways, aerospace—SAFER can be instantiated with high expressiveness. Requirements are formalized in hybrid first-order temporal logic, validated automatically for logical consistency (SAT), scenario compatibility, and property entailment (checked via unsatisfiability of the negated property) using model checking and SMT-based bounded analysis (Cimatti et al., 2010). Methodological steps involve:
- Decomposing informal requirements into atomic fragments using traceability tools (e.g., IBM RequisitePro);
- Formalizing these in UML-extended, temporal logic specifications;
- Iterative validation against domain scenarios, with diagnostics provided by counterexamples or unsatisfiable cores for expert review.
Empirical results show that large-scale formalization and automated checking (e.g., for the ETCS project) can be accomplished at industrial scales, with direct impact on requirements quality and certification readiness (Cimatti et al., 2010).
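The consistency side of this validation reduces, in the propositional case, to satisfiability of the conjoined requirement fragments. A brute-force sketch in Python (industrial instances delegate this to an SMT solver; enumerating valuations is exponential and serves only to make the SAT/UNSAT distinction concrete):

```python
from itertools import product

def consistent(requirements, variables):
    """A requirement set is consistent iff some valuation of the variables
    satisfies every fragment simultaneously (SAT); otherwise the set
    contains a contradiction (UNSAT)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(req(env) for req in requirements):
            return True    # satisfying valuation found
    return False           # no valuation works: inconsistent set
```

In the UNSAT case, an SMT solver additionally returns an unsatisfiable core, which is what SAFER surfaces to experts as a diagnostic for the conflicting fragments.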
6. Extension to Machine Learning and Socio-Technical Systems
Emerging applications require extension of SAFER concepts to the algorithmic, ML, and socio-technical safety domains. This involves:
- Explicit treatment of epistemic uncertainty, harm thresholds, and distributional robustness in machine learning safety requirements (Varshney, 2016).
- Adaptation of FMEA and STPA frameworks for algorithmic impact assessment (e.g., social/ethical hazard mapping, Risk Priority Number scoring), supported by lightweight toolkits, organizational processes, and participatory stakeholder engagement (Rismani et al., 2022).
- Coordination of technical, human, and organizational perspectives in safety analysis, embedding behavioral safety goals and run-time capability monitoring within multi-view architecture frameworks (Bagschik et al., 2018).
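The Risk Priority Number used in FMEA-style algorithmic impact assessment is the product of severity, occurrence, and detectability scores. A minimal sketch, assuming the conventional 1–10 ordinal scales (the exact scales used in the cited toolkits may differ):

```python
def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Classical FMEA RPN on 1..10 scales; higher values are triaged first."""
    for name, s in (("severity", severity), ("occurrence", occurrence),
                    ("detection", detection)):
        if not 1 <= s <= 10:
            raise ValueError(f"{name} must be in 1..10, got {s}")
    return severity * occurrence * detection

def rank_hazards(hazards):
    """Sort (label, S, O, D) tuples by descending RPN for triage."""
    return sorted(hazards, key=lambda h: risk_priority_number(*h[1:]),
                  reverse=True)
```

Because RPN is ordinal-on-ordinal arithmetic, such scores are best read as a triage ordering rather than a calibrated risk measure, which is one motivation for pairing them with participatory stakeholder review.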
Integration into standard MLOps/CI-CD workflows and support for social harm detection are increasingly essential, requiring scalable, context-aware, and traceable SAFER methodologies (Rismani et al., 2022, Varshney, 2016).
7. Generalizable Principles, Limitations, and Future Directions
SAFER establishes foundational principles for requirements-driven safety engineering:
- Early and systematic identification of hazards and coverage gaps using formal models and AI assistance;
- Modular, ontology-based traceability from requirements to functions, behaviors, and implementation artifacts;
- Dual-track validation (formal/model-based testing) for confidence and exhaustiveness;
- Iterability and feedback: refinement of requirements as design knowledge evolves.
Limitations include reliance on the quality of domain models, prompts, and training data for LLM-based judgments, and the expressiveness bounds of collaborative formalizations. As complexity and sociotechnical entanglement increase, further work is warranted to increase the scope, automation, and socio-contextual awareness of SAFER-derived safety engineering processes (Chemo et al., 9 Jan 2026, Rismani et al., 2022).
References
- (Chemo et al., 9 Jan 2026) Foundational Analysis of Safety Engineering Requirements (SAFER)
- (Abdulkhaleq et al., 2016) A comprehensive safety engineering approach for software-intensive systems based on STPA
- (Cimatti et al., 2010) Formalization and Validation of Safety-Critical Requirements
- (Varshney, 2016) Engineering Safety in Machine Learning
- (Rismani et al., 2022) From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML
- (Bagschik et al., 2018) A System's Perspective Towards an Architecture Framework for Safe Automated Vehicles