
Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System (2204.07874v3)

Published 16 Apr 2022 in cs.SE and cs.LG

Abstract: Integration of Machine Learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance, but the details must be chiseled out for each specific case. We initiated a research project with the goal of demonstrating a complete safety case for an ML component in an open automotive system. This paper reports results from an industry-academia collaboration on safety assurance of SMIRK, an ML-based pedestrian automatic emergency braking demonstrator running in an industry-grade simulator. We demonstrate an application of AMLAS on SMIRK for a minimalistic operational design domain, i.e., we share a complete safety case for its integrated ML-based component. Finally, we report lessons learned and provide both SMIRK and the safety case under an open-source licence for the research community to reuse.

Authors (10)
  1. Markus Borg (60 papers)
  2. Jens Henriksson (7 papers)
  3. Kasper Socha (2 papers)
  4. Olof Lennartsson (2 papers)
  5. Elias Sonnsjö Lönegren (1 paper)
  6. Thanh Bui (7 papers)
  7. Piotr Tomaszewski (4 papers)
  8. Sankar Raman Sathyamoorthy (9 papers)
  9. Sebastian Brink (1 paper)
  10. Mahshid Helali Moghadam (14 papers)
Citations (19)