What's my role? Modelling responsibility for AI-based safety-critical systems (2401.09459v1)

Published 30 Dec 2023 in cs.CY and cs.AI

Abstract: AI-Based Safety-Critical Systems (AI-SCS) are being increasingly deployed in the real world. These can pose a risk of harm to people and the environment. Reducing that risk is an overarching priority during development and operation. As more AI-SCS become autonomous, a layer of risk management via human intervention has been removed. Following an accident it will be important to identify causal contributions, and the different responsible actors behind those, to learn from mistakes and prevent similar future events. Many authors have commented on the "responsibility gap", where it is difficult for developers and manufacturers to be held responsible for harmful behaviour of an AI-SCS. This is due to the complex development cycle for AI, uncertainty in AI performance, and the dynamic operating environment. A human operator can become a "liability sink", absorbing blame for the consequences of AI-SCS outputs that they were not responsible for creating and may not fully understand. This cross-disciplinary paper considers different senses of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety. We use a core concept (Actor(A) is responsible for Occurrence(O)) to create role responsibility models, producing a practical method to capture responsibility relationships and provide clarity on the previously identified responsibility issues. Our paper demonstrates the approach with two examples: a retrospective analysis of the Tempe, Arizona, fatal collision involving an autonomous vehicle, and a safety-focused predictive role-responsibility analysis for an AI-based diabetes co-morbidity predictor. In both examples our primary focus is on safety, aiming to reduce unfair or disproportionate blame being placed on operators or developers. We present a discussion and avenues for future research.
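The paper's core concept, Actor(A) is responsible for Occurrence(O), naturally lends itself to a relational representation. The sketch below is purely illustrative and not taken from the paper; every class name, field name, and example actor/occurrence is an assumption made for the sake of the example.

```python
from dataclasses import dataclass
from enum import Enum

class ResponsibilityType(Enum):
    # The four senses of responsibility the paper distinguishes.
    ROLE = "role"
    MORAL = "moral"
    LEGAL = "legal"
    CAUSAL = "causal"

@dataclass(frozen=True)
class Responsibility:
    """One instance of the core relation: Actor(A) is responsible for Occurrence(O)."""
    actor: str       # hypothetical example: "safety driver", "ML developer"
    occurrence: str  # hypothetical example: "monitoring the road"
    kind: ResponsibilityType

def actors_for(occurrence: str, relations: list[Responsibility]) -> set[str]:
    """Return every actor holding some responsibility for a given occurrence."""
    return {r.actor for r in relations if r.occurrence == occurrence}

# Hypothetical relations for a single occurrence shared by two actors:
relations = [
    Responsibility("safety driver", "monitoring the road", ResponsibilityType.ROLE),
    Responsibility("ML developer", "model performance", ResponsibilityType.ROLE),
    Responsibility("manufacturer", "model performance", ResponsibilityType.LEGAL),
]
```

Modelling the relation as a set of (actor, occurrence, kind) triples makes one of the paper's points concrete: a single occurrence can carry several responsibilities of different kinds, held by different actors, so blame need not collapse onto a single operator.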

Authors (5)
  1. Philippa Ryan (7 papers)
  2. Zoe Porter (6 papers)
  3. Joanna Al-Qaddoumi (2 papers)
  4. John McDermid (13 papers)
  5. Ibrahim Habli (20 papers)