Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams (2211.06326v3)

Published 31 Oct 2022 in cs.CY

Abstract: This chapter explores moral responsibility for civilian harms caused by human-AI teams. Although militaries may have some bad apples responsible for war crimes and some mad apples unable to be responsible for their actions during a conflict, increasingly militaries may 'cook' their good apples by putting them in untenable decision-making environments through the processes of replacing human decision-making with AI determinations in war making. Responsibility for civilian harm in human-AI military teams may be contested, risking operators becoming detached, being extreme moral witnesses, becoming moral crumple zones or suffering moral injury from being part of larger human-AI systems authorised by the state. Acknowledging military ethics, human factors and AI work to date as well as critical case studies, this chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams. These include: 1) new decision responsibility prompts for critical decision method in a cognitive task analysis, and 2) applying an AI workplace health and safety framework for identifying cognitive and psychological risks relevant to attributions of moral responsibility in targeting decisions. Mechanisms such as these enable militaries to design human-centred AI systems for responsible deployment.
