Accountability for Actions of Large-Scale AI Systems

Determine whether, and under what conditions, the humans who design, deploy, or control large-scale AI systems can be held responsible for those systems' actions, particularly when the systems are characterized as autonomous or unpredictable.

Background

In discussing arguments used to justify robot rights, the authors raise a practical concern: as AI systems scale and are portrayed as autonomous, responsibility for their actions may become ambiguous. They illustrate this with a scenario involving police robots equipped with LLMs, where manufacturers or deployers might invoke the systems' unpredictability to evade accountability.

The authors explicitly state that in large-scale systems it is no longer clear whether the humans in charge can still be held responsible, framing a core unresolved question in AI governance and liability.

References

[I]n large-scale AI systems it is no longer clear whether the humans in charge of these systems can still be held responsible for the deeds of their ‘autonomous’ systems.

Debunking Robot Rights Metaphysically, Ethically, and Legally (2404.10072 - Birhane et al., 15 Apr 2024) in Section 3 (The Robots at Issue)