Nose to Glass: Looking In to Get Beyond

Published 26 Nov 2020 in cs.CY (arXiv:2011.13153v2)

Abstract: Brought into public discourse through investigative work by journalists and scholars, awareness of algorithmic harms is at an all-time high. An increasing amount of research has been conducted under the banner of responsible AI, with the goal of addressing, alleviating, and eventually mitigating the harms brought on by the roll-out of algorithmic systems. Nonetheless, adoption of the resulting tools remains low. Given this gap, this paper offers a modest proposal: the field, particularly researchers concerned with responsible research and innovation, may stand to gain from supporting and prioritising more ethnographic work. Such embedded work can surface implementation frictions and reveal organisational and institutional norms that existing work on responsible artificial intelligence (AI) has not yet been able to capture. In turn, this can yield further insight into the anticipation of risks and the mitigation of harm. This paper reviews similar empirical work typically found elsewhere, commonly in science and technology studies and safety science research, and lays out the challenges of this form of inquiry.