A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms (2103.14976v2)
Published 27 Mar 2021 in cs.HC
Abstract: Given that a variety of stakeholders are involved in, and affected by, decisions from ML models, it is important to consider that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
- Ana Lucic (15 papers)
- Madhulika Srikumar (4 papers)
- Umang Bhatt (42 papers)
- Alice Xiang (28 papers)
- Ankur Taly (22 papers)
- Q. Vera Liao (49 papers)
- Maarten de Rijke (263 papers)