
Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support (2112.05675v2)

Published 10 Dec 2021 in cs.AI, cs.CY, and cs.HC

Abstract: Various tools and practices have been developed to support practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems. However, prior research has highlighted gaps between the intended design of these tools and practices and their use within particular contexts, including gaps caused by the role that organizational factors play in shaping fairness work. In this paper, we investigate these gaps for one such practice: disaggregated evaluations of AI systems, intended to uncover performance disparities between demographic groups. By conducting semi-structured interviews and structured workshops with thirty-three AI practitioners from ten teams at three technology companies, we identify practitioners' processes, challenges, and needs for support when designing disaggregated evaluations. We find that practitioners face challenges when choosing performance metrics, identifying the most relevant direct stakeholders and demographic groups on which to focus, and collecting datasets with which to conduct disaggregated evaluations. More generally, we identify impacts on fairness work stemming from a lack of engagement with direct stakeholders or domain experts, business imperatives that prioritize customers over marginalized groups, and the drive to deploy AI systems at scale.

Authors (5)
  1. Michael Madaio (15 papers)
  2. Lisa Egede (9 papers)
  3. Hariharan Subramonyam (13 papers)
  4. Jennifer Wortman Vaughan (52 papers)
  5. Hanna Wallach (48 papers)
Citations (120)
