Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics (2001.00089v3)

Published 17 Dec 2019 in cs.CY, cs.AI, and cs.LG

Abstract: Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public, by addressing the question: does a lay audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of three such definitions--demographic parity, equal opportunity, and equalized odds. We evaluate this metric using an online survey, and investigate the relationship between comprehension and sentiment, demographics, and the definition itself.
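The abstract names three fairness definitions without stating them. As context for readers, the following is a minimal sketch of the standard textbook forms of these definitions applied to toy binary predictions; it is illustrative only and is not the paper's survey instrument or code. The function names and data layout here are assumptions for the example.

```python
def rate(preds, labels, pred_value=1, label_value=None):
    """Fraction of examples predicted `pred_value`, optionally
    restricted to those whose true label is `label_value`."""
    pairs = [(p, y) for p, y in zip(preds, labels)
             if label_value is None or y == label_value]
    if not pairs:
        return 0.0
    return sum(1 for p, _ in pairs if p == pred_value) / len(pairs)

def fairness_gaps(group_a, group_b):
    """Each group is a (predictions, labels) pair of equal-length
    0/1 lists. Returns the absolute gap for each definition;
    a gap of 0.0 means the definition is exactly satisfied."""
    (pa, ya), (pb, yb) = group_a, group_b
    return {
        # Demographic parity: equal positive-prediction rates.
        "demographic_parity": abs(rate(pa, ya) - rate(pb, yb)),
        # Equal opportunity: equal true-positive rates (among y = 1).
        "equal_opportunity": abs(rate(pa, ya, label_value=1)
                                 - rate(pb, yb, label_value=1)),
        # Equalized odds: equal TPR and equal FPR (among y = 0 as well).
        "equalized_odds": max(
            abs(rate(pa, ya, label_value=1) - rate(pb, yb, label_value=1)),
            abs(rate(pa, ya, label_value=0) - rate(pb, yb, label_value=0)),
        ),
    }
```

For example, a classifier can satisfy equal opportunity (matching true-positive rates across groups) while still violating demographic parity and equalized odds, which is the kind of distinction the survey asks lay respondents to grasp.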

Authors (6)
  1. Debjani Saha (2 papers)
  2. Candice Schumann (10 papers)
  3. Duncan C. McElfresh (9 papers)
  4. John P. Dickerson (78 papers)
  5. Michelle L. Mazurek (11 papers)
  6. Michael Carl Tschantz (18 papers)
Citations (16)