
Statistical inference for individual fairness (2103.16714v1)

Published 30 Mar 2021 in stat.ML and cs.LG

Abstract: As we rely on ML models to make more consequential decisions, the issue of ML models perpetuating or even exacerbating undesirable historical biases (e.g., gender and racial biases) has come to the fore of the public's attention. In this paper, we focus on the problem of detecting violations of individual fairness in ML models. We formalize the problem as measuring the susceptibility of ML models to a form of adversarial attack and develop a suite of inference tools for the adversarial cost function. The tools allow auditors to assess the individual fairness of ML models in a statistically principled way: form confidence intervals for the worst-case performance differential between similar individuals and test hypotheses of model fairness with (asymptotic) non-coverage/Type I error rate control. We demonstrate the utility of our tools in a real-world case study.
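
To make the auditing workflow in the abstract concrete, below is a minimal sketch of the two-step idea: (1) for each audit point, run a gradient-based adversarial search for a "similar" individual on which the model does worst, and (2) form a CLT-based confidence interval for the mean worst-case loss differential and test the null hypothesis that the model is fair. The penalty form, the Mahalanobis-style fair metric `metric_mat`, the tolerance `tol`, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only; the attack penalty, fair metric, and test
# statistic below are assumptions, not the paper's exact construction.
import numpy as np
import torch
import torch.nn.functional as F
from scipy import stats

def worst_case_differentials(model, x, y, metric_mat, lam=10.0, steps=50, lr=0.05):
    """For each audit point, gradient-ascend loss(x') - lam * d_fair(x, x')^2
    to find a 'similar' individual on which the model performs worst, and
    return the per-point loss differentials."""
    x_adv = x.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(model(x_adv).squeeze(-1), y)
        delta = x_adv - x
        dist = ((delta @ metric_mat) * delta).sum(dim=1).mean()  # fair-metric penalty
        (-(loss - lam * dist)).backward()  # ascend the penalized objective
        opt.step()
    with torch.no_grad():
        base = F.binary_cross_entropy_with_logits(model(x).squeeze(-1), y, reduction="none")
        adv = F.binary_cross_entropy_with_logits(model(x_adv).squeeze(-1), y, reduction="none")
    return (adv - base).numpy()

def fairness_audit(diffs, tol=0.05, alpha=0.05):
    """CLT-based confidence interval for the mean worst-case differential and
    a one-sided test of H0: E[differential] <= tol (model is fair)."""
    n = len(diffs)
    mean, se = diffs.mean(), diffs.std(ddof=1) / np.sqrt(n)
    z = stats.norm.ppf(1 - alpha / 2)
    ci = (mean - z * se, mean + z * se)
    p_value = 1 - stats.norm.cdf((mean - tol) / se)  # small p rejects fairness
    return mean, ci, p_value

# Toy usage: audit an (untrained) linear classifier under a Euclidean fair metric.
torch.manual_seed(0)
X = torch.randn(200, 5)
y = (X[:, 0] > 0).float()
model = torch.nn.Linear(5, 1)
diffs = worst_case_differentials(model, X, y, metric_mat=torch.eye(5))
mean, ci, p = fairness_audit(diffs)
print(f"mean differential {mean:.3f}, 95% CI {ci}, p-value {p:.3f}")
```

Note that the paper's contribution is the asymptotic theory justifying inference on the *optimized* adversarial cost; a naive plug-in CLT on the attack output, as sketched above, does not by itself establish the stated non-coverage/Type I error control.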

Authors (4)
  1. Subha Maity (18 papers)
  2. Songkai Xue (7 papers)
  3. Mikhail Yurochkin (68 papers)
  4. Yuekai Sun (62 papers)
Citations (20)
