
Public Perceptions of Fairness Metrics Across Borders (2403.16101v3)

Published 24 Mar 2024 in cs.AI

Abstract: Which fairness metrics are appropriate in your context? People may disagree about whether an outcome is fair even when that outcome complies with established fairness metrics. Several questionnaire-based surveys have compared fairness metrics against human perceptions of fairness, but these surveys were limited in scope, covering only a few hundred participants within a single country. In this study, we conduct an international survey to evaluate public perceptions of various fairness metrics in decision-making scenarios. We collected responses from 1,000 participants in each of China, France, Japan, and the United States, 4,000 participants in total, to analyze preferences among fairness metrics. Our survey consists of three distinct scenarios, each paired with four fairness metrics. The investigation explores the relationship between personal attributes and the choice of fairness metric, uncovering a significant influence of national context on these preferences.
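
The abstract does not enumerate the four fairness metrics used in the survey. Purely as background on what "established fairness metrics" typically look like, the sketch below computes two widely used group fairness metrics, demographic parity difference and equalized odds difference, for binary decisions over a categorical group attribute. The function names and toy data are assumptions for this illustration, not the paper's code or its chosen metrics.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-decision rates between the best- and worst-treated groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest across-group gap in true-positive or false-positive rates."""
    gaps = []
    for y in (0, 1):                      # y == 0 gives the FPR gap, y == 1 the TPR gap
        mask = y_true == y
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy data: binary decisions for two demographic groups, A and B.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))      # 0.0: equal selection rates
print(equalized_odds_difference(y_true, y_pred, group))  # ~0.33: error rates still differ
```

The toy output illustrates the kind of discordance the paper studies: the two groups satisfy demographic parity exactly, yet equalized odds is violated, so whether the decisions "comply with fairness metrics" depends on which metric one adopts.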

