
Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly (2405.09251v1)

Published 15 May 2024 in cs.LG and cs.CY

Abstract: As ML applications are increasingly deployed in the real world, concerns about discrimination hidden in ML models are growing, particularly in high-stakes domains. Existing techniques for assessing the discrimination level of ML models include the commonly used group and individual fairness measures. However, these two types of fairness measures are often incompatible with each other, and even two different group fairness measures can be mutually incompatible. To address this issue, we evaluate the discrimination level of classifiers from a manifold perspective and propose a "harmonic fairness measure via manifolds (HFM)" based on distances between sets. Because directly calculating these distances may be too expensive to be practical, we devise an approximation algorithm named "Approximation of distance between sets (ApproxDist)" to enable accurate estimation of the distances, and we further demonstrate its algorithmic effectiveness under certain reasonable assumptions. Empirical results indicate that the proposed fairness measure HFM is valid and that ApproxDist is effective and efficient.
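
The abstract leaves the precise definitions of HFM and ApproxDist to the paper itself. As a rough illustration of the underlying idea, measuring a fairness-relevant distance between two sets of model outputs (split by a sensitive attribute) and approximating that distance by sampling when the exact computation is too costly, here is a minimal Python sketch. The Hausdorff-style set distance, the binary group split, and the random-subset approximation are assumptions chosen for illustration, not the paper's actual HFM or ApproxDist formulation.

```python
import numpy as np

def set_distance(A, B):
    """Symmetric Hausdorff-style distance between two point sets.
    NOTE: an assumed stand-in for the paper's distance between sets."""
    # distance from each point of one set to its nearest neighbour in the other
    d_ab = np.array([np.min(np.linalg.norm(B - a, axis=1)) for a in A])
    d_ba = np.array([np.min(np.linalg.norm(A - b, axis=1)) for b in B])
    return max(d_ab.max(), d_ba.max())

def approx_set_distance(A, B, n_samples=256, seed=None):
    """Cheaper estimate: compare random subsets instead of the full sets.
    NOTE: a generic sampling approximation, not the paper's ApproxDist."""
    rng = np.random.default_rng(seed)
    A_sub = A[rng.choice(len(A), size=min(n_samples, len(A)), replace=False)]
    B_sub = B[rng.choice(len(B), size=min(n_samples, len(B)), replace=False)]
    return set_distance(A_sub, B_sub)

if __name__ == "__main__":
    # Toy usage: split classifier scores by a binary sensitive attribute and
    # compare the exact set distance with its sampled approximation.
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(1000, 1))        # e.g. classifier scores
    group = rng.integers(0, 2, size=1000)      # binary sensitive attribute
    exact = set_distance(scores[group == 0], scores[group == 1])
    approx = approx_set_distance(scores[group == 0], scores[group == 1], seed=0)
    print(f"exact={exact:.4f}  approx={approx:.4f}")
```

A smaller distance between the two groups' output sets would indicate less disparity under this illustrative measure; the sampled version trades some accuracy for lower cost, which mirrors the motivation the abstract gives for ApproxDist.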

