Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness (2304.09779v3)

Published 19 Apr 2023 in cs.LG, cs.CY, math.OC, and math.PR

Abstract: Group fairness is achieved by equalising prediction distributions between protected sub-populations; individual fairness requires treating similar individuals alike. These two objectives, however, are incompatible when a scoring model is calibrated through discontinuous probability functions, under which individuals can be randomly assigned an outcome determined by a fixed probability. This procedure may give two similar individuals from the same protected group markedly different classification odds, a clear violation of individual fairness. Assigning unique odds to each protected sub-population may also prevent members of one sub-population from ever receiving the same chance of a positive outcome as members of another, which we argue is a further type of unfairness that we call individual odds. We reconcile these objectives by constructing continuous probability functions between group thresholds, constrained by their Lipschitz constant. Our solution preserves the model's predictive power, individual fairness and robustness while ensuring group fairness.
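
To make the contrast in the abstract concrete, here is a minimal sketch in Python of the two post-processing styles it describes: a randomised threshold rule that assigns a fixed acceptance probability inside an interval (and is therefore discontinuous), and a continuous, Lipschitz-bounded alternative. This is an illustration of the idea only, not the authors' implementation; all function and parameter names (randomized_acceptance, lipschitz_acceptance, t_lower, t_upper, p, L) are hypothetical.

```python
import numpy as np

def randomized_acceptance(score: float, t_lower: float, t_upper: float, p: float) -> float:
    """Randomised threshold rule in the style of equalised-odds
    post-processing (Hardt et al., 2016): every score inside
    [t_lower, t_upper) is accepted with the same fixed probability p,
    so the acceptance probability jumps at both thresholds."""
    if score < t_lower:
        return 0.0
    if score >= t_upper:
        return 1.0
    return p  # identical odds for every individual in the interval

def lipschitz_acceptance(score: float, t_lower: float, t_upper: float, L: float) -> float:
    """Hypothetical continuous alternative: interpolate the acceptance
    probability linearly from 0 at t_lower to 1 at t_upper.  This
    function is continuous everywhere and Lipschitz with constant
    1 / (t_upper - t_lower), which must fit within the budget L."""
    slope = 1.0 / (t_upper - t_lower)
    if slope > L:
        raise ValueError("interval too narrow for the Lipschitz budget L")
    return float(np.clip((score - t_lower) * slope, 0.0, 1.0))
```

With t_lower=0.4 and t_upper=0.6 the ramp has slope 5, so any budget L >= 5 is feasible: applicants scoring 0.50 and 0.51 receive acceptance probabilities 0.50 and 0.55, close because their scores are close. Under the randomised rule both would receive the same fixed p, yet an applicant at 0.60 jumps straight to probability 1, the kind of discontinuity the paper identifies as a source of individual unfairness.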
