
Procedural Fairness in Machine Learning (2404.01877v1)

Published 2 Apr 2024 in cs.LG

Abstract: Fairness in machine learning (ML) has received much attention. However, existing studies have mainly focused on the distributive fairness of ML models; the other dimension of fairness, procedural fairness, has been neglected. In this paper, we first define the procedural fairness of ML models and then give formal definitions of individual and group procedural fairness. We propose a novel metric to evaluate the group procedural fairness of ML models, called $GPF_{FAE}$, which utilizes a widely used explainable artificial intelligence technique, namely feature attribution explanation (FAE), to capture the decision process of ML models. We validate the effectiveness of $GPF_{FAE}$ on a synthetic dataset and eight real-world datasets. Our experiments reveal the relationship between the procedural and distributive fairness of ML models. Based on our analysis, we propose a method for identifying the features that lead to procedural unfairness, and two methods for improving procedural fairness once the unfair features are identified. Our experimental results demonstrate that we can accurately identify the features that cause procedural unfairness in an ML model, and that both of our proposed methods significantly improve procedural fairness with only a slight impact on model performance, while also improving distributive fairness.
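The page does not reproduce the paper's exact formulation of $GPF_{FAE}$, but the core idea described in the abstract, comparing feature-attribution profiles across demographic groups, can be sketched as follows. Everything here is an illustrative assumption: the linear attribution rule stands in for a real FAE method such as SHAP, and the normalized-distance score and function names are hypothetical, not the paper's definition.

```python
import numpy as np

def feature_attributions(weights, X, baseline):
    """Toy attribution for a linear model: each feature's contribution
    is its weight times its deviation from a baseline input."""
    return weights * (X - baseline)

def gpf_fae(weights, X, group, baseline=None):
    """Illustrative group procedural fairness score (not the paper's exact
    metric): similarity of the mean attribution profiles of two groups,
    where 1.0 means the model's decision process treats them identically."""
    if baseline is None:
        baseline = X.mean(axis=0)
    attr = feature_attributions(weights, X, baseline)
    mean_a = attr[group == 0].mean(axis=0)
    mean_b = attr[group == 1].mean(axis=0)
    # Normalized L2 distance between profiles, mapped into [0, 1]
    dist = np.linalg.norm(mean_a - mean_b)
    scale = np.linalg.norm(mean_a) + np.linalg.norm(mean_b) + 1e-12
    return 1.0 - dist / scale
```

Under this sketch, a model whose attributions are distributed identically across groups scores 1.0, while systematic attribution differences (e.g. one group's decisions driven by a proxy feature) pull the score below 1.0, which is the kind of signal the abstract says is used both to detect unfair features and to guide mitigation.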

Authors (3)
  1. Ziming Wang (59 papers)
  2. Changwu Huang (4 papers)
  3. Xin Yao (139 papers)

