FairCompass: Operationalising Fairness in Machine Learning (2312.16726v1)

Published 27 Dec 2023 in cs.LG, cs.AI, cs.CY, and cs.SE

Abstract: As AI increasingly becomes an integral part of our societal and individual activities, there is a growing imperative to develop responsible AI solutions. Although a diverse assortment of machine learning fairness solutions has been proposed in the literature, there is reportedly a lack of practical implementation of these tools in real-world applications. Industry experts have participated in thorough discussions on the challenges associated with operationalising fairness in the development of machine learning-empowered solutions, in which a shift toward human-centred approaches has been advocated to mitigate the limitations of existing techniques. In this work, we propose a human-in-the-loop approach for fairness auditing, presenting a mixed visual analytics system (hereafter referred to as 'FairCompass'), which integrates a subgroup discovery technique and a decision tree-based schema for end users. Moreover, we integrate an Exploration, Guidance and Informed Analysis loop to facilitate the use of the Knowledge Generation Model for Visual Analytics in FairCompass. We evaluate the effectiveness of FairCompass for fairness auditing in a real-world scenario, and the findings demonstrate the system's potential for real-world deployability. We anticipate this work will address current gaps in fairness research and facilitate the operationalisation of fairness in machine learning systems.

Introduction to FairCompass

As AI continues to expand, fairness in ML has become a pressing concern. With AI's integration into society, unfair ML models have been shown to negatively impact individuals, especially those from marginalized groups. Research has focused heavily on minimizing algorithmic bias, yet the practical adoption of fairness solutions lags behind: existing tools see little practical implementation in real-world applications, leading researchers to advocate a shift towards human-centered approaches that complement existing techniques.

A Human-in-the-Loop Approach for Fairness

FairCompass, a human-in-the-loop approach, offers a step towards addressing fairness in ML systems. It combines subgroup discovery techniques with decision-tree-based guidance for end users, aiming to streamline the fairness auditing process. The approach integrates an Exploration, Guidance, and Informed Analysis loop to support the deployment of fairness auditing in real-world settings, and it is realized as a mixed visual analytics system that helps users understand and work with the fairness metrics relevant to their data.
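The summary does not include the system's implementation, but the subgroup-level view of fairness it relies on is straightforward to illustrate. The sketch below is an assumption-laden illustration rather than FairCompass itself: it uses pandas with hypothetical column names (sex, race, y_pred) to compute per-subgroup selection rates and their statistical parity difference against the overall rate, the kind of intersectional comparison a subgroup-discovery-driven audit surfaces.

```python
# Minimal sketch of subgroup-level fairness auditing (not FairCompass itself).
# Assumes a pandas DataFrame with hypothetical columns: binary predictions in
# "y_pred" and protected attributes "sex" and "race".
import pandas as pd

def subgroup_selection_rates(df: pd.DataFrame, protected: list, pred_col: str = "y_pred") -> pd.DataFrame:
    """Selection rate (mean positive prediction) for every intersectional subgroup."""
    rates = (
        df.groupby(protected)[pred_col]
        .agg(selection_rate="mean", size="count")
        .reset_index()
    )
    overall = df[pred_col].mean()
    # Statistical parity difference of each subgroup relative to the overall rate.
    rates["spd_vs_overall"] = rates["selection_rate"] - overall
    return rates.sort_values("spd_vs_overall")

if __name__ == "__main__":
    # Toy data standing in for a real auditing dataset.
    df = pd.DataFrame({
        "sex":    ["F", "F", "M", "M", "F", "M", "F", "M"],
        "race":   ["A", "B", "A", "B", "A", "A", "B", "B"],
        "y_pred": [0,   0,   1,   1,   0,   1,   0,   1],
    })
    print(subgroup_selection_rates(df, ["sex", "race"]))
```

In an audit, a table like this would only be the starting point of the Exploration and Guidance steps: large gaps flag subgroups worth inspecting further, rather than serving as verdicts on their own.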

Operationalizing Fairness in Practice

The development of FairCompass began with a comprehensive review of existing AI fairness tools to identify common shortcomings. By combining technical, non-technical, and visual analytics solutions, FairCompass moves towards operationalizing fairness in ML, which the authors demonstrate by benchmarking against existing tools and showing that the approach meets the practical needs of ML practitioners. The system follows a human-centric design and assumes no prior expertise in ML fairness on the part of its users.
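Alongside the metric computations, the decision-tree-based guidance mentioned above can be pictured as a short questionnaire that walks an auditor from properties of their task to a candidate metric, in the spirit of fairness decision trees from the literature (e.g., Ruf and Detyniecki's "Fairness Compass"). The sketch below is a simplified, hypothetical rendering of that idea; the questions and metric names are illustrative assumptions, not the schema FairCompass ships with.

```python
# Hypothetical sketch of decision-tree style guidance for choosing a fairness
# metric; the questions and recommendations are simplified illustrations, not
# the exact schema used by FairCompass.
from dataclasses import dataclass

@dataclass
class AuditContext:
    reliable_ground_truth: bool  # are observed labels trustworthy enough to score errors?
    errors_of_concern: str       # "false_positives", "false_negatives", or "both"

def recommend_metric(ctx: AuditContext) -> str:
    """Walk a small decision tree from auditing context to a candidate metric."""
    if not ctx.reliable_ground_truth:
        # Without trustworthy labels, error-rate comparisons are not meaningful.
        return "demographic parity (compare selection rates across groups)"
    if ctx.errors_of_concern == "false_positives":
        return "predictive equality (equal false positive rates)"
    if ctx.errors_of_concern == "false_negatives":
        return "equal opportunity (equal true positive rates)"
    return "equalized odds (equal true and false positive rates)"

# Example: labels are reliable and missed positives are the main harm.
print(recommend_metric(AuditContext(reliable_ground_truth=True,
                                    errors_of_concern="false_negatives")))
```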

Evaluation and Future Directions

FairCompass has been evaluated on a real-world fairness auditing scenario, demonstrating its effectiveness and potential for deployment. The system is publicly available for further research and use. The development of FairCompass also reveals open challenges: more comprehensive guidance is needed to cover the complex fairness research landscape, human biases can enter human-in-the-loop systems, domain-specific issues require tailored adjustments, and fairness practices ultimately need higher-level organizational enforcement.

As we move towards more responsible AI practices, incorporating methods for operationalizing fairness such as those offered by FairCompass can assist organizations in navigating the challenges of fairness in decision-making powered by machine learning.

Authors
  1. Jessica Liu
  2. Huaming Chen
  3. Jun Shen
  4. Kim-Kwang Raymond Choo