Challenging the Human-in-the-loop in Algorithmic Decision-making (2405.10706v2)

Published 17 May 2024 in cs.LG

Abstract: We discuss the role of humans in algorithmic decision-making (ADM) for socially relevant problems from a technical and philosophical perspective. In particular, we illustrate tensions arising from the diverse expectations, values, and constraints of and on the humans involved. To this end, we assume that a strategic decision-maker (SDM) introduces ADM to optimize strategic and societal goals, while the algorithm's recommended actions are overseen by a practical decision-maker (PDM) - a specific human-in-the-loop - who makes the final decisions. While the PDM is typically assumed to be a corrective, it can counteract the realization of the SDM's desired goals and societal values, not least because of a misalignment of these values and the unmet information needs of the PDM. This has significant implications for the distribution of power between the stakeholders in ADM, their constraints, and their information needs. In particular, we emphasize the overseeing PDM's role as a potential political and ethical decision-maker who is expected to balance strategic, value-driven objectives against on-the-ground individual decisions and constraints. We demonstrate empirically, on a machine learning benchmark dataset, the significant impact an overseeing PDM's decisions can have even when the PDM is constrained to performing only a limited number of actions that differ from the algorithm's recommendations. To ensure that the SDM's intended values are realized, the PDM needs to be provided with appropriate information, conveyed through tailored explanations, and its role must be characterized clearly. Our findings emphasize the need for an in-depth discussion of the role and power of the PDM and challenge the often-taken view that merely including a human-in-the-loop in ADM ensures the 'correct' and 'ethical' functioning of the system.
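The abstract's empirical setup - a PDM who may deviate from the algorithm's recommendations only a limited number of times, yet still shifts system-level outcomes - can be sketched as a small simulation. Everything here is illustrative and not the paper's actual experiment: the group sizes, the recommendation rates, the override budget, and the PDM's policy (flipping negative recommendations for the disadvantaged group) are all assumptions chosen to make the mechanism concrete.

```python
import random

random.seed(0)

# Hypothetical population: N individuals split evenly into groups A and B.
# The "algorithm" recommends a positive decision at group-dependent rates,
# producing a gap in positive rates between the groups.
N = 1000
group = ["A"] * (N // 2) + ["B"] * (N // 2)
algo = [1 if random.random() < (0.6 if g == "A" else 0.3) else 0 for g in group]

def positive_rate(decisions, grp):
    """Fraction of positive decisions within one group."""
    idx = [i for i, g in enumerate(group) if g == grp]
    return sum(decisions[i] for i in idx) / len(idx)

gap_before = positive_rate(algo, "A") - positive_rate(algo, "B")

# Illustrative PDM policy: flip negative recommendations for group B
# until a small override budget is exhausted.
budget = 50  # only 5% of all decisions may deviate from the algorithm
final = list(algo)
for i, (g, a) in enumerate(zip(group, algo)):
    if budget == 0:
        break
    if g == "B" and a == 0:
        final[i] = 1
        budget -= 1

gap_after = positive_rate(final, "A") - positive_rate(final, "B")
print(f"gap before: {gap_before:.2f}, gap after: {gap_after:.2f}")
```

Even though the PDM touches only 5% of cases, the group-level gap narrows noticeably, which mirrors the paper's point that a budget-constrained human-in-the-loop can materially alter whether the SDM's intended values are realized.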
