
On the Fairness of Machine-Assisted Human Decisions (2110.15310v2)

Published 28 Oct 2021 in cs.CY, cs.HC, cs.LG, econ.GN, q-fin.EC, and stat.ML

Abstract: When machine-learning algorithms are used in high-stakes decisions, we want to ensure that their deployment leads to fair and equitable outcomes. This concern has motivated a fast-growing literature that focuses on diagnosing and addressing disparities in machine predictions. However, many machine predictions are deployed to assist in decisions where a human decision-maker retains the ultimate decision authority. In this article, we therefore consider in a formal model and in a lab experiment how properties of machine predictions affect the resulting human decisions. In our formal model of statistical decision-making, we show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions. Specifically, we document that excluding information about protected groups from the prediction may fail to reduce, and may even increase, ultimate disparities. In the lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions. While our concrete theoretical results rely on specific assumptions about the data, algorithm, and decision-maker, and the experiment focuses on a particular prediction task, our findings show more broadly that any study of critical properties of complex decision systems, such as the fairness of machine-assisted human decisions, should go beyond focusing on the underlying algorithmic predictions in isolation.
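The mechanism described in the abstract can be illustrated with a toy simulation. This is a stylized sketch, not the paper's actual model: the precision-weighted decision rule and every parameter value (`delta_h`, `sigma_h`, and so on) are illustrative assumptions. The idea it captures is that a group-blind prediction is a noisier signal of the outcome, so a biased human leans more on their own stereotype, and the final decision gap can exceed both the true gap and the gap under a group-aware prediction.

```python
import numpy as np

# Stylized simulation: a human decision-maker blends a machine prediction
# of outcome y with a biased prior about the group gap. All parameters
# below are illustrative assumptions, not the paper's calibration.
rng = np.random.default_rng(0)
n = 200_000
p = 0.5          # share of group g = 1
delta = 1.0      # true mean outcome gap between groups
delta_h = 3.0    # human's exaggerated stereotype about that gap
sigma_eps = 0.2  # irreducible outcome noise
sigma_h = 0.5    # spread of the human's (confidently held) prior

g = rng.binomial(1, p, n)
x = rng.normal(0.0, 1.0, n)        # non-group covariate, independent of g
y = x + delta * g + rng.normal(0.0, sigma_eps, n)

# Machine predictions of y
m_aware = x + delta * g            # uses the protected attribute
m_blind = x + delta * p            # group-blind: absorbs only the average shift

def human_decision(m, machine_err_var):
    """Precision-weighted blend of the machine signal and the human's
    biased prior (prior mean delta_h * g). A noisier machine prediction
    gets less weight, so the stereotype matters more."""
    w = (1 / machine_err_var) / (1 / machine_err_var + 1 / sigma_h**2)
    return w * m + (1 - w) * delta_h * g

var_aware = sigma_eps**2                           # aware prediction error
var_blind = sigma_eps**2 + delta**2 * p * (1 - p)  # omitted group term adds error

d_aware = human_decision(m_aware, var_aware)
d_blind = human_decision(m_blind, var_blind)

def gap(d):
    """Mean decision for g = 1 minus mean decision for g = 0."""
    return d[g == 1].mean() - d[g == 0].mean()

print(f"true outcome gap:             {gap(y):+.2f}")
print(f"decision gap, group-aware ML: {gap(d_aware):+.2f}")
print(f"decision gap, group-blind ML: {gap(d_blind):+.2f}")
```

Under these assumed parameters, the group-blind pipeline produces the largest decision gap, larger than both the group-aware pipeline's gap and the true outcome gap, mirroring the abstract's claim that excluding protected-group information from the prediction can fail to reduce, and may increase, ultimate disparities.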

Authors (3)
  1. Talia Gillis
  2. Bryce McLaughlin
  3. Jann Spiess
Citations (14)