Equity in Healthcare: Analyzing Disparities in Machine Learning Predictions of Diabetic Patient Readmissions (2403.19057v1)
Abstract: This study investigates how machine learning (ML) models can predict hospital readmissions for diabetic patients both fairly and accurately across demographic groups (age, gender, race). We compared Deep Learning, Generalized Linear Models, Gradient Boosting Machines (GBM), and Naive Bayes. GBM stood out with an F1-score of 84.3% and an accuracy of 82.2%, predicting readmissions reliably across demographics. A fairness analysis was conducted on all models. GBM minimized disparities in predictions, achieving balanced results across genders and races: it showed low False Discovery Rates (FDR) of 6-7% and False Positive Rates (FPR) of 5% for both genders, and FDRs remained low for racial groups such as African Americans (8%) and Asians (7%). Similarly, FPRs were consistent across age groups (4%) for patients both under and over 40, indicating the model's precision and its ability to reduce bias. These findings emphasize the importance of choosing ML models carefully to ensure both accuracy and fairness for all patients. By evaluating the effectiveness of several models against fairness metrics, this study supports personalized medicine and the case for fair ML algorithms in healthcare, which can ultimately reduce disparities and improve outcomes for diabetic patients of all backgrounds.
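The fairness audit described above rests on two group-wise error rates: the False Discovery Rate, FDR = FP / (FP + TP), the share of predicted readmissions that are wrong, and the False Positive Rate, FPR = FP / (FP + TN), the share of patients who are not readmitted but are flagged anyway. Below is a minimal sketch of how such a per-group audit could be computed; the DataFrame columns (`y_true`, `y_pred`, `gender`) are illustrative assumptions, not names from the paper, and the paper's own tooling may differ.

```python
# Hypothetical sketch of a group-wise fairness audit (FDR/FPR per
# demographic group); column names are illustrative assumptions.
import pandas as pd

def group_fdr_fpr(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Return False Discovery Rate and False Positive Rate per group.

    FDR = FP / (FP + TP)  -- fraction of positive predictions that are wrong
    FPR = FP / (FP + TN)  -- fraction of true negatives flagged positive
    """
    rows = []
    for group, g in df.groupby(group_col):
        tp = ((g["y_pred"] == 1) & (g["y_true"] == 1)).sum()
        fp = ((g["y_pred"] == 1) & (g["y_true"] == 0)).sum()
        tn = ((g["y_pred"] == 0) & (g["y_true"] == 0)).sum()
        fdr = fp / (fp + tp) if (fp + tp) else float("nan")
        fpr = fp / (fp + tn) if (fp + tn) else float("nan")
        rows.append({group_col: group, "FDR": fdr, "FPR": fpr})
    return pd.DataFrame(rows)

# Toy usage: binary readmission labels/predictions with a gender column.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 0, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 1, 0, 0, 1, 0],
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
})
print(group_fdr_fpr(df, "gender"))
```

Comparing the resulting per-group rates (e.g., FDR for women vs. men) is what lets one say, as the abstract does, that a model's errors are balanced across demographics rather than concentrated in one group.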