Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted? (2306.11985v1)
Abstract: The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms. Although various methods of explainable artificial intelligence (XAI) have been suggested, there is a lack of literature that examines their practicality and assesses them against criteria that could foster trust in clinical environments. To address this gap, this study evaluates two popular XAI methods used for explaining predictive models in the healthcare context in terms of whether they (i) generate domain-appropriate representations, i.e. coherent with respect to the application task, (ii) impact clinical workflow, and (iii) are consistent. To that end, explanations generated at the cohort and patient levels were analysed. The paper reports the first benchmarking of XAI methods applied to risk prediction models, obtained by evaluating the concordance between generated explanations and the trigger of a future clinical deterioration episode recorded by the data collection system. We carried out an analysis using two Electronic Medical Records (EMR) datasets sourced from major Australian hospitals. The findings underscore both the limitations of state-of-the-art XAI methods in the clinical context and their potential benefits. We discuss these limitations and contribute to the theoretical development of trustworthy XAI solutions in which clinical decision support guides the choice of intervention by suggesting the patterns or drivers of future clinical deterioration.
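The concordance and consistency criteria described in the abstract lend themselves to simple quantitative checks. The sketch below is a minimal illustration under stated assumptions, not the authors' exact protocol: it assumes per-patient feature-attribution matrices (e.g., stand-ins for SHAP or LIME outputs), a recorded trigger-feature index per deterioration episode, and two hypothetical metrics — a top-k hit rate for domain concordance and a per-patient Spearman rank correlation for cross-method consistency. All function names and the synthetic data are illustrative.

# Minimal sketch (assumptions, not the paper's protocol): given per-patient
# attribution matrices from two XAI methods, measure (a) concordance of the
# top-k attributed features with the recorded deterioration trigger and
# (b) consistency between the two methods' feature rankings.
import numpy as np
from scipy.stats import spearmanr

def top_k_hit_rate(attributions: np.ndarray, trigger_idx: np.ndarray, k: int = 5) -> float:
    """Fraction of patients whose recorded trigger feature appears among
    the k largest-magnitude attributions (hypothetical concordance metric)."""
    order = np.argsort(-np.abs(attributions), axis=1)  # features ranked per patient
    hits = [trigger_idx[i] in order[i, :k] for i in range(attributions.shape[0])]
    return float(np.mean(hits))

def mean_rank_consistency(attr_a: np.ndarray, attr_b: np.ndarray) -> float:
    """Mean per-patient Spearman correlation between two methods'
    attribution magnitudes (hypothetical consistency metric)."""
    rhos = []
    for a, b in zip(attr_a, attr_b):
        rho, _ = spearmanr(np.abs(a), np.abs(b))
        rhos.append(rho)
    return float(np.nanmean(rhos))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_patients, n_features = 200, 30
    method_a = rng.normal(size=(n_patients, n_features))                   # stand-in for SHAP values
    method_b = method_a + rng.normal(scale=0.5, size=method_a.shape)       # stand-in for LIME weights
    triggers = rng.integers(0, n_features, size=n_patients)                # recorded trigger per episode
    print("top-5 hit rate:", top_k_hit_rate(method_a, triggers, k=5))
    print("cross-method consistency:", mean_rank_consistency(method_a, method_b))

A high top-k hit rate would suggest the explanations surface the clinically recorded driver; a low cross-method correlation would echo the "disagreement problem" the study highlights. Either metric could, in principle, be swapped for a task-specific alternative.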
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. 
[2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? 
International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. 
[2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. 
In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. 
Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. 
Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Tjoa, E., Guan, C.: A survey on explainable artificial intelligence (xai): Toward medical xai. IEEE transactions on neural networks and learning systems 32(11), 4793–4813 (2020) Vayena et al. [2018] Vayena, E., Blasimme, A., Cohen, I.G.: Machine learning in medicine: addressing ethical challenges. PLoS medicine 15(11), 1002689 (2018) Smilkov et al. [2017] Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017) Sundararajan et al. [2017] Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328 (2017). PMLR Montavon et al. [2017] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. 
JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Vayena, E., Blasimme, A., Cohen, I.G.: Machine learning in medicine: addressing ethical challenges. PLoS medicine 15(11), 1002689 (2018) Smilkov et al. [2017] Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017) Sundararajan et al. [2017] Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328 (2017). PMLR Montavon et al. [2017] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. 
[2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017) Sundararajan et al. [2017] Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328 (2017). PMLR Montavon et al. [2017] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. 
arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328 (2017). PMLR Montavon et al. 
[2017] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. 
[2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. 
[2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. 
[2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. 
[2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328 (2017). PMLR Montavon et al. [2017] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. 
Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. 
Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R.: Explaining nonlinear classification decisions with deep taylor decomposition. Pattern recognition 65, 211–222 (2017) Montavon et al. [2019] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. 
[2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. 
[2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. 
[2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? 
International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. 
arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. 
Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. 
Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. 
Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. 
[2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
[2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. 
[2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. 
[2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. 
[2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. 
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. 
[2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? 
International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. 
[2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. 
In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. 
In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. 
Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. 
Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. 
arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. 
[2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. 
[2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. 
arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. 
Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. 
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. 
[2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. 
[2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. 
Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? 
International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. 
Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.-R.: Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, 193–209 (2019) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. 
[2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ribeiro, M.T., Singh, S., Guestrin, C.: ” why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) Lundberg et al. [2020] Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. 
[2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. 
[2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. 
[2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. 
[2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. 
conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. 
[2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. 
Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. 
Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. 
[2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. 
[2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. 
Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. 
Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. 
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable ai for trees. Nature machine intelligence 2(1), 56–67 (2020) Krishna et al. [2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. 
[2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: A practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022) Brankovic et al. [2023] Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. 
[2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. 
[2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. 
[2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. 
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. 
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. 
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. 
Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Brankovic, A., Huang, W., Cook, D., Khanna, S., Bialkowski, K.: Elucidating discrepancy in explanations of predictive models developed using emr. MedInfo2023 (2023) Brankovic et al. [2022a] Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. 
[2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Brankovic, A., Hassanzadeh, H., Good, N., Mann, K., Khanna, S., Abdel-Hafez, A., Cook, D.: Explainable machine learning for real-time deterioration alert prediction to guide pre-emptive treatment. Scientific Reports 12(1), 11734 (2022) Brankovic et al. [2022b] Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. 
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. 
Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Brankovic, A., Rolls, D., Boyle, J., Niven, P., Khanna, S.: Identifying patients at risk of unplanned re-hospitalisation using statewide electronic health records. Scientific Reports 12(1), 16592 (2022) Sufriyana et al. [2020] Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. 
[2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Sufriyana, H., Husnayain, A., Chen, Y.-L., Kuo, C.-Y., Singh, O., Yeh, T.-Y., Wu, Y.-W., Su, E.C.-Y., et al.: Comparison of multivariable logistic regression and other machine learning algorithms for prognostic prediction studies in pregnancy care: systematic review and meta-analysis. JMIR medical informatics 8(11), 16503 (2020) Shin et al. [2021] Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. 
Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Shin, S., Austin, P.C., Ross, H.J., Abdel-Qadir, H., Freitas, C., Tomlinson, G., Chicco, D., Mahendiran, M., Lawler, P.R., Billia, F., et al.: Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC heart failure 8(1), 106–115 (2021) Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. 
arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. 
arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. 
[2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. 
Cummings et al. [2021] Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021)
Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021)
Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. 
Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. 
[2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Cummings, B.C., Ansari, S., Motyka, J.R., Wang, G., Medlin Jr, R.P., Kronick, S.L., Singh, K., Park, P.K., Napolitano, L.M., Dickson, R.P., et al.: Predicting intensive care transfers and other unforeseen events: analytic model validation study and comparison to existing methods. JMIR Medical Informatics 9(4), 25066 (2021) Lejarza et al. [2021] Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lejarza, F., Calvert, J., Attwood, M.M., Evans, D., Mao, Q.: Optimal discharge of patients from intensive care via a data-driven policy learning framework. arXiv preprint arXiv:2112.09315 (2021) Moreno-Sanchez [2020] Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Moreno-Sanchez, P.A.: Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 4902–4910 (2020). IEEE Zhou et al. [2022] Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Zhou, X., Nakamura, K., Sahara, N., Asami, M., Toyoda, Y., Enomoto, Y., Hara, H., Noro, M., Sugi, K., Moroi, M., et al.: Exploring and identifying prognostic phenotypes of patients with heart failure guided by explainable machine learning. Life 12(6), 776 (2022) Wang et al. [2021] Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. 
[2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. 
Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. 
Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Wang, K., Tian, J., Zheng, C., Yang, H., Ren, J., Liu, Y., Han, Q., Zhang, Y.: Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and shap. Computers in Biology and Medicine 137, 104813 (2021) Song et al. [2020] Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. 
Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Song, X., Yu, A.S., Kellum, J.A., Waitman, L.R., Matheny, M.E., Simpson, S.Q., Hu, Y., Liu, M.: Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. Nature communications 11(1), 5668 (2020) Lauritsen et al. [2020] Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. 
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. 
[2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B.: Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature communications 11(1), 3852 (2020) Saporta et al. [2022] Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022) Petch et al. [2022] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. 
[2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022) Ghanvatkar and Rajan [2022] Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022) Aas et al. [2021] Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. 
JMIR Human Factors 9(2), 33960 (2022) Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021) Liu et al. [2022] Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022) Diprose et al. [2020] Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020) Panigutti et al. [2022] Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. [2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022) Schwartz et al. 
[2022] Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022) Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)
- Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S.Q., Nguyen, C.D., Ngo, V.-D., Seekins, J., Blankenberg, F.G., Ng, A.Y., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence 4(10), 867–878 (2022)
- Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38(2), 204–213 (2022)
- Ghanvatkar, S., Rajan, V.: Towards a theory-based evaluation of explainable predictions in healthcare (2022)
- Aas, K., Jullum, M., Løland, A.: Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence 298, 103502 (2021)
- Liu, C.-F., Chen, Z.-C., Kuo, S.-C., Lin, T.-C.: Does ai explainability affect physicians’ intention to use ai? International Journal of Medical Informatics 168, 104884 (2022)
- Diprose, W.K., Buist, N., Hua, N., Thurier, Q., Shand, G., Robinson, R.: Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27(4), 592–600 (2020)
- Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2022)
- Schwartz, J.M., George, M., Rossetti, S.C., Dykes, P.C., Minshall, S.R., Lucas, E., Cato, K.D.: Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: Qualitative descriptive study. JMIR Human Factors 9(2), 33960 (2022)