Improving Health Professionals' Onboarding with AI and XAI for Trustworthy Human-AI Collaborative Decision Making (2405.16424v1)

Published 26 May 2024 in cs.HC, cs.AI, and cs.LG

Abstract: With advances in AI/ML, there has been growing research on explainable AI (XAI) and on how humans interact with AI and XAI for effective human-AI collaborative decision-making. However, we still lack an understanding of how AI systems and XAI should first be presented to users without technical backgrounds. In this paper, we present findings from semi-structured interviews with health professionals (n=12) and students (n=4) majoring in medicine and health on how to improve onboarding with AI and XAI. For the interviews, we built upon human-AI interaction guidelines to create onboarding materials for an AI system for stroke rehabilitation assessment and its explanations, and introduced them to the participants. Our findings reveal that, beyond traditional performance metrics, participants desired benchmark information, the practical benefits of AI, and interaction trials to better contextualize AI performance and to refine the objectives and performance of the AI. Based on these findings, we highlight directions for improving onboarding with AI and XAI and for human-AI collaborative decision-making.
