
The AI-DEC: A Card-based Design Method for User-centered AI Explanations (2405.16711v1)

Published 26 May 2024 in cs.HC and cs.AI

Abstract: Increasing evidence suggests that many deployed AI systems do not sufficiently support end-user interaction and information needs. Engaging end-users in the design of these systems can reveal user needs and expectations, yet effective ways of engaging end-users in the AI explanation design remain under-explored. To address this gap, we developed a design method, called AI-DEC, that defines four dimensions of AI explanations that are critical for the integration of AI systems -- communication content, modality, frequency, and direction -- and offers design examples for end-users to design AI explanations that meet their needs. We evaluated this method through co-design sessions with workers in healthcare, finance, and management industries who regularly use AI systems in their daily work. Findings indicate that the AI-DEC effectively supported workers in designing explanations that accommodated diverse levels of performance and autonomy needs, which varied depending on the AI system's workplace role and worker values. We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
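The four AI-DEC dimensions can be read as a small design-space schema that each end-user fills in for their own context. A minimal illustrative sketch of that schema in Python (the category values below are hypothetical simplifications, not the richer design examples offered on the paper's cards):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of three AI-DEC dimensions; the paper's cards
# present a wider range of design options than these categories.
class Modality(Enum):
    TEXT = "text"
    VISUAL = "visual"
    AUDIO = "audio"

class Frequency(Enum):
    ALWAYS = "always"                   # explain every AI output
    ON_DEMAND = "on_demand"             # only when the user asks
    ON_UNCERTAINTY = "on_uncertainty"   # only for low-confidence outputs

class Direction(Enum):
    ONE_WAY = "one_way"   # the AI pushes explanations to the user
    TWO_WAY = "two_way"   # the user can query and refine explanations

@dataclass
class ExplanationDesign:
    """One end-user's explanation design along the four AI-DEC dimensions."""
    content: str          # what the explanation communicates
    modality: Modality
    frequency: Frequency
    direction: Direction

# Example: a worker who wants visual confidence cues only for uncertain
# outputs, with the ability to ask follow-up questions.
design = ExplanationDesign(
    content="model confidence and the top features behind the prediction",
    modality=Modality.VISUAL,
    frequency=Frequency.ON_UNCERTAINTY,
    direction=Direction.TWO_WAY,
)
print(design.frequency.value)  # prints "on_uncertainty"
```

The point of the sketch is only that a completed card session yields one concrete point in this four-dimensional space per user, which is what the co-design sessions elicited.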

