May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability (2309.13965v2)

Published 25 Sep 2023 in cs.HC and cs.AI

Abstract: Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off, static explanations, which cannot cater to users' diverse backgrounds and levels of understanding. With this paper, we investigate whether free-form conversations can enhance users' comprehension of static explanations, improve acceptance of and trust in the explanation methods, and facilitate human-AI collaboration. Participants are presented with static explanations, followed by a conversation with a human expert regarding the explanations. We measure the effect of the conversation on participants' ability to choose, from three machine learning models, the most accurate one based on its explanations, as well as on their self-reported comprehension, acceptance, and trust. Empirical results show that conversations significantly improve comprehension, acceptance, trust, and collaboration. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.
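The "static explanations" studied here are one-shot artifacts such as feature-attribution heatmaps. As a concrete illustration, the snippet below is a minimal sketch (not the authors' code; the pretrained model, input file name, and preprocessing are assumptions) of producing one such explanation via plain input-gradient saliency for an ImageNet classifier:

```python
# Illustrative sketch, not the paper's implementation: a one-off,
# "static" explanation of an image classifier's prediction, computed
# as the gradient of the top-class score with respect to the pixels.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a hypothetical input image for this sketch.
img = preprocess(Image.open("example.jpg")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
# Backpropagate the predicted class's score to the input pixels.
logits[0, logits.argmax()].backward()

# Saliency map: max absolute gradient across color channels,
# yielding a single static heatmap the size of the input image.
saliency = img.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

In the study, participants receive explanations of this kind and then discuss them with a human expert; the follow-up conversation, not the attribution method itself, is the experimental manipulation.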

Authors (3)
  1. Tong Zhang (569 papers)
  2. X. Jessie Yang (38 papers)
  3. Boyang Li (106 papers)
Citations (2)