LLMs for XAI: Future Directions for Explaining Explanations (2405.06064v1)
Published 9 May 2024 in cs.AI, cs.CL, cs.HC, and cs.LG
Abstract: In response to the demand for Explainable Artificial Intelligence (XAI), we investigate the use of LLMs to transform ML explanations into natural, human-readable narratives. Rather than directly explaining ML models with LLMs, we focus on refining explanations computed by existing XAI algorithms. We outline several research directions, including defining evaluation metrics, prompt design, comparing LLMs, exploring further training methods, and integrating external data. Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
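The pipeline the abstract describes can be sketched roughly as follows: take feature attributions already computed by an existing XAI algorithm (here, mocked SHAP-style values rather than a real SHAP run) and assemble them into a prompt asking an LLM to narrate the explanation. The feature names, values, and prompt wording below are illustrative assumptions, not the authors' actual setup.

```python
# Mocked per-feature SHAP-style contributions to one prediction.
# (A real pipeline would compute these with an XAI library.)
shap_values = {
    "absences": +0.42,
    "study_time": -0.31,
    "past_failures": +0.18,
}
prediction = "at risk of failing the course"

# Order features by magnitude of contribution, most influential first.
lines = [
    f"- {name}: {value:+.2f}"
    for name, value in sorted(shap_values.items(), key=lambda kv: -abs(kv[1]))
]

# Build the prompt that would be sent to an LLM; the model's reply
# would be the human-readable narrative explanation.
prompt = (
    f"A model predicts this student is {prediction}. "
    "The SHAP feature contributions were:\n"
    + "\n".join(lines)
    + "\nExplain this prediction in two plain-English sentences "
    "for a teacher with no ML background."
)

print(prompt)
```

The key design point is that the LLM never sees the model itself, only the output of a trusted XAI method, so the narrative stays grounded in the computed attributions.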
Authors: Alexandra Zytek, Sara Pidò, Kalyan Veeramachaneni