XAI for All: Can Large Language Models Simplify Explainable AI? (2401.13110v1)

Published 23 Jan 2024 in cs.AI and cs.HC

Abstract: The field of Explainable Artificial Intelligence (XAI) often focuses on users with a strong technical background, making it difficult for non-experts to understand XAI methods. This paper presents "x-[plAIn]", a new approach to making XAI accessible to a wider audience through a custom LLM developed with ChatGPT Builder. Our goal was to design a model that generates clear, concise summaries of various XAI methods, tailored to different audiences, including business professionals and academics. The key feature of our model is its ability to adapt explanations to each audience group's knowledge level and interests. Beyond simplification, our approach delivers timely insights that facilitate decision-making by end users. Results from our use-case studies show that the model provides easy-to-understand, audience-specific explanations regardless of the XAI method used. This adaptability improves the accessibility of XAI, bridging the gap between complex AI technologies and their practical applications. Our findings indicate a promising direction for LLMs in making advanced AI concepts accessible to a diverse range of users.
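
The paper itself ships no code; the sketch below is only a minimal illustration of the audience-adaptation idea in Python. Note the assumptions: the authors built a custom GPT with ChatGPT Builder, whereas this sketch calls the OpenAI chat API directly, and the system prompt, model name, and SHAP example values are illustrative stand-ins rather than the paper's artifacts.

# Hypothetical sketch of audience-adaptive XAI summarization, in the spirit
# of x-[plAIn]. Not the authors' implementation: the paper uses a custom GPT
# built with ChatGPT Builder; here we call the OpenAI API directly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You translate raw Explainable AI (XAI) output into a clear, concise "
    "summary tailored to the reader's background. Avoid jargon unless the "
    "audience is technical, and end with one actionable takeaway."
)

def explain_for_audience(xai_output: str, method: str, audience: str) -> str:
    """Summarize raw XAI output (e.g., SHAP values) for a given audience."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice; any capable chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": (
                    f"XAI method: {method}\n"
                    f"Audience: {audience}\n"
                    f"Raw output:\n{xai_output}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

# Example: the same SHAP output, explained for two different audiences.
shap_summary = "income: +0.42, debt_ratio: -0.31, age: +0.05 (base value 0.48)"
print(explain_for_audience(shap_summary, "SHAP", "business professional"))
print(explain_for_audience(shap_summary, "SHAP", "ML researcher"))

Running the same SHAP summary through both audience profiles mirrors the paper's central claim: the explanation content stays fixed while the vocabulary and framing shift with the reader's knowledge level.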
