
A Comprehensive Review on Financial Explainable AI (2309.11960v1)

Published 21 Sep 2023 in cs.AI, cs.CE, and q-fin.CP

Abstract: The success of AI, and deep learning models in particular, has led to their widespread adoption across various industries due to their ability to process huge amounts of data and learn complex patterns. However, due to their lack of explainability, there are significant concerns regarding their use in critical sectors, such as finance and healthcare, where decision-making transparency is of paramount importance. In this paper, we provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance. We categorize the collection of explainable AI methods according to their corresponding characteristics, and we review the concerns and challenges of adopting explainable AI methods, together with future directions we deemed appropriate and important.
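The abstract refers to methods that improve the explainability of black-box models in finance. As a minimal, self-contained illustration of one such model-agnostic, post-hoc technique (permutation feature importance, in the same family as the SHAP- and LIME-style attribution methods the survey covers), the sketch below scores toy loan applicants with a hand-written "black-box" rule and measures how much predictions change when each feature is shuffled. The model, feature names, and data here are invented for illustration only and do not come from the paper.

```python
import random

# Toy "black-box" credit-scoring model whose internals we pretend not to see.
# Feature vector: [income, debt_ratio, late_payments]. (Hypothetical example.)
def black_box_score(x):
    income, debt_ratio, late_payments = x
    return 1.0 if income * (1 - debt_ratio) - 5 * late_payments > 20 else 0.0

def permutation_importance(model, X, n_repeats=10, seed=0):
    """Model-agnostic importance: average change in predictions when one
    feature column is shuffled across the dataset, per feature."""
    rng = random.Random(seed)
    base = [model(x) for x in X]
    n_features = len(X[0])
    importances = []
    for j in range(n_features):
        total = 0.0
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            # Rebuild the dataset with feature j permuted, others intact.
            X_perm = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
            preds = [model(x) for x in X_perm]
            total += sum(abs(p - b) for p, b in zip(preds, base)) / len(X)
        importances.append(total / n_repeats)
    return importances

applicants = [
    [40.0, 0.2, 0],
    [25.0, 0.5, 1],
    [60.0, 0.1, 0],
    [18.0, 0.4, 3],
    [30.0, 0.3, 2],
]
print(permutation_importance(black_box_score, applicants))
```

A larger importance value means shuffling that feature disturbs the model's decisions more, giving a global, model-agnostic ranking of feature influence without inspecting the model's internals.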

Authors (6)
  1. Wei Jie Yeo (7 papers)
  2. Wihan van der Heever (2 papers)
  3. Rui Mao (54 papers)
  4. Erik Cambria (136 papers)
  5. Ranjan Satapathy (15 papers)
  6. Gianmarco Mengaldo (34 papers)
Citations (9)