Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models (2309.13246v1)

Published 23 Sep 2023 in cs.LG, cs.AI, and q-fin.CP

Abstract: In recent years, explainable machine learning methods have been very successful. Despite their success, most explainable machine learning methods are applied to black-box models without any domain knowledge. By incorporating domain knowledge, science-informed machine learning models have demonstrated better generalization and interpretability. But do we obtain consistent scientific explanations if we apply explainable machine learning methods to science-informed machine learning models? This question is addressed in the context of monotonic models that exhibit three different types of monotonicity. To demonstrate monotonicity, we propose three axioms. Accordingly, this study shows that when only individual monotonicity is involved, the baseline Shapley value provides good explanations; however, when strong pairwise monotonicity is involved, the Integrated Gradients method provides reasonable explanations on average.
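The two attribution methods the abstract compares can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the monotonic model `f` (increasing in both inputs) and the zero baseline are assumptions chosen for the example. Integrated Gradients is approximated with a midpoint Riemann sum over the straight-line path from the baseline; the baseline Shapley value is computed exactly by averaging marginal contributions over all feature orderings.

```python
import math
from itertools import permutations

import numpy as np

# Hypothetical monotonic model (not from the paper): increasing in both inputs.
def f(x):
    return np.log1p(x[0]) + 2.0 * x[1]

def grad(fn, x, eps=1e-6):
    """Central finite-difference gradient of fn at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (fn(x + e) - fn(x - e)) / (2.0 * eps)
    return g

def integrated_gradients(fn, x, baseline, steps=256):
    """Midpoint Riemann sum for IG_i = (x_i - b_i) * \int_0^1 df/dx_i(b + a(x - b)) da."""
    total = np.zeros_like(x)
    for k in range(steps):
        a = (k + 0.5) / steps
        total += grad(fn, baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def baseline_shapley(fn, x, baseline):
    """Exact baseline Shapley values: average marginal contribution over all orderings."""
    n = len(x)
    phi = np.zeros(n)
    for order in permutations(range(n)):
        z = baseline.copy()
        for i in order:
            before = fn(z)
            z[i] = x[i]  # switch feature i from its baseline value to its input value
            phi[i] += fn(z) - before
    return phi / math.factorial(n)

x = np.array([1.0, 0.5])
b = np.zeros(2)
attr_ig = integrated_gradients(f, x, b)
attr_sh = baseline_shapley(f, x, b)
# Both satisfy completeness: the attributions sum to f(x) - f(b),
# and for this monotonic model every attribution is non-negative.
```

Because the toy model is additive, the two methods coincide here; the paper's point is that they can diverge, and behave differently under the three monotonicity axioms, for more general monotonic models.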
