Interpreting Deep Learning Models in Natural Language Processing: A Review (2110.10470v2)
Abstract: Neural network models have achieved state-of-the-art performance on a wide range of NLP tasks. However, a long-standing criticism of neural network models is their lack of interpretability, which not only reduces the reliability of neural NLP systems but also limits the scope of their applications in areas where interpretability is essential (e.g., health care). In response, growing interest in interpreting neural NLP models has spurred a diverse array of interpretation methods in recent years. In this survey, we provide a comprehensive review of interpretation methods for neural models in NLP. We first lay out a high-level taxonomy of interpretation methods in NLP, i.e., training-based approaches, test-based approaches, and hybrid approaches. We then describe the sub-categories of each category in detail, e.g., influence-function-based methods, KNN-based methods, attention-based methods, saliency-based methods, and perturbation-based methods. Finally, we point out deficiencies of current methods and suggest avenues for future research.
- Xiaofei Sun
- Diyi Yang
- Xiaoya Li
- Tianwei Zhang
- Yuxian Meng
- Han Qiu
- Guoyin Wang
- Eduard Hovy
- Jiwei Li
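
To make one of the surveyed sub-categories concrete, below is a minimal sketch of a saliency-based interpretation (gradient × input), which assigns each input token a relevance score from the gradient of the predicted class with respect to the token embeddings. The toy classifier, vocabulary, and sentence are illustrative assumptions for this sketch, not artifacts from the paper.

```python
# Minimal sketch of gradient-x-input saliency for a toy text classifier.
# The model, vocabulary, and example sentence are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "boring": 5}

class TinyClassifier(nn.Module):
    """Bag-of-embeddings sentiment classifier used purely for illustration."""
    def __init__(self, vocab_size=6, dim=16, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, emb_vectors):
        # emb_vectors: (seq_len, dim) -> mean-pool -> class logits
        return self.fc(emb_vectors.mean(dim=0))

model = TinyClassifier()
tokens = ["the", "movie", "was", "great"]
ids = torch.tensor([vocab[t] for t in tokens])

# Look up embeddings and track gradients with respect to them.
emb = model.emb(ids).detach().requires_grad_(True)
logits = model(emb)
target_class = logits.argmax()
logits[target_class].backward()

# Gradient x input: one relevance score per token (higher = more influential).
saliency = (emb.grad * emb).sum(dim=-1).abs()
for tok, score in zip(tokens, saliency.tolist()):
    print(f"{tok:>8s}  {score:.4f}")
```

The same recipe carries over to real NLP models by taking gradients at the embedding layer of the trained network; perturbation-based methods covered in the survey instead estimate token importance by masking or replacing tokens and measuring the change in the model's prediction.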