Local Interpretations for Explainable Natural Language Processing: A Survey (2103.11072v3)

Published 20 Mar 2021 in cs.CL and cs.AI

Abstract: As the use of deep learning techniques has grown across various fields over the past decade, complaints about the opaqueness of black-box models have multiplied, prompting a greater focus on transparency in deep learning models. This work surveys methods for improving the interpretability of deep neural networks on NLP tasks, including machine translation and sentiment analysis. We begin with a comprehensive discussion of the definition of interpretability and its various aspects. The methods collected and summarised in this survey concern local interpretation only and are divided into three categories: 1) interpreting the model's predictions through related input features; 2) interpreting through natural language explanation; 3) probing the hidden states of models and word representations.
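
To make the first category concrete, below is a minimal sketch of input-feature attribution via vanilla gradient saliency: the gradient of the predicted-class logit with respect to each token embedding is taken as that token's importance. The specific model (a public SST-2 sentiment classifier) and the PyTorch/Hugging Face tooling are illustrative assumptions, not details drawn from the survey itself, which covers many attribution variants.

```python
# A sketch of gradient-based saliency attribution, one of the simplest
# input-feature interpretation methods surveyed. Assumes PyTorch and the
# Hugging Face `transformers` library; the model name is a hypothetical
# example, not one evaluated in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "The film was surprisingly heartfelt and well acted."
inputs = tokenizer(text, return_tensors="pt")

# Look up the token embeddings and detach them so they become leaf
# tensors whose gradients are retained after backward().
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

# Forward pass using the embeddings directly instead of token ids.
logits = model(inputs_embeds=embeddings,
               attention_mask=inputs["attention_mask"]).logits
predicted_class = logits.argmax(dim=-1).item()

# Gradient of the predicted-class logit w.r.t. each token embedding.
logits[0, predicted_class].backward()

# The L2 norm of each token's gradient serves as a simple saliency score.
saliency = embeddings.grad[0].norm(dim=-1)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, saliency):
    print(f"{token:>12s}  {score.item():.4f}")
```

Printing the per-token scores highlights which words most influenced the prediction; more refined variants discussed in such surveys (e.g., integrated gradients or perturbation-based scores) follow the same interface of scoring input features against a model output.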

Authors (4)
  1. Siwen Luo (14 papers)
  2. Hamish Ivison (14 papers)
  3. Caren Han (11 papers)
  4. Josiah Poon (41 papers)
Citations (36)