Neuron-level Interpretation of Deep NLP Models: A Survey (2108.13138v2)

Published 30 Aug 2021 in cs.CL

Abstract: The proliferation of deep neural networks across various domains has increased the need for interpretability of these models. Preliminary work along this line, and the surveys that cover it, focused on high-level representation analysis. However, a recent branch of work has concentrated on interpretability at a more granular level: analyzing neurons within these models. In this paper, we survey the work done on neuron analysis, including: i) methods to discover and understand neurons in a network, ii) evaluation methods, iii) major findings, including cross-architectural comparisons, that neuron analysis has unraveled, iv) applications of neuron probing, such as controlling the model and domain adaptation, and v) a discussion of open issues and future research directions.
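
The abstract mentions methods to discover and understand individual neurons via probing. As a rough illustration of one common family of such methods, and not code from the paper itself, the sketch below trains an L1-regularized linear probe on neuron activations and ranks neurons by learned weight magnitude; the synthetic data, tensor shapes, and regularization strength are all assumptions made for the example.

```python
# Illustrative sketch of neuron-level probing (assumed setup, not the paper's code):
# fit a sparse linear classifier on per-token activations and rank neurons
# by the magnitude of their learned weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: activations of shape (num_tokens, num_neurons) extracted
# from a pretrained model, with a binary linguistic label per token
# (e.g. whether the token is a past-tense verb).
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 768))
labels = rng.integers(0, 2, size=1000)

# An L1 penalty encourages sparsity, so only a small set of neurons
# receives non-zero weight; those are the candidate "property neurons".
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(activations, labels)

# Rank neurons by absolute weight as a crude importance score for inspection.
ranking = np.argsort(-np.abs(probe.coef_[0]))
print("Top 10 candidate neurons:", ranking[:10])
```

In practice, the ranked neurons would then be validated with the evaluation methods the survey discusses, for example by ablating them and measuring the drop in probe or task accuracy.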

Authors (3)
  1. Hassan Sajjad (64 papers)
  2. Nadir Durrani (48 papers)
  3. Fahim Dalvi (45 papers)
Citations (68)
