
Interpreting Pretrained Language Models via Concept Bottlenecks (2311.05014v1)

Published 8 Nov 2023 in cs.CL and cs.AI

Abstract: Pretrained language models (PLMs) have made significant strides in various natural language processing tasks. However, the lack of interpretability due to their "black-box" nature poses challenges for responsible implementation. Although previous studies have attempted to improve interpretability by using, e.g., attention weights in self-attention layers, these weights often lack clarity, readability, and intuitiveness. In this research, we propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans. For example, we learn the concept of "Food" and investigate how it influences the prediction of a model's sentiment towards a restaurant review. We introduce C$^3$M, which combines human-annotated and machine-generated concepts to extract hidden neurons designed to encapsulate semantically meaningful and task-specific concepts. Through empirical evaluations on real-world datasets, we demonstrate that our approach offers valuable insights to interpret PLM behavior, helps diagnose model failures, and enhances model robustness amidst noisy concept labels.
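The core idea of a concept bottleneck, as the abstract describes it, is to route the model's prediction through a small layer of human-readable concept neurons (e.g., "Food") so that each concept's influence on the final label can be inspected. The following is a minimal NumPy sketch of that structure, not the paper's implementation; the dimensions, concept names, and linear layers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: pooled PLM hidden dim, number of concepts, number of labels.
HIDDEN, N_CONCEPTS, N_LABELS = 16, 4, 2
CONCEPTS = ["Food", "Service", "Ambiance", "Price"]  # illustrative concept names

# Two linear maps stand in for trained layers: hidden state -> concept scores,
# and concept scores -> label logits. The label head sees ONLY the concepts.
W_concept = rng.normal(size=(HIDDEN, N_CONCEPTS))
W_label = rng.normal(size=(N_CONCEPTS, N_LABELS))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(hidden_state):
    """Return (concept activations, label logits) for one pooled embedding."""
    concepts = sigmoid(hidden_state @ W_concept)  # interpretable bottleneck layer
    logits = concepts @ W_label                   # prediction depends only on concepts
    return concepts, logits

h = rng.normal(size=HIDDEN)  # stand-in for a pooled review embedding from a PLM
concepts, logits = predict(h)
for name, score in zip(CONCEPTS, concepts):
    print(f"{name}: {score:.2f}")  # each concept's activation is directly readable
print("predicted label:", int(logits.argmax()))
```

Because the label head takes only the concept activations as input, one can diagnose a prediction by reading off which concepts fired, and intervene by clamping a concept score and re-running the head.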

Authors (6)
  1. Zhen Tan (68 papers)
  2. Lu Cheng (73 papers)
  3. Song Wang (313 papers)
  4. Yuan Bo (1 paper)
  5. Jundong Li (126 papers)
  6. Huan Liu (283 papers)
Citations (14)