
Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models (2311.01732v2)

Published 3 Nov 2023 in cs.CL

Abstract: LLMs have significantly advanced the field of NLP, but their lack of interpretability has been a major concern. Current methods for interpreting LLMs are post hoc, applied after inference, and have limitations such as a focus on low-level features and a lack of explainability for higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that allows LLMs to learn immediately interpretable embeddings during the fine-tuning stage while maintaining competitive performance. Experiments on a wide range of NLP tasks demonstrate the method's applicability and interpretability, and our results indicate that interpretable models can be created without sacrificing performance.
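The core idea behind prototypical classification — scoring an input's embedding by its similarity to learned, class-associated prototype vectors, so each prediction is explainable by the prototypes it matched — can be sketched as follows. This is an illustrative simplification with made-up prototype values, not the paper's actual proto-lm implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def prototype_scores(embedding, prototypes):
    """Score an embedding against each class by its best-matching prototype.

    `prototypes` maps a class label to a list of prototype vectors
    (in proto-lm these would be learned during fine-tuning; here they
    are hypothetical hand-picked values).
    """
    return {
        label: max(cosine(embedding, p) for p in protos)
        for label, protos in prototypes.items()
    }

# Hypothetical 2-D prototypes for a toy sentiment task.
prototypes = {
    "positive": [[1.0, 0.2], [0.9, 0.4]],
    "negative": [[-1.0, 0.1]],
}

embedding = [0.95, 0.3]  # stand-in for an LLM-produced sentence embedding
scores = prototype_scores(embedding, prototypes)
prediction = max(scores, key=scores.get)
```

Because the decision reduces to "which prototype did this input most resemble," the matched prototype itself serves as the built-in explanation, which is what distinguishes this family of models from post hoc attribution methods.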

Authors (3)
  1. Sean Xie (4 papers)
  2. Soroush Vosoughi (90 papers)
  3. Saeed Hassanpour (43 papers)
Citations (2)