
Interactively Providing Explanations for Transformer Language Models (2110.02058v4)

Published 2 Sep 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Transformer language models are state of the art in a multitude of NLP tasks. Despite these successes, their opaqueness remains problematic. Recent methods aiming to provide interpretability and explainability to black-box models primarily focus on post-hoc explanations of (sometimes spurious) input-output correlations. Instead, we emphasize using prototype networks directly incorporated into the model architecture and hence explain the reasoning process behind the network's decisions. Our architecture performs on par with several language models and, moreover, enables learning from user interactions. This not only offers a better understanding of language models but also leverages human capabilities to incorporate knowledge beyond the rigid range of purely data-driven approaches.
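The abstract describes prototype networks built directly into the model: an encoded input is compared against learned prototype vectors, and the resulting similarities drive the prediction and serve as the explanation. A minimal sketch of such a similarity computation is below; the function name, the `exp(-||h - p||^2)` kernel, and the example values are illustrative assumptions, not the paper's exact formulation:

```python
import math

def prototype_similarities(hidden, prototypes):
    """Similarity of one encoded input to each learned prototype.

    Uses exp(-||h - p||^2), a common choice in prototype-network
    layers (assumed here for illustration): 1.0 for an exact match,
    decaying toward 0 as the input moves away from a prototype.
    """
    sims = []
    for p in prototypes:
        dist_sq = sum((h - q) ** 2 for h, q in zip(hidden, p))
        sims.append(math.exp(-dist_sq))
    return sims

# Hypothetical 3-dimensional encoding and two learned prototypes.
hidden = [0.2, 0.1, 0.9]
prototypes = [
    [0.2, 0.1, 0.9],   # identical to the input
    [1.0, -0.5, 0.0],  # a distant prototype
]
sims = prototype_similarities(hidden, prototypes)
```

Because the similarity scores are computed inside the forward pass, the most-similar prototypes double as a faithful explanation of the decision, and a user correcting a misleading prototype gives the interactive feedback signal the abstract mentions.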

Authors (4)
  1. Felix Friedrich (40 papers)
  2. Patrick Schramowski (48 papers)
  3. Christopher Tauchmann (3 papers)
  4. Kristian Kersting (205 papers)
Citations (6)