Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models (2009.07053v1)

Published 3 Sep 2020 in cs.HC and cs.CL

Abstract: Advances in language modeling have led to the development of deep attention-based models that are performant across a wide variety of NLP problems. These language models are typified by a pre-training process on large unlabeled text corpora, followed by fine-tuning for specific tasks. Although considerable work has been devoted to understanding the attention mechanisms of pre-trained models, it is less understood how a model's attention mechanisms change when trained for a target NLP task. In this paper, we propose a visual analytics approach to understanding fine-tuning in attention-based LLMs. Our visualization, Attention Flows, is designed to support users in querying, tracing, and comparing attention within layers, across layers, and amongst attention heads in Transformer-based LLMs. To help users gain insight on how a classification decision is made, our design is centered on depicting classification-based attention at the deepest layer and how attention from prior layers flows throughout words in the input. Attention Flows supports the analysis of a single model, as well as the visual comparison between pre-trained and fine-tuned models via their similarities and differences. We use Attention Flows to study attention mechanisms in various sentence understanding tasks and highlight how attention evolves to address the nuances of solving these tasks.
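The abstract describes comparing per-layer, per-head attention between a pre-trained model and its fine-tuned counterpart. As a rough illustration of the kind of data such a comparison operates on (not the paper's actual tool or metric), the sketch below uses Hugging Face Transformers to extract attention tensors from two BERT checkpoints and report a simple per-head distance; the model names and the Frobenius-norm comparison are illustrative assumptions.

```python
# Minimal sketch (not the Attention Flows system itself): extract per-layer
# attention from a pre-trained and a fine-tuned BERT and compare them head by head.
# The fine-tuned checkpoint name and the distance metric are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

PRETRAINED = "bert-base-uncased"                     # generic pre-trained encoder
FINETUNED = "textattack/bert-base-uncased-SST-2"     # example sentiment fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
sentence = "The movie was surprisingly good."
inputs = tokenizer(sentence, return_tensors="pt")

def get_attentions(model_name):
    """Return a tuple of per-layer attention tensors, each shaped (1, heads, seq, seq)."""
    model = AutoModel.from_pretrained(model_name, output_attentions=True)
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.attentions

pre_att = get_attentions(PRETRAINED)
fine_att = get_attentions(FINETUNED)

# Per-layer, per-head Frobenius distance between the two models' attention maps,
# a crude numerical stand-in for the similarity/difference views described above.
for layer, (a, b) in enumerate(zip(pre_att, fine_att)):
    diff = (a - b).squeeze(0)                  # (heads, seq, seq)
    per_head = diff.flatten(1).norm(dim=1)     # Frobenius norm per head
    print(f"layer {layer:2d}: " + " ".join(f"{d:.3f}" for d in per_head))
```

Larger per-head distances in deeper layers would be consistent with the paper's observation that attention adapts most where classification decisions are formed, though the visualization itself traces how attention flows through the input rather than reducing it to a single number.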

Authors (3)
  1. Joseph F DeRose (1 paper)
  2. Jiayao Wang (4 papers)
  3. Matthew Berger (22 papers)
Citations (78)
