Visualizing Attention in Transformer-Based Language Representation Models (1904.02679v2)

Published 4 Apr 2019 in cs.HC, cs.LG, and stat.ML

Abstract: We present an open-source tool for visualizing multi-head self-attention in Transformer-based language representation models. The tool extends earlier work by visualizing attention at three levels of granularity: the attention-head level, the model level, and the neuron level. We describe how each of these views can help to interpret the model, and we demonstrate the tool on the BERT model and the OpenAI GPT-2 model. We also present three use cases for analyzing GPT-2: detecting model bias, identifying recurring patterns, and linking neurons to model behavior.
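
The raw material for every view the abstract describes is the per-layer, per-head attention tensor. As a rough illustration (not from the paper), the sketch below extracts those weights from BERT with the Hugging Face Transformers library; the model name, example sentence, and layer/head indices are illustrative assumptions, and the paper's tool renders these same weights interactively rather than printing them.

```python
# Minimal sketch: extract multi-head self-attention weights from BERT.
# Assumptions: bert-base-uncased, an arbitrary example sentence, and
# layer/head 0 -- all chosen for illustration, not taken from the paper.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
layer, head = 0, 0
attn = outputs.attentions[layer][0, head]  # (seq_len, seq_len)

# For each query token, report the key token it attends to most strongly.
for i, tok in enumerate(tokens):
    j = attn[i].argmax().item()
    print(f"{tok:>8} -> {tokens[j]} ({attn[i, j]:.2f})")
```

A head-level view of the kind the paper describes is essentially a rendering of one such `attn` matrix, while the model-level view tiles these matrices across all layers and heads.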

Authors (1)
  1. Jesse Vig (18 papers)
Citations (20)
