
Why Is Attention Sparse In Particle Transformer? (2512.00210v1)

Published 28 Nov 2025 in hep-ph, hep-ex, and physics.data-an

Abstract: Transformer-based models have achieved state-of-the-art performance in jet tagging at the CERN Large Hadron Collider (LHC), with the Particle Transformer (ParT) representing a leading example of such models. A striking feature of ParT is its sparse, nearly binary attention structure, raising questions about the origin of this behavior and whether it encodes physically meaningful correlations. In this work, we investigate the source of ParT's sparse attention by comparing models trained on multiple benchmark datasets and examining the relative contributions of the attention term and the physics-inspired interaction matrix before softmax. We find that binary sparsity arises primarily from the attention mechanism itself, with the interaction matrix playing a secondary role. Moreover, we show that ParT is able to identify key jet substructure elements, such as leptons in semileptonic top decays, even without explicit particle identification inputs. These results provide new insight into the interpretability of transformer-based jet taggers and clarify the conditions under which sparse attention patterns emerge in ParT.
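
To illustrate the decomposition the abstract refers to, the sketch below shows a ParT-style attention step in which a pairwise interaction matrix U is added to the scaled dot-product logits before softmax, with the two pre-softmax contributions returned separately so they can be compared. This is a minimal illustrative sketch, not the official ParT implementation; tensor shapes and the function name are assumptions.

```python
# Minimal sketch (not the official ParT code) of attention with a
# physics-inspired interaction-matrix bias added before softmax.
import torch
import torch.nn.functional as F

def part_style_attention(q, k, v, U):
    """q, k, v: (batch, heads, n_particles, d_head)
    U:        (batch, heads, n_particles, n_particles) pairwise interaction bias
    Returns attended values plus the two pre-softmax contributions."""
    d_head = q.size(-1)
    attn_term = q @ k.transpose(-2, -1) / d_head**0.5  # learned attention logits
    logits = attn_term + U                             # interaction matrix added before softmax
    weights = F.softmax(logits, dim=-1)                # sparsity appears in these weights
    return weights @ v, attn_term, U

# Toy usage: inspect how much each term shapes the final attention pattern.
B, H, N, D = 1, 2, 8, 16
q, k, v = (torch.randn(B, H, N, D) for _ in range(3))
U = torch.randn(B, H, N, N)
out, attn_term, bias = part_style_attention(q, k, v, U)
print(out.shape)  # torch.Size([1, 2, 8, 16])
```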
