IAFormer: Interaction-Aware Transformer network for collider data analysis (2505.03258v1)
Abstract: In this paper, we introduce IAFormer, a novel Transformer-based architecture that efficiently integrates pairwise particle interactions through a dynamic sparse attention mechanism. IAFormer introduces two new mechanisms within the model. First, the attention matrix depends on predefined boost-invariant pairwise quantities, significantly reducing the number of network parameters compared with the original Particle Transformer models. Second, IAFormer incorporates sparse attention via the ``differential attention'' mechanism, so that it dynamically prioritizes relevant particle tokens while reducing the computational overhead associated with less informative ones. This approach significantly lowers the model complexity without compromising performance. Although it is more than an order of magnitude more computationally efficient than the Particle Transformer network, IAFormer achieves state-of-the-art performance in classification tasks on the Top and quark-gluon datasets. Furthermore, we employ AI interpretability techniques to verify that the model captures physically meaningful information layer by layer through its sparse attention mechanism, building an efficient network output that is resistant to statistical fluctuations. IAFormer highlights the value of sparse attention in any Transformer-based analysis for reducing network size while improving performance.
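To illustrate the two mechanisms described in the abstract, the following is a minimal sketch (not the authors' code) of a single attention head that combines a pairwise-interaction bias with differential attention. The interaction tensor `u` is assumed to hold precomputed boost-invariant pairwise quantities (e.g. quantities such as ln(ΔR) or ln(k_T)); the layer names, the small bias MLP, and all sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch only: differential attention with a pairwise-interaction bias.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentialInteractionAttention(nn.Module):
    def __init__(self, dim: int, n_pair_features: int, lambda_init: float = 0.5):
        super().__init__()
        # Two query/key maps, as in differential attention: the second
        # softmax map is subtracted to suppress attention noise.
        self.qk1 = nn.Linear(dim, 2 * dim, bias=False)
        self.qk2 = nn.Linear(dim, 2 * dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        # Small MLP that embeds the pairwise boost-invariant features into a
        # scalar bias per particle pair (hypothetical design choice).
        self.pair_bias = nn.Sequential(
            nn.Linear(n_pair_features, 16), nn.GELU(), nn.Linear(16, 1)
        )
        self.lmbda = nn.Parameter(torch.tensor(lambda_init))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_particles, dim) particle tokens
        # u: (batch, n_particles, n_particles, n_pair_features) pairwise inputs
        q1, k1 = self.qk1(x).chunk(2, dim=-1)
        q2, k2 = self.qk2(x).chunk(2, dim=-1)
        bias = self.pair_bias(u).squeeze(-1)                # (B, N, N)
        a1 = F.softmax(q1 @ k1.transpose(-2, -1) * self.scale + bias, dim=-1)
        a2 = F.softmax(q2 @ k2.transpose(-2, -1) * self.scale + bias, dim=-1)
        attn = a1 - self.lmbda * a2                          # differential attention
        return attn @ self.v(x)


# Toy usage: a batch of 4 jets with 30 particles, 64-dim tokens, 4 pairwise features.
if __name__ == "__main__":
    x = torch.randn(4, 30, 64)
    u = torch.randn(4, 30, 30, 4)
    print(DifferentialInteractionAttention(64, 4)(x, u).shape)  # (4, 30, 64)
```

The subtraction of the two softmax maps lets the head assign near-zero (or negative) weight to uninformative particle pairs, which is one plausible reading of how a "dynamic sparse attention" could prioritize relevant tokens.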