Investigating 1-Bit Quantization in Transformer-Based Top Tagging (2508.07431v1)

Published 10 Aug 2025 in hep-ph

Abstract: The increasing scale of deep learning models in high-energy physics (HEP) has posed challenges to their deployment on low-power, latency-sensitive platforms, such as FPGAs and ASICs used in trigger systems, as well as in offline data reconstruction and processing pipelines. In this work, we introduce BitParT, a 1-bit Transformer-based architecture designed specifically for the top-quark tagging task. Building upon recent advances in ultra-low-bit LLMs, we extend these ideas to the HEP domain by developing a binary-weight variant (BitParT) of the Particle Transformer (ParT) model. Our findings indicate the potential for substantial reductions in model size and computational complexity while maintaining high tagging performance. We benchmark BitParT on the public Top Quark Tagging Reference Dataset and show that it achieves competitive performance relative to its full-precision counterpart. This work demonstrates the design of extremely quantized models for physics applications, paving the way for real-time inference in collider experiments with minimal, optimized resource usage.
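
The abstract does not include implementation details, but the binary-weight idea it builds on is commonly realized in the style of BitNet-like 1-bit LLM layers. The sketch below, in PyTorch, shows one such layer: weights are binarized to ±1 with a per-tensor scale and trained through a straight-through estimator. The class name `BitLinear` and all code here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a binary-weight linear layer (BitNet-style),
# illustrating the 1-bit quantization idea BitParT builds on.
# NOT the paper's code; all names and choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Linear layer whose weights are binarized to {-1, +1} at forward time.

    A per-tensor scale alpha = mean(|W|) preserves overall weight magnitude,
    and a straight-through estimator (STE) lets gradients flow to the
    underlying full-precision weights during training.
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        alpha = w.abs().mean()            # per-tensor scaling factor
        # sign() maps exact zeros to 0; left as-is for simplicity.
        w_bin = torch.sign(w) * alpha     # binarized, rescaled weights
        # STE: forward uses w_bin, backward treats quantization as identity.
        w_q = w + (w_bin - w).detach()
        return F.linear(x, w_q, self.bias)

# Usage: a drop-in replacement for nn.Linear inside a Transformer block.
layer = BitLinear(64, 64)
out = layer(torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 64])
```

Swapping such a layer for the projections in a Transformer's attention and feed-forward blocks is one plausible route to a binary-weight variant like the one described, trading weight precision for a roughly 32x reduction in weight storage.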
