A Stochastic Rounding-Enabled Low-Precision Floating-Point MAC for DNN Training (2404.14010v2)
Abstract: Training Deep Neural Networks (DNNs) can be computationally demanding, particularly for large models. Recent work has aimed to mitigate this cost by introducing 8-bit floating-point (FP8) formats for multiplication. However, accumulations are still performed in either half-precision (16-bit) or single-precision (32-bit) arithmetic. In this paper, we investigate lowering the accumulator word length while maintaining the same model accuracy. We present a multiply-accumulate (MAC) unit with FP8 multiplier inputs and FP12 accumulation, which leverages an optimized stochastic rounding (SR) implementation to mitigate the swamping errors that commonly arise during low-precision accumulation. We investigate the hardware implications and accuracy impact of varying the number of random bits used for rounding operations. We additionally propose a new scheme to support SR in a floating-point MAC and remove support for subnormal values to further reduce MAC area and power. Our optimized eager SR unit significantly reduces delay and area compared to a classic lazy SR design. Moreover, compared to MACs using single- or half-precision adders, our design shows notable savings in all metrics. Furthermore, our approach consistently maintains near-baseline accuracy across a diverse range of computer vision tasks, making it a promising alternative for low-precision DNN training.
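Since the abstract discusses trading off the number of random bits used for stochastic rounding against hardware cost, a minimal Python sketch of the standard "add random bits, then truncate" SR scheme may help illustrate the idea. This is not the paper's hardware design; the function name `sr_truncate`, the bit widths, and the example values below are illustrative assumptions.

```python
import random

def sr_truncate(sig: int, width: int, keep: int, rand_bits: int,
                rng: random.Random) -> int:
    """Stochastically round an unsigned `width`-bit significand `sig`
    down to its top `keep` bits by adding random bits and truncating.

    The `rand_bits` random bits are aligned with the most-significant
    discarded bits before truncation. With rand_bits == width - keep the
    round-up probability equals the exact discarded fraction (unbiased SR);
    fewer random bits quantize that probability more coarsely.
    """
    drop = width - keep                       # number of bits to discard
    if drop <= 0:
        return sig
    r = rng.getrandbits(rand_bits) if rand_bits > 0 else 0
    sig += r << max(drop - rand_bits, 0)      # align random value with discarded MSBs
    return sig >> drop                        # truncate after the random add

# Example: a 10-bit significand (value 726, i.e. 11.34375 in units of the
# kept LSB) rounded to 4 bits, averaged over many trials.
for rb in (6, 3, 0):                          # full SR, 3 random bits, plain truncation
    rng = random.Random(0)
    mean = sum(sr_truncate(726, 10, 4, rb, rng) for _ in range(100_000)) / 100_000
    print(f"{rb} random bits -> mean {mean:.4f}")   # ~11.34, ~11.25, 11.00
```

With all six random bits the average matches the exact value (unbiased rounding), three random bits coarsen the round-up probabilities, and plain truncation always drops the discarded fraction, which is how small addends get lost (swamped) in long low-precision accumulations.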
Authors: Sami Ben Ali, Silviu-Ioan Filip, Olivier Sentieys