A Study of BFLOAT16 for Deep Learning Training

Published 29 May 2019 in cs.LG and stat.ML (arXiv:1905.12322v3)

Abstract: This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of IEEE 754 floating-point format (FP32) and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.

Citations (297)

Summary

  • The paper demonstrates BFLOAT16's capability to match FP32 performance across various architectures such as CNNs, RNNs, GANs, and recommendation systems.
  • It introduces an emulation methodology, implemented in the Quantlib library, that simulates BFLOAT16 operations on standard FP32 hardware; because BFLOAT16 preserves FP32's dynamic range, no hyperparameter tuning is required.
  • Experimental results consistently validate BFLOAT16's reliability, reaching state-of-the-art accuracy in the same number of iterations as FP32 and with unchanged hyperparameters.

The paper under consideration presents an extensive empirical analysis of the Brain Floating Point (BFLOAT16) format in the context of deep learning training. The research highlights the effectiveness of BFLOAT16 in achieving state-of-the-art (SOTA) results across a variety of applications, including image classification, speech recognition, language modeling, generative networks, and industrial recommendation systems. BFLOAT16's appeal lies in its dynamic range, which matches that of the standard 32-bit floating-point (FP32) format, and in the fact that convergence requires no hyper-parameter tuning, whereas the IEEE-compliant FP16 format typically does (for example, through loss scaling).
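To make the dynamic-range argument concrete, the short sketch below (not taken from the paper; it just applies the standard IEEE-style bit layouts) computes the largest normal value, smallest normal value, and spacing just above 1.0 for FP32, BFLOAT16, and FP16. BFLOAT16 keeps FP32's 8-bit exponent, so its range matches FP32 while trading mantissa precision; FP16's 5-bit exponent caps its largest value at roughly 6.5e4.

```python
# Bit layouts (sign, exponent, mantissa bits) of the three formats discussed.
formats = {
    "FP32":     (1, 8, 23),
    "BFLOAT16": (1, 8, 7),
    "FP16":     (1, 5, 10),
}

for name, (sign, exp_bits, man_bits) in formats.items():
    bias = 2 ** (exp_bits - 1) - 1
    max_normal = (2 - 2.0 ** -man_bits) * 2.0 ** bias   # largest finite value
    min_normal = 2.0 ** (1 - bias)                      # smallest normal value
    eps = 2.0 ** -man_bits                              # spacing just above 1.0
    print(f"{name:9s} max={max_normal:.3e}  min_normal={min_normal:.3e}  eps={eps:.3e}")
```

Running this shows FP32 and BFLOAT16 sharing a maximum near 3.4e38 and a minimum normal near 1.2e-38, while FP16 tops out at 65504, which is why FP16 training typically needs loss scaling and BFLOAT16 does not.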

Significant coverage is given to the flow of tensors through mixed precision training, to key operations such as the rounding modes used when converting FP32 tensors to BFLOAT16, and to implementation details across frameworks including TensorFlow, Caffe2, IntelCaffe, and Neon. A pivotal contribution of this study is the development of Quantlib, a library that simulates BFLOAT16 operations by modifying the elements of an FP32 tensor to reflect BFLOAT16 precision and rounding behavior. This approach allows operations to run on ordinary FP32 hardware while faithfully reproducing BFLOAT16 numerics.
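As an illustration of what such emulation involves, the sketch below quantizes an FP32 NumPy array to BFLOAT16 precision using round-to-nearest-even and returns the result in FP32 storage so it can flow through unmodified FP32 kernels. The function name and details are illustrative rather than the paper's Quantlib API, and NaN/Inf special-casing is omitted for brevity.

```python
import numpy as np

def fp32_to_bf16_rne(x: np.ndarray) -> np.ndarray:
    """Round FP32 values to BFLOAT16 precision (round-to-nearest-even),
    keeping FP32 storage so downstream FP32 kernels can be reused.
    Illustrative only; NaN/Inf handling is omitted."""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    lsb = (bits >> 16) & np.uint32(1)          # lowest bit that survives truncation
    rounded = bits + np.uint32(0x7FFF) + lsb   # round to nearest, ties to even
    return (rounded & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([1.0, 1.0009765625, 3.14159265], dtype=np.float32)
print(fp32_to_bf16_rne(x))   # -> [1.0, 1.0, 3.140625]
```

Simple truncation (zeroing the low 16 bits) is the other obvious conversion; which mode is applied where in the training flow is among the implementation details the paper examines.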

Experimental Results

The study delivers robust experimental results demonstrating the viability of BFLOAT16 for various neural network architectures. Landmark models such as AlexNet, ResNet-50, DeepSpeech2, GNMT, DC-GAN, SR-GAN, and others achieved results on par with the FP32 baselines when trained with BFLOAT16 emulation and unchanged hyperparameters.

  1. Convolutional Neural Networks (CNNs): The paper reports mixed precision training of benchmarks such as AlexNet and ResNet-50 with BFLOAT16. The measured accuracies matched the FP32 baselines, confirming that BFLOAT16 training does not hinder these models.
  2. Recurrent Neural Networks (RNNs): Models such as DeepSpeech2 and GNMT demonstrate BFLOAT16's competence on complex sequence-based learning tasks. For GNMT, BFLOAT16 achieved BLEU scores equal to or better than those of FP32.
  3. Generative Adversarial Networks (GANs): The evaluation of GANs, including DC-GAN and SR-GAN, revealed comparable performance metrics (e.g., inception scores, SSIM) between BFLOAT16 and FP32.
  4. Recommendation Systems: BFLOAT16 was equally successful in industrial-scale recommendation systems, such as the Deep & Cross Network, demonstrating negligible differences in log loss metrics when compared to FP32.
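To make concrete what training these models under BFLOAT16 emulation entails, the sketch below shows a single emulated step for one linear layer, assuming the common mixed precision recipe in which weights, activations, and gradients are carried at BFLOAT16 precision while the master copy of the weights and the accumulations stay in FP32. The quantization helper, the toy loss, and the layer shapes are illustrative assumptions, not the paper's code.

```python
import numpy as np

def to_bf16(x: np.ndarray) -> np.ndarray:
    """Quantize FP32 values to BFLOAT16 precision (kept in FP32 storage).
    Truncation is used here for brevity; the paper also studies rounding modes."""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

rng = np.random.default_rng(0)
W_fp32 = rng.standard_normal((4, 3)).astype(np.float32)      # FP32 master weights
x = to_bf16(rng.standard_normal((8, 4)).astype(np.float32))  # BF16 activations

# Forward pass: BF16 operands, FP32 accumulation inside the matmul.
y = x @ to_bf16(W_fp32)

# Backward pass for a toy loss L = 0.5 * ||y||^2, so dL/dy = y; the gradient
# tensor itself is carried at BF16 precision.
dy = to_bf16(y)
dW = x.T @ dy                                                 # FP32 accumulation

# The weight update is applied to the FP32 master copy.
lr = np.float32(1e-2)
W_fp32 -= lr * dW
print(W_fp32.dtype, W_fp32.shape)   # float32 (4, 3)
```

Because every quantized tensor is still stored as FP32, this loop runs on plain FP32 hardware, which is exactly why an emulation library suffices to study BFLOAT16 convergence before native hardware is available.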

Conclusions and Future Directions

The research advocates for the formal adoption of BFLOAT16 in deep learning, citing its capacity to achieve SOTA results without the tuning overhead, such as loss scaling, associated with FP16 and other reduced precision formats. Importantly, the BFLOAT16 format integrates straightforwardly into existing infrastructure because its dynamic range matches that of FP32.

The implications of this study are manifold. Practically, BFLOAT16 offers a streamlined path for moving deep learning workloads to lower precision with minimal performance trade-offs, potentially spurring industry-wide adoption. Theoretically, it informs mixed precision training methodology by validating BFLOAT16 across diverse neural architectures and application domains. Future work is likely to track the evolution of hardware support for BFLOAT16, since the emulation strategy established here paves the way for native hardware execution.

By presenting a comprehensive experimental matrix and a versatile software implementation, this paper significantly contributes to the understanding and application of mixed precision training, particularly concerning the practical and logistical benefits of employing BFLOAT16 in real-world AI applications.
