Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation (2209.09815v2)

Published 20 Sep 2022 in cs.LG

Abstract: The large number of parameters of some prominent LLMs, such as BERT, makes their fine-tuning on downstream tasks computationally intensive and energy hungry. Previously, researchers focused on lower bit-width integer data types for the forward propagation of LLMs to save memory and computation. As for the backward propagation, however, only the 16-bit floating-point data type has been used for the fine-tuning of BERT. In this work, we use integer arithmetic for both forward and backward propagation in the fine-tuning of BERT. We study the effects of varying the integer bit-width on the model's metric performance. Our integer fine-tuning uses integer arithmetic to perform forward propagation and gradient computation of the linear, layer-norm, and embedding layers of BERT. We fine-tune BERT using our integer training method on SQuAD v1.1, SQuAD v2.0, and the GLUE benchmark. We demonstrate that the metric performance of fine-tuning 16-bit integer BERT matches both the 16-bit and 32-bit floating-point baselines. Furthermore, using the faster and more memory-efficient 8-bit integer data type, integer fine-tuning of BERT loses an average of 3.1 points compared to the FP32 baseline.
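To illustrate the general idea of integer forward and backward propagation for a linear layer, the sketch below quantizes activations, weights, and output gradients to signed integers and performs the matrix multiplications in integer arithmetic, rescaling afterwards. This is a minimal, hypothetical illustration using symmetric per-tensor quantization with NumPy; the `quantize`, `int_linear_forward`, and `int_linear_backward` names and the specific quantization scheme are assumptions for exposition, not the paper's exact method.

```python
import numpy as np

def quantize(x, bits=8):
    # Symmetric per-tensor quantization: map floats onto signed integers.
    qmax = 2 ** (bits - 1) - 1
    amax = np.max(np.abs(x))
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int64)
    return q, scale

def int_linear_forward(x, w, bits=8):
    # Forward pass y = x @ w.T with an integer matmul; the float scales
    # are applied only once, after integer accumulation.
    qx, sx = quantize(x, bits)
    qw, sw = quantize(w, bits)
    acc = qx @ qw.T                      # integer accumulation
    return acc.astype(np.float32) * (sx * sw)

def int_linear_backward(grad_out, x, w, bits=8):
    # Gradient computation also done with integer matmuls:
    # grad_w = grad_out.T @ x, grad_x = grad_out @ w.
    qg, sg = quantize(grad_out, bits)
    qx, sx = quantize(x, bits)
    qw, sw = quantize(w, bits)
    grad_w = (qg.T @ qx).astype(np.float32) * (sg * sx)
    grad_x = (qg @ qw).astype(np.float32) * (sg * sw)
    return grad_x, grad_w
```

With `bits=16` the integer results track the float baselines closely, consistent with the reported finding that 16-bit integer fine-tuning matches the floating-point baselines while 8-bit trades some accuracy for speed and memory.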

Authors (6)
  1. Mohammadreza Tayaranian (6 papers)
  2. Alireza Ghaffari (11 papers)
  3. Marzieh S. Tahaei (6 papers)
  4. Mehdi Rezagholizadeh (78 papers)
  5. Masoud Asgharian (20 papers)
  6. Vahid Partovi Nia (40 papers)
Citations (5)