Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models (2312.09211v3)

Published 14 Dec 2023 in cs.CL

Abstract: Low-precision fine-tuning of LLMs has gained prominence as a cost-effective and energy-efficient approach to deploying large-scale models in various applications. However, this approach is susceptible to outlier values in the activations: outliers inflate the quantization scaling factor, which in turn makes smaller values harder to represent and degrades fine-tuning performance in the low-precision regime. This paper investigates techniques for mitigating outlier activations in low-precision integer fine-tuning of LLMs. Our proposed approach represents the outlier activation values as 8-bit integers rather than floating-point (FP16) values. Keeping the outliers in integer form allows the use of operator tiling, which avoids performing 16-bit integer matrix multiplication. We provide theoretical analysis and supporting experiments to demonstrate the effectiveness of our approach in improving the robustness and performance of low-precision fine-tuned LLMs.
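The failure mode described in the abstract, and the outlier-splitting remedy, can be illustrated numerically. The sketch below is a minimal numpy illustration, not the authors' exact method: `absmax_quantize`, the 6-sigma outlier threshold, and the two-scale reconstruction are assumptions made here for demonstration. It shows how a single outlier inflates the shared scaling factor of naive int8 quantization, collapsing small activations to zero, and how quantizing the outliers separately, still in int8, recovers precision.

```python
import numpy as np

def absmax_quantize(x, bits=8):
    """Symmetric abs-max quantization to signed integers (illustrative)."""
    qmax = 2 ** (bits - 1) - 1           # 127 for int8
    scale = np.max(np.abs(x)) / qmax     # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.02, size=1024)     # mostly small activations
x[7] = 60.0                              # one injected outlier

# Naive int8: the outlier sets the scale, so nearly all
# small values round to zero.
q, s = absmax_quantize(x)
naive_mse = np.mean((x - q.astype(np.float32) * s) ** 2)

# Outlier-aware variant (assumed 6-sigma threshold): split the
# tensor by magnitude and quantize each part with its own scale,
# keeping outliers in int8 rather than FP16.
mask = np.abs(x) > 6.0 * np.std(x)
q_in, s_in = absmax_quantize(np.where(mask, 0.0, x))
q_out, s_out = absmax_quantize(np.where(mask, x, 0.0))
recon = q_in.astype(np.float32) * s_in + q_out.astype(np.float32) * s_out
split_mse = np.mean((x - recon) ** 2)

print(f"naive int8 MSE: {naive_mse:.3e}")   # dominated by zeroed inliers
print(f"split int8 MSE: {split_mse:.3e}")   # orders of magnitude smaller
```

Because both partitions remain in 8-bit integers, a matrix multiply can be tiled into separate int8 GEMMs rather than widened to 16-bit integer arithmetic, which is the efficiency argument the abstract makes.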

Authors (6)
  1. Alireza Ghaffari (11 papers)
  2. Justin Yu (10 papers)
  3. Mahsa Ghazvini Nejad (1 paper)
  4. Masoud Asgharian (20 papers)
  5. Boxing Chen (67 papers)
  6. Vahid Partovi Nia (40 papers)
Citations (2)
