Sparsity-Accelerated Training for Large Language Models (2406.01392v2)

Published 3 Jun 2024 in cs.CL

Abstract: Large language models (LLMs) have demonstrated proficiency across various NLP tasks but often require additional training, such as continual pre-training and supervised fine-tuning. However, the costs of this additional training, driven primarily by the models' large parameter counts, remain high. This paper proposes leveraging sparsity in pre-trained LLMs to expedite the training process. By observing sparsity in activated neurons during forward iterations, we identify the potential for computational speed-ups by excluding inactive neurons. We address associated challenges by extending existing neuron importance evaluation metrics and introducing a ladder omission rate scheduler. Our experiments on Llama-2 demonstrate that Sparsity-Accelerated Training (SAT) achieves comparable or superior performance to standard training while significantly accelerating the process. Specifically, SAT achieves a 45% throughput improvement in continual pre-training and saves 38% of the training time in supervised fine-tuning in practice. It offers a simple, hardware-agnostic, and easily deployable framework for additional LLM training. Our code is available at https://github.com/OpenDFM/SAT.
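
As a rough illustration of the core idea (not the authors' implementation), the sketch below prunes the intermediate dimension of a single MLP forward pass in PyTorch: neurons are scored by a simple importance metric (mean absolute activation here; the paper extends existing metrics of this kind), the least important fraction is omitted, and the omission rate is stepped down over training by an illustrative "ladder" scheduler. The function names, the specific metric, the schedule shape, and the toy dimensions are all assumptions made for illustration.

```python
import torch

def neuron_importance(acts: torch.Tensor) -> torch.Tensor:
    """Score each intermediate neuron by its mean absolute activation
    over the batch (one simple importance metric; illustrative only)."""
    # acts: (batch, seq_len, intermediate_size)
    return acts.abs().mean(dim=(0, 1))

def ladder_omission_rate(step: int, total_steps: int,
                         max_rate: float = 0.5, num_stages: int = 4) -> float:
    """Illustrative 'ladder' schedule: the omitted fraction drops in
    discrete stages, so the full model is trained near the end."""
    stage = min(int(step / total_steps * num_stages), num_stages - 1)
    return max_rate * (1.0 - stage / (num_stages - 1))

def select_active_neurons(importance: torch.Tensor,
                          omission_rate: float) -> torch.Tensor:
    """Keep indices of the most important (1 - omission_rate) fraction."""
    num_keep = max(1, int(importance.numel() * (1.0 - omission_rate)))
    return torch.topk(importance, num_keep).indices

# Toy-sized MLP block (real Llama-2 dimensions are much larger).
hidden = torch.randn(2, 16, 256)        # (batch, seq, hidden_size)
w_up = torch.randn(1024, 256)           # up-projection weight
w_down = torch.randn(256, 1024)         # down-projection weight

acts = torch.relu(hidden @ w_up.T)      # (batch, seq, intermediate_size)
rate = ladder_omission_rate(step=100, total_steps=1000)
active = select_active_neurons(neuron_importance(acts), rate)

# Only active neurons feed the down-projection, shrinking this matmul
# (and its backward pass) roughly in proportion to the omission rate.
out = acts[..., active] @ w_down[:, active].T   # (batch, seq, hidden_size)
```

In the paper, this kind of selection is applied throughout additional training, so the reported savings come from reduced forward and backward computation for omitted neurons; the exact importance metrics and scheduler are described in the paper and the released code.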

Authors (9)
  1. Da Ma (28 papers)
  2. Lu Chen (244 papers)
  3. Pengyu Wang (63 papers)
  4. Hongshen Xu (21 papers)
  5. Hanqi Li (9 papers)
  6. Liangtai Sun (8 papers)
  7. Su Zhu (29 papers)
  8. Shuai Fan (17 papers)
  9. Kai Yu (201 papers)
GitHub: https://github.com/OpenDFM/SAT