
LEMON: Lossless model expansion (2310.07999v1)

Published 12 Oct 2023 in cs.LG and stat.ML

Abstract: Scaling of deep neural networks, especially Transformers, is pivotal for their surging performance and has further led to the emergence of sophisticated reasoning capabilities in foundation models. Such scaling generally requires training large models from scratch with random initialization, failing to leverage the knowledge acquired by their smaller counterparts, which are already resource-intensive to obtain. To tackle this inefficiency, we present LosslEss MOdel expansioN (LEMON), a recipe to initialize scaled models using the weights of their smaller but pre-trained counterparts. This is followed by model training with an optimized learning rate scheduler tailored explicitly for the scaled models, substantially reducing the training time compared to training from scratch. Notably, LEMON is versatile, ensuring compatibility with various network structures, including models like Vision Transformers and BERT. Our empirical results demonstrate that LEMON reduces computational costs by 56.7% for Vision Transformers and 33.2% for BERT when compared to training from scratch.
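The core idea in the abstract is that the larger model is initialized so that, at step zero, it computes exactly the same function as the pre-trained smaller model. The snippet below is a minimal, hypothetical sketch of such function-preserving width expansion for a two-layer MLP; it is not the paper's actual recipe (LEMON also covers LayerNorm, attention, depth expansion, and a tuned learning-rate schedule), and all names in the code are invented for illustration.

```python
# Hypothetical sketch of "lossless" width expansion for a 2-layer MLP.
# Hidden units are duplicated; the downstream weights of each copy are
# rescaled so the expanded network reproduces the small one's outputs.

import torch
import torch.nn as nn


def expand_mlp_width(small: nn.Sequential, new_hidden: int) -> nn.Sequential:
    """Expand the hidden width of an MLP [Linear, ReLU, Linear] losslessly."""
    fc1, act, fc2 = small[0], small[1], small[2]
    old_hidden = fc1.out_features
    assert new_hidden % old_hidden == 0, "sketch assumes an integer multiple"
    k = new_hidden // old_hidden  # replication factor

    big_fc1 = nn.Linear(fc1.in_features, new_hidden)
    big_fc2 = nn.Linear(new_hidden, fc2.out_features)

    with torch.no_grad():
        # Duplicate each hidden unit k times (weights and biases copied).
        big_fc1.weight.copy_(fc1.weight.repeat(k, 1))
        big_fc1.bias.copy_(fc1.bias.repeat(k))
        # Divide the downstream weights by k so the k copies together
        # contribute exactly what the original single unit contributed.
        big_fc2.weight.copy_(fc2.weight.repeat(1, k) / k)
        big_fc2.bias.copy_(fc2.bias)

    return nn.Sequential(big_fc1, type(act)(), big_fc2)


if __name__ == "__main__":
    torch.manual_seed(0)
    small = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    big = expand_mlp_width(small, new_hidden=32)
    x = torch.randn(5, 8)
    # Outputs match: the expansion preserves the function exactly.
    print(torch.allclose(small(x), big(x), atol=1e-6))
```

Because each duplicated hidden unit's downstream weights are divided by the replication factor, the wider network's outputs match the small network's exactly at initialization; this is the "lossless" starting point from which the paper continues training with its adjusted learning-rate scheduler.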

Authors (9)
  1. Yite Wang (5 papers)
  2. Hanlin Lu (8 papers)
  3. Cong Xie (33 papers)
  4. Tianyi Liu (58 papers)
  5. Jianbo Yuan (33 papers)
  6. Haibin Lin (35 papers)
  7. Ruoyu Sun (70 papers)
  8. Hongxia Yang (130 papers)
  9. JiaHao Su (19 papers)
Citations (9)
