Selecting Large Language Model to Fine-tune via Rectified Scaling Law (2402.02314v3)
Abstract: The ever-growing ecosystem of LLMs poses a challenge in selecting the most appropriate pre-trained model to fine-tune amid a sea of options. Given constrained resources, fine-tuning all models and then making a selection is unrealistic. In this work, we formulate this resource-constrained selection task as predicting fine-tuning performance and illustrate its natural connection with Scaling Law. Unlike pre-training, we find that the fine-tuning scaling curve includes not just the well-known "power phase" but also the previously unobserved "pre-power phase". We also explain, both theoretically and empirically, why the existing Scaling Law fails to capture this phase-transition phenomenon. To address this, we introduce the concept of "pre-learned data size" into our Rectified Scaling Law, which overcomes the theoretical limitations and fits experimental results much better. By leveraging our law, we propose a novel LLM selection algorithm that selects the near-optimal model with hundreds of times less resource consumption, while other methods may provide negatively correlated selections. The project page is available at rectified-scaling-law.github.io.
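To make the idea concrete, below is a minimal, hypothetical sketch of how one might fit a fine-tuning scaling curve that includes a "pre-learned data size" term and extrapolate it to compare candidate models. The functional form L(D) = B / (D_l + D)^beta + E, the parameter names, and the toy numbers are assumptions based on the abstract's description, not the paper's released code.

```python
# Sketch: fit a rectified scaling-law curve to a few cheap fine-tuning runs,
# then extrapolate to the full data budget to rank candidate models.
# Assumed form (not from the paper's code): L(D) = B / (D_l + D)**beta + E,
# where D_l plays the role of the "pre-learned data size".
import numpy as np
from scipy.optimize import curve_fit

def rectified_scaling_law(D, B, D_l, beta, E):
    """Predicted fine-tuning loss as a function of fine-tuning data size D."""
    return B / (D_l + D) ** beta + E

# Toy observations: losses measured after fine-tuning on small data subsets.
D_obs = np.array([1e2, 3e2, 1e3, 3e3, 1e4])
loss_obs = np.array([2.80, 2.65, 2.30, 1.95, 1.70])

# Fit the four parameters; bounds keep them in a physically meaningful range.
params, _ = curve_fit(
    rectified_scaling_law, D_obs, loss_obs,
    p0=[20.0, 5e2, 0.4, 1.0],
    bounds=([0, 0, 0, 0], [np.inf, np.inf, 2.0, np.inf]),
)

# Extrapolated loss at the full budget; repeating this per candidate model
# and picking the lowest prediction is the selection idea in spirit.
full_budget = 1e6
print("Predicted loss at full budget:", rectified_scaling_law(full_budget, *params))
```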
- Haowei Lin
- Baizhou Huang
- Haotian Ye
- Qinyu Chen
- Zihao Wang
- Sujian Li
- Jianzhu Ma
- Xiaojun Wan
- James Zou
- Yitao Liang