
On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion (2406.15480v2)

Published 17 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Efficient fine-tuning of LLMs for task-specific applications is imperative, yet the vast number of parameters in these models makes training increasingly challenging. Despite numerous proposals for parameter-efficient methods, gradient computation during updates still incurs substantial memory overhead. Can we fine-tune a series of task-specific small models and transfer their knowledge directly to a much larger model without additional training? In this paper, we explore weak-to-strong specialization using logit arithmetic, facilitating a direct answer to this question. Existing weak-to-strong methods often employ a static knowledge transfer ratio and a single small model for transferring complex knowledge, which leads to suboptimal performance. To surmount these limitations, we propose a dynamic logit fusion approach that works with a series of task-specific small models, each specialized in a different task. This method adaptively allocates weights among these models at each decoding step, learning the weights through Kullback-Leibler divergence constrained optimization. We conduct extensive experiments across various benchmarks in both single-task and multi-task settings, achieving leading results. By transferring expertise from the 7B model to the 13B model, our method closes the performance gap with full fine-tuning of the 13B model by 96.4% in single-task scenarios and by 86.3% in multi-task scenarios. Notably, it even achieves superior performance on unseen tasks. Moreover, we demonstrate that our method can seamlessly integrate in-context learning for single tasks and task arithmetic for multi-task scenarios.
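To make the mechanism concrete, below is a minimal PyTorch sketch of one decoding step. It assumes the standard weak-to-strong logit-arithmetic form, in which the large model's logits are shifted by a weighted sum of expert-minus-base deltas from the small models; the abstract only states that the per-step weights come from a KL-divergence-constrained optimization, so the specific objective in `solve_step_weights` (weights on the simplex, minimizing a weighted KL between the fused and expert distributions) and both function names are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def fuse_logits(large_logits, small_base_logits, expert_logits, weights):
    """Weak-to-strong logit arithmetic: shift the large model's next-token
    logits by a weighted sum of expert-vs-base deltas from the small models.
    large_logits, small_base_logits: (V,); expert_logits: (K, V); weights: (K,)."""
    deltas = expert_logits - small_base_logits   # what each expert learned, in logit space
    return large_logits + weights @ deltas       # (V,) fused logits

def solve_step_weights(large_logits, small_base_logits, expert_logits,
                       n_iters=30, lr=0.1):
    """Per-decoding-step weight search (illustrative objective, see note above):
    choose simplex weights whose fused distribution stays close in KL to the
    expert distributions it borrows from."""
    phi = torch.zeros(expert_logits.shape[0], requires_grad=True)  # softmax parameters
    opt = torch.optim.Adam([phi], lr=lr)
    expert_log_probs = F.log_softmax(expert_logits, dim=-1)        # (K, V)
    for _ in range(n_iters):
        opt.zero_grad()
        w = torch.softmax(phi, dim=0)  # keeps weights nonnegative and summing to 1
        fused = fuse_logits(large_logits, small_base_logits, expert_logits, w)
        fused_log_probs = F.log_softmax(fused, dim=-1).expand_as(expert_log_probs)
        # Weighted sum of KL(fused || expert_k); F.kl_div(input, target) with
        # log_target=True computes KL(target || input) pointwise.
        kl_per_expert = F.kl_div(expert_log_probs, fused_log_probs,
                                 log_target=True, reduction='none').sum(-1)
        loss = w @ kl_per_expert
        loss.backward()
        opt.step()
    return torch.softmax(phi, dim=0).detach()

# Toy usage at a single decoding step (vocab of 10, two small experts):
V, K = 10, 2
large, base, experts = torch.randn(V), torch.randn(V), torch.randn(K, V)
w = solve_step_weights(large, base, experts)
next_token = fuse_logits(large, base, experts, w).argmax()
```

Parameterizing the weights through a softmax is one simple way to keep them on the probability simplex without an explicit projection step; the paper's constrained formulation may enforce this differently.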

Authors (7)
  1. Chenghao Fan (7 papers)
  2. Zhenyi Lu (9 papers)
  3. Wei Wei (424 papers)
  4. Jie Tian (28 papers)
  5. Xiaoye Qu (62 papers)
  6. Dangyang Chen (20 papers)
  7. Yu Cheng (354 papers)
Citations (2)