
Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging (2406.15479v2)

Published 17 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: In the era of LLMs, model merging is a promising way to combine multiple task-specific models into a single multitask model without extra training. However, two challenges remain: (a) interference between different models and (b) heterogeneous data during testing. Traditional model merging methods often show significant performance gaps compared to fine-tuned models due to these issues. Additionally, a one-size-fits-all model lacks flexibility for diverse test data, leading to performance degradation. We show that both shared and exclusive task-specific knowledge are crucial for merging performance, but directly merging exclusive knowledge hinders overall performance. In view of this, we propose Twin-Merging, a method that encompasses two principal stages: (1) modularizing knowledge into shared and exclusive components, with compression to reduce redundancy and enhance efficiency; (2) dynamically merging shared and task-specific knowledge based on the input. This approach narrows the performance gap between merged and fine-tuned models and improves adaptability to heterogeneous data. Extensive experiments on 20 datasets for both language and vision tasks demonstrate the effectiveness of our method, showing an average improvement of 28.34% in absolute normalized score for discriminative tasks and even surpassing the fine-tuned upper bound on the generative tasks. Our implementation is available at https://github.com/LZY-the-boys/Twin-Merging
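The abstract's two-stage recipe can be pictured with a minimal sketch: a shared expert built from averaged task vectors, exclusive experts compressed with a low-rank (SVD) approximation, and an input-dependent weighting that recombines them at test time. This is an illustrative Python/PyTorch sketch under those assumptions, not the authors' implementation (see the linked repository); the function names and the fixed router weights are hypothetical.

```python
# Illustrative sketch of the Twin-Merging idea, not the official code.
import torch

def extract_task_vectors(pretrained, finetuned_list):
    """Task vector = fine-tuned weights minus pretrained weights (per tensor)."""
    return [{k: ft[k] - pretrained[k] for k in pretrained} for ft in finetuned_list]

def build_shared_expert(task_vectors):
    """Stage 1a: shared knowledge as the average of all task vectors."""
    keys = task_vectors[0].keys()
    return {k: torch.stack([tv[k] for tv in task_vectors]).mean(0) for k in keys}

def compress_low_rank(delta, rank=8):
    """Stage 1b: compress an exclusive (task minus shared) delta with truncated SVD."""
    compressed = {}
    for k, w in delta.items():
        if w.ndim == 2:  # only 2-D weight matrices are factorized here
            u, s, vh = torch.linalg.svd(w, full_matrices=False)
            compressed[k] = u[:, :rank] @ torch.diag(s[:rank]) @ vh[:rank, :]
        else:
            compressed[k] = w
    return compressed

def twin_merge(pretrained, shared, exclusive_experts, router_weights):
    """Stage 2: dynamically merge shared + input-weighted exclusive knowledge."""
    merged = {k: pretrained[k] + shared[k] for k in pretrained}
    for w, expert in zip(router_weights, exclusive_experts):
        for k in merged:
            merged[k] = merged[k] + w * expert[k]
    return merged

# Toy usage: two "tasks" over a single 4x4 weight matrix.
torch.manual_seed(0)
pretrained = {"layer.weight": torch.randn(4, 4)}
finetuned = [{"layer.weight": pretrained["layer.weight"] + 0.1 * torch.randn(4, 4)}
             for _ in range(2)]
task_vectors = extract_task_vectors(pretrained, finetuned)
shared = build_shared_expert(task_vectors)
exclusive = [compress_low_rank({k: tv[k] - shared[k] for k in tv}) for tv in task_vectors]
# In the paper the weights come from a router conditioned on the input; fixed here.
merged = twin_merge(pretrained, shared, exclusive, router_weights=[0.7, 0.3])
print(merged["layer.weight"].shape)
```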

Authors (6)
  1. Zhenyi Lu (9 papers)
  2. Chenghao Fan (7 papers)
  3. Wei Wei (424 papers)
  4. Xiaoye Qu (62 papers)
  5. Dangyang Chen (20 papers)
  6. Yu Cheng (354 papers)
Citations (18)