Mixup Model Merge: Enhancing Model Merging Performance through Randomized Linear Interpolation (2502.15434v2)
Abstract: Model merging aims to integrate multiple task-specific models into a single unified model that inherits their capabilities without additional training. Existing model merging methods often fail to account for the varying contribution ratios of different task-specific models to the final merged model. In this paper, we propose Mixup Model Merge (M3), a simple yet effective method inspired by the randomized linear interpolation strategy of the Mixup data augmentation technique. M3 performs randomized linear interpolation in parameter space between two task-specific LLMs, with the interpolation coefficient sampled from a Beta distribution to explore diverse contribution ratios. This controllable randomness allows M3 to outperform standard equal-ratio merging by discovering better combinations of contribution ratios. Extensive experiments show that M3 (1) significantly improves the performance of merged LLMs across tasks, (2) enhances out-of-distribution and adversarial robustness, (3) yields larger gains than the sparsification method DARE when applied to model merging and can be further combined with DARE to achieve superior results, and (4) balances exploration efficiency and diversity of contribution ratios through the Beta distribution's shape parameters. The code is available at: https://github.com/MLGroupJLU/MixupModelMerge
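To make the merging step concrete, the following is a minimal sketch of Beta-sampled linear interpolation between two model checkpoints, assuming standard PyTorch state dicts; the function and parameter names are illustrative and not taken from the authors' released code.

```python
import torch

def mixup_model_merge(state_dict_a, state_dict_b, alpha=2.0, beta=2.0, seed=None):
    """Sketch of M3-style merging: linearly interpolate two task-specific
    models in parameter space, with the mixing ratio lambda drawn from a
    Beta(alpha, beta) distribution (names here are hypothetical)."""
    if seed is not None:
        torch.manual_seed(seed)
    # Sample one interpolation coefficient for the whole merge.
    lam = torch.distributions.Beta(alpha, beta).sample().item()
    merged = {}
    for name, param_a in state_dict_a.items():
        param_b = state_dict_b[name]
        # theta_merged = lam * theta_a + (1 - lam) * theta_b
        merged[name] = lam * param_a + (1.0 - lam) * param_b
    return merged, lam
```

In practice one would presumably draw several candidate values of lambda, build the corresponding merged models, and keep the checkpoint that performs best on a validation set; the shape parameters alpha and beta then control how concentrated the sampled contribution ratios are around equal-ratio merging.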