
PLeaS -- Merging Models with Permutations and Least Squares (2407.02447v1)

Published 2 Jul 2024 in cs.LG

Abstract: The democratization of machine learning systems has made the process of fine-tuning accessible to a large number of practitioners, leading to a wide range of open-source models fine-tuned on specialized tasks and datasets. Recent work has proposed to merge such models to combine their functionalities. However, prior approaches are restricted to models that are fine-tuned from the same base model. Furthermore, the final merged model is typically restricted to be of the same size as the original models. In this work, we propose a new two-step algorithm to merge models, termed PLeaS, which relaxes these constraints. First, leveraging the Permutation symmetries inherent in the two models, PLeaS partially matches nodes in each layer by maximizing alignment. Next, PLeaS computes the weights of the merged model as a layer-wise Least Squares solution to minimize the approximation error between the features of the merged model and the permuted features of the original models. This allows PLeaS to merge the two original models into a single model of a desired size, even when they are fine-tuned from different base models. We also present a variant of our method which can merge models without using data from the fine-tuning domains. We demonstrate our method by merging ResNet models trained with shared and different label spaces, and show that it can outperform state-of-the-art merging methods by 8 to 15 percentage points for the same target compute when merging models trained on DomainNet and on fine-grained classification tasks.
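The following is a minimal sketch of the two steps described in the abstract, reduced to a single linear layer on toy data. The use of scipy's linear_sum_assignment for the matching step and the simple averaged feature target are illustrative assumptions, not the authors' reference implementation; in the full method the least squares problems are solved layer-wise through nonlinear networks and the merged model's width can differ from the originals.

```python
# Toy illustration of the two PLeaS steps (permutation matching, then
# layer-wise least squares) on one linear layer. All names and shapes
# are hypothetical; this is not the paper's implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Two "fine-tuned" layers of the same shape and a batch of shared inputs.
d_in, d_out, n = 16, 32, 256
W_a = rng.normal(size=(d_out, d_in))
W_b = rng.normal(size=(d_out, d_in))
X = rng.normal(size=(n, d_in))

# Step 1 (permutation matching): align the output units of model B to those
# of model A by maximizing activation similarity, posed as linear assignment.
Z_a, Z_b = X @ W_a.T, X @ W_b.T          # activations, shape (n, d_out)
corr = Z_a.T @ Z_b                        # unit-by-unit similarity
_, perm = linear_sum_assignment(-corr)    # permutation maximizing total similarity
W_b_perm = W_b[perm]                      # B's layer with rows reordered to match A

# Step 2 (least squares): choose merged weights whose features best
# approximate the (here, averaged) features of the two aligned models.
target = 0.5 * (Z_a + X @ W_b_perm.T)     # features the merged layer should produce
W_m, *_ = np.linalg.lstsq(X, target, rcond=None)
W_m = W_m.T                               # merged layer, shape (d_out, d_in)

print("feature approximation error:",
      np.linalg.norm(X @ W_m.T - target) / np.linalg.norm(target))
```

In this purely linear toy case the least squares solution essentially recovers the average of the aligned weights; the step becomes meaningful in the actual setting, where features pass through nonlinearities and the merged layer may have a different number of units than either original.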
