SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models

Published 19 Aug 2024 in cs.LG and cs.AI | (2408.10174v2)

Abstract: Deep model training on extensive datasets is increasingly becoming cost-prohibitive, prompting the widespread adoption of deep model fusion techniques to leverage knowledge from pre-existing models. From simple weight averaging to more sophisticated methods like AdaMerging, model fusion effectively improves model performance and accelerates the development of new models. However, potential interference between parameters of individual models and the lack of interpretability in the fusion process remain significant challenges. Existing methods often try to resolve the parameter interference issue by evaluating attributes of parameters, such as their magnitude or sign, or by parameter pruning. In this study, we begin by examining the fine-tuning of linear layers through the lens of subspace analysis and explicitly define parameter interference as an optimization problem to shed light on this subject. Subsequently, we introduce an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction, which allows for the upscaling of source models into an MoE model without extra data or further training. Our approach relies on the observation that fine-tuning mostly keeps the important parts from the pre-training, but it uses less significant or unused areas to adapt to new tasks. Also, the issue of parameter interference, which is intrinsically intractable in the original parameter space, can be managed by expanding the dimensions. We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning, and we apply our method to LLMs (CLIP models, Flan-T5 models, and Mistral-7B models), highlighting the adaptability and scalability of SMILE. Code is available at https://github.com/tanganke/fusion_bench

Summary

  • The paper presents a novel zero-shot strategy that constructs sparse mixtures of low-rank experts from pre-trained models while mitigating parameter interference.
  • It leverages singular value decomposition to analyze fine-tuning, effectively partitioning the parameter space to preserve key pre-trained features.
  • Experimental results on image classification and text generation tasks show that SMILE approaches the performance of individually fine-tuned models with significantly reduced parameter overhead.


The paper introduces the Sparse Mixture of Low-Rank Experts (SMILE) as a novel approach for constructing mixture of experts (MoE) models from pre-trained foundation models without requiring additional training data or further fine-tuning. This technique specifically addresses the challenge of parameter interference, common in model fusion approaches.

Key Points and Contributions

The paper posits that as training large-scale models on extensive datasets becomes increasingly cost-prohibitive, leveraging knowledge from existing models through model fusion techniques becomes essential. However, existing approaches face key challenges, such as parameter interference between individual models and a lack of interpretability in the fusion process. SMILE addresses these by providing an alternative model fusion method rooted in subspace analysis.

Subspace Perspective on Fine-Tuning:

SMILE's theoretical foundation rests on analyzing the fine-tuning process through Singular Value Decomposition (SVD). The analysis examines how pre-trained weights are updated during fine-tuning, partitioning the parameter space into zones according to the magnitude of the singular values. Two observations emerge:

  • Task-specific fine-tuning largely preserves the most important pre-trained features and relies on the less significant and previously unused dimensions for task-specific learning (a measurement sketch follows this list).
  • Parameter interference is unavoidable in the original parameter space, but it becomes far more manageable once the dimensionality is expanded.
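A minimal sketch of this measurement, assuming generic weight tensors (the random stand-ins and variable names below are illustrative, not the paper's code): take the SVD of the pre-trained weight, split its input space into the top-k singular directions and their orthogonal complement, and check how much of the fine-tuning update falls into each zone. On real checkpoints, the paper's observation is that most of the update's energy lies outside the dominant directions.

```python
# Illustrative subspace check (hypothetical names, random stand-in weights).
import torch

torch.manual_seed(0)
d_out, d_in, k = 64, 128, 16                       # layer shape, size of the "important" subspace

W_pre = torch.randn(d_out, d_in)                   # stand-in for a pre-trained linear weight
W_ft  = W_pre + 0.1 * torch.randn(d_out, d_in)     # stand-in for its fine-tuned counterpart
delta = W_ft - W_pre                               # the fine-tuning update

# SVD of the pre-trained weight defines the zones of the input space.
U, S, Vh = torch.linalg.svd(W_pre, full_matrices=True)
V = Vh.T

P_top  = V[:, :k] @ V[:, :k].T                     # projector onto the top-k singular directions
P_rest = torch.eye(d_in) - P_top                   # projector onto the remaining directions

energy_top  = torch.linalg.norm(delta @ P_top).item() ** 2
energy_rest = torch.linalg.norm(delta @ P_rest).item() ** 2
total = energy_top + energy_rest

print(f"update energy in the top-{k} subspace:  {energy_top / total:.1%}")
print(f"update energy in the residual subspace: {energy_rest / total:.1%}")
```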

Parameter Interference Optimization:

The paper frames parameter interference as an optimization problem. It recognizes that direct parameter merging approaches (e.g., weighted averaging) often lead to suboptimal performance because the task-specific updates interfere with one another. Instead, SMILE adopts a subspace perspective and expands the dimensionality so that the interference can be managed more effectively.
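A toy numerical illustration of that claim (not from the paper's code): two task-specific updates to the same layer partially cancel when averaged, whereas routing each input to its own task's update, i.e. keeping the updates in separate subspaces of an expanded space, avoids the cancellation entirely.

```python
# Toy example of parameter interference under naive weight averaging.
import torch

torch.manual_seed(0)
d = 8
W0 = torch.randn(d, d)                    # shared pre-trained weight
dA = torch.randn(d, d)                    # task-A fine-tuning update
dB = -dA + 0.1 * torch.randn(d, d)        # task-B update, nearly opposite to A

x = torch.randn(d)                        # an input belonging to task A
y_task_A = (W0 + dA) @ x                  # what the task-A model would output

# 1) Merge by averaging in the original parameter space: the updates cancel,
#    so most of task A's behaviour is lost.
W_avg = W0 + 0.5 * (dA + dB)
err_avg = torch.linalg.norm(W_avg @ x - y_task_A).item()

# 2) Keep the updates separate and route the input to its own task's update
#    (the dimension-expanded view): no cancellation at all.
W_routed = W0 + dA                        # the router picks the task-A expert for x
err_routed = torch.linalg.norm(W_routed @ x - y_task_A).item()

print(f"error after naive averaging: {err_avg:.3f}")
print(f"error with routed experts:   {err_routed:.3f}")
```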

SMILE Architecture and Routing:

The SMILE model is composed of three components:

  1. Shared Pre-trained Part: Maintains the critical pre-training knowledge.
  2. Router: Projects input vectors into subspaces defined by low-rank experts, dynamically selecting pertinent experts based on the input.
  3. Low-Rank Experts: Each expert is a low-rank approximation of the fine-tuning update of the corresponding individual model.

This architecture ensures that fine-tuned information is distributed across the parameter space in a way that efficiently utilizes less significant dimensions. The router employs the L2 norm of the projections of input vectors onto these low-rank spaces to handle the routing logic, balancing performance and parameter count.
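A compact PyTorch sketch of such a layer is given below, assuming each expert is a rank-r SVD approximation of a task's weight update and the routing logits are the L2 norms of the input's projections onto each expert's input subspace. The class and variable names are hypothetical, bias differences are ignored, and the gating details of the authors' implementation in fusion_bench may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SMILELinear(nn.Module):
    """Sketch of a SMILE-style upscaled linear layer (illustrative only)."""

    def __init__(self, pretrained: nn.Linear, finetuned: list[nn.Linear],
                 rank: int = 16, top_k: int = 1):
        super().__init__()
        self.shared = pretrained                     # shared pre-trained projection
        self.top_k = top_k

        Us, Vs = [], []
        for ft in finetuned:
            delta = ft.weight - pretrained.weight    # task-specific update
            U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
            Us.append(U[:, :rank] * S[:rank])        # (d_out, r): U_r diag(S_r)
            Vs.append(Vh[:rank, :])                  # (r, d_in):  V_r^T
        self.expert_U = nn.Parameter(torch.stack(Us), requires_grad=False)
        self.expert_V = nn.Parameter(torch.stack(Vs), requires_grad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (..., d_in)
        # Routing logit for expert i: L2 norm of x projected onto its
        # low-rank input subspace V_i.
        proj = torch.einsum('...d,erd->...er', x, self.expert_V)   # (..., E, r)
        logits = proj.norm(dim=-1)                                  # (..., E)

        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        out = self.shared(x)                         # shared pre-trained part
        for j in range(self.top_k):                  # add the selected low-rank experts
            sel = idx[..., j]
            U_sel = self.expert_U[sel]               # (..., d_out, r)
            V_sel = self.expert_V[sel]               # (..., r, d_in)
            low_rank = torch.einsum(
                '...or,...r->...o', U_sel,
                torch.einsum('...rd,...d->...r', V_sel, x))
            out = out + weights[..., j:j + 1] * low_rank
        return out
```

Given a pre-trained nn.Linear and its task-fine-tuned counterparts, something like SMILELinear(pretrained, finetuned, rank=16, top_k=1) would stand in for the original layer. No gradients or data are involved in the construction, which is what makes the procedure zero-shot.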

Experimental Validation

Image Classification and Text Generation Tasks:

The paper offers comprehensive experiments to validate the SMILE approach, covering both image classification and text generation tasks on well-established datasets. For instance, in image classification with CLIP models, the results show that SMILE can achieve 98-99% of the performance of eight individually fine-tuned models with only 50% additional parameters.

For text generation tasks, particularly with Flan-T5-base models, SMILE maintains 99% of the fine-tuned performance with only 2% extra parameters when using LoRA fine-tuning. This efficiency underscores SMILE's ability to preserve performance while keeping the parameter count manageably low.
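As a rough sanity check on where such overhead figures come from, the back-of-envelope count below assumes that each upscaled linear layer keeps its dense pre-trained weight and adds, per task, a rank-r factor pair plus a small rank-r_gate routing projection. The exact accounting, chosen ranks, and which layers are upscaled differ across the paper's experiments, so the printed number is illustrative only.

```python
# Hypothetical parameter-overhead estimate for one upscaled linear layer.
def smile_overhead(d_in: int, d_out: int, num_tasks: int, r: int, r_gate: int) -> float:
    dense   = d_in * d_out                        # shared pre-trained weight
    experts = num_tasks * r * (d_in + d_out)      # low-rank expert factors U_r, V_r
    router  = num_tasks * r_gate * d_in           # routing projections
    return (experts + router) / dense             # extra parameters relative to the dense layer

# Example: a 768x768 transformer projection, 8 tasks, rank-4 experts, rank-2 gate.
print(f"{smile_overhead(768, 768, num_tasks=8, r=4, r_gate=2):.1%} extra parameters")
```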

Scalability to Large Models:

Further experiments with Mistral-7B models emphasize SMILE's scalability, showing its effectiveness even with large models and diverse tasks and datasets. Here, individual expert models specializing in different tasks were combined into a single SMILE model that performed competitively while yielding significant parameter savings.

Implications and Future Directions

The practical implications of SMILE are significant. By enabling zero-shot construction of MoE models from pre-trained models without extra training data, SMILE offers a cost-effective and scalable solution for leveraging large, pre-trained models for multiple tasks. Theoretically, the subspace perspective provides a new avenue for understanding and mitigating parameter interference in deep learning.

Future work could further explore:

  • Dynamic Adjustment: Dynamically adjusting the number of experts activated per token could improve efficiency without sacrificing performance.
  • Applicability in Multi-Modal Learning: Applying SMILE to multi-modal LLMs could offer insights into its versatility.
  • Complexity Optimization: Further reduction in computational and parameter overhead can be explored by refining the low-rank approximation techniques.

Conclusion

SMILE presents a robust and theoretically grounded method for model fusion that effectively balances performance and parameter efficiency. By leveraging subspace decomposition and zero-shot routing mechanisms, SMILE mitigates parameter interference and offers a scalable solution suitable for a variety of tasks across domains.
