- The paper presents a novel zero-shot strategy that constructs sparse mixtures of low-rank experts from pre-trained models while mitigating parameter interference.
- It leverages singular value decomposition to analyze fine-tuning, effectively partitioning the parameter space to preserve key pre-trained features.
- Experimental results on image classification and text generation tasks show that SMILE achieves near fine-tuned performance with only a small parameter overhead.
SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models
The paper introduces the Sparse Mixture of Low-Rank Experts (SMILE) as a novel approach for constructing mixture of experts (MoE) models from pre-trained foundation models without requiring additional training data or further fine-tuning. This technique specifically addresses the challenge of parameter interference, common in model fusion approaches.
Key Points and Contributions
The paper posits that as training large-scale models on extensive datasets becomes increasingly cost-prohibitive, leveraging knowledge from existing models through model fusion techniques becomes essential. However, existing approaches face key challenges, such as parameter interference between individual models and lack of interpretability. SMILE addresses these by providing an alternative model fusion method rooted in subspace analysis.
Subspace Perspective on Fine-Tuning:
SMILE's theoretical foundation is built on analyzing the fine-tuning process through Singular Value Decomposition (SVD). It decomposes how pre-trained weights are updated during fine-tuning, partitioning the parameter space into subspaces according to the magnitude of the singular values (a minimal code sketch of this analysis follows the list below). The observations reveal:
- Task-specific fine-tuning largely maintains the most important pre-trained features, utilizing less significant and previously unused dimensions for task-specific learning.
- Parameter interference is unavoidable in the original parameter space, but it becomes more manageable when the dimensionality of that space is expanded.
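To make this subspace view concrete, below is a minimal PyTorch sketch (not the authors' code) of the kind of analysis described above: it measures how much of a fine-tuning update falls inside versus outside the span of the top-r right singular directions of the pre-trained weight. The weight matrices and the cut-off rank are illustrative stand-ins.

```python
# Illustrative sketch: probing where a fine-tuning update lives relative to the
# pre-trained weight's singular subspaces. `W_pre` and `W_ft` are hypothetical
# weight matrices for a single linear layer.
import torch

def subspace_energy(W_pre: torch.Tensor, W_ft: torch.Tensor, r: int):
    """Fraction of the update Delta = W_ft - W_pre that falls inside vs.
    outside the span of the top-r right singular vectors of W_pre."""
    delta = W_ft - W_pre
    # SVD of the pre-trained weight: W_pre = U @ diag(S) @ Vh
    U, S, Vh = torch.linalg.svd(W_pre, full_matrices=False)
    V_top = Vh[:r].T                      # top-r right singular directions (n x r)
    proj_top = delta @ V_top @ V_top.T    # component of Delta inside the major subspace
    proj_rest = delta - proj_top          # component in the less significant subspace
    total = delta.norm() ** 2
    return (proj_top.norm() ** 2 / total).item(), (proj_rest.norm() ** 2 / total).item()

# Example with random stand-in weights (real checkpoints would be loaded instead).
W_pre = torch.randn(768, 768)
W_ft = W_pre + 0.01 * torch.randn(768, 768)   # stand-in for a fine-tuned checkpoint
in_top, in_rest = subspace_energy(W_pre, W_ft, r=64)
print(f"energy in top-64 subspace: {in_top:.3f}, elsewhere: {in_rest:.3f}")
```

On real checkpoint pairs, the paper's observation corresponds to most of the update energy landing outside the top subspace, leaving the most important pre-trained directions largely intact.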
Parameter Interference Optimization:
The paper frames parameter interference as an optimization problem. It recognizes that direct parameter-merging approaches (e.g., weighted averaging) often lead to suboptimal performance due to interference between task-specific updates. Instead, SMILE takes a subspace perspective, expanding the dimensionality so that interference can be managed more effectively; the toy example below illustrates the contrast.
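The following toy example (an assumed setup, not from the paper) shows why weighted averaging interferes: the merged weight drags the task-2 update into task-1 predictions, whereas routing the input to its own expert reproduces the task-1 fine-tuned output exactly.

```python
# Toy illustration of parameter interference under weighted-average merging
# versus routing to a dedicated expert. All tensors are random stand-ins.
import torch

torch.manual_seed(0)
W = torch.randn(16, 16)                   # shared pre-trained weight
dW1 = torch.randn(16, 16) * 0.1           # task-1 fine-tuning update
dW2 = torch.randn(16, 16) * 0.1           # task-2 fine-tuning update
x1 = torch.randn(16)                      # an input "belonging" to task 1

y_task1 = (W + dW1) @ x1                  # what the task-1 fine-tuned model computes
y_merged = (W + 0.5 * (dW1 + dW2)) @ x1   # simple weighted-average merging
y_expert = W @ x1 + dW1 @ x1              # route x1 to the task-1 expert only

print("error of averaged merge:", (y_merged - y_task1).norm().item())
print("error of routed expert :", (y_expert - y_task1).norm().item())   # ~0
```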
SMILE Architecture and Routing:
The SMILE model is composed of three components:
- Shared Pre-trained Part: Maintains the critical pre-training knowledge.
- Router: Projects input vectors onto the subspaces defined by the low-rank experts and dynamically selects the most relevant experts for each input.
- Low-Rank Experts: Each expert model is a low-rank approximation of the fine-tuned updates from the corresponding individual models.
This architecture distributes the fine-tuned information across the parameter space in a way that efficiently reuses the less significant dimensions. The router computes the L2 norm of each input vector's projection onto the experts' low-rank subspaces and uses these norms to make routing decisions, balancing performance against parameter count; a sketch of such a layer follows.
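Below is a minimal sketch of a SMILE-style linear layer consistent with this description. The class name, gate rank, softmax weighting, and top-k rule are illustrative assumptions rather than the authors' reference implementation; the experts are truncated SVDs of the fine-tuning updates, and routing uses the L2 norm of the input's projection onto each expert's leading right singular directions.

```python
# Minimal sketch of a SMILE-style linear layer, based on the description above.
# Names and exact routing details (gate rank, softmax, top-k) are assumptions.
import torch
import torch.nn as nn

class SmileLinear(nn.Module):
    def __init__(self, W_pre: torch.Tensor, finetuned_weights, k: int = 8,
                 r_gate: int = 4, top_k: int = 1):
        super().__init__()
        self.register_buffer("W_pre", W_pre)             # shared pre-trained weight (m x n)
        self.top_k = top_k
        A_list, B_list, gate_list = [], [], []
        for W_ft in finetuned_weights:
            delta = W_ft - W_pre
            U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
            # rank-k expert: delta ~ B @ A with B = U_k diag(S_k), A = Vh_k
            B_list.append(U[:, :k] * S[:k])              # (m x k)
            A_list.append(Vh[:k])                        # (k x n)
            gate_list.append(Vh[:r_gate])                # directions used for routing
        self.A = nn.Parameter(torch.stack(A_list), requires_grad=False)
        self.B = nn.Parameter(torch.stack(B_list), requires_grad=False)
        self.gate = nn.Parameter(torch.stack(gate_list), requires_grad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, n)
        y = x @ self.W_pre.T                             # shared pre-trained path
        # routing score: L2 norm of x projected onto each expert's subspace
        scores = torch.einsum("erd,bd->ber", self.gate, x).norm(dim=-1)  # (batch, E)
        weights = torch.softmax(scores, dim=-1)
        top_w, top_i = weights.topk(self.top_k, dim=-1)
        for j in range(self.top_k):
            idx = top_i[:, j]                            # chosen expert per sample
            A, B = self.A[idx], self.B[idx]              # (batch, k, n), (batch, m, k)
            expert_out = torch.einsum("bmk,bkn,bn->bm", B, A, x)
            y = y + top_w[:, j:j+1] * expert_out
        return y
```

Because the experts and gating directions are frozen low-rank factors derived directly from the checkpoints, the construction needs no training data, matching the zero-shot setting described in the paper.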
Experimental Validation
Image Classification and Text Generation Tasks:
The paper offers comprehensive experiments validating the SMILE approach on both image classification and text generation tasks with well-established datasets. For image classification with CLIP models, the results show that SMILE achieves 98-99% of the performance of eight individually fine-tuned models with only 50% additional parameters.
For text generation tasks with Flan-T5-base models, SMILE maintains about 99% of the fine-tuned performance with only 2% extra parameters when the experts come from LoRA fine-tuning. This efficiency underscores SMILE's ability to preserve performance while keeping the parameter count low.
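As a rough illustration of where such overhead figures come from, the snippet below counts the extra parameters a SMILE-style wrapper adds to a single linear layer under the low-rank construction sketched earlier. The layer size and ranks are arbitrary examples, not the configurations reported in the paper.

```python
# Back-of-the-envelope parameter overhead for one (m x n) linear layer fused
# with T experts of rank k and a gate of rank r_gate. Variable names are
# illustrative; the paper's exact accounting may differ.
def smile_overhead(m: int, n: int, T: int, k: int, r_gate: int) -> float:
    base = m * n                              # shared pre-trained weight
    experts = T * k * (m + n)                 # low-rank factors B (m x k) and A (k x n)
    router = T * r_gate * n                   # projection directions used for routing
    return (experts + router) / base          # extra parameters relative to the base

# e.g., a 768x768 layer fused from 8 experts at rank 16 with a rank-4 gate
print(f"{smile_overhead(768, 768, T=8, k=16, r_gate=4):.1%} extra parameters")
```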
Scalability to Large Models:
Further experiments with Mistral-7B models emphasize SMILE's scalability, showing its effectiveness even when dealing with large, diverse tasks and datasets. Here, individual expert models specializing in different tasks were combined into a single SMILE model that performed competitively, with significant parameter savings.
Implications and Future Directions
The practical implications of SMILE are significant. By enabling zero-shot construction of MoE models from pre-trained models without extra training data, SMILE offers a cost-effective and scalable solution for leveraging large, pre-trained models for multiple tasks. Theoretically, the subspace perspective provides a new avenue for understanding and mitigating parameter interference in deep learning.
Future work could further explore:
- Dynamic Adjustment: Dynamically adjusting the number of experts activated per token could improve efficiency without sacrificing performance.
- Applicability in Multi-Modal Learning: Applying SMILE to multi-modal LLMs could offer insights into its versatility.
- Complexity Optimization: Computational and parameter overhead could be reduced further by refining the low-rank approximation techniques.
Conclusion
SMILE presents a robust and theoretically grounded method for model fusion that effectively balances performance and parameter efficiency. By leveraging subspace decomposition and zero-shot routing mechanisms, SMILE mitigates parameter interference and offers a scalable solution suitable for a variety of tasks across domains.