SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction (2308.14604v3)

Published 28 Aug 2023 in cs.CV

Abstract: Segment Anything Model (SAM) has received remarkable attention as it offers a powerful and versatile solution for object segmentation in images. However, fine-tuning SAM for downstream segmentation tasks under different scenarios remains a challenge, as the varied characteristics of different scenarios naturally require diverse model parameter spaces. Most existing fine-tuning methods attempt to bridge the gaps among different scenarios by introducing a set of new parameters to modify SAM's original parameter space. Unlike these works, in this paper, we propose fine-tuning SAM efficiently by parameter space reconstruction (SAM-PARSER), which introduces nearly zero trainable parameters during fine-tuning. In SAM-PARSER, we assume that SAM's original parameter space is relatively complete, so that its bases are able to reconstruct the parameter space of a new scenario. We obtain the bases by matrix decomposition, and fine-tune the coefficients to reconstruct the parameter space tailored to the new scenario through an optimal linear combination of the bases. Experimental results show that SAM-PARSER exhibits superior segmentation performance across various scenarios, while reducing the number of trainable parameters by $\approx 290$ times compared with current parameter-efficient fine-tuning methods.
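
To make the core idea concrete, below is a minimal sketch of coefficient-only fine-tuning as described in the abstract, assuming SVD as the matrix decomposition. The class name `SVDReparamLinear` and the choice of decomposing linear layers are illustrative assumptions, not the authors' released implementation: the frozen singular vectors play the role of the bases, and only the combination coefficients (initialized from the singular values) are trained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVDReparamLinear(nn.Module):
    """Illustrative sketch (not the official SAM-PARSER code): decompose a
    pretrained weight matrix into frozen bases via SVD and fine-tune only the
    combination coefficients that reconstruct the scenario-specific weights."""

    def __init__(self, pretrained_linear: nn.Linear):
        super().__init__()
        W = pretrained_linear.weight.data                    # (out_features, in_features)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)  # W = U @ diag(S) @ Vh
        # Bases stay frozen (buffers, no gradients).
        self.register_buffer("U", U)
        self.register_buffer("Vh", Vh)
        # Only ~min(out, in) scalar coefficients are trainable.
        self.coeff = nn.Parameter(S.clone())
        self.bias = pretrained_linear.bias                   # reuse the pretrained bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the weight as a linear combination of the frozen bases.
        W = self.U @ torch.diag(self.coeff) @ self.Vh
        return F.linear(x, W, self.bias)

# Usage sketch: wrap a pretrained layer and train only `coeff`.
layer = nn.Linear(256, 256)
wrapped = SVDReparamLinear(layer)
print(sum(p.numel() for p in wrapped.parameters() if p.requires_grad))  # far fewer than 256*256
```

Relative to adapter-style methods that add new weight matrices, this reparameterization leaves the original bases untouched and trains only a vector of coefficients per decomposed matrix, which is consistent with the roughly 290-fold reduction in trainable parameters reported in the abstract.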

Authors (5)
  1. Zelin Peng (14 papers)
  2. Zhengqin Xu (9 papers)
  3. Zhilin Zeng (4 papers)
  4. Xiaokang Yang (207 papers)
  5. Wei Shen (181 papers)
Citations (12)
