The paper introduces EfficientSAM (Efficient Segment Anything Model), a technique aimed at lowering the computational cost of the large Transformer models used in vision tasks, such as the Segment Anything Model (SAM). SAM is highly effective across a wide variety of image segmentation tasks, but its sheer size and high computational demands often restrict its deployment in real-world applications.
To make SAM more accessible and practical to use, the researchers build on lightweight Vision Transformer (ViT) image encoders, which retain respectable performance while greatly reducing complexity. The key innovation is a masked image pretraining scheme, referred to as "SAMI", which trains the smaller encoders to reconstruct the features produced by SAM's large image encoder. The SAMI-pretrained encoders are then combined with SAM's mask decoder and fine-tuned on the SA-1B dataset to carry out the segment anything task.
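To make the SAMI idea concrete, below is a minimal, self-contained sketch of this style of pretraining in PyTorch. It assumes a toy lightweight "student" ViT, a frozen SAM-like "teacher" image encoder, and a one-layer decoder standing in for the paper's cross-attention decoder; the module names, sizes, and masking details are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch of SAMI-style pretraining: a lightweight student ViT learns to
# reconstruct the patch features of a frozen SAM-like teacher encoder while
# seeing only a subset of unmasked patches. All names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViT(nn.Module):
    """Toy patch encoder: 16x16 patch embedding followed by a small Transformer."""
    def __init__(self, dim: int, depth: int, patch: int = 16):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=2 * dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        tokens = self.embed(images).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.blocks(tokens)

class SAMIPretrainer(nn.Module):
    """The student sees only unmasked patches and must predict the teacher's
    features for every patch position."""
    def __init__(self, student_dim: int = 192, teacher_dim: int = 256,
                 mask_ratio: float = 0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.student = TinyViT(student_dim, depth=4)           # lightweight encoder
        self.mask_token = nn.Parameter(torch.zeros(1, 1, student_dim))
        dec_layer = nn.TransformerEncoderLayer(student_dim, nhead=4,
                                               dim_feedforward=2 * student_dim,
                                               batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
        self.to_teacher = nn.Linear(student_dim, teacher_dim)  # match teacher width

    def forward(self, images: torch.Tensor, teacher: nn.Module) -> torch.Tensor:
        with torch.no_grad():                                  # teacher stays frozen
            target = teacher(images)                           # (B, N, teacher_dim)

        tokens = self.student.embed(images).flatten(2).transpose(1, 2)
        B, N, D = tokens.shape
        num_keep = int(N * (1.0 - self.mask_ratio))

        # Randomly choose which patches the student is allowed to see.
        order = torch.rand(B, N, device=images.device).argsort(dim=1)
        idx = order[:, :num_keep].unsqueeze(-1).expand(-1, -1, D)
        encoded = self.student.blocks(tokens.gather(1, idx))   # visible patches only

        # Put encoded tokens back in place, fill masked slots with a learned token,
        # decode, and project to the teacher's feature dimension.
        full = self.mask_token.repeat(B, N, 1).scatter(1, idx, encoded)
        pred = self.to_teacher(self.decoder(full))             # (B, N, teacher_dim)

        # Feature-reconstruction loss against the teacher encoder's embeddings.
        return F.mse_loss(pred, target)

# Usage sketch: a second toy encoder stands in for SAM's ViT-H image encoder so
# the example runs end to end.
if __name__ == "__main__":
    teacher = TinyViT(dim=256, depth=2).eval()
    loss = SAMIPretrainer()(torch.randn(2, 3, 224, 224), teacher)
    loss.backward()
```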
The research team conducted extensive evaluations across multiple vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. The results show that SAMI consistently outperforms other masked image pretraining approaches, and that the EfficientSAM models, with their lightweight encoders, achieve significant gains over other fast segment-anything models, offering a particularly favorable accuracy-versus-complexity trade-off.
EfficientSAM models are especially noteworthy because they can segment objects they were never explicitly trained on, a capability often described as zero-shot. On the COCO and LVIS benchmarks, for instance, EfficientSAM delivers superior zero-shot instance segmentation performance compared with commonly used lightweight segment-anything models.
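This zero-shot behavior comes from the prompt-driven segment anything interface rather than from class labels: the image is embedded once, and each prompt (for example, a box from any off-the-shelf detector) is decoded into a mask. The short sketch below illustrates that flow; the `EfficientSamLike` interface is hypothetical, not the released API.

```python
# Conceptual sketch of prompt-driven ("segment anything") inference with a
# lightweight encoder. The EfficientSamLike protocol is hypothetical and only
# illustrates the flow, not the released EfficientSAM API.
from typing import List, Protocol
import torch

class EfficientSamLike(Protocol):
    def embed_image(self, image: torch.Tensor) -> torch.Tensor: ...
    def decode_mask(self, image_embedding: torch.Tensor,
                    box_prompt: torch.Tensor) -> torch.Tensor: ...

def zero_shot_instance_masks(model: EfficientSamLike,
                             image: torch.Tensor,
                             boxes: List[torch.Tensor]) -> List[torch.Tensor]:
    """Embed the image once (the costly step), then decode one mask per box
    prompt, with no category-specific training involved."""
    embedding = model.embed_image(image)
    return [model.decode_mask(embedding, box) for box in boxes]
```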
In summary, EfficientSAM offers a far more computationally efficient alternative to SAM while maintaining high accuracy across a variety of vision tasks, including zero-shot segmentation. The researchers plan to release their code and models to support further development and application of efficient segment-anything models.