DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data (2405.10185v1)
Abstract: Instance segmentation is data-hungry, and as model capacity increases, data scale becomes crucial for improving accuracy. Most instance segmentation datasets today require costly manual annotation, limiting their scale. Models trained on such data are prone to overfitting on the training set, especially for rare categories. While recent works have explored exploiting generative models to create synthetic datasets for data augmentation, these approaches do not efficiently harness the full potential of generative models. To address these issues, we introduce a more efficient strategy for constructing generative datasets for data augmentation, termed DiverGen. First, we explain the role of generative data from the perspective of distribution discrepancy. We investigate the impact of different data on the distribution learned by the model, and argue that generative data can expand the data distribution the model can learn, thus mitigating overfitting. Additionally, we find that the diversity of generative data is crucial for improving model performance, and we enhance it through several strategies: category diversity, prompt diversity, and generative model diversity. With these strategies, we can scale the data to millions of samples while maintaining the trend of performance improvement. On the LVIS dataset, DiverGen significantly outperforms the strong baseline X-Paste, achieving +1.1 box AP and +1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP on rare categories.
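The three diversity strategies named in the abstract (category, prompt, and generative model diversity) amount to fanning each category out over many distinct prompt/model combinations before generation. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual pipeline: the template wording, model names, and the `build_generation_jobs` helper are all placeholders introduced here for clarity.

```python
import itertools
import random

def build_generation_jobs(categories, templates, models, per_category=3, seed=0):
    """Enumerate (category, prompt, model) generation jobs so that each
    category is covered by several distinct prompt/model combinations,
    rather than one fixed prompt and one fixed generator."""
    rng = random.Random(seed)
    jobs = []
    for cat in categories:
        # All prompt/model pairs for this category; shuffle and keep a
        # diverse subset instead of always using the same combination.
        combos = list(itertools.product(templates, models))
        rng.shuffle(combos)
        for template, model in combos[:per_category]:
            jobs.append({
                "category": cat,
                "prompt": template.format(category=cat),
                "model": model,
            })
    return jobs

# Hypothetical inputs: the categories stand in for LVIS rare classes, and
# the model identifiers are illustrative, not a prescribed configuration.
categories = ["axolotl", "gondola"]
templates = [
    "a photo of a {category}",
    "a close-up photo of a single {category}, plain background",
    "a high-resolution image of a {category} in its typical environment",
]
models = ["stable-diffusion", "deepfloyd-if"]

jobs = build_generation_jobs(categories, templates, models)
```

Each job would then be dispatched to the corresponding text-to-image model; scaling to millions of samples is a matter of enlarging the category list and the per-category budget.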
- K-means++: The advantages of careful seeding. In Proc. Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035, 2007.
- End-to-end object detection with transformers. In Proc. Eur. Conf. Comp. Vis. Springer, 2020.
- Integrating geometric control into text-to-image diffusion models for high-quality detection data generation via text prompt. arXiv: Comp. Res. Repository, 2023.
- Masked-attention mask transformer for universal image segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1290–1299, 2022.
- Christiane Fellbaum. WordNet. In Theory and Applications of Ontology: Computer Applications, pages 231–243. Springer, 2010.
- Diverse data augmentation with diffusions for effective test-time prompt tuning. In Proc. IEEE Int. Conf. Comp. Vis., pages 2704–2714, 2023.
- Simple copy-paste is a strong data augmentation method for instance segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2918–2928, 2021.
- LVIS: A dataset for large vocabulary instance segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5356–5364, 2019.
- Mask R-CNN. In Proc. IEEE Int. Conf. Comp. Vis., pages 2961–2969, 2017.
- Towards open world object detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5830–5840, 2021.
- James M Joyce. Kullback-leibler divergence. In International Encyclopedia of Statistical Science, pages 720–722. Springer, 2011.
- Segment anything. In Proc. IEEE Int. Conf. Comp. Vis., pages 4015–4026, 2023.
- BigDatasetGAN: Synthesizing ImageNet with pixel-wise annotations. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 21330–21340, 2022.
- Open-vocabulary object segmentation with diffusion models. In Proc. IEEE Int. Conf. Comp. Vis., pages 7667–7676, 2023.
- Microsoft COCO: Common objects in context. In Proc. Eur. Conf. Comp. Vis., pages 740–755. Springer, 2014.
- Swin transformer: Hierarchical vision transformer using shifted windows. In Proc. IEEE Int. Conf. Comp. Vis., pages 10012–10022, 2021.
- Image segmentation using text and image prompts. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 7086–7096, 2022.
- UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv: Comp. Res. Repository, 2018.
- DINOv2: Learning robust visual features without supervision. Trans. Mach. Learn. Research, 2023.
- U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognition, 106:107404, 2020.
- Learning transferable visual models from natural language supervision. In Proc. Int. Conf. Mach. Learn., pages 8748–8763. PMLR, 2021.
- High-resolution image synthesis with latent diffusion models. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 10684–10695, 2022.
- ImageNet large scale visual recognition challenge. Int. J. Comput. Vision, 115:211–252, 2015.
- DeepFloyd IF, 2023.
- A unified transformer framework for group-based segmentation: Co-segmentation, co-saliency detection and video salient object detection. IEEE Trans. Multimedia, 2023.
- 1st place solution of LVIS challenge 2020: A good box is not a guarantee of a good mask. arXiv: Comp. Res. Repository, 2020.
- DatasetDM: Synthesizing data with perception annotations using diffusion models. Proc. Advances in Neural Inf. Process. Syst., 2023a.
- DiffuMask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. Proc. IEEE Int. Conf. Comp. Vis., 2023b.
- MosaicFusion: Diffusion models as data augmenters for large vocabulary instance segmentation. arXiv: Comp. Res. Repository, 2023.
- FreeMask: Synthetic images with dense annotations make stronger segmentation models. Proc. Advances in Neural Inf. Process. Syst., 2023.
- SelfReformer: Self-refined network with transformer for salient object detection. arXiv: Comp. Res. Repository, 2022.
- Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 15211–15222, 2023.
- DatasetGAN: Efficient labeled data factory with minimal human effort. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 10145–10155, 2021.
- X-Paste: Revisiting scalable copy-paste for instance segmentation using CLIP and StableDiffusion. Proc. Int. Conf. Mach. Learn., 2023.
- Probabilistic two-stage detection. arXiv: Comp. Res. Repository, 2021.
- Detecting twenty-thousand classes using image-level supervision. In Proc. Eur. Conf. Comp. Vis., pages 350–368. Springer, 2022.
Authors: Chengxiang Fan, Muzhi Zhu, Hao Chen, Yang Liu, Weijia Wu, Huaqi Zhang, Chunhua Shen