
Global Knowledge Calibration for Fast Open-Vocabulary Segmentation (2303.09181v2)

Published 16 Mar 2023 in cs.CV

Abstract: Recent advancements in pre-trained vision-language models, such as CLIP, have enabled the segmentation of arbitrary concepts solely from textual inputs, a process commonly referred to as open-vocabulary semantic segmentation (OVS). However, existing OVS techniques confront a fundamental challenge: the trained classifier tends to overfit on the base classes observed during training, resulting in suboptimal generalization performance to unseen classes. To mitigate this issue, recent studies have proposed the use of an additional frozen pre-trained CLIP for classification. Nonetheless, this approach incurs heavy computational overheads as the CLIP vision encoder must be repeatedly forward-passed for each mask, rendering it impractical for real-world applications. To address this challenge, our objective is to develop a fast OVS model that can perform comparably or better without the extra computational burden of the CLIP image encoder during inference. To this end, we propose a core idea of preserving the generalizable representation when fine-tuning on known classes. Specifically, we introduce a text diversification strategy that generates a set of synonyms for each training category, which prevents the learned representation from collapsing onto specific known category names. Additionally, we employ a text-guided knowledge distillation method to preserve the generalizable knowledge of CLIP. Extensive experiments demonstrate that our proposed model achieves robust generalization performance across various datasets. Furthermore, we perform a preliminary exploration of open-vocabulary video segmentation and present a benchmark that can facilitate future open-vocabulary research in the video domain.
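The text diversification strategy can be illustrated with a minimal sketch: during training, the category name fed to the text encoder is replaced by a randomly sampled synonym, so the learned representation does not collapse onto one specific name. The synonym table and function names below are illustrative assumptions; the paper generates synonym sets automatically rather than by hand.

```python
import random

# Hypothetical synonym sets for training (base) categories.
# The paper derives such sets automatically; these entries are examples.
SYNONYMS = {
    "sofa": ["sofa", "couch", "settee"],
    "person": ["person", "human", "pedestrian"],
}

def diversify(category: str, rng: random.Random) -> str:
    """Sample one synonym for a training category.

    Each training step sees a possibly different name for the same
    class, which discourages the text embedding from overfitting to
    a single category string. Unknown categories fall back to their
    original name.
    """
    return rng.choice(SYNONYMS.get(category, [category]))
```

In the full method this sampled name would be embedded by the (frozen) CLIP text encoder to supervise the segmentation classifier; the sketch above only shows the sampling step.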

Authors (11)
  1. Kunyang Han (2 papers)
  2. Yong Liu (721 papers)
  3. Jun Hao Liew (29 papers)
  4. Henghui Ding (87 papers)
  5. Yunchao Wei (151 papers)
  6. Jiajun Liu (61 papers)
  7. Yitong Wang (47 papers)
  8. Yansong Tang (81 papers)
  9. Yujiu Yang (155 papers)
  10. Jiashi Feng (295 papers)
  11. Yao Zhao (272 papers)
Citations (27)