
Open-vocabulary Object Segmentation with Diffusion Models (2301.05221v2)

Published 12 Jan 2023 in cs.CV

Abstract: The goal of this paper is to extract the visual-language correspondence from a pre-trained text-to-image diffusion model, in the form of segmentation map, i.e., simultaneously generating images and segmentation masks for the corresponding visual entities described in the text prompt. We make the following contributions: (i) we pair the existing Stable Diffusion model with a novel grounding module, that can be trained to align the visual and textual embedding space of the diffusion model with only a small number of object categories; (ii) we establish an automatic pipeline for constructing a dataset, that consists of {image, segmentation mask, text prompt} triplets, to train the proposed grounding module; (iii) we evaluate the performance of open-vocabulary grounding on images generated from the text-to-image diffusion model and show that the module can well segment the objects of categories beyond seen ones at training time; (iv) we adopt the augmented diffusion model to build a synthetic semantic segmentation dataset, and show that training a standard segmentation model on such a dataset demonstrates competitive performance on the zero-shot segmentation (ZS3) benchmark, which opens up new opportunities for adopting the powerful diffusion model for discriminative tasks.

Authors (6)
  1. Ziyi Li (14 papers)
  2. Qinye Zhou (3 papers)
  3. Xiaoyun Zhang (35 papers)
  4. Ya Zhang (222 papers)
  5. Yanfeng Wang (211 papers)
  6. Weidi Xie (132 papers)
Citations (53)

Summary

Open-vocabulary object segmentation with diffusion models is a promising area of research that combines the generative power of text-to-image diffusion models with the flexibility of open-vocabulary segmentation. The goal is to generate images and segment the objects they contain from textual descriptions, without being limited to a pre-defined set of categories.

Several noteworthy contributions have emerged in this field:

  1. Text-to-Image Diffusion Models: The core idea is to leverage diffusion models pre-trained on text-to-image generation, such as Stable Diffusion, which already encode a strong visual-language correspondence. By pairing such a model with a lightweight grounding module that aligns its visual and textual embedding spaces, segmentation maps can be extracted for the entities named in the prompt (Li et al., 2023). The generated images and their corresponding masks cover objects well beyond the categories seen during training; a minimal sketch of such a grounding head follows this list.
  2. Panoptic Segmentation: Building on the strengths of text-to-image diffusion models and CLIP's discriminative capabilities, researchers have developed systems like ODISE, which unifies the two to perform open-vocabulary panoptic segmentation and shows significant improvements on benchmarks such as ADE20K (Xu et al., 2023).
  3. Zero-Shot Segmentation: Another advance is zero-shot open-vocabulary segmentation that uses the generative side of diffusion models to sample support images for each textual category, resolving the ambiguity in visual appearance that a category name alone leaves open. The method shows strong performance on a range of open-vocabulary segmentation benchmarks, highlighting its robustness to real-world variability (Karazija et al., 2023).
  4. Synthesis for Discriminative Tasks: One application uses the augmented diffusion model to construct synthetic {image, segmentation mask} datasets for training standard segmentation models. Models trained on such synthetic data reach competitive performance on zero-shot segmentation (ZS3) benchmarks, opening new avenues for applying diffusion models to discriminative tasks (Li et al., 2023); a training sketch on such synthetic data appears at the end of this summary.
  5. Medical and Specific Domain Applications: Domain-specific adaptations like MedSegDiff apply diffusion models to medical image segmentation, demonstrating superior performance across different medical imaging modalities by incorporating dynamic conditional encoding and feature frequency parsing (Wu et al., 2022).
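
To make the grounding module in item 1 concrete, here is a minimal PyTorch sketch of what such a head could look like. The class name, feature dimensions, and cross-attention fusion are illustrative assumptions rather than the paper's exact architecture: per-category text embeddings attend to intermediate diffusion (UNet) features, and each fused query is compared against per-pixel embeddings to produce one mask logit map per category.

```python
import torch
import torch.nn as nn


class GroundingModule(nn.Module):
    """Toy grounding head (illustrative, not the paper's exact design):
    category text embeddings attend to diffusion features, and each fused
    query is dotted with per-pixel embeddings to give a mask logit map
    per category. All dimensions below are assumptions."""

    def __init__(self, visual_dim=1280, text_dim=768, hidden_dim=256, num_heads=8):
        super().__init__()
        self.pixel_proj = nn.Conv2d(visual_dim, hidden_dim, kernel_size=1)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, visual_feats, text_embeds):
        # visual_feats: (B, C_v, H, W) intermediate features from the diffusion UNet
        # text_embeds:  (B, K, C_t)    one text-encoder embedding per category name
        pixels = self.pixel_proj(visual_feats)                 # (B, D, H, W)
        tokens = pixels.flatten(2).transpose(1, 2)             # (B, H*W, D)
        queries = self.text_proj(text_embeds)                  # (B, K, D)
        fused, _ = self.cross_attn(queries, tokens, tokens)    # (B, K, D)
        # per-category mask logits via query-pixel dot product
        return torch.einsum("bkd,bdhw->bkhw", fused, pixels)   # (B, K, H, W)


if __name__ == "__main__":
    feats = torch.randn(2, 1280, 32, 32)   # placeholder UNet features
    texts = torch.randn(2, 3, 768)         # placeholder text embeddings for 3 categories
    masks = GroundingModule()(feats, texts)
    print(masks.shape)                     # torch.Size([2, 3, 32, 32])
```

In the actual pipeline, the visual features would come from the frozen Stable Diffusion UNet and the text embeddings from its text encoder; only the grounding head is trained, using a small number of annotated categories.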

These approaches collectively illustrate the potential of diffusion models for open-vocabulary segmentation. By bridging the gap between textual descriptions and visual entities, they offer flexible, robust solutions for a wide range of applications, from general-purpose object segmentation to specialized medical image segmentation.
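
As a complement to item 4, the sketch below shows the downstream step: training a standard segmentation model on synthetic {image, segmentation mask} pairs. The dataset class is a stand-in that returns random tensors (and the category count is arbitrary) so the example runs end-to-end; in practice it would load the triplets produced by the augmented diffusion model, and the architecture choice (torchvision's DeepLabv3) is an assumption, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 21  # illustrative category count for the synthetic dataset


class SyntheticSegDataset(Dataset):
    """Stand-in for the generated {image, mask, prompt} triplets; random
    tensors are used so the sketch runs end-to-end. In practice this would
    load images and masks saved by the augmented diffusion model."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        image = torch.rand(3, 256, 256)                       # synthetic RGB image
        mask = torch.randint(0, NUM_CLASSES, (256, 256))      # per-pixel category ids
        return image, mask


model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
loader = DataLoader(SyntheticSegDataset(), batch_size=4, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, masks in loader:
    logits = model(images)["out"]        # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.3f}")
```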