
Visual In-Context Prompting (2311.13601v1)

Published 22 Nov 2023 in cs.CV, cs.AI, and cs.LG

Abstract: In-context prompting in LLMs has become a prevalent approach to improve zero-shot capabilities, but this idea is less explored in the vision domain. Existing visual prompting methods focus on referring segmentation to segment the most relevant object, falling short of addressing many generic vision tasks like open-set segmentation and detection. In this paper, we introduce a universal visual in-context prompting framework for both tasks. In particular, we build on top of an encoder-decoder architecture, and develop a versatile prompt encoder to support a variety of prompts like strokes, boxes, and points. We further enhance it to take an arbitrary number of reference image segments as the context. Our extensive explorations show that the proposed visual in-context prompting elicits extraordinary referring and generic segmentation capabilities to refer and detect, yielding competitive performance to close-set in-domain datasets and showing promising results on many open-set segmentation datasets. By joint training on COCO and SA-1B, our model achieves $57.7$ PQ on COCO and $23.2$ PQ on ADE20K. Code will be available at https://github.com/UX-Decoder/DINOv.

Visual In-Context Prompting: An Overview

The paper "Visual In-Context Prompting" introduces a novel framework, DINOv, that expands the capabilities of visual prompting for vision tasks. The framework is designed specifically to tackle both generic and referring segmentation challenges by leveraging in-context visual prompts.

The research builds on the existing foundation of LLMs and adapts the in-context prompting mechanism—highly effective in LLMs—to the visual domain. Traditional text-based visual perception methods rely heavily on textual descriptions, which can be ambiguous or misaligned with real-world visual complexities. DINOv aims to address these limitations by utilizing visual inputs like strokes, boxes, and points, providing a more concrete basis for segmentation tasks.
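To make this concrete, a visual prompt can be thought of as a small structured annotation attached to a reference image. The sketch below is purely illustrative: the type and field names are hypothetical and are not taken from the DINOv codebase.

```python
from dataclasses import dataclass
from typing import Literal

import torch


@dataclass
class VisualPrompt:
    """Hypothetical container for one in-context visual prompt on a reference image."""
    kind: Literal["stroke", "box", "point"]
    data: torch.Tensor  # stroke: (N, 2) polyline; box: (4,) xyxy; point: (2,) xy


# Example: a box prompt drawn around an object of interest in a reference image.
box_prompt = VisualPrompt(kind="box", data=torch.tensor([48.0, 32.0, 210.0, 180.0]))
```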

Methodology

DINOv operates on an encoder-decoder architecture enhanced with a versatile prompt encoder. The framework takes a target image along with reference visual prompts and processes them to produce the corresponding segmentation outputs. The visual prompts guide the model by indicating the regions of interest, which can range from a single object to multiple instances of the same semantic concept.
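The PyTorch-style sketch below illustrates this data flow under simplified assumptions; the module names and interfaces are hypothetical and do not mirror the released DINOv implementation.

```python
import torch
from torch import nn


class VisualInContextSegmenter(nn.Module):
    """Hypothetical sketch of an encoder-decoder model driven by visual prompts."""

    def __init__(self, image_encoder: nn.Module, prompt_encoder: nn.Module,
                 mask_decoder: nn.Module):
        super().__init__()
        self.image_encoder = image_encoder    # backbone producing dense image features
        self.prompt_encoder = prompt_encoder  # encodes strokes/boxes/points on reference images
        self.mask_decoder = mask_decoder      # predicts masks on the target image

    def forward(self, target_image: torch.Tensor,
                reference_images: torch.Tensor,
                reference_prompts: list) -> torch.Tensor:
        # Encode the target image into dense features.
        target_feats = self.image_encoder(target_image)
        # Encode each reference image together with its visual prompt
        # (stroke, box, or point) into prompt embeddings.
        ref_feats = self.image_encoder(reference_images)
        prompt_embeds = self.prompt_encoder(ref_feats, reference_prompts)
        # The decoder attends from the prompt embeddings to the target features
        # and predicts masks for the regions the prompts refer to.
        masks = self.mask_decoder(target_feats, prompt_embeds)
        return masks
```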

Significantly, the research addresses two main types of segmentation tasks: generic segmentation and referring segmentation. For generic segmentation, DINOv considers all instances of a semantic category, whereas referring segmentation focuses on a particular instance as specified by the user-provided visual prompt.
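One minimal way to picture the difference between the two modes is a wrapper that either keeps the single best-matching mask (referring) or every mask above a confidence threshold (generic). This is an illustrative sketch with invented names and a toy scoring rule, not the paper's actual inference procedure.

```python
def segment(model, target_image, ref_images, ref_prompts, mode="generic"):
    """Hypothetical wrapper: 'referring' returns the single best-matching instance,
    'generic' returns every instance matching the prompted semantic concept."""
    # Assume the model returns per-query mask logits of shape (num_queries, H, W).
    masks = model(target_image, ref_images, ref_prompts)
    # Toy per-query confidence score derived from the mask logits.
    scores = masks.flatten(1).sigmoid().mean(dim=1)
    if mode == "referring":
        # Keep only the single most relevant instance for the given prompt.
        return masks[scores.argmax()].unsqueeze(0)
    # Generic mode: keep all instances above a confidence threshold.
    keep = scores > 0.5
    return masks[keep]
```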

Numerical Results and Evaluation

The framework's efficacy is demonstrated through extensive experiments across various datasets, including COCO for in-domain evaluation and ADE20K for out-of-domain evaluation. On COCO, DINOv achieves a Panoptic Quality (PQ) score of 57.7, while on ADE20K it attains a PQ of 23.2. These results place DINOv on par with leading models such as Mask DINO on in-domain tasks and demonstrate promising generalization to open-set tasks.

Moreover, DINOv was tested for segmentation in the wild and object detection across diverse datasets, where it achieved significant improvements over existing frameworks, particularly those using visual prompts.

Implications and Future Directions

The research outlines several key implications for the field of computer vision. By introducing a unified framework that accommodates both semantic and task-specific data, DINOv emphasizes the potential of visual prompts to offer flexibility and adaptability in computational vision tasks. Unlike models relying on large pre-trained text encoders, DINOv manages to achieve competitive open-set segmentation performance using a significantly streamlined approach with visual prompts.

The successful application of visual in-context prompting opens new avenues for future exploration. It presents opportunities for refining segmentation accuracy and expanding adaptive capabilities beyond traditional methods. The paper also hints at the potential for integrating visual and multi-modal prompting techniques in AI models for more comprehensive and context-aware vision systems.

In conclusion, while DINOv establishes a robust foundation for visual prompting in segmentation tasks, further research could explore scaling training data or incorporating text prompts, offering a multi-dimensional approach to vision tasks. This could enhance both theoretical understanding and practical applications, contributing to more versatile and efficient AI systems.

Authors (12)
  1. Feng Li (286 papers)
  2. Qing Jiang (30 papers)
  3. Hao Zhang (947 papers)
  4. Tianhe Ren (25 papers)
  5. Shilong Liu (60 papers)
  6. Xueyan Zou (21 papers)
  7. Huaizhe Xu (6 papers)
  8. Hongyang Li (99 papers)
  9. Chunyuan Li (122 papers)
  10. Jianwei Yang (93 papers)
  11. Lei Zhang (1689 papers)
  12. Jianfeng Gao (344 papers)
Citations (21)

GitHub: https://github.com/UX-Decoder/DINOv