
Grounded Language-Image Pre-training (2112.03857v2)

Published 7 Dec 2021 in cs.CV, cs.AI, cs.CL, cs.LG, and cs.MM

Abstract: This paper presents a grounded language-image pre-training (GLIP) model for learning object-level, language-aware, and semantic-rich visual representations. GLIP unifies object detection and phrase grounding for pre-training. The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data to improve both tasks and bootstrap a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representation semantic-rich. In our experiments, we pre-train GLIP on 27M grounding data, including 3M human-annotated and 24M web-crawled image-text pairs. The learned representations demonstrate strong zero-shot and few-shot transferability to various object-level recognition tasks. 1) When directly evaluated on COCO and LVIS (without seeing any images in COCO during pre-training), GLIP achieves 49.8 AP and 26.9 AP, respectively, surpassing many supervised baselines. 2) After fine-tuned on COCO, GLIP achieves 60.8 AP on val and 61.5 AP on test-dev, surpassing prior SoTA. 3) When transferred to 13 downstream object detection tasks, a 1-shot GLIP rivals with a fully-supervised Dynamic Head. Code is released at https://github.com/microsoft/GLIP.


The paper "Grounded Language-Image Pre-training" introduces GLIP, a novel model tailored for pre-training object-level, language-aware, and rich semantic visual representations. This model unifies object detection and phrase grounding through a groundbreaking approach that leverages massive image-text data, enabling seamless adaptation to downstream tasks with minimal human annotation while achieving state-of-the-art (SoTA) results.

Model and Methodological Approach

GLIP integrates object detection and phrase grounding under a unified framework, which lets the model learn from both kinds of data and improve on both tasks. Object detection is reformulated as phrase grounding: the candidate category names are written into a text prompt, and the model predicts bounding boxes aligned to phrases in that prompt rather than scores over a fixed label set.
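
The reformulation can be sketched as follows. This is an illustrative sketch rather than the released GLIP code: the prompt format (category names joined by periods) follows the paper, while the tensor shapes, feature dimension, and plain dot-product scoring are simplifying assumptions.

```python
import torch

def build_detection_prompt(categories):
    # Detection-as-grounding: the candidate category names become one
    # text prompt, e.g. "person. bicycle. car."
    return ". ".join(categories) + "."

def word_region_alignment(region_features, token_features):
    # Replace fixed-class logits with alignment scores between
    # N region embeddings (N, d) and M token embeddings (M, d).
    return region_features @ token_features.T  # (N, M) alignment logits

prompt = build_detection_prompt(["person", "bicycle", "car"])
regions = torch.randn(100, 256)  # dummy region embeddings
tokens = torch.randn(8, 256)     # dummy token embeddings for the prompt
scores = word_region_alignment(regions, tokens)
print(prompt, scores.shape)      # person. bicycle. car. torch.Size([100, 8])
```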

The model architecture incorporates deep cross-modality fusion layers that align the linguistic and visual modalities early during processing. This early fusion makes the visual features language-aware and enriches the semantics they capture. The model uses a dual-encoder structure, with a vision encoder and a language encoder whose features are fused over several layers, which helps on tasks that require fine-grained image understanding.
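
A minimal sketch of one such fusion step is given below. It assumes a symmetric cross-attention update built from `nn.MultiheadAttention` with a feature dimension of 256; these are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusionLayer(nn.Module):
    # One illustrative deep-fusion step: image features attend to text
    # features and vice versa, then each stream is updated residually.
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_feats, txt_feats):
        # Image queries attend over text keys/values -> language-aware visual features.
        img_update, _ = self.img_to_txt(img_feats, txt_feats, txt_feats)
        # Text queries attend over image keys/values -> image-aware text features.
        txt_update, _ = self.txt_to_img(txt_feats, img_feats, img_feats)
        return img_feats + img_update, txt_feats + txt_update

# Dummy shapes: batch 2, 100 visual tokens, 16 text tokens, feature dim 256.
fusion = CrossModalFusionLayer()
img, txt = torch.randn(2, 100, 256), torch.randn(2, 16, 256)
img_fused, txt_fused = fusion(img, txt)
```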

Data and Experimental Setup

In pre-training, GLIP leverages a large amount of grounded data: 27 million examples in total, comprising 3 million human-annotated examples and 24 million web-crawled image-text pairs. The paper reports results for several GLIP variants that differ in backbone architecture (Swin-Tiny and Swin-Large) and in the composition of the pre-training data.
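
For the web-crawled pairs, grounding boxes are generated in a self-training fashion: a teacher model grounds phrases from each caption, and confident predictions become pseudo labels. The snippet below is a hedged sketch of that idea; `teacher_model`, the returned record fields, and the 0.5 confidence threshold are hypothetical placeholders, not details taken from the paper.

```python
def generate_pseudo_grounding(teacher_model, image, caption, score_thresh=0.5):
    # Hypothetical interface: the teacher grounds the caption in the image and
    # returns one record per predicted box, e.g. {"phrase", "box", "score"}.
    detections = teacher_model(image, caption)
    # Keep only confident boxes as pseudo grounding labels for pre-training.
    return [d for d in detections if d["score"] > score_thresh]
```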

The proposed model was evaluated under various experimental settings, including zero-shot domain transfer and supervised fine-tuning, on benchmarks such as COCO, LVIS, and Flickr30K. These settings underscore GLIP's robustness in learning representations transferable across different tasks and datasets. Notably, the model exhibits impressive zero-shot performance, demonstrating its ability to generalize without task-specific re-training.
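Zero-shot transfer amounts to writing the target dataset's category names into the prompt and running the pre-trained model unchanged. The loop below is a hypothetical illustration; `glip_model`, its return values, and the score threshold are assumptions, not the released inference API.

```python
def zero_shot_detect(glip_model, images, categories, score_thresh=0.5):
    # The target dataset's category names become the grounding prompt;
    # the pre-trained model is run without any fine-tuning.
    prompt = ". ".join(categories) + "."
    results = []
    for image in images:
        # Assumed interface: boxes (K, 4) and per-box phrase scores (K, M) tensors.
        boxes, phrase_scores = glip_model(image, prompt)
        keep = phrase_scores.max(dim=-1).values > score_thresh
        results.append((boxes[keep], phrase_scores[keep]))
    return results
```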

Numerical Results and Performance

Key numerical results highlight the model's efficacy:

  1. Zero-Shot Performance:
    • On COCO, GLIP-T (C) achieved 46.7 AP and GLIP-L obtained 49.8 AP, surpassing many traditional supervised baselines.
    • On LVIS, GLIP-T (C) and GLIP-L obtained 26.0 AP and 26.9 AP respectively, performing better than various supervised baselines.
  2. Supervised Fine-Tuning:
    • After fine-tuning on COCO, GLIP achieved a state-of-the-art AP of 60.8 on val and 61.5 on test-dev.
  3. Data Efficiency:
    • GLIP demonstrated strong data efficiency on 13 downstream object detection tasks: a 1-shot GLIP rivals a fully supervised Dynamic Head, highlighting the model's ability to generalize from minimal labeled data.

Implications and Future Directions

From both a practical and theoretical perspective, GLIP presents a robust framework for advancing visual recognition systems. Practically, its data efficiency and transferability can significantly reduce the annotation effort required for deploying object detection systems in new domains. The unification of detection and grounding enhances model adaptability, making it suitable for a wide range of applications.

Theoretically, the deep cross-modality fusion and self-training using large-scale image-text data underscore the potential of leveraging linguistic information in visual tasks. The successful application of this approach hints at future explorations where more nuanced linguistic cues could be integrated, potentially improving fine-grained visual understanding tasks further.

Proposed future developments include:

  • Scaling up the pre-training datasets beyond the currently used 27 million pairs to explore the limits of the model's scalability.
  • Extending the model to handle even more complex visual tasks, such as video understanding and temporal object grounding.
  • Investigating the model's performance across other low-resource settings and diverse languages to generalize its applicability.

In summary, "Grounded Language-Image Pre-training" presents a significant step forward in unified visual-linguistic models, offering scalable and adaptable models that achieve impressive performance across varying tasks with minimal supervision. This work potentially paves the way for new research directions in AI and computer vision, emphasizing the symbiotic relationship between linguistic and visual data.

Authors (12)
  1. Liunian Harold Li (19 papers)
  2. Pengchuan Zhang (58 papers)
  3. Haotian Zhang (107 papers)
  4. Jianwei Yang (93 papers)
  5. Chunyuan Li (122 papers)
  6. Yiwu Zhong (16 papers)
  7. Lijuan Wang (133 papers)
  8. Lu Yuan (130 papers)
  9. Lei Zhang (1689 papers)
  10. Jenq-Neng Hwang (103 papers)
  11. Kai-Wei Chang (292 papers)
  12. Jianfeng Gao (344 papers)
Citations (865)