
OVLW-DETR: Open-Vocabulary Light-Weighted Detection Transformer (2407.10655v1)

Published 15 Jul 2024 in cs.CV

Abstract: Open-vocabulary object detection focuses on detecting novel categories guided by natural language. In this report, we propose the Open-Vocabulary Light-Weighted Detection Transformer (OVLW-DETR), a deployment-friendly open-vocabulary detector with strong performance and low latency. Building upon OVLW-DETR, we provide an end-to-end training recipe that transfers knowledge from a vision-language model (VLM) to the object detector with simple alignment. We align the detector with the text encoder from the VLM by replacing the fixed classification layer weights in the detector with class-name embeddings extracted from the text encoder. Without an additional fusion module, OVLW-DETR is flexible and deployment friendly, making it easier to implement and modulate, while improving the efficiency of interleaved attention computation. Experimental results demonstrate that the proposed approach is superior to existing real-time open-vocabulary detectors on the standard Zero-Shot LVIS benchmark. Source code and pre-trained models are available at [https://github.com/Atten4Vis/LW-DETR].

An Analysis of OVLW-DETR: Enhancing Open-Vocabulary Object Detection

The paper "OVLW-DETR: Open-Vocabulary Light-Weighted Detection Transformer" presents a novel approach to open-vocabulary object detection (OVOD). Built on the DETR architecture, the proposed system, OVLW-DETR, addresses significant challenges in deploying vision-language models (VLMs) for detecting novel object categories. The key contribution of this work lies in maintaining high detection performance while ensuring low latency, which is critical for real-time applications.

Core Contributions

This research introduces a lightweight architecture based on the pre-existing LW-DETR framework, thus building an efficient real-time open-vocabulary detection system. The significant contributions and methodological advancements can be summarized as follows:

  1. Model Architecture: OVLW-DETR is built upon the Lightweight DETR (LW-DETR) framework, incorporating a vision transformer (ViT) as the encoder and a text encoder from a VLM. By aligning the detector with the VLM text encoder through a straightforward alignment step, it enables open-vocabulary classification while preserving the architectural integrity of LW-DETR.
  2. Training Methodology: Training transfers knowledge from a pre-trained VLM while keeping the text encoder frozen to retain its generalizability. An IoU-aware classification loss (IA-BCE) and parallel weight-sharing decoders ensure stable and efficient training.
  3. Elimination of Fusion Modules: The proposed system eliminates the need for additional fusion modules, commonly required in similar frameworks. This feature not only simplifies the architecture but also enhances inference speed and flexibility.
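The alignment described in point 1 amounts to using L2-normalized class-name text embeddings as the classification weights, so detection logits become (scaled) cosine similarities between region features and text embeddings. The following numpy sketch illustrates the idea; the shapes, the temperature value, and the function name are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def open_vocab_logits(region_feats, text_embeds, temperature=0.01):
    """Classify detector region features against class-name text embeddings.

    region_feats: (num_queries, d) features from the detection head.
    text_embeds:  (num_classes, d) frozen VLM text-encoder embeddings,
                  one per class name; they replace the fixed classifier weights.
    Returns (num_queries, num_classes) logits.
    """
    # L2-normalize both sides so each dot product is a cosine similarity.
    r = region_feats / np.linalg.norm(region_feats, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    return (r @ t.T) / temperature

# Swapping in a new vocabulary at inference only requires new text embeddings:
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))   # 4 queries, 16-dim features (toy values)
names = rng.normal(size=(3, 16))   # embeddings for 3 class names (toy values)
logits = open_vocab_logits(feats, names)
print(logits.shape)  # (4, 3)
```

Because the classifier is just a matrix of text embeddings, changing the detectable categories is a matter of re-encoding a new list of class names, with no retraining of the detector.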
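The IA-BCE loss mentioned in point 2 replaces the hard classification target for a matched prediction with a soft target that blends the predicted score and the box's IoU with its ground truth. One common formulation from related detectors is sketched below; the exponent `alpha` and the exact target form are assumptions, not taken from this paper:

```python
import math

def ia_bce(score, iou, alpha=0.25, eps=1e-8):
    """IoU-aware BCE for a positive (matched) prediction.

    The usual hard target of 1 is replaced by a soft target mixing the
    predicted score s and the IoU u: t = s**alpha * u**(1 - alpha).
    """
    t = (score ** alpha) * (iou ** (1.0 - alpha))
    # Standard binary cross-entropy against the soft target.
    return -(t * math.log(score + eps) + (1.0 - t) * math.log(1.0 - score + eps))

# A confident, well-localized prediction incurs lower loss than a
# confident but poorly localized one:
good = ia_bce(score=0.9, iou=0.9)
bad = ia_bce(score=0.9, iou=0.3)
```

Here `good < bad`: the loss couples classification confidence to localization quality, which is what makes the training signal IoU-aware.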

Strong Numerical Results

Quantitative results validate the efficacy of the proposed method. OVLW-DETR demonstrates strong results on the Zero-Shot LVIS benchmark. Specifically, it surpasses previous state-of-the-art real-time detection methods, such as YOLO-World variants, on both average precision (AP) and latency. The paper reports that the OVLW-DETR-L variant achieves an AP of 33.5 with minimal latency, showcasing substantial improvements in both detection accuracy and computational efficiency.

Implications and Future Directions

From a practical perspective, OVLW-DETR paves the way for efficient and scalable OVOD implementations in real-time systems. The proposed model aligns well with the industry's demand for low-latency, high-accuracy detection in dynamic environments. Furthermore, the streamlined architecture, free of complex fusion modules, presents an attractive option for deploying deep learning models in resource-constrained settings.

Theoretically, the framework of OVLW-DETR suggests a promising avenue for further integration between VLM capabilities and object detection models. By demonstrating successful knowledge transfer using a text encoder from a VLM, this paper opens avenues for exploring other lightweight VLM integrations with similar architectures.

Speculating on future developments, there is potential to further enhance the model's generalization by refining the alignment technique or incorporating adaptive learning paradigms. Moreover, extending this approach to encompass a broader range of object categories or deploying the model in various domain-specific applications might yield intriguing insights and advancements.

In conclusion, "OVLW-DETR: Open-Vocabulary Light-Weighted Detection Transformer" is a significant contribution to the field of computer vision, providing a robust blueprint for integrating VLM capabilities into real-time object detection with minimal architectural complexity and latency.

Authors (9)
  1. Yu Wang (939 papers)
  2. Xiangbo Su (5 papers)
  3. Qiang Chen (98 papers)
  4. Xinyu Zhang (296 papers)
  5. Teng Xi (11 papers)
  6. Kun Yao (32 papers)
  7. Errui Ding (156 papers)
  8. Gang Zhang (139 papers)
  9. Jingdong Wang (236 papers)