LaMI-DETR: Open-Vocabulary Detection with Language Model Instruction (2407.11335v2)
Abstract: Existing methods enhance open-vocabulary object detection by leveraging the robust open-vocabulary recognition capabilities of Vision-Language Models (VLMs), such as CLIP. However, two main challenges emerge: (1) a deficiency in concept representation, where the category names in CLIP's text space lack textual and visual knowledge; (2) an overfitting tendency towards base categories, with the open-vocabulary knowledge biased towards base categories during the transfer from VLMs to detectors. To address these challenges, we propose the Language Model Instruction (LaMI) strategy, which leverages the relationships between visual concepts and applies them within a simple yet effective DETR-like detector, termed LaMI-DETR. LaMI utilizes GPT to construct visual concepts and employs T5 to investigate visual similarities across categories. These inter-category relationships refine concept representation and avoid overfitting to base categories. Comprehensive experiments validate our approach's superior performance over existing methods in the same rigorous setting, without reliance on external training resources. LaMI-DETR achieves a rare box AP of 43.4 on OV-LVIS, surpassing the previous best by 7.8 rare box AP.
- Penghui Du (6 papers)
- Yu Wang (939 papers)
- Yifan Sun (183 papers)
- Luting Wang (5 papers)
- Yue Liao (35 papers)
- Gang Zhang (139 papers)
- Errui Ding (156 papers)
- Yan Wang (733 papers)
- Jingdong Wang (236 papers)
- Si Liu (130 papers)
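
The abstract describes using GPT-generated visual descriptions and a T5 encoder to measure visual similarity between categories. The snippet below is a minimal sketch of that general idea, not the authors' implementation: it embeds hypothetical description texts with a T5 encoder (here `t5-base`, with mean pooling — both illustrative assumptions) and compares categories by cosine similarity.

```python
# Minimal sketch (assumptions: t5-base checkpoint, mean pooling,
# hypothetical GPT-style visual descriptions) of comparing categories
# by the similarity of their visual-description embeddings.
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-base")
encoder = T5EncoderModel.from_pretrained("t5-base")

# Hypothetical GPT-generated visual descriptions for two categories.
descriptions = {
    "sea lion": "a large marine mammal with a sleek brown body and long flippers",
    "seal": "a marine mammal with a rounded grey spotted body and short front flippers",
}

def embed(text: str) -> torch.Tensor:
    """Mean-pool the T5 encoder's last hidden state into a single vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

names = list(descriptions)
vecs = torch.stack([embed(descriptions[n]) for n in names])
vecs = torch.nn.functional.normalize(vecs, dim=-1)
similarity = vecs @ vecs.T  # pairwise cosine similarity between categories
print(names)
print(similarity)
```

In the paper's framing, such inter-category similarities are what allow visually confusable concepts to be grouped and treated carefully during training; the pooling and model choices above are placeholders for whatever the authors actually use.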