Three ways to improve feature alignment for open vocabulary detection (2303.13518v1)

Published 23 Mar 2023 in cs.CV, cs.AI, and cs.LG

Abstract: The core problem in zero-shot open vocabulary detection is how to align visual and text features, so that the detector performs well on unseen classes. Previous approaches train the feature pyramid and detection head from scratch, which breaks the vision-text feature alignment established during pretraining, and struggles to prevent the language model from forgetting unseen classes. We propose three methods to alleviate these issues. Firstly, a simple scheme is used to augment the text embeddings which prevents overfitting to a small number of classes seen during training, while simultaneously saving memory and computation. Secondly, the feature pyramid network and the detection head are modified to include trainable gated shortcuts, which encourages vision-text feature alignment and guarantees it at the start of detection training. Finally, a self-training approach is used to leverage a larger corpus of image-text pairs, thus improving detection performance on classes with no human-annotated bounding boxes. Our three methods are evaluated on the zero-shot version of the LVIS benchmark, each of them showing clear and significant benefits. Our final network achieves the new state-of-the-art on the mAP-all metric and demonstrates competitive performance for mAP-rare, as well as superior transfer to COCO and Objects365.
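The "trainable gated shortcuts" mentioned in the abstract can be illustrated with a minimal sketch. The idea, as described, is that the modified feature pyramid and detection head behave as the identity at the start of detection training, so the pretrained vision-text alignment is guaranteed initially and new capacity is blended in only as a learned gate opens. The exact gating function and branch architecture below are assumptions for illustration (a tanh-gated residual), not the paper's precise design:

```python
import numpy as np

def gated_shortcut(x, new_branch, gate):
    """Gated residual shortcut: output = x + tanh(gate) * new_branch(x).

    With the scalar gate initialised to 0, tanh(0) = 0 and the block
    reduces to the identity, so at the start of detection training the
    pretrained features pass through unchanged. As the gate is learned,
    the new branch's contribution is gradually mixed in.
    """
    return x + np.tanh(gate) * new_branch(x)

# Hypothetical new branch standing in for an added FPN / head layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
branch = lambda x: x @ W

x = rng.standard_normal((2, 4))
out_init = gated_shortcut(x, branch, gate=0.0)
assert np.allclose(out_init, x)  # identity at initialisation
```

With `gate=0.0` the output equals the input exactly, which is the alignment guarantee the abstract refers to; a nonzero gate lets the branch contribute.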

Authors (6)
  1. Alex Andonian (16 papers)
  2. Arthur Mensch (26 papers)
  3. Jean-Baptiste Alayrac (38 papers)
  4. Andrew Zisserman (248 papers)
  5. Relja Arandjelović (18 papers)
  6. Olivier J. Hénaff (14 papers)
Citations (18)
