SiRi: A Simple Selective Retraining Mechanism for Transformer-based Visual Grounding (2207.13325v1)

Published 27 Jul 2022 in cs.CV

Abstract: In this paper, we investigate how to achieve better visual grounding with modern vision-language transformers, and propose a simple yet powerful Selective Retraining (SiRi) mechanism for this challenging task. In particular, SiRi conveys a significant principle to visual grounding research: a better-initialized vision-language encoder helps the model converge to a better local minimum, improving performance accordingly. Concretely, we continually update the parameters of the encoder as training goes on, while periodically re-initializing the rest of the parameters to compel the model to be better optimized on top of the enhanced encoder. SiRi significantly outperforms previous approaches on three popular benchmarks. Specifically, our method achieves 83.04% Top-1 accuracy on RefCOCO+ testA, outperforming state-of-the-art approaches trained from scratch by more than 10.21%. Additionally, we show that SiRi performs surprisingly well even with limited training data. We also extend it to other transformer-based visual grounding models and other vision-language tasks to verify its validity.
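To make the retraining schedule described above concrete, the sketch below shows one way the periodic selective re-initialization could look in PyTorch. The `encoder`, `decoder`, and `head` submodule names, the `reinit_every` period, the loss, and the data format are illustrative assumptions for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

def reinit_module(module: nn.Module) -> None:
    """Re-initialize every submodule that defines reset_parameters()."""
    for m in module.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()

def train_with_siri(model, train_loader, num_epochs=90, reinit_every=30, lr=1e-4):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    criterion = nn.SmoothL1Loss()  # placeholder grounding loss on box coordinates

    for epoch in range(num_epochs):
        # SiRi idea: keep the continually-trained encoder, but periodically
        # re-initialize the remaining parameters so the model is re-optimized
        # on top of the progressively better encoder.
        if epoch > 0 and epoch % reinit_every == 0:
            reinit_module(model.decoder)  # hypothetical submodule name
            reinit_module(model.head)     # hypothetical submodule name
            # Reset optimizer state so the re-initialized parameters
            # start from fresh moment estimates.
            optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

        for images, texts, boxes in train_loader:
            pred = model(images, texts)
            loss = criterion(pred, boxes)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Re-creating the optimizer after each re-initialization is one reasonable choice here, since stale Adam moment estimates for the freshly reset parameters would otherwise bias their first updates.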

Authors (8)
  1. Mengxue Qu (7 papers)
  2. Yu Wu (196 papers)
  3. Wu Liu (56 papers)
  4. Qiqi Gong (1 paper)
  5. Xiaodan Liang (318 papers)
  6. Olga Russakovsky (62 papers)
  7. Yao Zhao (272 papers)
  8. Yunchao Wei (151 papers)
Citations (20)