
MixBCT: Towards Self-Adapting Backward-Compatible Training (2308.06948v2)

Published 14 Aug 2023 in cs.CV

Abstract: Backward-compatible training circumvents the need for expensive updates to the old gallery database when deploying an advanced new model in the retrieval system. Previous methods achieved backward compatibility by aligning prototypes of the new model with the old one, yet they often overlooked the distribution of old features, limiting their effectiveness when the low quality of the old model results in weak feature discriminability. Instance-based methods like L2 regression take the distribution of old features into account but impose strong constraints on the performance of the new model itself. In this paper, we propose MixBCT, a simple yet highly effective backward-compatible training method that serves as a unified framework for old models of varying quality. We construct a single loss function applied to mixed old and new features to facilitate backward-compatible training, which adaptively adjusts the constraint domain for new features based on the distribution of old features. We conducted extensive experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C to verify the effectiveness of our method. The experimental results clearly demonstrate its superiority over previous methods. Code is available at https://github.com/yuleung/MixBCT.
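The abstract's central mechanism, a single classification loss applied to a mixture of old and new features, can be sketched roughly as follows. This is a minimal PyTorch sketch based only on the abstract: the per-sample convex interpolation and the cosine-softmax loss are illustrative assumptions, not the authors' exact recipe, which lives in the linked repository.

```python
import torch
import torch.nn.functional as F

def mixbct_style_loss(new_feats, old_feats, labels, classifier_weight, s=64.0):
    """Hedged sketch of a backward-compatible training loss in the spirit
    of the abstract: one loss over mixed old/new features, so the
    constraint on new features adapts to the old feature distribution.
    The mixing rule and loss form here are assumptions.
    """
    # Randomly interpolate between (normalized) old and new features
    # of the same samples -- the assumed "mixing" step.
    lam = torch.rand(new_feats.size(0), 1, device=new_feats.device)
    mixed = lam * F.normalize(old_feats, dim=1) \
        + (1 - lam) * F.normalize(new_feats, dim=1)
    mixed = F.normalize(mixed, dim=1)

    # A single shared cosine-softmax classifier scores the mixed features;
    # optimizing it pulls new features toward the old features' class regions.
    logits = s * mixed @ F.normalize(classifier_weight, dim=1).t()
    return F.cross_entropy(logits, labels)
```

Because the loss only ever sees mixed features, gradients on the new model are shaped by where the old features actually lie, which matches the abstract's claim that the constraint domain adapts to the old feature distribution rather than to fixed prototypes.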

Authors (7)
  1. Yu Liang (57 papers)
  2. Shiliang Zhang (132 papers)
  3. Yaowei Wang (151 papers)
  4. Sheng Xiao (17 papers)
  5. Xiaoyu Wang (200 papers)
  6. Yufeng Zhang (67 papers)
  7. Rong Xiao (44 papers)
