One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective (2109.14449v1)

Published 29 Sep 2021 in cs.CV and cs.LG

Abstract: A deep hashing model typically has two main learning objectives: to make the learned binary hash codes discriminative and to minimize a quantization error. With further constraints such as bit balance and code orthogonality, it is not uncommon for existing models to employ a large number (>4) of losses. This leads to difficulties in model training and subsequently impedes their effectiveness. In this work, we propose a novel deep hashing model with only a single learning objective. Specifically, we show that maximizing the cosine similarity between the continuous codes and their corresponding binary orthogonal codes can ensure both hash code discriminativeness and quantization error minimization. Further, with this learning objective, code balancing can be achieved by simply using a Batch Normalization (BN) layer, and multi-label classification is also straightforward with label smoothing. The result is a one-loss deep hashing model that removes all the hassle of tuning the weights of various losses. Importantly, extensive experiments show that our model is highly effective, outperforming the state-of-the-art multi-loss hashing models on three large-scale instance retrieval benchmarks, often by significant margins. Code is available at https://github.com/kamwoh/orthohash
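The core idea in the abstract — a single loss that maximizes cosine similarity between a continuous code and a binary target code assigned to its class — can be illustrated with a minimal NumPy sketch. This is an assumption-based illustration of the general technique, not the authors' implementation (see the linked OrthoHash repository for that); the function names and the random ±1 target construction here are hypothetical simplifications.

```python
import numpy as np

def make_binary_targets(num_classes, nbits, seed=0):
    # One ±1 target code per class. Random ±1 codes are only
    # near-orthogonal in expectation; the paper uses properly
    # orthogonal binary codes (e.g. Hadamard-style constructions).
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(num_classes, nbits))

def one_loss(continuous_codes, labels, targets):
    # Single learning objective: maximize the cosine similarity between
    # each continuous code and its class's binary target code.
    # Loss = mean(1 - cos_sim); it is 0 when each code points exactly
    # along its target, which also drives codes toward binary values,
    # folding quantization-error minimization into the same objective.
    t = targets[labels]                                   # (B, nbits)
    num = np.sum(continuous_codes * t, axis=1)
    den = (np.linalg.norm(continuous_codes, axis=1)
           * np.linalg.norm(t, axis=1))
    cos_sim = num / den
    return float(np.mean(1.0 - cos_sim))

targets = make_binary_targets(num_classes=4, nbits=16)
# Codes already aligned with their class targets incur zero loss,
# even if scaled, since cosine similarity ignores magnitude.
codes = 2.0 * targets[[0, 1]]
print(one_loss(codes, np.array([0, 1]), targets))  # → 0.0
```

Because cosine similarity is scale-invariant, the network is pushed to match the *direction* of a binary code, which is what allows bit balance (via BN) and quantization to be handled without extra loss terms.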

Authors (6)
  1. Jiun Tian Hoe (4 papers)
  2. Kam Woh Ng (15 papers)
  3. Tianyu Zhang (111 papers)
  4. Chee Seng Chan (50 papers)
  5. Yi-Zhe Song (120 papers)
  6. Tao Xiang (324 papers)
Citations (98)
