
AdaSample: Adaptive Sampling of Hard Positives for Descriptor Learning (1911.12110v1)

Published 27 Nov 2019 in cs.CV

Abstract: Triplet loss has been employed in a wide range of computer vision tasks, including local descriptor learning. The effectiveness of the triplet loss relies heavily on triplet selection, in which a common practice is to first sample intra-class patches (positives) from the dataset for batch construction and then mine in-batch negatives to form triplets. To collect highly informative triplets, researchers mostly focus on mining hard negatives in the second stage, while paying relatively little attention to constructing informative batches. To alleviate this issue, we propose AdaSample, an adaptive online batch sampler. Specifically, hard positives are sampled based on their informativeness. In this way, we formulate a hardness-aware positive mining pipeline within a novel maximum loss minimization training protocol. The efficacy of the proposed method is evaluated on several standard benchmarks, where it demonstrates a significant and consistent performance gain on top of existing strong baselines.
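The pipeline the abstract describes can be illustrated with a generic sketch: compute a triplet margin loss with in-batch hardest-negative mining, then reuse each sample's loss as an "informativeness" score that biases which positives are drawn next. This is not the paper's actual AdaSample formulation or its maximum loss minimization protocol; the function names, the softmax weighting, and all parameters below are illustrative assumptions.

```python
import numpy as np

def pairwise_dist(a, b):
    # Euclidean distance matrix between rows of a and rows of b.
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1) + 1e-12)

def hardest_negative_triplet_loss(anchors, positives, margin=1.0):
    """Triplet margin loss with in-batch hardest-negative mining.

    anchors[i] and positives[i] are descriptors of the same patch class;
    every positives[j], j != i, serves as a candidate negative for anchors[i].
    """
    d = pairwise_dist(anchors, positives)   # (B, B) distance matrix
    pos = np.diag(d)                        # distances of matching pairs
    neg = d + np.eye(len(d)) * 1e6          # mask out the diagonal (positives)
    hardest_neg = neg.min(axis=1)           # closest non-matching descriptor
    return np.maximum(0.0, margin + pos - hardest_neg)

def informativeness_weights(per_sample_loss, temperature=1.0):
    # Softmax over per-sample losses: harder positives get sampled more often.
    # (Illustrative choice of weighting, not the paper's exact scheme.)
    z = (per_sample_loss - per_sample_loss.max()) / temperature
    w = np.exp(z)
    return w / w.sum()

# Toy L2-normalized descriptors standing in for learned patch embeddings.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 128))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)
positives = anchors + 0.1 * rng.normal(size=(8, 128))
positives /= np.linalg.norm(positives, axis=1, keepdims=True)

losses = hardest_negative_triplet_loss(anchors, positives)
weights = informativeness_weights(losses)
# Bias the next batch toward currently hard positives.
next_batch_idx = rng.choice(len(losses), size=4, replace=False, p=weights)
```

The key design point the abstract argues for: negative mining alone only exploits hardness *within* an already-drawn batch, whereas weighting the positive-sampling step by loss makes the batches themselves more informative.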

Authors (6)
  1. Xin-Yu Zhang (12 papers)
  2. Le Zhang (180 papers)
  3. Zao-Yi Zheng (1 paper)
  4. Yun Liu (213 papers)
  5. Jia-Wang Bian (22 papers)
  6. Ming-Ming Cheng (185 papers)
Citations (6)
