
Adaptive Similarity Bootstrapping for Self-Distillation based Representation Learning (2303.13606v2)

Published 23 Mar 2023 in cs.CV

Abstract: Most self-supervised methods for representation learning leverage a cross-view consistency objective, i.e., they maximize the representation similarity of a given image's augmented views. The recent NNCLR method goes beyond the cross-view paradigm and uses positive pairs from different images, obtained via nearest neighbor bootstrapping, in a contrastive setting. We empirically show that, as opposed to the contrastive learning setting, which relies on negative samples, incorporating nearest neighbor bootstrapping in a self-distillation scheme can lead to a performance drop or even collapse. We scrutinize the reason for this unexpected behavior and provide a solution. We propose to adaptively bootstrap neighbors based on the estimated quality of the latent space. We report consistent improvements over the naive bootstrapping approach and the original baselines. Our approach leads to performance improvements for various self-distillation method/backbone combinations and standard downstream tasks. Our code is publicly available at https://github.com/tileb1/AdaSim.
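To make the idea concrete, below is a minimal, illustrative PyTorch sketch of nearest-neighbor bootstrapping with an adaptive gate, as suggested by the abstract. It is not the authors' implementation (see the linked repository for that); all names (`support_queue`, `quality_threshold`, `adaptive_nn_bootstrap_loss`) and the specific gating rule are assumptions made here for illustration.

```python
import torch
import torch.nn.functional as F

def adaptive_nn_bootstrap_loss(student_out, teacher_out, support_queue,
                               quality_threshold=0.5):
    """Self-distillation loss where the teacher target is replaced by its
    nearest neighbor in a support queue, but only when that neighbor is
    similar enough (used here as a proxy for latent-space quality).

    student_out:   (B, D) student embeddings of one augmented view
    teacher_out:   (B, D) teacher embeddings of the other view
    support_queue: (K, D) past teacher embeddings
    """
    # Normalize embeddings so dot products are cosine similarities.
    s = F.normalize(student_out, dim=-1)
    t = F.normalize(teacher_out, dim=-1)
    q = F.normalize(support_queue, dim=-1)

    # Nearest neighbor of each teacher embedding in the queue.
    sim = t @ q.T                        # (B, K) similarities
    nn_sim, nn_idx = sim.max(dim=1)      # (B,) best match per sample
    nn_target = q[nn_idx]                # (B, D) bootstrapped targets

    # Adaptive bootstrapping: fall back to the plain cross-view target
    # when the neighbor similarity is low (poorly estimated latent space).
    use_nn = (nn_sim > quality_threshold).float().unsqueeze(-1)
    target = use_nn * nn_target + (1.0 - use_nn) * t

    # Negative cosine similarity, as in BYOL-style self-distillation;
    # the target is detached so gradients flow only through the student.
    return -(s * target.detach()).sum(dim=-1).mean()
```

The gate is the key design point the abstract motivates: always bootstrapping neighbors in a self-distillation scheme can collapse, so the sketch only substitutes the neighbor when the retrieval looks reliable and otherwise keeps the standard cross-view target.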

Authors (5)
  1. Tim Lebailly (8 papers)
  2. Thomas Stegmüller (6 papers)
  3. Behzad Bozorgtabar (36 papers)
  4. Jean-Philippe Thiran (86 papers)
  5. Tinne Tuytelaars (150 papers)
Citations (2)
