
Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels (2203.16038v2)

Published 30 Mar 2022 in cs.CV

Abstract: Establishing dense correspondences across semantically similar images remains a challenging task due to significant intra-class variations and background clutter. Traditionally, supervised learning was used to train the models, which required a tremendous amount of manually labeled data, while some methods adopted self-supervised or weakly-supervised learning to mitigate the reliance on labeled data, but with limited performance. In this paper, we present a simple but effective solution for semantic correspondence that trains the network in a semi-supervised manner, supplementing a few ground-truth correspondences with a large number of confident correspondences used as pseudo-labels, called SemiMatch. Specifically, our framework generates pseudo-labels from the model's own prediction between the source and a weakly-augmented target, and then uses these pseudo-labels to supervise the model between the source and a strongly-augmented target, which improves the robustness of the model. We also present a novel confidence measure for pseudo-labels and data augmentation tailored for semantic correspondence. In experiments, SemiMatch achieves state-of-the-art performance on various benchmarks, surpassing prior work on PF-Willow by a large margin.
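The core loop described in the abstract follows the weak/strong consistency pattern: predict correspondences on a weakly-augmented target, keep only the confident matches as pseudo-labels, and use them to supervise the prediction on a strongly-augmented target. The following is a minimal numpy sketch of that idea; the feature dimensions, the cosine-similarity correlation, the noise-based stand-ins for weak/strong augmentation, and the confidence threshold `tau` are all illustrative assumptions, not the paper's actual architecture or confidence measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(src_feat, tgt_feat):
    # Cosine-similarity correlation between flattened feature maps:
    # entry (i, j) scores how well source position i matches target position j.
    s = src_feat / np.linalg.norm(src_feat, axis=1, keepdims=True)
    t = tgt_feat / np.linalg.norm(tgt_feat, axis=1, keepdims=True)
    return s @ t.T  # shape (N_src, N_tgt)

def pseudo_labels(corr, tau):
    # For each source position, take the best-matching target position,
    # but keep it only if its peak similarity clears the confidence threshold.
    idx = corr.argmax(axis=1)
    mask = corr.max(axis=1) > tau
    return idx, mask

# Toy features: 16 source positions with 8-dim descriptors (illustrative sizes).
src = rng.normal(size=(16, 8))
weak_tgt = src + 0.05 * rng.normal(size=(16, 8))   # "weak augmentation": small perturbation
strong_tgt = src + 0.5 * rng.normal(size=(16, 8))  # "strong augmentation": large perturbation

# 1) Predict on the weakly-augmented target and extract confident pseudo-labels.
labels, mask = pseudo_labels(correlation(src, weak_tgt), tau=0.9)

# 2) Supervise the prediction on the strongly-augmented target with those
#    pseudo-labels via a cross-entropy over the correlation rows.
corr_strong = correlation(src, strong_tgt)
log_p = corr_strong - np.log(np.exp(corr_strong).sum(axis=1, keepdims=True))  # row-wise log-softmax
loss = -(log_p[np.arange(16), labels] * mask).sum() / max(mask.sum(), 1)
print(float(loss))
```

In a real training loop this loss would be backpropagated through the feature extractor together with the supervised loss on the few ground-truth correspondences; the sketch only shows how the pseudo-label selection and the weak-to-strong supervision fit together.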

Authors (7)
  1. Jiwon Kim (50 papers)
  2. Kwangrok Ryoo (8 papers)
  3. Junyoung Seo (14 papers)
  4. Gyuseong Lee (11 papers)
  5. Daehwan Kim (9 papers)
  6. Hansang Cho (8 papers)
  7. Seungryong Kim (103 papers)
Citations (20)
