Self-supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning (2106.04921v1)

Published 9 Jun 2021 in cs.CV

Abstract: Traditional self-supervised learning requires CNNs to use external pretext tasks (i.e., image- or video-based tasks) to encode high-level semantic visual representations. In this paper, we show that feature transformations within CNNs can also be regarded as supervisory signals for constructing a self-supervised task, which we call the \emph{internal pretext task}, and that such a task can be applied to enhance supervised learning. Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task to identify the discarded channels. CNNs are trained to predict joint labels generated by combining the self-supervised labels with the original labels. In doing so, we let CNNs know which channels are missing while classifying, in the hope of mining richer feature information. Extensive experiments show that our approach is effective across various models and datasets, incurs only negligible computational overhead, and is compatible with other methods for further gains.
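
The abstract only sketches the training recipe, so the snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation. The backbone (torchvision ResNet-18), the contiguous channel-group discarding scheme, and the class-times-group joint-label construction are assumptions made for illustration.

```python
# Minimal sketch of an "internal pretext task": zero out one group of channels
# in an internal feature map and train the network to predict a joint
# (original class, discarded group) label. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class InternalPretextNet(nn.Module):
    def __init__(self, num_classes: int, num_groups: int = 4):
        super().__init__()
        self.num_groups = num_groups
        backbone = resnet18(weights=None)
        # Split the backbone so we can intervene on an internal feature map.
        self.stem = nn.Sequential(*list(backbone.children())[:-3])    # conv1 ... layer3
        self.head = nn.Sequential(*list(backbone.children())[-3:-1])  # layer4 + avgpool
        # Joint classifier over (original label, self-supervised label) pairs.
        self.fc = nn.Linear(backbone.fc.in_features, num_classes * num_groups)

    def forward(self, x, group_idx):
        feat = self.stem(x)                      # internal feature map (B, C, H, W)
        B, C, _, _ = feat.shape
        group_size = C // self.num_groups
        mask = torch.ones_like(feat)
        for b in range(B):
            g = int(group_idx[b])
            # Discard (zero out) one contiguous group of channels for this sample.
            mask[b, g * group_size:(g + 1) * group_size] = 0
        out = self.head(feat * mask).flatten(1)
        return self.fc(out)                      # logits over the joint label space


def joint_loss(model, images, labels, num_groups=4):
    """Sample a channel group to discard, build the joint label, and
    compute cross-entropy over the (class x group) label space."""
    group_idx = torch.randint(num_groups, (images.size(0),), device=images.device)
    logits = model(images, group_idx)
    joint_labels = labels * num_groups + group_idx  # combine original + pretext label
    return F.cross_entropy(logits, joint_labels)
```

At test time, one would recover class scores by aggregating the joint logits over the group dimension (e.g., reshaping to (num_classes, num_groups) and averaging); this aggregation step is likewise an assumption following the usual joint-label setup rather than a detail stated in the abstract.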

Authors (5)
  1. Yuhang Yang (34 papers)
  2. Zilin Ding (4 papers)
  3. Xuan Cheng (18 papers)
  4. Xiaomin Wang (24 papers)
  5. Ming Liu (421 papers)
Citations (1)
