Self-Supervised Visual Representation Learning Using Lightweight Architectures (2110.11160v1)

Published 21 Oct 2021 in cs.LG and cs.CV

Abstract: In self-supervised learning, a model is trained to solve a pretext task using a dataset whose annotations are created by a machine. The objective is to transfer the trained weights to a downstream task in the target domain. We critically examine the most notable pretext tasks for extracting features from image data, and we conduct experiments on resource-constrained networks, which enable faster experimentation and deployment. We study the performance of various self-supervised techniques while keeping all other parameters uniform. We examine the patterns that emerge when varying the model type, size, and amount of pre-training of the backbone, and we establish a baseline for future research to compare against. We also conduct comprehensive studies to understand the quality of the representations learned by different architectures.
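
As a concrete illustration of the pretext-task setup described in the abstract, the sketch below pre-trains a lightweight backbone on rotation prediction, one of the notable pretext tasks in the literature, using machine-generated labels. The abstract does not name specific tasks or architectures, so the choice of ResNet-18 and a 4-way rotation head is an assumption for illustration only, as is the generic transfer step noted at the end.

```python
import torch
import torch.nn as nn
import torchvision

# Illustrative sketch only: the paper's abstract does not specify which pretext
# task or backbone is used, so rotation prediction with ResNet-18 is an assumption.

# Lightweight backbone; replace the classification layer with a 4-way head that
# predicts which rotation (0, 90, 180, 270 degrees) was applied to the input.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)

optimizer = torch.optim.SGD(backbone.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def rotate_batch(images):
    """Create machine-generated labels by rotating each image by k*90 degrees."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

def pretext_step(images):
    """One pre-training step on a batch of unlabeled images of shape [B, 3, H, W]."""
    inputs, targets = rotate_batch(images)
    logits = backbone(inputs)
    loss = criterion(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After pre-training, the learned weights (minus the pretext head) would be
# transferred to a downstream task, e.g. by reusing the backbone as a feature
# extractor and fitting a new classifier on labeled data from the target domain.
```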

Authors (4)
  1. Prathamesh Sonawane (2 papers)
  2. Sparsh Drolia (1 paper)
  3. Saqib Shamsi (5 papers)
  4. Bhargav Jain (1 paper)
Citations (1)
