
Dynamic Channel Selection in Self-Supervised Learning (2207.12065v2)

Published 25 Jul 2022 in cs.CV

Abstract: Whilst computer vision models built using self-supervised approaches are now commonplace, some important questions remain. Do self-supervised models learn highly redundant channel features? What if a self-supervised network could dynamically select the important channels and get rid of the unnecessary ones? Currently, convnets pre-trained with self-supervision obtain performance on downstream tasks comparable to that of their supervised counterparts in computer vision. However, self-supervised models have drawbacks, including their large number of parameters, computationally expensive training strategies, and a clear need for faster inference on downstream tasks. In this work, our goal is to address the latter by studying how a standard channel selection method developed for supervised learning can be applied to networks trained with self-supervision. We validate our findings over a range of target budgets $t_{d}$ for channel computation on image classification tasks across different datasets, specifically CIFAR-10, CIFAR-100, and ImageNet-100, obtaining performance comparable to that of the original network when selecting all channels, but at a significant reduction in computation reported in terms of FLOPs.
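The abstract describes keeping only a fraction $t_d$ of a network's channels and measuring the resulting FLOP savings. As a minimal illustrative sketch (not the paper's actual method), one can rank channels by an importance score, retain the top $t_d$ fraction, and note that a conv layer's FLOPs scale with the product of input and output channel counts, so pruning both sides to a fraction $t_d$ keeps roughly $t_d^2$ of the compute. All names below are hypothetical.

```python
def select_channels(importance, t_d):
    """Return indices of the top t_d fraction of channels by importance.

    importance: per-channel scores (e.g. derived from a learned gate);
    t_d: target budget in (0, 1]. Purely illustrative ranking-based selection.
    """
    n = len(importance)
    k = max(1, int(round(t_d * n)))
    ranked = sorted(range(n), key=lambda i: importance[i], reverse=True)
    return sorted(ranked[:k])

def flop_fraction(t_d):
    """A conv layer's FLOPs scale with in_channels * out_channels,
    so keeping a fraction t_d of channels on both sides retains ~t_d^2."""
    return t_d * t_d

scores = [0.9, 0.1, 0.7, 0.05, 0.6, 0.3, 0.8, 0.2]
print(select_channels(scores, 0.5))  # keeps the 4 highest-scoring channels
print(flop_fraction(0.5))            # ~0.25 of the original conv FLOPs
```

In the dynamic setting the paper studies, such scores would be produced per input by a lightweight gating module rather than fixed offline; the budget $t_d$ then constrains the expected number of active channels.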

Authors (6)
  1. Tarun Krishna (10 papers)
  2. Ayush K. Rai (7 papers)
  3. Yasser A. D. Djilali (1 paper)
  4. Alan F. Smeaton (85 papers)
  5. Kevin McGuinness (76 papers)
  6. Noel E. O'Connor (70 papers)
Citations (2)
