Improving out-of-distribution generalization via multi-task self-supervised pretraining (2003.13525v1)

Published 30 Mar 2020 in cs.CV and cs.LG

Abstract: Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness. We show that features obtained using self-supervised learning are comparable to, or better than, supervised learning for domain generalization in computer vision. We introduce a new self-supervised pretext task of predicting responses to Gabor filter banks and demonstrate that multi-task learning of compatible pretext tasks improves domain generalization performance compared to training each task alone. Features learnt through self-supervision generalize better to unseen domains than their supervised counterparts when there is a larger domain shift between the training and test distributions, and they even show better localization ability for objects of interest. Self-supervised feature representations can also be combined with other domain generalization methods to further boost performance.
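
To make the Gabor-filter pretext task concrete, the snippet below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it builds a small fixed Gabor filter bank, computes spatially pooled filter responses of each input as pseudo-labels, and trains a small CNN to regress them. The filter parameters, the pooling choice, and the regression objective are all assumptions; the paper's actual formulation may differ.

```python
# Hypothetical sketch (not the authors' code): a Gabor-response-prediction
# pretext task. A CNN regresses the spatially pooled responses of a fixed
# Gabor filter bank applied to grayscale inputs.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Build one real-valued Gabor kernel as a (ksize, ksize) tensor."""
    half = ksize // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    return torch.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2)) * \
           torch.cos(2 * math.pi * x_t / lambd + psi)

# Fixed filter bank: 4 orientations x 2 wavelengths = 8 filters (assumed sizes).
thetas = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]
lambdas = [4.0, 8.0]
bank = torch.stack([gabor_kernel(theta=t, lambd=l) for t in thetas for l in lambdas])
bank = bank.unsqueeze(1)  # (8, 1, k, k): 8 output filters, 1 input channel

class GaborPretextNet(nn.Module):
    """Small CNN predicting the mean absolute Gabor response per filter."""
    def __init__(self, n_filters=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_filters)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def gabor_targets(images, bank):
    """Pseudo-labels: mean |response| of each Gabor filter over the image."""
    responses = F.conv2d(images, bank, padding=bank.shape[-1] // 2)
    return responses.abs().mean(dim=(2, 3))  # (batch, n_filters)

# One illustrative training step on random stand-in data.
model = GaborPretextNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(16, 1, 64, 64)  # placeholder for real grayscale crops
loss = F.mse_loss(model(images), gabor_targets(images, bank))
opt.zero_grad()
loss.backward()
opt.step()
```

In the multi-task setting described in the abstract, a head like this would presumably share a backbone with other compatible pretext heads (e.g., rotation prediction), with their losses summed during pretraining; the specific task combination used in the paper is not stated here.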

Authors (5)
  1. Isabela Albuquerque (17 papers)
  2. Nikhil Naik (25 papers)
  3. Junnan Li (56 papers)
  4. Nitish Keskar (2 papers)
  5. Richard Socher (115 papers)
Citations (38)
