
Overcoming Shortcut Learning in a Target Domain by Generalizing Basic Visual Factors from a Source Domain (2207.10002v1)

Published 20 Jul 2022 in cs.CV and cs.AI

Abstract: Shortcut learning occurs when a deep neural network overly relies on spurious correlations in the training dataset in order to solve downstream tasks. Prior works have shown how this impairs the compositional generalization capability of deep learning models. To address this problem, we propose a novel approach to mitigate shortcut learning in uncontrolled target domains. Our approach extends the training set with an additional dataset (the source domain), which is specifically designed to facilitate learning independent representations of basic visual factors. We benchmark our idea on synthetic target domains where we explicitly control shortcut opportunities as well as real-world target domains. Furthermore, we analyze the effect of different specifications of the source domain and the network architecture on compositional generalization. Our main finding is that leveraging data from a source domain is an effective way to mitigate shortcut learning. By promoting independence across different factors of variation in the learned representations, networks can learn to consider only predictive factors and ignore potential shortcut factors during inference.
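The abstract's core idea can be illustrated with a minimal sketch: a shared encoder trained jointly on the target-domain downstream task and on source-domain samples that carry one label per basic visual factor (e.g. shape, color), so each factor gets its own prediction head. The function names, loss weighting, and toy linear models below are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of the joint training objective described in the
# abstract: target-domain task loss plus per-factor classification losses
# on source-domain data. All names and the linear encoder are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def combined_loss(encoder, task_head, factor_heads,
                  target_batch, source_batch, factor_weight=1.0):
    """Downstream task loss on the target domain plus weighted
    per-factor losses on the source domain (shared encoder)."""
    loss = 0.0
    for x, y in target_batch:                      # target-domain task labels
        loss += cross_entropy(task_head @ encoder(x), y)
    for x, factor_labels in source_batch:          # one label per visual factor
        feats = encoder(x)
        for name, head in factor_heads.items():
            loss += factor_weight * cross_entropy(head @ feats,
                                                  factor_labels[name])
    return loss

# Toy linear encoder and heads just to make the sketch executable.
D, H = 8, 4
W_enc = rng.normal(size=(H, D))
encoder = lambda x: W_enc @ x
task_head = rng.normal(size=(3, H))                # 3 task classes (assumed)
factor_heads = {"shape": rng.normal(size=(2, H)),  # 2 shapes (assumed)
                "color": rng.normal(size=(2, H))}  # 2 colors (assumed)

target_batch = [(rng.normal(size=D), 1)]
source_batch = [(rng.normal(size=D), {"shape": 0, "color": 1})]

loss = combined_loss(encoder, task_head, factor_heads,
                     target_batch, source_batch)
print(float(loss))
```

Because the factor heads share the encoder, gradients from the source domain push the representation toward encoding each factor separately, which is how, per the abstract, the network can learn to rely on predictive factors and ignore shortcut factors at inference time.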

Authors (5)
  1. Piyapat Saranrittichai (6 papers)
  2. Chaithanya Kumar Mummadi (16 papers)
  3. Claudia Blaiotta (3 papers)
  4. Mauricio Munoz (6 papers)
  5. Volker Fischer (23 papers)
Citations (7)
