
Invariance Principle Meets Vicinal Risk Minimization

Published 8 Jul 2024 in cs.CV (arXiv:2407.05765v2)

Abstract: Deep learning models excel at computer vision tasks but often fail to generalize to out-of-distribution (OOD) domains. Invariant Risk Minimization (IRM) addresses OOD generalization by learning domain-invariant features, but it struggles on datasets exhibiting significant diversity shifts. Data augmentation methods such as Mixup and Semantic Data Augmentation (SDA) enhance diversity, yet they risk over-augmentation and label instability. To address these challenges, we propose a domain-shared SDA module, a novel implementation of Vicinal Risk Minimization (VRM) designed to enhance dataset diversity while maintaining label consistency. We further provide a Rademacher complexity analysis that establishes a tighter generalization error bound than baseline methods. Extensive evaluations on OOD benchmarks, including PACS, VLCS, OfficeHome, and TerraIncognita, demonstrate consistent improvements over state-of-the-art domain generalization methods.
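The abstract does not spell out the domain-shared SDA module, so the following is only a minimal sketch of the general idea, assuming an ISDA-style feature-space augmentation in PyTorch in which per-class covariance statistics are pooled across all training domains and labels are left unchanged. Every name here (`DomainSharedSDA`, `update`, `augment`, `strength`) is illustrative, not taken from the paper.

```python
import torch

class DomainSharedSDA:
    """Sketch of semantic data augmentation in feature space.

    Keeps one (diagonal) covariance estimate per class, pooled across
    all training domains -- the "domain-shared" assumption -- and
    perturbs each feature along class-conditional semantic directions.
    Hypothetical implementation; names are not from the paper.
    """

    def __init__(self, feature_dim: int, num_classes: int, momentum: float = 0.9):
        self.momentum = momentum
        # Diagonal covariance per class, shared across domains.
        self.cov = torch.ones(num_classes, feature_dim)

    @torch.no_grad()
    def update(self, features: torch.Tensor, labels: torch.Tensor) -> None:
        """EMA update of per-class variance from a mini-batch that may
        mix samples from several domains."""
        for c in labels.unique():
            f_c = features[labels == c]
            if f_c.shape[0] > 1:
                var_c = f_c.var(dim=0, unbiased=False)
                self.cov[c] = self.momentum * self.cov[c] + (1 - self.momentum) * var_c

    def augment(self, features: torch.Tensor, labels: torch.Tensor,
                strength: float = 0.5) -> torch.Tensor:
        """Sample a vicinal neighbor of each feature: f + eps with
        eps ~ N(0, strength * Sigma_class). Labels stay fixed, which is
        what keeps the augmentation label-consistent."""
        noise = torch.randn_like(features) * (strength * self.cov[labels]).sqrt()
        return features + noise


# Illustrative usage with a hypothetical encoder/classifier pair:
# sda = DomainSharedSDA(feature_dim=512, num_classes=7)  # e.g. PACS has 7 classes
# feats = encoder(images)
# sda.update(feats, labels)
# logits = classifier(sda.augment(feats, labels))
# loss = torch.nn.functional.cross_entropy(logits, labels)
```

The design intuition, under these assumptions: pooling covariance statistics across domains restricts perturbations to semantic directions observed for a class in every domain, so each augmented feature stays in the vicinal distribution of its class (the VRM view) without the label drift that per-domain or input-space augmentation can introduce.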
