Learning to Learn with Variational Information Bottleneck for Domain Generalization (2007.07645v1)

Published 15 Jul 2020 in cs.CV

Abstract: Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift. In this paper, we address both problems. We introduce a probabilistic meta-learning model for domain generalization, in which classifier parameters shared across domains are modeled as distributions. This enables better handling of prediction uncertainty on unseen domains. To deal with domain shift, we learn domain-invariant representations by the proposed principle of meta variational information bottleneck, which we call MetaVIB. MetaVIB is derived from novel variational bounds of mutual information, by leveraging the meta-learning setting of domain generalization. Through episodic training, MetaVIB learns to gradually narrow domain gaps to establish domain-invariant representations, while simultaneously maximizing prediction accuracy. We conduct experiments on three benchmarks for cross-domain visual recognition. Comprehensive ablation studies validate the benefits of MetaVIB for domain generalization. The comparison results demonstrate that our method consistently outperforms previous approaches.
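For background (not taken from the paper itself), the standard variational information bottleneck objective that MetaVIB builds on can be sketched as follows. Here $x$, $y$, and $z$ denote the input, label, and stochastic representation, $q_{\phi}(z \mid x)$ is the encoder, $q_{\theta}(y \mid z)$ the classifier, $r(z)$ a prior over representations, and $\beta$ the trade-off weight. The paper derives its own meta variational bounds in the episodic meta-learning setting, so the exact MetaVIB objective may differ from this generic form.

```latex
% Standard VIB objective (background sketch only, not the paper's MetaVIB bound):
% maximize I(Z;Y) - beta * I(Z;X) via its variational lower bound
\mathcal{L}_{\mathrm{VIB}}
  = \mathbb{E}_{p(x,y)}\,\mathbb{E}_{q_{\phi}(z \mid x)}
      \big[ \log q_{\theta}(y \mid z) \big]
  \;-\; \beta \, \mathbb{E}_{p(x)}
      \big[ \mathrm{KL}\!\left( q_{\phi}(z \mid x) \,\|\, r(z) \right) \big]
```

The first term encourages representations that remain predictive of the label, while the KL term compresses away input-specific (and, in the meta-learning setting, domain-specific) information.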

Authors (7)
  1. Yingjun Du (16 papers)
  2. Jun Xu (398 papers)
  3. Huan Xiong (42 papers)
  4. Qiang Qiu (70 papers)
  5. Xiantong Zhen (56 papers)
  6. Cees G. M. Snoek (134 papers)
  7. Ling Shao (244 papers)
Citations (145)
