
A Sentence Speaks a Thousand Images: Domain Generalization through Distilling CLIP with Language Guidance (2309.12530v1)

Published 21 Sep 2023 in cs.CV

Abstract: Domain generalization studies the problem of training a model with samples from several domains (or distributions) and then testing the model with samples from a new, unseen domain. In this paper, we propose a novel approach for domain generalization that leverages recent advances in large vision-language models, specifically a CLIP teacher model, to train a smaller model that generalizes to unseen domains. The key technical contribution is a new type of regularization that requires the student's learned image representations to be close to the teacher's learned text representations obtained from encoding the corresponding text descriptions of images. We introduce two designs of the loss function, absolute and relative distance, which provide specific guidance on how the training process of the student model should be regularized. We evaluate our proposed method, dubbed RISE (Regularized Invariance with Semantic Embeddings), on various benchmark datasets and show that it outperforms several state-of-the-art domain generalization methods. To our knowledge, our work is the first to leverage knowledge distillation using a large vision-language model for domain generalization. By incorporating text-based information, RISE improves the generalization capability of machine learning models.
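
The abstract describes two regularizers that pull the student's image embeddings toward the CLIP teacher's text embeddings: an absolute distance and a relative distance. Below is a minimal illustrative sketch of what such losses could look like; the function names and the exact form of the relative term (comparing similarity patterns rather than raw distances) are assumptions for illustration, not the paper's precise definitions.

```python
import numpy as np

def l2_normalize(x):
    # Normalize each row to unit length, as CLIP-style embeddings usually are.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def absolute_loss(student_img, teacher_txt):
    # Absolute distance: pull each student image embedding toward the
    # teacher's text embedding of that image's caption (mean L2 distance).
    s = l2_normalize(student_img)
    t = l2_normalize(teacher_txt)
    return float(np.mean(np.linalg.norm(s - t, axis=1)))

def relative_loss(student_img, teacher_txt):
    # Relative distance (one plausible form, assumed here): the student's
    # image-to-text similarity pattern should mirror the teacher's
    # text-to-text geometry across the batch.
    s = l2_normalize(student_img)
    t = l2_normalize(teacher_txt)
    sim_student = s @ t.T   # image-to-text cosine similarities
    sim_teacher = t @ t.T   # text-to-text cosine similarities
    return float(np.mean((sim_student - sim_teacher) ** 2))
```

Both terms vanish when the student's image embeddings coincide with the teacher's text embeddings, which is the regularization target the abstract describes.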

Authors (6)
  1. Zeyi Huang (25 papers)
  2. Andy Zhou (23 papers)
  3. Zijian Lin (9 papers)
  4. Mu Cai (21 papers)
  5. Haohan Wang (96 papers)
  6. Yong Jae Lee (88 papers)
Citations (16)
