
The Unreasonable Effectiveness of Large Language-Vision Models for Source-free Video Domain Adaptation (2308.09139v2)

Published 17 Aug 2023 in cs.CV

Abstract: The Source-Free Video Unsupervised Domain Adaptation (SFVUDA) task consists of adapting an action recognition model, trained on a labelled source dataset, to an unlabelled target dataset, without accessing the actual source data. Previous approaches have attempted to address SFVUDA by leveraging self-supervision (e.g., enforcing temporal consistency) derived from the target data itself. In this work, we take an orthogonal approach by exploiting "web-supervision" from Large Language-Vision Models (LLVMs), driven by the rationale that LLVMs contain a rich world prior that is surprisingly robust to domain shift. We showcase the unreasonable effectiveness of integrating LLVMs for SFVUDA by devising an intuitive and parameter-efficient method, which we name Domain Adaptation with Large Language-Vision models (DALL-V), that distills the world prior and complementary source model information into a student network tailored for the target. Despite its simplicity, DALL-V achieves significant improvement over state-of-the-art SFVUDA methods.
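The core mechanism the abstract describes — ensembling the LLVM's zero-shot predictions with the source model's outputs into soft pseudo-labels, then distilling them into a target student — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: all names (`llvm_logits`, `source_logits`, `student_logits`) and the plain averaging of the two teachers are assumptions, and the logits are random placeholders standing in for real model outputs.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def log_softmax(z, axis=-1):
    """Numerically stable log-softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def distillation_loss(student_logits, teacher_probs):
    """Cross-entropy between teacher soft pseudo-labels and student predictions."""
    return float(-(teacher_probs * log_softmax(student_logits)).sum(axis=-1).mean())

# Dummy logits for a batch of 4 unlabelled target clips over 3 action classes.
# In DALL-V these would come from the LLVM's zero-shot scores and the frozen
# source-trained action model; here they are random stand-ins.
rng = np.random.default_rng(0)
llvm_logits = rng.normal(size=(4, 3))
source_logits = rng.normal(size=(4, 3))
student_logits = rng.normal(size=(4, 3))

# Ensemble the two teachers into soft pseudo-labels, then distill into the student.
teacher_probs = (softmax(llvm_logits) + softmax(source_logits)) / 2
loss = distillation_loss(student_logits, teacher_probs)
```

In practice the student would be updated by minimizing this loss on the target data; the sketch only shows how the two complementary teachers are combined without any source data or target labels.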

Authors (6)
  1. Giacomo Zara (6 papers)
  2. Alessandro Conti (11 papers)
  3. Subhankar Roy (52 papers)
  4. Stéphane Lathuilière (79 papers)
  5. Paolo Rota (29 papers)
  6. Elisa Ricci (137 papers)
Citations (8)
