
MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption (2103.16201v2)

Published 30 Mar 2021 in cs.CV and cs.AI

Abstract: An unresolved problem in Deep Learning is the ability of neural networks to cope with domain shifts at test time, a limitation imposed by the common practice of fixing network parameters after training. Our proposed method, Meta Test-Time Training (MT3), breaks this paradigm and enables adaptation at test time. We combine meta-learning, self-supervision, and test-time training to learn to adapt to unseen test distributions. By minimizing the self-supervised loss, we learn task-specific model parameters for different tasks. A meta-model is optimized such that its adaptation to the different task-specific models leads to higher performance on those tasks. At test time, a single unlabeled image is sufficient to adapt the meta-model parameters. This is achieved by minimizing only the self-supervised loss component, resulting in a better prediction for that image. Our approach significantly improves the state-of-the-art results on the CIFAR-10-Corrupted image classification benchmark. Our implementation is available on GitHub.

Authors (5)
  1. Alexander Bartler (7 papers)
  2. Andre Bühler (2 papers)
  3. Felix Wiewel (6 papers)
  4. Mario Döbler (10 papers)
  5. Bin Yang (320 papers)
Citations (61)
