A linearized framework and a new benchmark for model selection for fine-tuning (2102.00084v1)

Published 29 Jan 2021 in cs.CV and cs.LG

Abstract: Fine-tuning from a collection of models pre-trained on different domains (a "model zoo") is emerging as a technique to improve test accuracy in the low-data regime. However, model selection, i.e., how to pre-select the right model to fine-tune from a model zoo without performing any training, remains an open topic. We use a linearized framework to approximate fine-tuning, and introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation. Since the model selection algorithms in the literature have been tested on different use-cases and never compared directly, we introduce a new comprehensive benchmark for model selection comprising: i) a model zoo of single- and multi-domain models, and ii) many target tasks. Our benchmark highlights the accuracy gain from fine-tuning models selected from the model zoo over fine-tuning ImageNet models. We show that our model selection baseline can select optimal models to fine-tune within a few selections, and has the highest rank correlation with fine-tuning accuracy among existing algorithms.
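
The Label-Feature Correlation baseline scores each model in the zoo by how well its features on the target data align with the target labels, with no training involved. The sketch below illustrates one plausible form of such a score, assuming a kernel-alignment-style correlation between the feature Gram matrix and the one-hot label Gram matrix; the function name and the exact formula are illustrative assumptions, not the paper's precise definition.

```python
import numpy as np

def label_feature_correlation(features, labels):
    """Hypothetical sketch of a Label-Feature Correlation score.

    features: (n, d) array of a pre-trained model's features on target data.
    labels:   (n,) integer class labels for the target task.
    Returns a scalar; higher suggests the model's features align better
    with the target labels. Assumption: a kernel-alignment-style
    correlation, not necessarily the paper's exact scoring rule.
    """
    F = features - features.mean(axis=0)   # center the features
    K = F @ F.T                            # (n, n) feature kernel
    Y = np.eye(labels.max() + 1)[labels]   # one-hot labels
    L = Y @ Y.T                            # (n, n) label kernel
    # Cosine similarity between kernels (Frobenius inner product).
    return float((K * L).sum() / (np.linalg.norm(K) * np.linalg.norm(L)))

# Hypothetical usage: rank every zoo model by its score and fine-tune
# only the top-ranked candidates ('zoo' and 'extract' are placeholders).
# scores = {name: label_feature_correlation(extract(model, X), y)
#           for name, model in zoo.items()}
```

Because the score uses only a forward pass to extract features, it can rank an entire model zoo at a small fraction of the cost of fine-tuning each candidate.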

Authors (9)
  1. Aditya Deshpande (13 papers)
  2. Alessandro Achille (60 papers)
  3. Avinash Ravichandran (35 papers)
  4. Hao Li (803 papers)
  5. Luca Zancato (21 papers)
  6. Charless Fowlkes (35 papers)
  7. Rahul Bhotika (13 papers)
  8. Stefano Soatto (179 papers)
  9. Pietro Perona (78 papers)
Citations (40)