HyperCLIP: Adapting Vision-Language models with Hypernetworks (2412.16777v1)
Abstract: Self-supervised vision-language models trained with contrastive objectives form the basis of current state-of-the-art methods in AI vision tasks. The success of these models is a direct consequence of the huge web-scale datasets used to train them, but they require correspondingly large vision components to properly learn powerful and general representations from such a broad data domain. This poses a challenge for deploying large vision-language models, especially in resource-constrained environments. To address this, we propose an alternate vision-language architecture, called HyperCLIP, that uses a small image encoder along with a hypernetwork that dynamically adapts image encoder weights to each new set of text inputs. All three components of the model (hypernetwork, image encoder, and text encoder) are pre-trained jointly end-to-end, and with a trained HyperCLIP model, we can generate new zero-shot deployment-friendly image classifiers for any task with a single forward pass through the text encoder and hypernetwork. HyperCLIP increases the zero-shot accuracy of SigLIP-trained models with small image encoders by up to 3% on ImageNet and 5% on CIFAR-100 with minimal training throughput overhead.
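To make the architecture described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how a hypernetwork could turn a task's text embeddings into adapted image-encoder parameters and a zero-shot classifier in one forward pass. The class names `HyperNetwork` and `HyperCLIPSketch`, the choice of which encoder parameters are generated, and the `image_encoder(images, adapted_params)` interface are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperNetwork(nn.Module):
    """Hypothetical hypernetwork: maps a pooled text embedding for a task's
    class prompts to a flat vector of parameters for the image encoder."""

    def __init__(self, text_dim: int, num_target_params: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_target_params),
        )

    def forward(self, task_text_emb: torch.Tensor) -> torch.Tensor:
        # task_text_emb: (text_dim,) pooled over the task's class prompts
        return self.net(task_text_emb)


class HyperCLIPSketch(nn.Module):
    """Sketch of the three jointly trained components: small image encoder,
    text encoder, and hypernetwork."""

    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module,
                 text_dim: int, num_adapted_params: int):
        super().__init__()
        self.image_encoder = image_encoder  # small vision backbone
        self.text_encoder = text_encoder
        self.hypernet = HyperNetwork(text_dim, num_adapted_params)

    def build_classifier(self, class_prompt_tokens):
        """A single pass through text encoder + hypernetwork produces the
        task-specific classifier: class embeddings and adapted encoder params."""
        class_embs = F.normalize(self.text_encoder(class_prompt_tokens), dim=-1)  # (C, d)
        adapted_params = self.hypernet(class_embs.mean(dim=0))                    # (P,)
        return class_embs, adapted_params

    def classify(self, images, class_embs, adapted_params):
        # Assumption: the encoder accepts the generated parameters as an extra
        # argument; in practice they would be written into specific layers.
        img_embs = F.normalize(self.image_encoder(images, adapted_params), dim=-1)
        return img_embs @ class_embs.t()  # similarity logits, shape (B, C)
```

In this reading, deployment only requires the small image encoder plus the generated parameters for a given task; the text encoder and hypernetwork are run once per task rather than per image.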
- Victor Akinwande (9 papers)
- Mohammad Sadegh Norouzzadeh (3 papers)
- Devin Willmott (11 papers)
- Anna Bair (4 papers)
- Madan Ravi Ganesh (13 papers)
- J. Zico Kolter (151 papers)