
Meta-Contrastive Learning for Vision-Language Models via Task-Adaptive CLIP Training

Published 28 Mar 2026 in math.OC (2603.27091v1)

Abstract: We propose Domain-Conditioned Meta-Contrastive Learning, a framework for improving the cross-domain generalization of vision-language models. While contrastive models such as CLIP achieve strong performance through large-scale training, they rely on a global objective that does not explicitly account for domain shift. To address this limitation, we formulate multimodal learning as a bilevel meta-learning problem over domain-conditioned tasks. Specifically, we introduce domain embeddings that modulate image and text representations, and optimize the model for rapid adaptation to domain-specific distributions via gradient-based inner-loop updates. In addition, we incorporate a cross-domain alignment regularization to encourage domain-invariant representations. Our approach is compatible with standard contrastive training pipelines and can be applied to heterogeneous datasets spanning natural and medical domains. We expect improved robustness under domain shift and enhanced few-shot adaptation performance, highlighting a promising direction for scalable multimodal learning.
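The abstract's core ingredients — a CLIP-style symmetric contrastive loss, a domain embedding that modulates both modalities, and a gradient-based inner-loop adaptation step — can be sketched in miniature. This is a minimal illustration, not the paper's implementation: the FiLM-style (scale-and-shift) form of the domain modulation, the feature dimensions, and the finite-difference inner-loop gradient are all assumptions made for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def film(features, domain_emb):
    # Assumed FiLM-style modulation: first half of the domain embedding
    # scales the features, second half shifts them.
    d = features.shape[-1]
    gamma, beta = domain_emb[:d], domain_emb[d:]
    return features * (1.0 + gamma) + beta

def clip_loss(img, txt, tau=0.07):
    # Symmetric InfoNCE over matched image/text pairs, as in CLIP.
    logits = l2norm(img) @ l2norm(txt).T / tau
    i2t = -np.mean(np.diag(log_softmax(logits)))
    t2i = -np.mean(np.diag(log_softmax(logits.T)))
    return 0.5 * (i2t + t2i)

def domain_loss(domain_emb, img, txt):
    # Contrastive loss on domain-conditioned representations.
    return clip_loss(film(img, domain_emb), film(txt, domain_emb))

def finite_diff_grad(f, x, eps=1e-5):
    # Numerical gradient as a stand-in for autodiff in the inner loop.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Toy batch: 8 loosely aligned image/text feature pairs of dimension 16.
img = rng.normal(size=(8, 16))
txt = img + 0.1 * rng.normal(size=(8, 16))

domain_emb = np.zeros(32)  # identity modulation at initialization
base = domain_loss(domain_emb, img, txt)

# One gradient-based inner-loop update on the domain embedding.
grad = finite_diff_grad(lambda d: domain_loss(d, img, txt), domain_emb)
adapted = domain_loss(domain_emb - 1e-3 * grad, img, txt)
print(base, adapted)
```

In the full bilevel formulation the outer loop would then update the shared encoder parameters using the post-adaptation loss across domains, with the cross-domain alignment term added as a regularizer; this sketch shows only a single inner-loop step on one toy domain.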
