
Joint Generalized Cosine Similarity: A Novel Method for N-Modal Semantic Alignment Based on Contrastive Learning (2505.03532v1)

Published 6 May 2025 in stat.AP

Abstract: Alignment remains a crucial task in multi-modal deep learning, and contrastive learning has been widely applied in this field. However, when there are more than two modalities, existing methods typically compute pairwise loss functions and aggregate them into a composite loss for optimizing model parameters. This limitation stems mainly from a drawback of traditional similarity measures: they can only compute the similarity between two vectors. To address this issue, we propose a novel similarity measure, the Joint Generalized Cosine Similarity (JGCS). Unlike traditional pairwise methods (e.g., dot product or cosine similarity), JGCS centers on the angle derived from the Gram determinant. To the best of our knowledge, this is the first similarity measure capable of handling tasks involving an arbitrary number of vectors. Building on it, we introduce the corresponding contrastive learning loss function, GHA Loss, and a new inter-modal contrastive learning paradigm. Comprehensive experiments on the Derm7pt dataset and simulated datasets demonstrate that our method achieves superior performance while exhibiting notable advantages in noise robustness, computational efficiency, and scalability. Finally, the proposed Joint Generalized Cosine Similarity is not limited to contrastive learning and can be readily extended to other domains.
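The paper's exact formula is not reproduced here, but the core idea — a single similarity score for N vectors derived from the Gram determinant — can be sketched. The following is a minimal, hypothetical illustration, not the authors' definition: for unit vectors, det of the Gram matrix equals the squared volume of the parallelotope they span (0 when collinear, 1 when mutually orthogonal), so treating sqrt(det) as the sine of a "joint angle" yields a joint cosine. As a sanity check, for two vectors this reduces to |cos θ|, i.e., the absolute value of ordinary cosine similarity.

```python
import numpy as np

def joint_generalized_cosine(vectors):
    """Hypothetical joint similarity for N vectors via the Gram determinant.

    This is an illustrative sketch, not the paper's JGCS definition.
    """
    # Normalize each vector to unit length so Gram entries are cosines.
    V = np.stack([v / np.linalg.norm(v) for v in vectors])
    # Gram matrix of pairwise inner products.
    G = V @ V.T
    # det(G) = squared volume of the parallelotope spanned by the unit
    # vectors: 0 when they coincide, 1 when mutually orthogonal.
    gram_det = np.linalg.det(G)
    # Treat sqrt(det(G)) as sin of a joint angle; the joint cosine is
    # then high when all vectors are nearly aligned. Clamp for safety
    # against floating-point error.
    return float(np.sqrt(max(0.0, 1.0 - gram_det)))
```

With two vectors this returns |cos θ| exactly, and it accepts any number of vectors, which is the scalability property the abstract emphasizes.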
