Joint Generalized Cosine Similarity: A Novel Method for N-Modal Semantic Alignment Based on Contrastive Learning (2505.03532v1)
Abstract: Alignment remains a crucial task in multi-modal deep learning, and contrastive learning has been widely applied in this field. However, when there are more than two modalities, existing methods typically compute pairwise loss functions and aggregate them into a composite loss for optimizing model parameters. This limitation mainly stems from a drawback of traditional similarity measures: they can only compute the similarity between two vectors. To address this issue, we propose a novel similarity measure, the Joint Generalized Cosine Similarity (JGCS). Unlike traditional pairwise measures (e.g., dot product or cosine similarity), JGCS is built around an angle derived from the Gram determinant. To the best of our knowledge, this is the first similarity measure capable of handling tasks involving an arbitrary number of vectors. Based on it, we introduce a corresponding contrastive learning loss function, the GHA Loss, and a new inter-modal contrastive learning paradigm. Comprehensive experiments on the Derm7pt dataset and simulated datasets demonstrate that our method achieves superior performance while exhibiting remarkable noise robustness, computational efficiency, and scalability. Finally, the Joint Generalized Cosine Similarity is not limited to contrastive learning and can be readily extended to other domains.
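The abstract only states that JGCS is built around an angle derived from the Gram determinant; the exact formula appears in the paper itself. The sketch below is a minimal, hypothetical illustration of how a Gram-determinant-based joint similarity over n vectors could look: the vectors are L2-normalized, their Gram matrix is formed, and its determinant (the squared volume of the spanned parallelotope) is mapped to a single score. The function name `joint_generalized_cosine` and the specific mapping `sqrt(1 - det(G))` are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def joint_generalized_cosine(vectors, eps=1e-12):
    """Hypothetical sketch of a Gram-determinant-based joint similarity.

    `vectors` is a sequence of n vectors of equal dimension. Each vector is
    L2-normalized, the n x n Gram matrix of inner products is formed, and its
    determinant is mapped to a similarity score. For two unit vectors,
    det(G) = 1 - cos^2(theta), so this particular mapping reduces to |cos(theta)|.
    """
    V = np.asarray(vectors, dtype=float)                    # shape (n, d)
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + eps)
    G = V @ V.T                                             # Gram matrix, shape (n, n)
    det = np.clip(np.linalg.det(G), 0.0, 1.0)               # in [0, 1] for unit vectors
    # det(G) is the squared volume spanned by the unit vectors; small volume
    # means the vectors are nearly collinear (well aligned), large volume means
    # they are spread apart, so map it back to a cosine-like score.
    return np.sqrt(1.0 - det)

# Usage: aligned vectors yield 1, mutually orthogonal vectors yield 0.
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
print(joint_generalized_cosine([a, b]))      # 1.0 (identical directions)
print(joint_generalized_cosine([a, c]))      # 0.0 (orthogonal)
print(joint_generalized_cosine([a, b, c]))   # 0.0 (c is orthogonal to the others)
```

Unlike pairwise cosine similarity, a construction of this kind accepts any number of vectors at once, which is the property the paper exploits to replace aggregated pairwise losses with a single joint loss term.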