RAU: Towards Regularized Alignment and Uniformity for Representation Learning in Recommendation (2503.18300v1)

Published 24 Mar 2025 in cs.IR

Abstract: Recommender systems (RecSys) have become essential in modern society, driving user engagement and satisfaction across diverse online platforms. Most RecSys work focuses on designing a powerful encoder to embed users and items into a high-dimensional vector representation space, with loss functions optimizing their representation distributions. Recent studies reveal that directly optimizing key properties of the representation distribution, such as alignment and uniformity, can outperform complex encoder designs. However, existing methods for optimizing these properties overlook the impact of dataset sparsity on the model: limited user-item interactions lead to sparse alignment, while excessive interactions result in uneven uniformity, both of which degrade performance. In this paper, we identify the sparse alignment and uneven uniformity issues and propose Regularized Alignment and Uniformity (RAU) to address them. RAU consists of two novel regularization methods, one for alignment and one for uniformity, to learn better user/item representations. 1) Center-strengthened alignment additionally aligns the average in-batch user and item representations, providing an enhanced alignment signal and further reducing the disparity between user and item representations. 2) Low-variance-guided uniformity minimizes the variance of pairwise distances alongside uniformity, providing extra guidance toward a more stable increase in uniformity during training. We conducted extensive experiments on three real-world datasets, and RAU yields significant performance improvements over current state-of-the-art collaborative filtering (CF) methods, confirming the advantages of the two proposed regularization methods.
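
To make the two regularizers concrete, below is a minimal PyTorch-style sketch of alignment/uniformity objectives of the kind the abstract describes, extended with a center-alignment term and a pairwise-distance-variance penalty. The function names, the squared-distance form of each loss, and the hyperparameters `t` and `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def alignment(user_emb, item_emb):
    """Standard alignment: pull matched (user, item) pairs together on the unit hypersphere."""
    u = F.normalize(user_emb, dim=-1)
    i = F.normalize(item_emb, dim=-1)
    return (u - i).norm(p=2, dim=1).pow(2).mean()

def center_strengthened_alignment(user_emb, item_emb):
    """Sketch of a center-strengthened term: also align the in-batch mean user and item representations."""
    u_center = F.normalize(user_emb, dim=-1).mean(dim=0)
    i_center = F.normalize(item_emb, dim=-1).mean(dim=0)
    return (u_center - i_center).norm(p=2).pow(2)

def uniformity(emb, t=2.0):
    """Standard uniformity: spread representations apart via a log-mean Gaussian potential."""
    x = F.normalize(emb, dim=-1)
    sq_dists = torch.pdist(x, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

def low_variance_guided_uniformity(emb, t=2.0, lam=1.0):
    """Sketch of a low-variance-guided term: uniformity plus a penalty on the variance of pairwise distances."""
    x = F.normalize(emb, dim=-1)
    dists = torch.pdist(x, p=2)
    return dists.pow(2).mul(-t).exp().mean().log() + lam * dists.var()
```

In a typical alignment-and-uniformity training loop, these terms would be summed with per-term weights, e.g. `loss = alignment(u, i) + a * center_strengthened_alignment(u, i) + b * (low_variance_guided_uniformity(u) + low_variance_guided_uniformity(i))`; the exact combination and coefficients used by RAU are not given in the abstract.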
