
Wasserstein Autoencoders for Collaborative Filtering (1809.05662v3)

Published 15 Sep 2018 in cs.IR and cs.LG

Abstract: Recommender systems have long been investigated in the literature. Recently, users' implicit feedback, such as 'click' or 'browse', has been shown to enhance recommendation performance, and a number of approaches have been proposed to exploit it. Among them, the variational autoencoder (VAE) approach already achieves superior performance. However, the distributions of its encoded latent variables overlap heavily, which may limit its recommendation ability. To cope with this challenge, this paper extends Wasserstein autoencoders (WAE) to collaborative filtering. In particular, the loss function of the adapted WAE is re-designed by introducing two additional loss terms: (1) a mutual information loss between the distribution of the latent variables and the assumed ground-truth distribution, and (2) an L1 regularization loss that encourages the encoded latent variables to be sparse. Two different cost functions are designed for measuring the distance between the implicit feedback data and its regenerated version. Experiments are evaluated on three widely adopted data sets, i.e., ML-20M, Netflix and LASTFM. Both baseline and state-of-the-art approaches, namely Mult-DAE, Mult-VAE, CDAE and SLIM, are chosen for the performance comparison. The proposed approach outperforms the compared methods with respect to Recall@1, Recall@5 and NDCG@10, demonstrating its efficacy.
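
The abstract only names the ingredients of the adapted WAE objective, so the sketch below is an assumption-laden illustration rather than the authors' formulation: it combines a multinomial reconstruction cost on the implicit-feedback vector, an MMD-based divergence toward an assumed Gaussian prior (used here as a stand-in for both the WAE distribution-matching term and the paper's mutual-information loss), and an L1 penalty on the latent codes. The function names and weighting coefficients (`lambda_mmd`, `lambda_mi`, `lambda_l1`) are hypothetical.

```python
# Illustrative sketch only; not the paper's exact loss. Assumes PyTorch.
import torch
import torch.nn.functional as F


def mmd_rbf(z, z_prior, sigma=1.0):
    """Maximum mean discrepancy between encoded latents and prior samples (RBF kernel)."""
    def kernel(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return kernel(z, z).mean() + kernel(z_prior, z_prior).mean() - 2 * kernel(z, z_prior).mean()


def wae_cf_loss(x, x_logits, z, lambda_mmd=10.0, lambda_mi=1.0, lambda_l1=1e-3):
    """x: binary implicit-feedback vectors; x_logits: decoder outputs; z: encoded latents."""
    # (1) Reconstruction term: multinomial log-likelihood over items, one of the
    #     plausible cost functions for comparing feedback with its regenerated version.
    recon = -(F.log_softmax(x_logits, dim=-1) * x).sum(dim=-1).mean()
    # (2) Distribution-matching term toward an assumed standard Gaussian prior,
    #     approximated with MMD; the paper's mutual-information loss is folded in
    #     here only for illustration.
    z_prior = torch.randn_like(z)
    dist = mmd_rbf(z, z_prior)
    # (3) L1 penalty encouraging sparse latent codes.
    sparsity = z.abs().mean()
    return recon + (lambda_mmd + lambda_mi) * dist + lambda_l1 * sparsity
```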

Citations (26)
