
Beyond Similarity: Relation Embedding with Dual Attentions for Item-based Recommendation (1911.04099v1)

Published 11 Nov 2019 in cs.IR

Abstract: Given their effectiveness and ease of use, Item-based Collaborative Filtering (ICF) methods have been widely used in industry in recent years. The key to ICF lies in the similarity measurement between items, which, however, is a coarse-grained numerical value that can hardly capture users' fine-grained preferences toward different latent aspects of items from a representation learning perspective. In this paper, we propose a model called REDA (latent Relation Embedding with Dual Attentions) to address this challenge. REDA is essentially a deep-learning-based recommendation method that employs an item relation embedding scheme through a neural network structure to represent inter-item relations. A relational user embedding is then proposed by aggregating the relation embeddings between all purchased items of a user, which not only better characterizes user preferences but also alleviates the data sparsity problem. Moreover, to capture valid meta-knowledge that reflects users' desired latent aspects while suppressing its explosive growth toward overfitting, we further propose a dual attentions mechanism, consisting of a memory attention and a weight attention. A relation-wise optimization method is finally developed for model inference by constructing a personalized ranking loss over item relations. Extensive experiments on real-world datasets show that the proposed model greatly outperforms state-of-the-art methods, especially when the data is sparse.
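
The abstract describes four components: an inter-item relation embedding, a relational user embedding aggregated over a user's purchase history, a dual attention (memory attention over latent-aspect "meta-knowledge" and a weight attention over relations), and a relation-wise personalized ranking loss. The PyTorch sketch below illustrates one way these pieces could fit together; the layer shapes, the multiplicative relation query, the specific attention forms, and the BPR-style pairwise loss are illustrative assumptions based only on the abstract, not the paper's actual formulation.

```python
# Illustrative sketch only: names, dimensions, and attention forms are assumptions,
# not the published REDA equations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class REDASketch(nn.Module):
    def __init__(self, n_items, dim=64, n_memories=8):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        # Memory slots standing in for latent-aspect "meta-knowledge" (assumed form).
        self.memory = nn.Parameter(torch.randn(n_memories, dim) * 0.01)
        self.weight_attn = nn.Linear(dim, 1)  # assumed scorer for the weight attention

    def relation_emb(self, src, dst):
        # Inter-item relation vector, attended over memory slots (memory attention).
        q = self.item_emb(src) * self.item_emb(dst)      # (B, D) interaction query
        attn = F.softmax(q @ self.memory.t(), dim=-1)    # (B, M) memory attention weights
        return attn @ self.memory                         # (B, D) relation embedding

    def user_emb(self, hist, target):
        # Relational user embedding: aggregate relations between the user's purchased
        # items and the target item, weighted by the weight attention.
        B, L = hist.shape
        rel = self.relation_emb(
            hist.reshape(-1),
            target.unsqueeze(1).expand(B, L).reshape(-1),
        ).view(B, L, -1)
        w = F.softmax(self.weight_attn(rel).squeeze(-1), dim=-1)  # (B, L) weight attention
        return (w.unsqueeze(-1) * rel).sum(dim=1)                 # (B, D)

    def score(self, hist, target):
        return (self.user_emb(hist, target) * self.item_emb(target)).sum(-1)

def relation_ranking_loss(model, hist, pos, neg):
    # BPR-style pairwise loss over item relations, used here as a stand-in for the
    # paper's relation-wise personalized ranking objective.
    return -F.logsigmoid(model.score(hist, pos) - model.score(hist, neg)).mean()
```

As a usage note, `hist` would be a padded tensor of a user's purchased item IDs and `pos`/`neg` the observed and sampled negative target items; in the actual paper the aggregation and loss are defined over item relations rather than this simplified dot-product scorer.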

Authors (3)
  1. Liang Zhang (357 papers)
  2. Guannan Liu (10 papers)
  3. Junjie Wu (74 papers)
Citations (1)
