Contextualized Graph Attention Network for Recommendation with Item Knowledge Graph (2004.11529v1)

Published 24 Apr 2020 in cs.IR

Abstract: Graph neural networks (GNNs) have recently been applied to exploit knowledge graphs (KGs) for recommendation. Existing GNN-based methods explicitly model the dependency between an entity and its local graph context in the KG (i.e., the set of its first-order neighbors), but may not be effective in capturing its non-local graph context (i.e., the set of its most related high-order neighbors). In this paper, we propose a novel recommendation framework, named Contextualized Graph Attention Network (CGAT), which can explicitly exploit both the local and non-local graph context information of an entity in the KG. Specifically, CGAT captures the local context information by a user-specific graph attention mechanism, considering a user's personalized preferences on entities. Moreover, CGAT employs a biased random walk sampling process to extract the non-local context of an entity, and utilizes a Recurrent Neural Network (RNN) to model the dependency between the entity and its non-local contextual entities. To capture the user's personalized preferences on items, an item-specific attention mechanism is also developed to model the dependency between a target item and the contextual items extracted from the user's historical behaviors. Experimental results on real datasets demonstrate the effectiveness of CGAT compared with state-of-the-art KG-based recommendation methods.
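To ground the architecture the abstract describes, here is a minimal PyTorch sketch of the two context encoders: a user-specific attention over an entity's first-order KG neighbors (local context) and a GRU over an entity sequence drawn by a biased random walk (non-local context). Everything below is an illustrative assumption, not the authors' implementation: the names CGATSketch and biased_walk, the restart-style walk bias, the concatenation-based attention score, and the choice of GRU as the RNN are all placeholders; the paper's exact formulations may differ.

    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def biased_walk(adj, start, length, rng, restart=0.2):
        # One simple biasing scheme (assumption): with probability `restart`
        # jump back to the anchor entity, otherwise step to a uniformly
        # random KG neighbor. The paper's exact sampling bias may differ.
        walk, cur = [start], start
        for _ in range(length - 1):
            if rng.random() < restart or not adj.get(cur):
                cur = start
            else:
                cur = rng.choice(adj[cur])
            walk.append(cur)
        return walk

    class CGATSketch(nn.Module):
        """Illustrative sketch of CGAT-style local and non-local encoders."""

        def __init__(self, num_entities, dim):
            super().__init__()
            self.entity_emb = nn.Embedding(num_entities, dim)
            self.att = nn.Linear(2 * dim, 1)               # user-specific attention score
            self.gru = nn.GRU(dim, dim, batch_first=True)  # non-local aggregator

        def local_context(self, user_vec, neighbor_ids):
            # user_vec: (B, d); neighbor_ids: (B, n) first-order KG neighbors.
            nbr = self.entity_emb(neighbor_ids)                         # (B, n, d)
            u = user_vec.unsqueeze(1).expand_as(nbr)                    # (B, n, d)
            score = self.att(torch.cat([u, nbr], dim=-1)).squeeze(-1)   # (B, n)
            alpha = F.softmax(score, dim=-1)    # attention weights depend on the user
            return torch.einsum('bn,bnd->bd', alpha, nbr)               # weighted sum

        def non_local_context(self, walk_ids):
            # walk_ids: (B, L) entity sequence from biased_walk; the GRU's final
            # hidden state summarizes the high-order (non-local) neighborhood.
            _, h = self.gru(self.entity_emb(walk_ids))                  # h: (1, B, d)
            return h.squeeze(0)                                         # (B, d)

    # Example usage on a toy 4-entity KG (adjacency as a dict of neighbor lists)
    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    model = CGATSketch(num_entities=4, dim=8)
    walk = biased_walk(adj, start=0, length=5, rng=random.Random(0))
    local = model.local_context(torch.randn(1, 8), torch.tensor([[1, 2]]))
    non_local = model.non_local_context(torch.tensor([walk]))

The item-specific attention over a user's historical items that the abstract mentions would follow the same pattern as local_context, with a target-item embedding scoring the contextual items in place of the user embedding.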

Authors (6)
  1. Susen Yang
  2. Yong Liu
  3. Yonghui Xu
  4. Chunyan Miao
  5. Min Wu
  6. Juyong Zhang
Citations (72)
