
Enhancing Recommender Systems with Large Language Model Reasoning Graphs (2308.10835v2)

Published 21 Aug 2023 in cs.IR

Abstract: Recommendation systems aim to provide users with relevant suggestions, but they often lack interpretability and fail to capture higher-level semantic relationships between user behaviors and profiles. In this paper, we propose a novel approach that leverages LLMs to construct personalized reasoning graphs. These graphs link a user's profile and behavioral sequences through causal and logical inferences, representing the user's interests in an interpretable way. Our approach, LLM reasoning graphs (LLMRG), has four components: chained graph reasoning, divergent extension, self-verification and scoring, and knowledge base self-improvement. The resulting reasoning graph is encoded using graph neural networks, and this encoding serves as additional input to improve conventional recommender systems without requiring extra user or item information. Our approach shows how LLMs can enable more logical and interpretable recommender systems through personalized reasoning graphs. LLMRG allows recommendations to benefit from both engineered recommendation systems and LLM-derived reasoning graphs. We demonstrate the effectiveness of LLMRG on benchmarks and real-world scenarios in enhancing base recommendation models.
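The abstract describes encoding the LLM-built reasoning graph with a graph neural network and feeding the result into a conventional recommender as an extra signal. Below is a minimal, hypothetical sketch of that fusion idea, not the paper's actual architecture: the class names (`SimpleGraphEncoder`, `FusedRecommender`), the one-layer mean-aggregation GCN, and the concatenation-based scoring head are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SimpleGraphEncoder(nn.Module):
    """One-layer graph convolution followed by mean pooling (illustrative stand-in for the GNN encoder)."""

    def __init__(self, node_dim: int, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(node_dim, hidden_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_dim); adj: (num_nodes, num_nodes) including self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.linear((adj @ node_feats) / deg))  # degree-normalized neighborhood aggregation
        return h.mean(dim=0)  # single graph-level embedding


class FusedRecommender(nn.Module):
    """Concatenates a base recommender's user representation with the reasoning-graph embedding to score items."""

    def __init__(self, user_dim: int, node_dim: int, hidden_dim: int, num_items: int):
        super().__init__()
        self.graph_encoder = SimpleGraphEncoder(node_dim, hidden_dim)
        self.scorer = nn.Linear(user_dim + hidden_dim, num_items)

    def forward(self, user_emb: torch.Tensor, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        graph_emb = self.graph_encoder(node_feats, adj)
        return self.scorer(torch.cat([user_emb, graph_emb], dim=-1))  # per-item scores


# Toy usage with random tensors (shapes only; no real recommender or LLM involved)
model = FusedRecommender(user_dim=32, node_dim=16, hidden_dim=24, num_items=100)
user_emb = torch.randn(32)        # representation from an existing sequential recommender
node_feats = torch.randn(5, 16)   # embeddings of 5 reasoning-graph nodes (e.g., LLM inference steps)
adj = torch.eye(5)                # self-loops
adj[0, 1] = adj[1, 0] = 1.0       # one causal/logical edge between two reasoning nodes
scores = model(user_emb, node_feats, adj)
print(scores.shape)               # torch.Size([100])
```

The point of the sketch is only the interface: the base recommender is unchanged, and the graph embedding enters as an additional input, which matches the abstract's claim that no extra user or item information is required.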

Authors (13)
  1. Yan Wang (733 papers)
  2. Zhixuan Chu (43 papers)
  3. Xin Ouyang (3 papers)
  4. Simeng Wang (20 papers)
  5. Hongyan Hao (10 papers)
  6. Yue Shen (243 papers)
  7. Jinjie Gu (50 papers)
  8. Siqiao Xue (29 papers)
  9. Qing Cui (28 papers)
  10. Longfei Li (45 papers)
  11. Jun Zhou (370 papers)
  12. Sheng Li (217 papers)
  13. James Y Zhang (4 papers)
Citations (42)