DRE: Generating Recommendation Explanations by Aligning Large Language Models at Data-level (2404.06311v1)

Published 9 Apr 2024 in cs.IR

Abstract: Recommendation systems play a crucial role in various domains, suggesting items based on user behavior. However, the lack of transparency in presenting recommendations can lead to user confusion. In this paper, we introduce Data-level Recommendation Explanation (DRE), a non-intrusive explanation framework for black-box recommendation models. Unlike existing methods, DRE does not require any intermediary representations of the recommendation model or latent alignment training, mitigating potential performance issues. We propose a data-level alignment method that leverages LLMs to reason about the relationships between user data and recommended items. Additionally, we address the challenge of enriching the details of the explanation by introducing target-aware user preference distillation, which utilizes item reviews. Experimental results on benchmark datasets demonstrate the effectiveness of DRE in providing accurate and user-centric explanations, enhancing user engagement with recommended items.
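
The abstract describes two steps: distilling target-aware user preferences from item reviews, and prompting an LLM to reason directly over the user's interaction data and the recommended item. The sketch below illustrates one plausible way to wire these steps together; the prompt wording, the `call_llm` helper, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of data-level explanation generation, assuming a generic
# text-in/text-out LLM interface. Prompts and helper names are hypothetical.
from typing import Callable, List


def distill_preferences(call_llm: Callable[[str], str],
                        target_item: str,
                        item_reviews: List[str]) -> str:
    """Target-aware preference distillation: summarize the aspects of the
    recommended item that reviewers emphasize, to enrich the explanation."""
    prompt = (
        f"Recommended item: {target_item}\n"
        "Reviews:\n" + "\n".join(f"- {r}" for r in item_reviews) + "\n"
        "Summarize the aspects of this item that users care about."
    )
    return call_llm(prompt)


def explain_recommendation(call_llm: Callable[[str], str],
                           user_history: List[str],
                           target_item: str,
                           distilled_aspects: str) -> str:
    """Data-level alignment: the LLM reasons over the user's raw interaction
    history and the recommended item, without touching the black-box
    recommender's internal representations."""
    prompt = (
        "Items the user recently interacted with:\n"
        + "\n".join(f"- {h}" for h in user_history) + "\n"
        f"Recommended item: {target_item}\n"
        f"Notable aspects of the item (from reviews): {distilled_aspects}\n"
        "Explain, in a user-centric way, why this item was recommended."
    )
    return call_llm(prompt)
```

Under these assumptions, an explanation would be produced by first calling `distill_preferences` on the target item's reviews and then passing its output, together with the user's history, to `explain_recommendation`; no gradients or embeddings from the recommendation model are involved, which is what makes the approach non-intrusive.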

Authors (6)
  1. Shen Gao (49 papers)
  2. Yifan Wang (319 papers)
  3. Jiabao Fang (3 papers)
  4. Lisi Chen (7 papers)
  5. Peng Han (37 papers)
  6. Shuo Shang (30 papers)
Citations (1)